VER: Scaling On-Policy RL Leads to the Emergence of Navigation in Embodied Rearrangement

10/11/2022
by Erik Wijmans, et al.

We present Variable Experience Rollout (VER), a technique for efficiently scaling batched on-policy reinforcement learning in heterogeneous environments (where different environments take vastly different times to generate rollouts) to many GPUs residing on, potentially, many machines. VER combines the strengths of and blurs the line between synchronous and asynchronous on-policy RL methods (SyncOnRL and AsyncOnRL, respectively). VER learns from on-policy experience (like SyncOnRL) and has no synchronization points (like AsyncOnRL). VER leads to significant and consistent speed-ups across a broad range of embodied navigation and mobile manipulation tasks in photorealistic 3D simulation environments. Specifically, for PointGoal navigation and ObjectGoal navigation in Habitat 1.0, VER is 60-100% faster (1.6-2x speedup) than DD-PPO, the current state of the art for distributed SyncOnRL, with similar sample efficiency. For mobile manipulation tasks (open fridge/cabinet, pick/place objects) in Habitat 2.0, VER is 150% faster (2.5x speedup) on 8 GPUs than DD-PPO. Compared to SampleFactory (the current state-of-the-art AsyncOnRL), VER matches its speed on 1 GPU and is 70% faster (1.7x speedup) on 8 GPUs, with better sample efficiency. We leverage these speed-ups to train chained skills for GeometricGoal rearrangement tasks in the Home Assistant Benchmark (HAB). We find a surprising emergence of navigation in skills that do not ostensibly require any navigation. Specifically, the Pick skill involves a robot picking an object from a table. During training the robot was always spawned close to the table and never needed to navigate. However, we find that if base movement is part of the action space, the robot learns to navigate and then pick an object in new environments with 50% success, demonstrating out-of-distribution generalization.
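To make the core idea concrete, the sketch below illustrates the variable-experience-rollout principle described in the abstract: rather than waiting on a barrier for every environment to finish a fixed-length rollout (SyncOnRL) or learning from stale experience (AsyncOnRL), a batch is filled with however many steps each environment produces before a shared step budget runs out, so fast environments contribute more steps and slow ones fewer. This is a minimal, self-contained sketch under assumed names and timings, not the authors' implementation (their code is part of habitat-lab and uses GPU-batched simulation and PPO).

```python
# Illustrative sketch of variable-length rollout collection (not the VER codebase).
# All environment names, timings, and the step budget below are made-up assumptions.
import random
import threading
import time
from collections import defaultdict

TOTAL_STEPS_PER_BATCH = 64   # step budget shared by all environments
NUM_ENVS = 4


def simulate_env_step(env_id: int) -> float:
    """Pretend to step a simulator; heterogeneous envs take different times per step."""
    time.sleep(random.uniform(0.001, 0.01) * (env_id + 1))
    return random.random()  # stand-in for an observation/reward


def collect_variable_rollouts() -> dict:
    """Collect one batch: each env keeps stepping until the shared budget is exhausted."""
    batch = defaultdict(list)
    steps_remaining = TOTAL_STEPS_PER_BATCH
    lock = threading.Lock()

    def worker(env_id: int) -> None:
        nonlocal steps_remaining
        while True:
            with lock:
                if steps_remaining == 0:
                    return  # budget exhausted; stop without waiting for slower envs
                steps_remaining -= 1
            obs = simulate_env_step(env_id)
            with lock:
                batch[env_id].append(obs)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_ENVS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return batch


if __name__ == "__main__":
    batch = collect_variable_rollouts()
    for env_id, steps in sorted(batch.items()):
        print(f"env {env_id}: {len(steps)} steps")  # fast envs end up with more steps
    # A learner would now update the policy on this fully on-policy batch.
```

Because the budget, not the slowest environment, terminates collection, there is no synchronization point, yet every step in the batch was generated by the current policy, which is the property the abstract highlights.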


research
07/12/2023

Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation

Existing object-search approaches enable robots to search through free p...
research
06/13/2023

Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second

We present Galactic, a large-scale simulation and reinforcement-learning...
research
11/01/2019

Decentralized Distributed PPO: Solving PointGoal Navigation

We present Decentralized Distributed Proximal Policy Optimization (DD-PP...
research
06/28/2021

Habitat 2.0: Training Home Assistants to Rearrange their Habitat

We introduce Habitat 2.0 (H2.0), a simulation platform for training virt...
research
08/18/2020

ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation

Many Reinforcement Learning (RL) approaches use joint control signals (p...
research
04/01/2023

Adaptive Skill Coordination for Robotic Mobile Manipulation

We present Adaptive Skill Coordination (ASC) - an approach for accomplis...
research
05/31/2019

Reinforcement Learning Experience Reuse with Policy Residual Representation

Experience reuse is key to sample-efficient reinforcement learning. One ...
