Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale

04/07/2022
by Ram Ramrakhya, et al.

We present a large-scale study of imitating human demonstrations on tasks that require a virtual robot to search for objects in new environments: (1) ObjectGoal Navigation (e.g. 'find a chair') and (2) Pick Place (e.g. 'find mug, pick mug, find counter, place mug on counter'). First, we develop a virtual teleoperation data-collection infrastructure that connects the Habitat simulator running in a web browser to Amazon Mechanical Turk, allowing remote users to teleoperate virtual robots safely and at scale. We collect 80k demonstrations for ObjectNav and 12k demonstrations for Pick Place, an order of magnitude more than existing human demonstration datasets in simulation or on real robots. Second, we attempt to answer the question: how does large-scale imitation learning (IL), which has not hitherto been possible, compare to reinforcement learning (RL), which is the status quo? On ObjectNav, we find that IL (with no bells or whistles) using 70k human demonstrations outperforms RL using 240k agent-gathered trajectories. The IL-trained agent demonstrates efficient object-search behavior: it peeks into rooms, checks corners for small objects, and turns in place to get a panoramic view. None of these behaviors are exhibited as prominently by the RL agent, and inducing them via RL would require tedious reward engineering. Finally, accuracy vs. training-data-size plots show promising scaling behavior, suggesting that simply collecting more demonstrations is likely to advance the state of the art further. On Pick Place, the comparison is starker: IL agents trained with 9.5k human demonstrations achieve ∼18% success on episodes with new object-receptacle locations, while RL agents fail to get beyond 0% success. Overall, our work provides compelling evidence for investing in large-scale imitation learning. Project page: https://ram81.github.io/projects/habitat-web.
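The training recipe the abstract contrasts with RL is plain behavior cloning: a policy network is trained with supervised learning to predict the human teleoperator's action at each step of a demonstration. Below is a minimal sketch of that idea. All component names and sizes here (DemoDataset with random stand-in data, a small CNN Policy, 21 goal categories, 6 discrete actions) are illustrative assumptions, not the paper's actual architecture or data pipeline.

```python
# Minimal behavior-cloning sketch for ObjectNav-style demonstrations.
# Everything here is a placeholder: real training would use RGB(-D)
# frames and goal labels paired with the human teleoperator's actions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class DemoDataset(Dataset):
    """Hypothetical dataset of (observation, goal, expert_action) steps
    flattened from teleoperated human demonstrations."""

    def __init__(self, num_steps=1000):
        self.obs = torch.randn(num_steps, 3, 128, 128)   # stand-in RGB frames
        self.goal = torch.randint(0, 21, (num_steps,))   # object-goal category id
        self.act = torch.randint(0, 6, (num_steps,))     # human's discrete action

    def __len__(self):
        return len(self.act)

    def __getitem__(self, i):
        return self.obs[i], self.goal[i], self.act[i]


class Policy(nn.Module):
    """Small CNN policy: image features + goal embedding -> action logits."""

    def __init__(self, num_goals=21, num_actions=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.goal_emb = nn.Embedding(num_goals, 32)
        enc_dim = 64 * 14 * 14  # encoder output size for 128x128 input
        self.head = nn.Linear(enc_dim + 32, num_actions)

    def forward(self, obs, goal):
        feats = self.encoder(obs)
        return self.head(torch.cat([feats, self.goal_emb(goal)], dim=-1))


def train_bc(epochs=1):
    loader = DataLoader(DemoDataset(), batch_size=64, shuffle=True)
    policy = Policy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for obs, goal, act in loader:
            logits = policy(obs, goal)
            loss = loss_fn(logits, act)  # imitate the human's action
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy


if __name__ == "__main__":
    train_bc()
```

The appeal of this supervised objective is that behaviors like peeking into rooms or turning in place come for free from the demonstrations, with no reward shaping; inducing the same behaviors via RL would require the tedious reward engineering the abstract mentions.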


research · 01/18/2023
PIRLNav: Pretraining with Imitation and RL Finetuning for ObjectNav
We study ObjectGoal Navigation - where a virtual robot situated in a new...

research · 03/28/2022
Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation
Social navigation is the capability of an autonomous agent, such as a ro...

research · 06/23/2023
AR2-D2: Training a Robot Without a Robot
Diligently gathered human demonstrations serve as the unsung heroes empo...

research · 05/31/2017
The Atari Grand Challenge Dataset
Recent progress in Reinforcement Learning (RL), fueled by its combinatio...

research · 12/21/2020
myGym: Modular Toolkit for Visuomotor Robotic Tasks
We introduce a novel virtual robotic toolkit myGym, developed for reinfo...

research · 04/09/2021
Counter-Strike Deathmatch with Large-Scale Behavioural Cloning
This paper describes an AI agent that plays the popular first-person-sho...

research · 08/15/2023
Leveraging Symmetries in Pick and Place
Robotic pick and place tasks are symmetric under translations and rotati...
