iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks

08/06/2021
by Chengshu Li, et al.

Recent research in embodied AI has been boosted by the use of simulation environments to develop and train robot learning approaches. However, the use of simulation has skewed the attention to tasks that only require what robotics simulators can simulate: motion and physical contact. We present iGibson 2.0, an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations. First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks. Second, iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked. Additionally, given a logic state, iGibson 2.0 can sample valid physical states that satisfy it. This functionality can generate potentially infinite instances of tasks with minimal effort from the users. The sampling mechanism allows our scenes to be more densely populated with small objects in semantically meaningful locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations. As a result, we can collect demonstrations from humans on these new types of tasks, and use them for imitation learning. We evaluate the new capabilities of iGibson 2.0 to enable robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI. iGibson 2.0 and its new dataset will be publicly available at http://svl.stanford.edu/igibson/.
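As a rough illustration of the object-state and predicate-logic mechanism described in the abstract, the sketch below shows how extended physical states (temperature, wetness, etc.) can be checked against logic predicates such as Cooked or Soaked, and how a valid physical state can be sampled from a logic state to generate a task instance. All names and thresholds here (ObjectState, is_cooked, sample_cooked, COOK_TEMPERATURE) are illustrative assumptions, not iGibson 2.0's actual API.

```python
import random
from dataclasses import dataclass

# Illustrative sketch only: each object carries extended (non-kinematic)
# states, and predicate functions map those continuous states to binary
# logic states. Names and thresholds are hypothetical, not the iGibson API.

@dataclass
class ObjectState:
    temperature: float = 20.0       # degrees Celsius
    wetness_level: float = 0.0      # 0 (dry) .. 1 (saturated)
    cleanliness_level: float = 1.0  # 0 (dirty) .. 1 (clean)
    toggled_on: bool = False
    sliced: bool = False

# Hypothetical per-category cooking thresholds used by the predicate check.
COOK_TEMPERATURE = {"apple": 70.0, "chicken": 75.0}

def is_cooked(category: str, state: ObjectState) -> bool:
    """Logic state Cooked: temperature reached the category's cook threshold."""
    return state.temperature >= COOK_TEMPERATURE.get(category, 100.0)

def is_soaked(state: ObjectState) -> bool:
    """Logic state Soaked: wetness level above a saturation threshold."""
    return state.wetness_level >= 0.8

def sample_cooked(category: str) -> ObjectState:
    """Inverse direction: given the logic state Cooked, sample a physical
    state (a temperature above the threshold) that satisfies it."""
    threshold = COOK_TEMPERATURE.get(category, 100.0)
    return ObjectState(temperature=random.uniform(threshold, threshold + 30.0))

if __name__ == "__main__":
    apple = sample_cooked("apple")    # generate one valid task instance
    assert is_cooked("apple", apple)  # the sampled state satisfies Cooked
    print(apple)
```

The two-way mapping sketched here (checking predicates on simulator states, and sampling simulator states from predicates) is what lets a task described in logic be instantiated into many distinct physical initial conditions with little user effort.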


Related research

08/06/2021 · BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities i...

10/16/2018 · Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation
In recent years, we have seen an emergence of data-driven approaches in ...

11/04/2019 · Learning One-Shot Imitation from Humans without Humans
Humans can naturally learn to execute a new task by seeing it performed ...

04/02/2019 · VRGym: A Virtual Testbed for Physical and Interactive AI
We propose VRGym, a virtual reality testbed for realistic human-robot in...

03/23/2021 · Learning 6DoF Grasping Using Reward-Consistent Demonstration
As the number of the robot's degrees of freedom increases, the implement...

12/05/2020 · iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes
We present iGibson, a novel simulation environment to develop robotic so...

01/01/2023 · Human-in-the-loop Embodied Intelligence with Interactive Simulation Environment for Surgical Robot Learning
Surgical robot automation has attracted increasing research interest ove...
