RLBench: The Robot Learning Benchmark & Learning Environment

09/26/2019
by   Stephen James, et al.

We present a challenging new benchmark and learning environment for robot learning: RLBench. The benchmark features 100 completely unique, hand-designed tasks ranging in difficulty from simple target reaching and door opening to longer multi-stage tasks, such as opening an oven and placing a tray in it. We provide an array of both proprioceptive and visual observations, including RGB, depth, and segmentation masks from an over-the-shoulder stereo camera and an eye-in-hand monocular camera. Uniquely, each task comes with an infinite supply of demonstrations, generated by motion planners operating on a series of waypoints specified at task creation time, which enables a broad range of demonstration-based learning. RLBench has been designed with scalability in mind: new tasks, along with their motion-planned demos, can be easily created and then verified by a series of tools, allowing users to submit their own tasks to the RLBench task repository. This large-scale benchmark aims to accelerate progress in a number of vision-guided manipulation research areas, including reinforcement learning, imitation learning, multi-task learning, geometric computer vision, and, in particular, few-shot learning. With the benchmark's breadth of tasks and demonstrations, we propose the first large-scale few-shot challenge in robotics. We hope that the scale and diversity of RLBench offer unparalleled research opportunities to the robot learning community and beyond.
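The "infinite supply of demonstrations" idea can be illustrated with a toy sketch: a fixed list of task waypoints, a planner that interpolates a trajectory through them, and a randomised start pose on each reset so every generated demo is distinct. This is only an assumption-laden stand-in (RLBench's actual demos come from collision-aware motion planners inside the CoppeliaSim simulator); the function names `plan_demo` and `sample_demos` are hypothetical, not part of the RLBench API.

```python
import numpy as np

def plan_demo(waypoints, steps_per_segment=10):
    """Toy stand-in for a motion planner: linearly interpolate a
    trajectory through a sequence of 3-D waypoints.  RLBench uses
    proper collision-aware planning; this only shows the shape of
    the waypoint-to-trajectory mapping."""
    path = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            path.append((1 - t) * a + t * b)
    path.append(waypoints[-1])
    return np.array(path)

def sample_demos(task_waypoints, n_demos, rng):
    """Generate n_demos trajectories, each from a randomly perturbed
    start pose, so the fixed waypoint list yields unlimited distinct
    demos -- the mechanism the abstract describes at a high level."""
    demos = []
    for _ in range(n_demos):
        start = task_waypoints[0] + rng.normal(scale=0.05,
                                               size=task_waypoints[0].shape)
        demos.append(plan_demo([start] + list(task_waypoints[1:])))
    return demos
```

Because only the start pose is perturbed, every sampled trajectory still terminates exactly at the final task waypoint, which is what makes such auto-generated demos usable as supervision for imitation and few-shot learning.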


