End-to-End Robotic Reinforcement Learning without Reward Engineering

by Avi Singh et al.

The combination of deep neural network models and reinforcement learning algorithms can make it possible to learn policies for robotic behaviors that directly read in raw sensory inputs, such as camera images, effectively subsuming both estimation and control into one model. However, real-world applications of reinforcement learning must specify the goal of the task by means of a manually programmed reward function, which in practice requires either designing the very same perception pipeline that end-to-end reinforcement learning promises to avoid, or else instrumenting the environment with additional sensors to determine if the task has been performed successfully. In this paper, we propose an approach for removing the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries, where the robot shows the user a state and asks for a label to determine whether that state represents successful completion of the task. While requesting labels for every single state would amount to asking the user to manually provide the reward signal, our method requires labels for only a tiny fraction of the states seen during training, making it an efficient and practical approach for learning skills without manually engineered rewards. We evaluate our method on real-world robotic manipulation tasks where the observations consist of images viewed by the robot's camera. In our experiments, our method effectively learns to arrange objects, place books, and drape cloth, directly from images and without any manually specified reward functions, and with only 1-4 hours of interaction with the real world.
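The core idea described above, training a binary success classifier from a handful of user-provided success examples and a small number of actively solicited labels, then using the classifier's output as the reward, can be sketched as follows. This is a deliberately simplified illustration, not the paper's implementation: it uses a hypothetical logistic-regression classifier on low-dimensional toy states in place of the paper's convolutional classifier on camera images, and a simulated user (`user_label`, an assumed helper) answers the queries.

```python
import numpy as np

rng = np.random.default_rng(0)

class SuccessClassifier:
    """Logistic-regression success classifier used as a learned reward.

    A simplified stand-in for the paper's image-based classifier:
    here states are 2-D vectors rather than camera images.
    """

    def __init__(self, dim, lr=0.1, steps=500):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr
        self.steps = steps

    def prob(self, X):
        # Sigmoid of a linear score; clipped to avoid overflow in exp.
        z = np.clip(X @ self.w + self.b, -30.0, 30.0)
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, X, y):
        # Plain batch gradient descent on the logistic loss.
        for _ in range(self.steps):
            p = self.prob(X)
            self.w -= self.lr * (X.T @ (p - y)) / len(y)
            self.b -= self.lr * np.mean(p - y)

    def reward(self, state):
        # The classifier's success probability serves as the reward signal.
        return float(self.prob(state[None])[0])

# Toy task: "success" means being near a goal location. user_label stands
# in for the human answering the robot's query.
goal = np.array([1.0, 1.0])
def user_label(s):
    return float(np.linalg.norm(s - goal) < 0.3)

# A modest number of user-provided examples of successful outcomes...
pos = goal + 0.1 * rng.standard_normal((20, 2))
# ...plus states visited during training, initially treated as failures.
visited = rng.uniform(-1.0, 2.0, size=(200, 2))

X = np.vstack([pos, visited])
y = np.concatenate([np.ones(len(pos)), np.zeros(len(visited))])

clf = SuccessClassifier(dim=2)
clf.fit(X, y)

# Actively query labels only for the most ambiguous visited states,
# rather than asking the user about every state seen during training.
queried = []
for _ in range(10):
    uncertainty = np.abs(clf.prob(visited) - 0.5)
    uncertainty[queried] = np.inf   # don't ask about the same state twice
    i = int(np.argmin(uncertainty))
    queried.append(i)
    y[len(pos) + i] = user_label(visited[i])  # simulated user answer
    clf.fit(X, y)
```

The uncertainty-based query rule (probability closest to 0.5) is one simple choice of acquisition strategy; the point is only that labels are requested for a tiny fraction of visited states while the classifier, queried on every state, supplies a dense reward to the policy.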




