DayDreamer: World Models for Physical Robot Learning

06/28/2022
by Philipp Wu, et al.

To solve tasks in complex environments, robots need to learn from experience. Deep reinforcement learning is a common approach to robot learning but requires a large amount of trial and error, limiting its deployment in the physical world. As a consequence, many advances in robot learning rely on simulators. However, learning inside simulators fails to capture the complexity of the real world, is prone to simulator inaccuracies, and the resulting behaviors do not adapt to changes in the world. The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model, outperforming pure reinforcement learning in video games. Learning a world model to predict the outcomes of potential actions enables planning in imagination, reducing the amount of trial and error needed in the real environment. However, it is unknown whether Dreamer can facilitate faster learning on physical robots. In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without simulators. Dreamer trains a quadruped robot to roll off its back, stand up, and walk from scratch and without resets in only 1 hour. We then push the robot and find that Dreamer adapts within 10 minutes to withstand perturbations or quickly roll over and stand back up. On two different robotic arms, Dreamer learns to pick and place multiple objects directly from camera images and sparse rewards, approaching human performance. On a wheeled robot, Dreamer learns to navigate to a goal position purely from camera images, automatically resolving ambiguity about the robot orientation. Using the same hyperparameters across all experiments, we find that Dreamer is capable of online learning in the real world, establishing a strong baseline. We release our infrastructure for future applications of world models to robot learning.
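The loop the abstract describes — collect real experience, fit a world model to predict the outcomes of actions, then plan by imagining rollouts inside that model — can be sketched on a toy problem. Everything below is illustrative, not the paper's actual architecture: Dreamer uses a learned latent dynamics model and an actor-critic trained on imagined trajectories, whereas this sketch fits a single unknown coefficient of a 1-D environment and plans by random shooting.

```python
import random

# Toy 1-D environment: state x, action a in {-1, +1}, true dynamics x' = x + 0.1*a.
# Reward is higher the closer the next state is to the goal at x = 1.0.
GOAL = 1.0

def env_step(x, a):
    x_next = x + 0.1 * a
    return x_next, -abs(GOAL - x_next)

class WorldModel:
    """Learns the unknown step size k in the model x' = x + k*a."""
    def __init__(self):
        self.k = 0.0  # initial guess for the dynamics coefficient

    def train(self, transitions, lr=0.5):
        # Gradient step on the squared one-step prediction error.
        for x, a, x_next in transitions:
            pred = x + self.k * a
            self.k -= lr * (pred - x_next) * a

    def imagine(self, x, a):
        # Predict the next state and reward without touching the real environment.
        x_next = x + self.k * a
        return x_next, -abs(GOAL - x_next)

def plan(model, x, horizon=5, candidates=20):
    # Random-shooting planner: score imagined action sequences under the
    # world model, return the first action of the best sequence.
    best_ret, best_a0 = float("-inf"), 1
    for _ in range(candidates):
        seq = [random.choice([-1, 1]) for _ in range(horizon)]
        ret, xi = 0.0, x
        for a in seq:
            xi, r = model.imagine(xi, a)
            ret += r
        if ret > best_ret:
            best_ret, best_a0 = ret, seq[0]
    return best_a0

# Online loop: act in the real environment, keep retraining the model on
# collected transitions — no simulator, and planning happens in imagination.
random.seed(0)
model, buffer, x = WorldModel(), [], 0.0
for step in range(50):
    a = plan(model, x)
    x_next, r = env_step(x, a)
    buffer.append((x, a, x_next))
    model.train(buffer[-10:])
    x = x_next

print(round(model.k, 3))  # learned coefficient, close to the true 0.1
print(round(x, 2))        # final state near the goal
```

The point of the sketch is the data efficiency argument from the abstract: only 50 real steps are taken, while each decision is backed by 100 imagined steps (20 candidates of horizon 5) that cost nothing in the physical world.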


