Dropout's Dream Land: Generalization from Learned Simulators to Reality

09/17/2021
by   Zac Wellmer, et al.

A World Model is a generative model used to simulate an environment. World Models have proven capable of learning spatial and temporal representations of Reinforcement Learning environments. In some cases, a World Model offers an agent the opportunity to learn entirely inside of its own dream environment. In this work we explore improving generalization from dream environments to real environments (Dream2Real). We present a general approach that improves a controller's ability to transfer from a neural network dream environment to reality at little additional cost. These improvements draw inspiration from Domain Randomization, whose basic idea is to randomize as much of a simulator as possible without fundamentally changing the task at hand. Domain Randomization generally assumes access to a pre-built simulator with configurable parameters, but such a simulator is often unavailable. By training the World Model with dropout, the World Model can instantiate a nearly infinite number of different dream environments, one per dropout mask. Previous use cases of dropout either omit dropout at inference time or average the predictions generated by multiple sampled masks (Monte-Carlo Dropout). Dropout's Dream Land instead leverages each unique mask to create a distinct dream environment. Our experimental results show that Dropout's Dream Land is an effective technique for bridging the reality gap between dream environments and reality. We additionally perform an extensive set of ablation studies.
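The core idea above can be illustrated with a small sketch: sample one dropout mask, hold it fixed for an entire rollout, and treat each masked model as its own randomized dream environment (rather than averaging over masks as in Monte-Carlo Dropout). This toy NumPy implementation is only illustrative; all names and the linear dynamics model are assumptions, not the paper's actual architecture.

```python
import numpy as np

def sample_mask(rng, shape, keep_prob=0.5):
    # Inverted dropout: each sampled Bernoulli mask defines one
    # distinct "dream environment" (illustrative choice of keep_prob).
    return (rng.random(shape) < keep_prob) / keep_prob

class DreamEnv:
    """Toy dream environment: a fixed dropout mask applied to the
    dynamics weights of a small transition model. Hypothetical class,
    not from the paper's code."""
    def __init__(self, W, mask):
        self.W = W * mask  # mask held fixed for the whole rollout

    def step(self, state, action):
        # Predict the next latent state from (state, action).
        return np.tanh(self.W @ np.concatenate([state, action]))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))  # shared World Model weights

# Unlike Monte-Carlo Dropout, the masked models are NOT averaged:
# the controller trains across this ensemble of randomized dreams.
envs = [DreamEnv(W, sample_mask(rng, W.shape)) for _ in range(3)]

state, action = np.zeros(4), np.ones(2)
rollouts = [env.step(state, action) for env in envs]
```

Because each environment keeps its own mask for the full rollout, the controller sees consistent but mutually different dynamics, which is the Domain-Randomization-style diversity the abstract describes.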

research
02/22/2021

DROID: Minimizing the Reality Gap using Single-Shot Human Demonstration

Reinforcement learning (RL) has demonstrated great success in the past s...
research
10/27/2022

Adapting Neural Models with Sequential Monte Carlo Dropout

The ability to adapt to changing environments and settings is essential ...
research
03/27/2018

World Models

We explore building generative neural network models of popular reinforc...
research
02/23/2022

Consistent Dropout for Policy Gradient Reinforcement Learning

Dropout has long been a staple of supervised learning, but is rarely use...
research
07/08/2020

Self-Supervised Policy Adaptation during Deployment

In most real world scenarios, a policy trained by reinforcement learning...
research
09/04/2018

Recurrent World Models Facilitate Policy Evolution

A generative recurrent neural network is quickly trained in an unsupervi...
research
05/02/2022

Triangular Dropout: Variable Network Width without Retraining

One of the most fundamental design choices in neural networks is layer w...
