Shaping Belief States with Generative Environment Models for RL

by Karol Gregor et al.

When agents interact with a complex environment, they must form and maintain beliefs about the relevant aspects of that environment. We propose a way to efficiently train expressive generative models in complex environments. We show that a predictive algorithm with an expressive generative model can form stable belief-states in visually rich and dynamic 3D environments. More precisely, we show that the learned representation captures the layout of the environment as well as the position and orientation of the agent. Our experiments show that the model substantially improves data-efficiency on a number of reinforcement learning (RL) tasks compared with strong model-free baseline agents. We find that predicting multiple steps into the future (overshooting), in combination with an expressive generative model, is critical for stable representations to emerge. In practice, using expressive generative models in RL is computationally expensive and we propose a scheme to reduce this computational burden, allowing us to build agents that are competitive with model-free baselines.
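As a rough illustration of the "overshooting" idea described above, the sketch below computes a multi-step prediction loss: from each belief state, the model is rolled several steps into the future and each decoded prediction is compared against the true future observation. This is a simplified assumption for exposition only — the paper uses a deep generative model, whereas the linear transition `A`, decoder `C`, and squared-error loss here are hypothetical stand-ins.

```python
import numpy as np

def overshoot_loss(states, obs, A, C, horizon=3):
    """Illustrative multi-step ("overshooting") prediction loss.

    states: (T, d) belief states; obs: (T, k) observations;
    A: (d, d) latent transition; C: (k, d) observation decoder.
    From each time t, roll the belief forward up to `horizon` steps
    with A and penalize the squared error between the decoded
    prediction C @ s and the true future observation.
    """
    T = states.shape[0]
    total, count = 0.0, 0
    for t in range(T):
        s = states[t]
        for h in range(1, horizon + 1):
            if t + h >= T:
                break
            s = A @ s                    # predict next latent state
            pred = C @ s                 # decode to observation space
            total += np.sum((pred - obs[t + h]) ** 2)
            count += 1
    return total / max(count, 1)

# Tiny usage example with random data (shapes chosen arbitrarily)
rng = np.random.default_rng(0)
T, d, k = 10, 4, 3
states = rng.normal(size=(T, d))
obs = rng.normal(size=(T, k))
A = np.eye(d) * 0.9                      # stable toy dynamics
C = rng.normal(size=(k, d))
print(overshoot_loss(states, obs, A, C))
```

Training on this loss (rather than only one-step prediction) is what the authors report as critical for stable belief representations to emerge; the `horizon` parameter controls how far the model must commit to its predictions.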




Related Research

Learning and Querying Fast Generative Models for Reinforcement Learning

Temporal Difference Variational Auto-Encoder

Dyna Planning using a Feature Based Generative Model

Neural Recursive Belief States in Multi-Agent Reinforcement Learning

Efficient RL via Disentangled Environment and Agent Representations

Learning to Simulate Dynamic Environments with GameGAN

Visual Reaction: Learning to Play Catch with Your Drone
