Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning

by Daniel Guo, et al.

Learning a good representation is an essential component of deep reinforcement learning (RL). Representation learning is especially important in multitask and partially observable settings, where building a representation of the unknown environment is crucial to solving the tasks. Here we introduce Prediction of Bootstrap Latents (PBL), a simple and flexible self-supervised representation learning algorithm for multitask deep RL. PBL builds on multistep predictive representations of future observations and focuses on capturing structured information about environment dynamics. Specifically, PBL trains its representation by predicting latent embeddings of future observations. These latent embeddings are themselves trained to be predictive of the aforementioned representations. These predictions form a bootstrapping effect, allowing the agent to learn more about the key aspects of the environment dynamics. In addition, by defining prediction tasks entirely in latent space, PBL provides the flexibility to use multimodal observations involving pixel images, language instructions, rewards, and more. We show in our experiments that PBL delivers across-the-board performance improvements over state-of-the-art deep RL agents in the DMLab-30 and Atari-57 multitask settings.
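The two coupled prediction objectives described above can be sketched as follows. This is a minimal illustration with toy linear maps and hypothetical names (`W_embed`, `W_pred`, `W_rev`, `pbl_losses`), not the paper's actual recurrent networks over pixels; the key structure is the pair of losses with a stop-gradient on each target, which is what creates the bootstrapping effect.

```python
import numpy as np

# Toy dimensions for illustration only; the real agent state and
# observation embeddings come from deep recurrent/conv networks.
rng = np.random.default_rng(0)
d_obs, d_state, d_latent = 16, 8, 8

W_embed = rng.normal(size=(d_latent, d_obs)) * 0.1   # observation embedding
W_pred  = rng.normal(size=(d_latent, d_state)) * 0.1 # forward predictor
W_rev   = rng.normal(size=(d_state, d_latent)) * 0.1 # reverse predictor

def stop_grad(x):
    # Placeholder: numpy has no autograd, so this just marks which
    # branch would be treated as a fixed target during training.
    return x.copy()

def pbl_losses(state, future_obs):
    """Return the two PBL prediction losses for one (state, future obs) pair."""
    z = W_embed @ future_obs  # latent embedding of the future observation
    # Forward loss: the agent state predicts the (stop-gradient) latent embedding.
    forward = np.mean((W_pred @ state - stop_grad(z)) ** 2)
    # Reverse loss: the latent embedding predicts the (stop-gradient) agent state,
    # which trains the embeddings themselves to be predictive (the bootstrap).
    reverse = np.mean((W_rev @ z - stop_grad(state)) ** 2)
    return forward, reverse

state = rng.normal(size=d_state)
obs_future = rng.normal(size=d_obs)
f_loss, r_loss = pbl_losses(state, obs_future)
print(f_loss >= 0.0 and r_loss >= 0.0)
```

Because both prediction targets live in latent space, swapping in a different observation encoder (pixels, language tokens, rewards) changes only how `future_obs` is embedded, leaving the loss structure untouched.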


