APS: Active Pretraining with Successor Features

08/31/2021
by Hao Liu, et al.

We introduce a new unsupervised pretraining objective for reinforcement learning. During the unsupervised, reward-free pretraining phase, the agent maximizes the mutual information between tasks and the states induced by its policy. Our key contribution is a novel lower bound on this intractable quantity. We show that by reinterpreting and combining variational successor features with nonparametric entropy maximization, the intractable mutual information can be optimized efficiently. The proposed method, Active Pretraining with Successor Features (APS), explores the environment via nonparametric entropy maximization, and the explored data can be efficiently leveraged to learn behaviors via variational successor features. APS addresses the limitations of existing unsupervised RL methods based on mutual information maximization or entropy maximization, and combines the best of both worlds. When evaluated on the Atari 100k data-efficiency benchmark, our approach significantly outperforms previous methods that combine unsupervised pretraining with task-specific finetuning.
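
For a concrete feel of the objective, below is a minimal NumPy sketch of the two-term intrinsic reward the abstract describes: a nonparametric (k-nearest-neighbor) particle-based entropy bonus that drives exploration, plus a variational successor-feature term phi(s)^T z for inferring the task. The function names, the stand-in encoder features, and the choice of k are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_task(dim, rng):
    """Sample a task vector z uniformly from the unit sphere (as in successor-feature methods)."""
    z = rng.normal(size=dim)
    return z / np.linalg.norm(z)

def exploration_reward(phi_s, phi_batch, k=12):
    """Nonparametric entropy bonus: mean distance from phi(s) to its k nearest
    neighbors among a batch of encoded states (assumes phi(s) itself is not in
    the batch). Low local density -> large distances -> large bonus."""
    dists = np.linalg.norm(phi_batch - phi_s, axis=-1)  # (N,)
    knn = np.sort(dists)[:k]                            # k smallest distances
    return np.log(1.0 + knn.mean())                     # log keeps the bonus well-scaled

def exploitation_reward(phi_s, z):
    """Variational successor-feature term: under a von Mises-Fisher model,
    log q(z | s) reduces (up to a constant) to the inner product phi(s)^T z."""
    return phi_s @ z

def aps_reward(phi_s, phi_batch, z, k=12):
    """Total intrinsic reward = exploration bonus + task-conditioned term."""
    return exploration_reward(phi_s, phi_batch, k) + exploitation_reward(phi_s, z)

# Toy usage with random, L2-normalized features standing in for a learned encoder.
rng = np.random.default_rng(0)
z = sample_task(dim=5, rng=rng)
phi_s = rng.normal(size=5)
phi_s /= np.linalg.norm(phi_s)
phi_batch = rng.normal(size=(256, 5))
phi_batch /= np.linalg.norm(phi_batch, axis=1, keepdims=True)
print(aps_reward(phi_s, phi_batch, z))
```

The additive split mirrors the abstract: the entropy term is task-independent and rewards visiting novel states, while the successor-feature term ties the collected data back to the sampled task vector z.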

02/01/2022

CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery

We introduce Contrastive Intrinsic Control (CIC), an algorithm for unsup...
10/14/2022

Mutual Information Regularized Offline Reinforcement Learning

Offline reinforcement learning (RL) aims at learning an effective policy...
10/06/2021

The Information Geometry of Unsupervised Reinforcement Learning

How can a reinforcement learning (RL) agent prepare to solve downstream ...
11/10/2018

Formal Limitations on the Measurement of Mutual Information

Motivated by applications to unsupervised learning, we consider the probl...
04/08/2020

Learning Discrete Structured Representations by Adversarially Maximizing Mutual Information

We propose learning discrete structured representations from unlabeled d...
06/09/2021

Pretraining Representations for Data-Efficient Reinforcement Learning

Data efficiency is a key challenge for deep reinforcement learning. We a...
01/30/2023

Quantifying and maximizing the information flux in recurrent neural networks

Free-running Recurrent Neural Networks (RNNs), especially probabilistic ...