Dynamic Bottleneck for Robust Self-Supervised Exploration

10/20/2021
by Chenjia Bai, et al.

Exploration methods based on pseudo-counts of transitions or on the curiosity of a learned dynamics model have achieved promising results in reinforcement learning with sparse rewards. However, such methods are usually sensitive to information in the environment that is irrelevant to the dynamics, e.g., white noise. To handle such dynamics-irrelevant information, we propose a Dynamic Bottleneck (DB) model, which attains a dynamics-relevant representation based on the information-bottleneck principle. Building on the DB model, we further propose the DB-bonus, which encourages the agent to explore state-action pairs with high information gain. We establish theoretical connections between the proposed DB-bonus, the upper confidence bound (UCB) in the linear setting, and visitation counts in the tabular setting. We evaluate the proposed method on the Atari suite with dynamics-irrelevant noise injected. Our experiments show that exploration with the DB-bonus outperforms several state-of-the-art exploration methods in noisy environments.
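As a rough illustration of the idea described in the abstract, the sketch below shows a minimal PyTorch model that pairs an information-bottleneck objective (keep what predicts the next observation, compress everything else) with a KL-based intrinsic bonus. The architecture, layer sizes, the Gaussian encoder, and the use of the per-sample KL to a standard-normal prior as an information-gain surrogate are all illustrative assumptions, not the authors' exact DB model or DB-bonus formula.

```python
# Minimal sketch of an information-bottleneck dynamics model with a
# KL-based exploration bonus, in the spirit of the DB model above.
# All shapes and the bonus formula are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicBottleneck(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = 32):
        super().__init__()
        # Stochastic encoder q(z | s, a): outputs mean and log-variance.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),
        )
        # Head predicting the next observation from z, so that z must
        # retain dynamics-relevant information.
        self.predictor = nn.Linear(latent_dim, obs_dim)

    def encode(self, s, a):
        mu, logvar = self.encoder(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        return mu, logvar

    def loss(self, s, a, s_next, beta: float = 1e-3):
        mu, logvar = self.encode(s, a)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterize
        pred_loss = F.mse_loss(self.predictor(z), s_next)     # keep dynamics info
        # KL(q(z|s,a) || N(0, I)): bottleneck term that compresses away
        # dynamics-irrelevant bits (e.g., white noise in the observation).
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
        return pred_loss + beta * kl.mean()

    @torch.no_grad()
    def bonus(self, s, a):
        # Intrinsic reward: per-sample KL to the prior, used here as a
        # stand-in for the information gain that DB-bonus measures.
        mu, logvar = self.encode(s, a)
        return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
```

In use, the agent's reward for a transition would be the extrinsic reward plus a small multiple of bonus(s, a); both that multiplier and the bottleneck weight beta are hypothetical hyperparameters here.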


Related research

09/12/2022
Self-supervised Sequential Information Bottleneck for Robust Exploration in Deep Reinforcement Learning
Effective exploration is critical for reinforcement learning agents in e...

10/17/2020
Variational Dynamic for Self-Supervised Exploration in Deep Reinforcement Learning
Efficient exploration remains a challenging problem in reinforcement lea...

03/23/2021
Drop-Bottleneck: Learning Discrete Compressed Representation for Noise-Robust Exploration
We propose a novel information bottleneck (IB) method named Drop-Bottlen...

06/03/2022
Understanding deep learning via decision boundary
This paper discovers that the neural network with lower decision boundar...

11/05/2018
Contingency-Aware Exploration in Reinforcement Learning
This paper investigates whether learning contingency-awareness and contr...

08/29/2018
Approximate Exploration through State Abstraction
Although exploration in reinforcement learning is well understood from a...

05/13/2021
Principled Exploration via Optimistic Bootstrapping and Backward Induction
One principled approach for provably efficient exploration is incorporat...
