Contingency-Aware Exploration in Reinforcement Learning

11/05/2018
by Jongwook Choi, et al.

This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning. To study this question, we consider an instantiation of this hypothesis evaluated on the Arcade Learning Environment (ALE). In this study, we develop an attentive dynamics model (ADM) that discovers controllable elements of the observations, which are often associated with the location of the character in Atari games. The ADM is trained in a self-supervised fashion to predict the actions taken by the agent. The learned contingency information is used as part of the state representation for exploration purposes. We demonstrate that combining A2C with count-based exploration using our representation achieves strong results on a set of Atari games that are notoriously challenging due to sparse rewards. For example, we report a state-of-the-art score of more than 6,600 points on Montezuma's Revenge without using expert demonstrations, explicit high-level information (e.g., RAM states), or supervised data. Our experiments confirm that contingency-awareness is a powerful concept for tackling exploration problems in reinforcement learning, and they open up interesting research questions for further investigation.
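To make the pipeline concrete, the sketch below illustrates the two ingredients the abstract describes: an inverse-dynamics model that predicts the taken action from two consecutive observations via per-region logits weighted by a spatial attention mask, and a count-based bonus keyed by the discretized attended (controllable) location. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; all module names, layer sizes, and the bonus form scale / sqrt(N) are illustrative assumptions.

# Hypothetical sketch (not the authors' code): an attentive dynamics model (ADM)
# that predicts the agent's action from consecutive frames through per-region
# logits and a spatial attention mask, plus a count-based exploration bonus
# computed from the attended grid cell. Names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import defaultdict


class AttentiveDynamicsModel(nn.Module):
    def __init__(self, num_actions, in_channels=1):
        super().__init__()
        # Shared conv trunk; each spatial cell of the final feature map
        # corresponds to one region of the input frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
        )
        self.action_head = nn.Conv2d(128, num_actions, 1)  # per-region action logits
        self.attention_head = nn.Conv2d(128, 1, 1)          # per-region attention logit

    def forward(self, obs_t, obs_tp1):
        # Encode both frames and concatenate along the channel axis.
        f_t, f_tp1 = self.encoder(obs_t), self.encoder(obs_tp1)
        feats = torch.cat([f_t, f_tp1], dim=1)               # (B, 128, H, W)
        region_logits = self.action_head(feats)               # (B, A, H, W)
        attn_logits = self.attention_head(feats)               # (B, 1, H, W)
        b, a, h, w = region_logits.shape
        attn = F.softmax(attn_logits.view(b, -1), dim=1).view(b, 1, h, w)
        # Attention-weighted combination of per-region action predictions.
        action_logits = (region_logits * attn).sum(dim=(2, 3))  # (B, A)
        return action_logits, attn.view(b, h, w)


def exploration_bonus(counts, attn, scale=1.0):
    """Count-based bonus scale / sqrt(N(cell)), keyed by the argmax attention cell."""
    bonuses = []
    b, h, w = attn.shape
    for i in range(b):
        idx = attn[i].flatten().argmax().item()
        cell = (idx // w, idx % w)            # discretized controllable location
        counts[cell] += 1
        bonuses.append(scale / counts[cell] ** 0.5)
    return torch.tensor(bonuses)


# Usage sketch: the ADM is trained self-supervised to predict a_t from (s_t, s_{t+1});
# the bonus would be added to the environment reward during A2C updates.
counts = defaultdict(int)
adm = AttentiveDynamicsModel(num_actions=18)
obs_t, obs_tp1 = torch.rand(4, 1, 84, 84), torch.rand(4, 1, 84, 84)
actions = torch.randint(0, 18, (4,))
logits, attn = adm(obs_t, obs_tp1)
loss = F.cross_entropy(logits, actions)      # inverse-dynamics (action prediction) loss
bonus = exploration_bonus(counts, attn.detach())

In the paper, the contingent location is combined with the observation representation when counting visits; the sketch keys the count on the attention cell alone purely for brevity.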


Related research

07/24/2019
Efficient Exploration with Self-Imitation Learning via Trajectory-Conditioned Policy
This paper proposes a method for learning a trajectory-conditioned polic...

04/15/2021
Self-Supervised Exploration via Latent Bayesian Surprise
Training with Reinforcement Learning requires a reward function that is ...

07/17/2022
Guaranteed Discovery of Controllable Latent States with Multi-Step Inverse Models
A person walking along a city street who tries to model all aspects of t...

05/29/2018
Playing hard exploration games by watching YouTube
Deep reinforcement learning methods traditionally struggle with tasks wh...

06/01/2021
Did I do that? Blame as a means to identify controlled effects in reinforcement learning
Modeling controllable aspects of the environment enable better prioritiz...

10/20/2021
Dynamic Bottleneck for Robust Self-Supervised Exploration
Exploration methods based on pseudo-count of transitions or curiosity of...

04/06/2020
Weakly-Supervised Reinforcement Learning for Controllable Behavior
Reinforcement learning (RL) is a powerful framework for learning to take...
