Curiosity in Hindsight

11/18/2022
by Daniel Jarrett, et al.

Consider the problem of exploration in sparse-reward or reward-free environments, such as Montezuma's Revenge. The curiosity-driven paradigm dictates an intuitive technique: At each step, the agent is rewarded for how much the realized outcome differs from its predicted outcome. However, using predictive error as intrinsic motivation is prone to failure in stochastic environments, as the agent may become hopelessly drawn to high-entropy areas of the state-action space, such as a noisy TV. It is therefore important to distinguish between aspects of world dynamics that are inherently predictable and aspects that are inherently unpredictable: The former should constitute a source of intrinsic reward, whereas the latter should not. In this work, we study a natural solution derived from structural causal models of the world: Our key idea is to learn representations of the future that capture precisely the unpredictable aspects of each outcome, no more and no less, which we use as additional input for predictions, such that intrinsic rewards vanish in the limit. First, we propose incorporating such hindsight representations into the agent's model to disentangle "noise" from "novelty", yielding Curiosity in Hindsight: a simple and scalable generalization of curiosity that is robust to all types of stochasticity. Second, we implement this framework as a drop-in modification of any prediction-based exploration bonus, and instantiate it for the recently introduced BYOL-Explore algorithm as a prime example, resulting in the noise-robust "BYOL-Hindsight". Third, we illustrate its behavior under various stochasticities in a grid world, and find improvements over BYOL-Explore in hard-exploration Atari games with sticky actions. Importantly, we show state-of-the-art results in exploring Montezuma's Revenge with sticky actions, while preserving performance in the non-sticky setting.
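For intuition, below is a minimal, self-contained sketch (not the authors' implementation) of the core mechanism described above: a standard prediction-error bonus whose forward model is additionally conditioned on a learned hindsight summary of the realized outcome. All names here (HindsightCuriosity, hindsight_encoder, latent_dim, the MLP sizes) are illustrative assumptions, and the sketch omits the objective that constrains the hindsight representation to carry only the unpredictable part of the outcome; without such a constraint the representation could leak everything and the bonus would collapse trivially. BYOL-Hindsight also uses BYOL-style bootstrapped targets rather than the simple detached encoder used here.

import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    # Small two-layer MLP used for every encoder in this sketch.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))


class HindsightCuriosity(nn.Module):
    # Prediction-error bonus in which the forward model also receives a hindsight
    # summary z of the realized transition, so inherently unpredictable detail
    # (e.g. a noisy TV) can be explained by z instead of inflating the bonus.
    def __init__(self, obs_dim, act_dim, latent_dim=32):
        super().__init__()
        self.target_encoder = mlp(obs_dim, latent_dim)                      # phi(s')
        self.hindsight_encoder = mlp(2 * obs_dim + act_dim, latent_dim)     # z = g(s, a, s')
        self.world_model = mlp(obs_dim + act_dim + latent_dim, latent_dim)  # f(s, a, z)

    def forward(self, s, a, s_next):
        target = self.target_encoder(s_next).detach()                  # outcome representation (fixed target)
        z = self.hindsight_encoder(torch.cat([s, a, s_next], dim=-1))  # hindsight summary of the outcome
        pred = self.world_model(torch.cat([s, a, z], dim=-1))          # hindsight-conditioned prediction
        bonus = (pred - target).pow(2).sum(dim=-1)                     # residual error used as intrinsic reward
        return bonus


# Toy usage with made-up dimensions: one batch of 16 transitions.
module = HindsightCuriosity(obs_dim=8, act_dim=2)
s, a, s_next = torch.randn(16, 8), torch.randn(16, 2), torch.randn(16, 8)
intrinsic_reward = module(s, a, s_next)  # shape (16,), one bonus per transition

In this form the module could be dropped into any agent that consumes an intrinsic reward, in the same way a vanilla prediction-error bonus would be: the residual error is simply added to the extrinsic reward at each step.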

Related research:

Intrinsic Motivation via Surprise Memory (08/09/2023). We present a new computing model for intrinsic rewards in reinforcement ...

BYOL-Explore: Exploration by Bootstrapped Prediction (06/16/2022). We present BYOL-Explore, a conceptually simple yet general approach for ...

Cyclophobic Reinforcement Learning (08/30/2023). In environments with sparse rewards, finding a good inductive bias for e...

Developmental Curiosity and Social Interaction in Virtual Agents (05/22/2023). Infants explore their complex physical and social environment in an orga...

Curiosity-driven Exploration by Self-supervised Prediction (05/15/2017). In many real-world scenarios, rewards extrinsic to the agent are extreme...

Self-supervised Sequential Information Bottleneck for Robust Exploration in Deep Reinforcement Learning (09/12/2022). Effective exploration is critical for reinforcement learning agents in e...

Escaping Stochastic Traps with Aleatoric Mapping Agents (02/08/2021). Exploration in environments with sparse rewards is difficult for artific...
