Exploring through Random Curiosity with General Value Functions

11/18/2022
by Aditya Ramesh, et al.

Efficient exploration in reinforcement learning is a challenging problem commonly addressed through intrinsic rewards. Recent prominent approaches are based on state novelty or variants of artificial curiosity. However, directly applying them to partially observable environments can be ineffective and can lead to premature dissipation of intrinsic rewards. Here we propose random curiosity with general value functions (RC-GVF), a novel intrinsic reward function that draws upon connections between these distinct approaches. Instead of using only the current observation's novelty or a curiosity bonus for failing to predict precise environment dynamics, RC-GVF derives intrinsic rewards by predicting temporally extended general value functions. We demonstrate that this improves exploration in a hard-exploration diabolical lock problem. Furthermore, RC-GVF significantly outperforms previous methods in the absence of ground-truth episodic counts in the partially observable MiniGrid environments. Panoramic observations on MiniGrid further boost RC-GVF's performance, making it competitive with baselines that exploit privileged information in the form of episodic counts.
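The core idea can be illustrated with a toy sketch: a fixed random network defines pseudo-reward signals (cumulants), a predictor is trained by temporal-difference learning to estimate their discounted sums (the general value functions), and the prediction error serves as the intrinsic reward. This is a minimal NumPy illustration of that idea, not the paper's actual RC-GVF implementation; all dimensions, learning rates, and the linear predictor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, N_GVF, GAMMA, LR = 8, 4, 0.9, 0.1  # illustrative hyperparameters

# Fixed random network mapping observations to cumulants
# (random pseudo-rewards), in the spirit of random-network-based curiosity.
W_cumulant = rng.normal(size=(OBS_DIM, N_GVF))

def cumulants(obs):
    return np.tanh(obs @ W_cumulant)

# Linear predictor of the GVF returns (discounted cumulant sums).
W_pred = np.zeros((OBS_DIM, N_GVF))

def intrinsic_reward_and_update(obs, next_obs):
    """One semi-gradient TD(0) step on the GVF predictor; the squared
    TD error serves as the intrinsic (curiosity) reward."""
    global W_pred
    pred = obs @ W_pred
    target = cumulants(next_obs) + GAMMA * (next_obs @ W_pred)
    td_error = target - pred
    W_pred += LR * np.outer(obs, td_error)
    return float(np.mean(td_error ** 2))

# For a repeatedly visited observation, the prediction error (and hence
# the intrinsic reward) dissipates as the GVF predictions become accurate.
obs = rng.normal(size=OBS_DIM)
rewards = [intrinsic_reward_and_update(obs, obs) for _ in range(200)]
```

Because the targets are temporally extended returns rather than single-step observations, the bonus reflects how well the agent can predict long-horizon consequences, which is the distinction the abstract draws against plain state-novelty bonuses.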


Related research

04/21/2023 · DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards
Exploration is a fundamental aspect of reinforcement learning (RL), and ...

09/19/2022 · Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning
Exploration is critical for deep reinforcement learning in complex envir...

02/17/2022 · Improving Intrinsic Exploration with Language Abstractions
Reinforcement learning (RL) agents are particularly hard to train when r...

05/15/2023 · MIMEx: Intrinsic Rewards from Masked Input Modeling
Exploring in environments with high-dimensional observations is hard. On...

05/24/2019 · Exploration via Flow-Based Intrinsic Rewards
Exploration bonuses derived from the novelty of observations in an envir...

05/01/2023 · Representations and Exploration for Deep Reinforcement Learning using Singular Value Decomposition
Representation learning and exploration are among the key challenges for...

09/12/2022 · Self-supervised Sequential Information Bottleneck for Robust Exploration in Deep Reinforcement Learning
Effective exploration is critical for reinforcement learning agents in e...
