MIMEx: Intrinsic Rewards from Masked Input Modeling

05/15/2023
by Toru Lin, et al.

Exploration in environments with high-dimensional observations is hard. One promising approach is to use intrinsic rewards, which often boil down to estimating the "novelty" of states, transitions, or trajectories with deep networks. Prior work has shown that conditional prediction objectives such as masked autoencoding can be seen as stochastic estimation of pseudo-likelihood. We show how this perspective naturally leads to a unified view of existing intrinsic reward approaches: they are special cases of conditional prediction, where the estimation of novelty can be seen as pseudo-likelihood estimation under different mask distributions. From this view, we propose a general framework for deriving intrinsic rewards, Masked Input Modeling for Exploration (MIMEx), in which the mask distribution can be flexibly tuned to control the difficulty of the underlying conditional prediction task. We demonstrate that MIMEx achieves superior results compared with competitive baselines on a suite of challenging sparse-reward visuomotor tasks.
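
To make the idea concrete, below is a minimal PyTorch sketch of a masked-prediction exploration bonus in the spirit MIMEx describes. Everything here is an illustrative assumption rather than the authors' reference implementation: the class name `MaskedPredictionBonus`, the transformer size, and the use of per-sequence MSE on masked steps are all hypothetical choices. The key knob is `mask_ratio`, standing in for the tunable mask distribution that controls the difficulty of the conditional prediction task.

```python
# Hypothetical sketch of a MIMEx-style bonus (names and shapes are
# assumptions, not the paper's reference implementation): mask a random
# subset of steps in a short trajectory of observation embeddings,
# predict them back, and use the reconstruction error as intrinsic reward.
import torch
import torch.nn as nn


class MaskedPredictionBonus(nn.Module):
    def __init__(self, embed_dim=64, seq_len=8, mask_ratio=0.5,
                 n_heads=4, n_layers=2):
        super().__init__()
        # mask_ratio controls how hard the conditional prediction task is
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(embed_dim, embed_dim)

    def forward(self, x):
        # x: (batch, seq_len, embed_dim) trajectory of embeddings,
        # assumed to come from the agent's observation encoder
        B, T, D = x.shape
        mask = torch.rand(B, T, device=x.device) < self.mask_ratio
        tokens = torch.where(
            mask.unsqueeze(-1), self.mask_token.expand(B, T, D), x
        )
        pred = self.head(self.encoder(tokens + self.pos_embed))
        # reconstruction error on masked positions only
        err = ((pred - x) ** 2).mean(dim=-1)              # (B, T)
        denom = mask.float().sum(dim=1).clamp(min=1)
        return (err * mask.float()).sum(dim=1) / denom    # (B,)


# usage: higher error on novel trajectories -> larger exploration bonus
model = MaskedPredictionBonus()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
traj = torch.randn(32, 8, 64)   # stand-in batch of embedded trajectories
bonus = model(traj)             # detach before adding to extrinsic reward
opt.zero_grad()
bonus.mean().backward()         # train the predictor online
opt.step()
```

In an RL loop, the detached per-sequence error would be added to the extrinsic reward while the predictor trains online on the agent's own trajectories; raising `mask_ratio` hides more of the input and thus makes the underlying prediction task harder.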

Related research

- Novelty Search in representational space for sample efficient exploration (09/28/2020)
  We present a new approach for efficient exploration which leverages a lo...

- DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards (04/21/2023)
  Exploration is a fundamental aspect of reinforcement learning (RL), and ...

- Exploring through Random Curiosity with General Value Functions (11/18/2022)
  Efficient exploration in reinforcement learning is a challenging problem...

- Intrinsic Motivation via Surprise Memory (08/09/2023)
  We present a new computing model for intrinsic rewards in reinforcement ...

- Improving Intrinsic Exploration with Language Abstractions (02/17/2022)
  Reinforcement learning (RL) agents are particularly hard to train when r...

- Exploration via Flow-Based Intrinsic Rewards (05/24/2019)
  Exploration bonuses derived from the novelty of observations in an envir...

- Never Forget: Balancing Exploration and Exploitation via Learning Optical Flow (01/24/2019)
  Exploration bonus derived from the novelty of the states in an environme...