Don't Do What Doesn't Matter: Intrinsic Motivation with Action Usefulness

05/20/2021
by Mathieu Seurin, et al.

Sparse rewards are double-edged training signals in reinforcement learning: easy to design but hard to optimize. Intrinsic motivation methods have thus been developed to alleviate the resulting exploration problem. They usually incentivize agents to look for new states through novelty signals. Yet, such methods encourage exhaustive exploration of the state space rather than focusing on the environment's salient interaction opportunities. We propose a new exploration method, called Don't Do What Doesn't Matter (DoWhaM), shifting the emphasis from state novelty to states with relevant actions. While most actions consistently change the state when used, e.g., moving the agent, some actions are only effective in specific states, e.g., opening a door or grabbing an object. DoWhaM detects and rewards actions that seldom affect the environment. We evaluate DoWhaM on the procedurally-generated MiniGrid environments against state-of-the-art methods and show that it greatly reduces sample complexity.
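To make the idea concrete, below is a minimal Python sketch of how an action-usefulness bonus of this kind could be computed. It is an illustration of the principle described in the abstract, not the paper's exact formulation: the class name ActionUsefulnessBonus, the counters used/effective, the coefficient eta, and the state_changed argument are hypothetical names introduced here, and the full text may include additional terms or normalization.

from collections import defaultdict


class ActionUsefulnessBonus:
    """Illustrative DoWhaM-style intrinsic bonus (hypothetical sketch).

    For every action we track how often it was taken and how often it actually
    changed the environment state. Actions that rarely change the state receive
    a large bonus on the steps where they do; actions that almost always change
    the state (e.g. moving) receive close to nothing.
    """

    def __init__(self, eta: float = 40.0):
        self.eta = eta                     # sharpness of the rarity weighting (assumed value)
        self.used = defaultdict(int)       # U(a): number of times action a was taken
        self.effective = defaultdict(int)  # E(a): number of times action a changed the state

    def bonus(self, action, state_changed: bool) -> float:
        """Return the intrinsic bonus for one transition.

        `state_changed` is True when the observation differs before and after
        the action; how states are compared is left to the caller.
        """
        self.used[action] += 1
        if not state_changed:
            return 0.0                     # "don't do what doesn't matter": no change, no reward
        self.effective[action] += 1
        ratio = self.effective[action] / self.used[action]
        # Rarely effective actions (ratio near 0) get a bonus close to 1;
        # always effective actions (ratio near 1) get a bonus close to 0.
        return (self.eta ** (1.0 - ratio) - 1.0) / (self.eta - 1.0)

In use, an agent would optimize the environment reward plus a scaled version of this bonus, e.g. r_total = r_env + beta * bonus for some coefficient beta; the exact weighting and any per-episode normalization used in the paper are described in the full text.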
