Escaping Stochastic Traps with Aleatoric Mapping Agents

Exploration in environments with sparse rewards is difficult for artificial agents. Curiosity-driven learning – using feed-forward prediction errors as intrinsic rewards – has achieved some success in these scenarios, but fails when faced with action-dependent noise sources. We present aleatoric mapping agents (AMAs), a neuroscience-inspired solution modeled on the cholinergic system of the mammalian brain. AMAs aim to explicitly ascertain which dynamics of the environment are unpredictable, regardless of whether those dynamics are induced by the actions of the agent. This is achieved by generating separate forward predictions for the mean and variance of future states and reducing intrinsic rewards for those transitions with high aleatoric variance. We show AMAs are able to effectively circumvent action-dependent stochastic traps that immobilise conventional curiosity-driven agents. The code for all experiments presented in this paper is open sourced: http://github.com/self-supervisor/Escaping-Stochastic-Traps-With-Aleatoric-Mapping-Agents.
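As a rough illustration of the idea described in the abstract, the sketch below shows one way a variance-aware forward model and its intrinsic reward could be set up in PyTorch. The network sizes, the Gaussian negative log-likelihood objective, and the variance-subtracted reward are illustrative assumptions, not the paper's exact implementation; consult the linked repository for the authors' code.

```python
# Minimal sketch of an aleatoric forward model for curiosity, assuming PyTorch.
# Architecture, objective, and reward shaping here are illustrative choices.
import torch
import torch.nn as nn


class AleatoricForwardModel(nn.Module):
    """Predicts the mean and log-variance of the next state."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, state_dim)
        self.logvar_head = nn.Linear(hidden, state_dim)

    def forward(self, state, action):
        h = self.body(torch.cat([state, action], dim=-1))
        return self.mean_head(h), self.logvar_head(h)


def heteroscedastic_nll(mean, logvar, next_state):
    # Gaussian negative log-likelihood; the learned log-variance lets the model
    # "explain away" irreducibly noisy (aleatoric) transitions.
    return 0.5 * ((next_state - mean) ** 2 * torch.exp(-logvar) + logvar).sum(-1).mean()


def intrinsic_reward(mean, logvar, next_state):
    # Curiosity signal: raw prediction error minus the predicted aleatoric
    # variance, clipped at zero so stochastic-trap transitions earn ~no bonus.
    error = ((next_state - mean) ** 2).sum(-1)
    variance = torch.exp(logvar).sum(-1)
    return torch.clamp(error - variance, min=0.0)


if __name__ == "__main__":
    model = AleatoricForwardModel(state_dim=4, action_dim=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy batch of transitions (state, action, next state).
    s, a, s_next = torch.randn(32, 4), torch.randn(32, 2), torch.randn(32, 4)
    mean, logvar = model(s, a)
    loss = heteroscedastic_nll(mean, logvar, s_next)
    opt.zero_grad(); loss.backward(); opt.step()
    print(intrinsic_reward(mean.detach(), logvar.detach(), s_next)[:5])
```

The key design choice is that transitions whose unpredictability is well captured by the learned variance contribute little intrinsic reward, while genuinely novel (epistemically uncertain) transitions still produce a large error relative to the predicted variance.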
