Efficient Exploration through Intrinsic Motivation Learning for Unsupervised Subgoal Discovery in Model-Free Hierarchical Reinforcement Learning

11/18/2019
by Jacob Rafati, et al.

Efficient exploration for automatic subgoal discovery is a challenging problem in Hierarchical Reinforcement Learning (HRL). In this paper, we show that intrinsic motivation learning increases the efficiency of exploration, leading to successful subgoal discovery. We introduce a model-free subgoal discovery method based on unsupervised learning over a limited memory of the agent's experiences collected during intrinsic motivation learning. Additionally, we offer a unified approach to learning representations in model-free HRL.
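The abstract outlines the core loop: collect experiences while exploring under intrinsic motivation, keep a bounded memory of them, and apply unsupervised learning over that memory to propose subgoals. The sketch below is a minimal Python illustration of that idea, not the authors' implementation; the specific choice of K-means centroids plus high-reward anomalies as subgoal candidates, and all names and parameters (SubgoalDiscovery, memory_size, reward_threshold), are assumptions made for the example.

    import numpy as np
    from sklearn.cluster import KMeans

    class SubgoalDiscovery:
        def __init__(self, n_clusters=4, memory_size=10000):
            self.n_clusters = n_clusters
            self.memory_size = memory_size
            self.memory = []   # bounded buffer of (state, reward) pairs
            self.subgoals = []

        def store(self, state, reward):
            # Keep a limited memory of experiences gathered while the
            # agent explores under intrinsic motivation.
            if len(self.memory) >= self.memory_size:
                self.memory.pop(0)
            self.memory.append((np.asarray(state, dtype=float), reward))

        def discover(self, reward_threshold=1.0):
            # Propose subgoal candidates by unsupervised learning over
            # the memory: K-means centroids of visited states, plus any
            # states whose observed reward is anomalously large.
            states = np.stack([s for s, _ in self.memory])
            km = KMeans(n_clusters=self.n_clusters, n_init=10).fit(states)
            anomalies = [s for s, r in self.memory if r >= reward_threshold]
            self.subgoals = list(km.cluster_centers_) + anomalies
            return self.subgoals

In an HRL setting, the discovered candidates would then serve as goals for a lower-level controller, which is rewarded intrinsically for reaching them while the higher-level policy selects among them.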

Related research

10/23/2018 · Learning Representations in Model-Free Hierarchical Reinforcement Learning
Common approaches to Reinforcement Learning (RL) are seriously challenge...

07/14/2020 · Efficient Online Estimation of Empowerment for Reinforcement Learning
Training artificial agents to acquire desired skills through model-free ...

05/22/2019 · The Journey is the Reward: Unsupervised Learning of Influential Trajectories
Unsupervised exploration and representation learning become increasingly...

05/22/2019 · COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration
Data efficiency and robustness to task-irrelevant perturbations are long...

12/12/2018 · Multiple Model-Free Knockoffs
Model-free knockoffs is a recently proposed technique for identifying co...

02/26/2020 · Optimistic Exploration even with a Pessimistic Initialisation
Optimistic initialisation is an effective strategy for efficient explora...

09/23/2021 · Accessibility-Based Clustering for Efficient Learning of Robot Fall Recovery
For the model-free deep reinforcement learning of quadruped fall recover...
