Curiosity Driven Exploration of Learned Disentangled Goal Spaces

07/04/2018
by Adrien Laversanne-Finot, et al.

Intrinsically motivated goal exploration processes enable agents to autonomously sample goals in order to efficiently explore complex environments with high-dimensional continuous actions. They have been applied successfully to real-world robots to discover repertoires of policies producing a wide diversity of effects. These algorithms often relied on engineered goal spaces, but it was recently shown that deep representation learning algorithms can learn an adequate goal space in simple environments. However, in more complex environments containing multiple objects or distractors, efficient exploration requires that the structure of the goal space reflect that of the environment. In this paper we show that using a disentangled goal space leads to better exploration performance than an entangled goal space. We further show that, when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can simultaneously be used to discover abstract independently controllable features of the environment.

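As a rough illustration of the modular, learning-progress-driven goal sampling described in the abstract, the Python sketch below treats each latent dimension of a learned disentangled representation as an independent goal module and samples goals more often from modules whose competence is improving fastest. This is a minimal sketch under assumed interfaces, not the authors' implementation; all names (GoalSpaceModule, choose_module, the placeholder rollout) are illustrative.

```python
# Minimal sketch (assumption, not the paper's code) of modular goal sampling
# driven by learning progress over a disentangled goal space.
import random
from collections import deque

class GoalSpaceModule:
    """Tracks goal-reaching errors and learning progress for one latent dimension."""
    def __init__(self, low=-1.0, high=1.0, window=20):
        self.low, self.high = low, high
        self.window = window
        self.errors = deque(maxlen=2 * window)  # recent goal-reaching errors

    def sample_goal(self):
        # Uniform goal sampling within this module's latent range.
        return random.uniform(self.low, self.high)

    def update(self, error):
        self.errors.append(error)

    def learning_progress(self):
        # Difference between mean error on the older and newer halves of the
        # window: a crude proxy for how fast competence is changing.
        if len(self.errors) < 2 * self.window:
            return 1.0  # optimistic value to encourage visiting new modules
        old = list(self.errors)[: self.window]
        new = list(self.errors)[self.window:]
        return abs(sum(old) / len(old) - sum(new) / len(new))


def choose_module(modules, eps=0.2):
    """Epsilon-greedy choice of a module, proportional to learning progress."""
    if random.random() < eps:
        return random.randrange(len(modules))
    lp = [m.learning_progress() for m in modules]
    total = sum(lp) or 1.0
    r, acc = random.uniform(0, total), 0.0
    for i, v in enumerate(lp):
        acc += v
        if r <= acc:
            return i
    return len(lp) - 1


# Usage: one module per disentangled latent dimension (e.g. from a beta-VAE).
modules = [GoalSpaceModule() for _ in range(4)]
for step in range(1000):
    i = choose_module(modules)
    goal = modules[i].sample_goal()
    # error = run_policy_towards(goal)  # environment rollout, omitted here
    error = abs(goal) * random.random()  # placeholder outcome for the sketch
    modules[i].update(error)
```

With a disentangled representation, each module corresponds to one controllable factor of the environment, which is why modular learning-progress tracking of this kind becomes meaningful; with an entangled space, the per-dimension progress signals would mix unrelated effects.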
Related research

03/02/2018 | Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration
Intrinsically motivated goal exploration algorithms enable machines to d...

08/19/2019 | Intrinsically Motivated Exploration for Automated Discovery of Patterns in Morphogenetic Systems
Exploration is a cornerstone both for machine learning algorithms and fo...

06/10/2019 | Autonomous Goal Exploration using Learned Goal Spaces for Visuomotor Skill Acquisition in Robots
The automatic and efficient discovery of skills, without supervision, fo...

05/13/2020 | Progressive growing of self-organized hierarchical representations for exploration
Designing agent that can autonomously discover and learn a diversity of ...

07/08/2022 | Automatic Exploration of Textual Environments with Language-Conditioned Autotelic Agents
In this extended abstract we discuss the opportunities and challenges of...

08/10/2020 | GRIMGEP: Learning Progress for Robust Goal Sampling in Visual Deep Reinforcement Learning
Autonomous agents using novelty based goal exploration are often efficie...

10/15/2018 | CURIOUS: Intrinsically Motivated Multi-Task, Multi-Goal Reinforcement Learning
In open-ended and changing environments, agents face a wide range of pot...