
Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills

by Victor Campos, et al.

Acquiring abilities in the absence of a task-oriented reward function is at the frontier of reinforcement learning research. This problem has been studied through the lens of empowerment, which draws a connection between option discovery and information theory. Information-theoretic skill discovery methods have garnered much interest from the community, but little research has been conducted into understanding their limitations. Through theoretical analysis and empirical evidence, we show that existing algorithms suffer from a common limitation: they discover options that provide poor coverage of the state space. In light of this, we propose 'Explore, Discover and Learn' (EDL), an alternative approach to information-theoretic skill discovery. Crucially, EDL optimizes the same information-theoretic objective derived from the empowerment literature, but addresses the optimization problem using different machinery. We perform an extensive evaluation of skill discovery methods in controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.
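The objective the abstract refers to is the mutual information I(S; Z) between visited states S and a latent skill Z. Methods in this family typically maximize a variational lower bound by training a discriminator q(z|s) and rewarding the policy with log q(z|s) − log p(z). The sketch below is illustrative only (it is not the paper's code, and the discriminator output is a hypothetical fixed example); it shows why a skill whose states the discriminator identifies confidently earns positive intrinsic reward, while an ambiguous skill does not.

```python
import numpy as np

# Toy sketch of the variational empowerment-style intrinsic reward:
#   r(s, z) = log q(z|s) - log p(z)
# where p(z) is a (here uniform) prior over skills and q(z|s) is a
# learned discriminator; both values below are illustrative stand-ins.

n_skills = 4
p_z = np.full(n_skills, 1.0 / n_skills)  # uniform prior over 4 skills

# Hypothetical discriminator output for one visited state: a categorical
# belief over which skill produced that state.
q_z_given_s = np.array([0.7, 0.1, 0.1, 0.1])

def intrinsic_reward(z, q, prior):
    """Variational intrinsic reward log q(z|s) - log p(z) for skill index z."""
    return np.log(q[z]) - np.log(prior[z])

# Skill 0 is confidently identified from this state, so it is rewarded;
# skill 1 is indistinguishable here, so its reward is negative.
r_confident = intrinsic_reward(0, q_z_given_s, p_z)  # log(0.7/0.25) > 0
r_ambiguous = intrinsic_reward(1, q_z_given_s, p_z)  # log(0.1/0.25) < 0
print(r_confident, r_ambiguous)
```

Maximizing this reward pushes skills toward visiting distinguishable states; the paper's argument is that optimizing it with standard machinery nonetheless yields options with poor state coverage, which is what EDL addresses.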




Unsupervised Skill Discovery with Bottleneck Option Learning

Having the ability to acquire inherent skills from environments without ...

Diversity is All You Need: Learning Skills without a Reward Function

Intelligent creatures can explore their environments and learn useful sk...

Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching

Learning meaningful behaviors in the absence of reward is a difficult pr...

Unsupervised Skill-Discovery and Skill-Learning in Minecraft

Pre-training Reinforcement Learning agents in a task-agnostic manner has...

Bayesian Nonparametrics for Offline Skill Discovery

Skills or low-level policies in reinforcement learning are temporally ex...

ODPP: A Unified Algorithm Framework for Unsupervised Option Discovery based on Determinantal Point Process

Learning rich skills through temporal abstractions without supervision o...

Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL

In reinforcement learning, the graph Laplacian has proved to be a valuab...

Code Repositories


Code for "Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills"
