Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL

03/21/2022
by Akram Erraqabi, et al.

In reinforcement learning, the graph Laplacian has proved to be a valuable tool in the task-agnostic setting, with applications ranging from skill discovery to reward shaping. Recently, learning the Laplacian representation has been framed as the optimization of a temporally-contrastive objective, overcoming its computational limitations in large (or continuous) state spaces. However, this approach requires uniform access to all states in the state space, overlooking the exploration problem that emerges during the representation learning process. In this work, we propose an alternative method that recovers, in a non-uniform-prior setting, the expressiveness and the desired properties of the Laplacian representation. We do so by combining representation learning with a skill-based covering policy, which provides a better training distribution for extending and refining the representation. We also show that a simple augmentation of the representation objective with the learned temporal abstractions improves dynamics-awareness and helps exploration. We find that our method succeeds as an alternative to the Laplacian in the non-uniform setting and scales to challenging continuous control environments. Finally, even though our method is not optimized for skill discovery, the learned skills can successfully solve difficult continuous navigation tasks with sparse rewards, where standard skill discovery approaches are not as effective.
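To make the abstract's "temporally-contrastive objective" concrete, the sketch below shows one common form of such a loss for a Laplacian-style representation: an attractive term pulling embeddings of temporally adjacent states together, plus a repulsive term pushing randomly sampled state pairs toward orthonormality. This is a minimal NumPy illustration of the general idea, not the paper's exact objective; the function name, the `beta` weight, and the specific repulsive form are assumptions for the example.

```python
import numpy as np

def laplacian_repr_loss(phi_s, phi_s_next, phi_rand_a, phi_rand_b, beta=1.0):
    """Temporally-contrastive loss sketch for a Laplacian-style representation.

    phi_s, phi_s_next : embeddings of temporally consecutive states (B, d)
    phi_rand_a, phi_rand_b : embeddings of independently sampled states (B, d)
    beta : weight of the orthonormality (repulsive) term.
    """
    # Attractive term: consecutive states should embed close together,
    # which encodes the temporal (dynamics) structure of the environment.
    attract = np.mean(np.sum((phi_s - phi_s_next) ** 2, axis=1))

    # Repulsive term: random pairs should be near-orthogonal with
    # controlled norms, approximating the orthonormality constraint on
    # Laplacian eigenvectors.
    dots = np.sum(phi_rand_a * phi_rand_b, axis=1)
    norms_a = np.sum(phi_rand_a ** 2, axis=1)
    norms_b = np.sum(phi_rand_b ** 2, axis=1)
    repulse = np.mean(dots ** 2 - norms_a - norms_b)

    return attract + beta * repulse
```

In practice the embeddings would come from a trained network and the loss would be minimized over sampled transitions; the covering policy described above changes *which* transitions are sampled, replacing the uniform-access assumption.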

Related research

- 07/12/2021: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing. The Laplacian representation recently gains increasing attention for rei...
- 06/06/2021: DisTop: Discovering a Topological representation to learn diverse and rewarding skills. The optimal way for a deep reinforcement learning (DRL) agent to explore...
- 07/18/2021: Unsupervised Skill-Discovery and Skill-Learning in Minecraft. Pre-training Reinforcement Learning agents in a task-agnostic manner has...
- 07/21/2023: Scalable Multi-agent Covering Option Discovery based on Kronecker Graphs. Covering skill (a.k.a., option) discovery has been developed to improve...
- 10/24/2022: Reachability-Aware Laplacian Representation in Reinforcement Learning. In Reinforcement Learning (RL), Laplacian Representation (LapRep) is a t...
- 02/10/2020: Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills. Acquiring abilities in the absence of a task-oriented reward function is...
- 01/13/2023: Time-Myopic Go-Explore: Learning A State Representation for the Go-Explore Paradigm. Very large state spaces with a sparse reward signal are difficult to exp...
