Locally Constrained Representations in Reinforcement Learning

09/20/2022
by Somjit Nath, et al.

The success of Reinforcement Learning (RL) relies heavily on the ability to learn robust representations from observations of the environment. Representations learned purely through the RL loss can differ vastly across states, depending on how the value functions change, yet they need not be so tightly tied to the task at hand. Relying only on the RL objective may yield representations that vary greatly across successive time steps, and since the RL loss has a changing target, the learned representations also depend on how good the current values/policies are. Disentangling the representations from the main task therefore allows them to focus more on capturing transition dynamics, which can improve generalization. To this end, we propose locally constrained representations, where an auxiliary loss forces the state representations to be predictable from the representations of the neighbouring states. The representations are thus driven not only by value/policy learning but also by self-supervised learning, which constrains them from changing too rapidly. We evaluate the proposed method on several known benchmarks and observe strong performance, with a significant advantage over a strong baseline especially in continuous control tasks.
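The auxiliary objective described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the linear encoder, the neighbour-predictor matrix `W_pred`, and the names `encode` and `local_consistency_loss` are all assumptions made here for the sake of the example.

```python
import numpy as np

def encode(obs, W_enc):
    """Toy linear encoder mapping an observation to a state representation."""
    return np.tanh(obs @ W_enc)

def local_consistency_loss(z_prev, z_next, z, W_pred):
    """Auxiliary loss: the representation z of state s_t should be
    predictable from the representations of its neighbouring states.
    Here a single linear map W_pred (an assumption of this sketch)
    predicts z from the concatenated neighbour representations."""
    neighbours = np.concatenate([z_prev, z_next])
    z_hat = neighbours @ W_pred
    return float(np.mean((z_hat - z) ** 2))

# In training, this term would be added to the usual RL objective,
# e.g. total_loss = rl_loss + beta * local_consistency_loss(...),
# where beta is a weighting hyperparameter (also an assumption here).

rng = np.random.default_rng(0)
obs_dim, rep_dim = 8, 4
W_enc = rng.normal(size=(obs_dim, rep_dim))
W_pred = rng.normal(size=(2 * rep_dim, rep_dim))

s_prev, s, s_next = (rng.normal(size=obs_dim) for _ in range(3))
z_prev, z, z_next = encode(s_prev, W_enc), encode(s, W_enc), encode(s_next, W_enc)
aux = local_consistency_loss(z_prev, z_next, z, W_pred)
print(aux >= 0.0)  # the squared-error penalty is always non-negative
```

Because the penalty is a mean squared error between a prediction from neighbours and the current representation, it discourages representations that change abruptly between successive states, which is the "locally constrained" behaviour the abstract describes.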


Related research:
- 07/12/2022, "Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning". In real-world robotics applications, Reinforcement Learning (RL) agents ...
- 06/22/2023, "TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning". Despite recent progress in reinforcement learning (RL) from raw pixel da...
- 12/28/2022, "Representation Learning in Deep RL via Discrete Information Bottleneck". Several self-supervised representation learning methods have been propos...
- 04/17/2021, "A Self-Supervised Auxiliary Loss for Deep RL in Partially Observable Settings". In this work we explore an auxiliary loss useful for reinforcement learn...
- 02/15/2022, "L2C2: Locally Lipschitz Continuous Constraint towards Stable and Smooth Reinforcement Learning". This paper proposes a new regularization technique for reinforcement lea...
- 11/04/2019, "An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms". In this paper, we propose a Deep Reinforcement Learning (RL) framework f...
- 10/12/2022, "Reinforcement Learning with Automated Auxiliary Loss Search". A good state representation is crucial to solving complicated reinforcem...
