Temporal Alignment for History Representation in Reinforcement Learning

04/07/2022
by Aleksandr Ermolov, et al.

Environments in Reinforcement Learning are usually only partially observable. One way to address this problem is to provide the agent with information about the past; however, supplying complete observations for many past steps can be excessive. Inspired by human memory, we propose to represent history with only the important changes in the environment and, in our approach, to obtain this representation automatically through self-supervision. Our method (TempAl) aligns temporally close frames, revealing a general, slowly varying state of the environment. The procedure is based on a contrastive loss that pulls embeddings of nearby observations toward each other while pushing away the other samples in the batch. It can be interpreted as a metric that captures the temporal relations of observations. We propose to combine the common instantaneous representation with our history representation, and we evaluate TempAl on all available Atari games from the Arcade Learning Environment. TempAl surpasses the instantaneous-only baseline in 35 out of 49 environments. The source code of the method and of all the experiments is available at https://github.com/htdt/tempal.
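
To illustrate the kind of objective the abstract describes, here is a minimal, hedged sketch of an InfoNCE-style temporal contrastive loss in PyTorch. The function name, the temperature value, and the exact way positive pairs are formed are assumptions for illustration only; the released TempAl code in the repository above may differ.

```python
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(z_t: torch.Tensor,
                              z_near: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Illustrative sketch: z_t and z_near are (batch, dim) embeddings of
    frames sampled close in time; names and defaults are hypothetical."""
    # Normalize so the dot product acts as a cosine similarity.
    z_t = F.normalize(z_t, dim=1)
    z_near = F.normalize(z_near, dim=1)
    # Pairwise similarities between the two views of the batch.
    logits = z_t @ z_near.t() / temperature
    # For each frame, its temporally close counterpart (same row index) is
    # the positive; every other sample in the batch is a negative.
    labels = torch.arange(z_t.size(0), device=z_t.device)
    return F.cross_entropy(logits, labels)
```

In this sketch, z_t and z_near would be encoder outputs for frames taken a few steps apart in the same episode; minimizing the loss pulls such pairs together while pushing apart the other embeddings in the batch, matching the alignment behavior described in the abstract.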

Related research

History Compression via Language Models in Reinforcement Learning (05/24/2022)
In a partially observable Markov decision process (POMDP), an agent typi...

Latent World Models For Intrinsically Motivated Exploration (10/05/2020)
In this work we consider partially observable environments with sparse r...

SuperSuit: Simple Microwrappers for Reinforcement Learning Environments (08/17/2020)
In reinforcement learning, wrappers are universally used to transform th...

MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning (06/03/2023)
Recent approaches have utilized self-supervised auxiliary tasks as repre...

Recurrent Off-policy Baselines for Memory-based Continuous Control (10/25/2021)
When the environment is partially observable (PO), a deep reinforcement ...

POPGym: Benchmarking Partially Observable Reinforcement Learning (03/03/2023)
Real world applications of Reinforcement Learning (RL) are often partial...

Scale-invariant temporal history (SITH): optimal slicing of the past in an uncertain world (12/19/2017)
In both the human brain and any general artificial intelligence (AI), a ...
