Masked Contrastive Representation Learning for Reinforcement Learning

10/15/2020
by Jinhua Zhu et al.

Improving sample efficiency is a key research problem in reinforcement learning (RL). CURL, which uses contrastive learning to extract high-level features from the raw pixels of individual video frames, is one sample-efficient algorithm <cit.>. We observe that consecutive video frames in a game are highly correlated, yet CURL processes them independently. To further improve data efficiency, we propose a new algorithm, masked contrastive representation learning for RL, that exploits the correlation among consecutive inputs. In addition to the CNN encoder and the policy network of CURL, our method introduces an auxiliary Transformer module to leverage the correlations among video frames. During training, we randomly mask the features of several frames and use the CNN encoder and Transformer to reconstruct them from the context frames. The CNN encoder and Transformer are jointly trained via contrastive learning, where a reconstructed feature should be similar to its ground-truth counterpart and dissimilar to the others. During inference, the CNN encoder and the policy network take actions, and the Transformer module is discarded. Our method achieves consistent improvements over CURL on 14 of 16 environments from the DMControl suite and 21 of 26 Atari 2600 games. The code is available at https://github.com/teslacool/m-curl.
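As a rough illustration of the training objective described above (not the authors' code), the sketch below simulates the masked-reconstruction contrastive loss in numpy: frame features stand in for CNN-encoder outputs, a noisy copy of the ground truth stands in for the Transformer's reconstruction, and an InfoNCE-style loss pulls each reconstruction toward its own target while pushing it away from the others. The shapes, masking ratio, and temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 8, 16                         # frames per clip, feature dim (illustrative)
feats = rng.normal(size=(T, d))      # stand-in for CNN-encoder outputs per frame

# Randomly mask the features of several frames, as the paper describes.
mask = rng.random(T) < 0.25
mask[0] = True                       # ensure at least one frame is masked

# Stand-in for the auxiliary Transformer: here we simulate its output with a
# noisy copy of the ground truth; the real module would reconstruct masked
# features from the unmasked context frames.
recon = feats[mask] + 0.1 * rng.normal(size=(int(mask.sum()), d))

def info_nce(pred, target, temperature=0.1):
    """Contrastive (InfoNCE) loss: each reconstructed feature should be
    similar to its own ground-truth feature and dissimilar to the others'."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = pred @ target.T / temperature        # pairwise similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

loss = info_nce(recon, feats[mask])
```

In the actual method this loss would be backpropagated jointly through the Transformer and the CNN encoder; at inference time only the encoder and policy network remain.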

Related research:

- Sample-efficient Reinforcement Learning Representation Learning with Curiosity Contrastive Forward Dynamics Model (03/15/2021)
- CoBERL: Contrastive BERT for Reinforcement Learning (07/12/2021)
- VPTR: Efficient Transformers for Video Prediction (03/29/2022)
- Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning (07/29/2022)
- CURL: Contrastive Unsupervised Representations for Reinforcement Learning (04/08/2020)
- CRC-RL: A Novel Visual Feature Representation Architecture for Unsupervised Reinforcement Learning (01/31/2023)
- Evaluating Vision Transformer Methods for Deep Reinforcement Learning from Pixels (04/11/2022)
