CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer

06/17/2022
by Yao Mu, et al.

Transformers have achieved great success in learning vision and language representations that generalize across various downstream tasks. In visual control, learning a transferable state representation that carries over between different control tasks is important for reducing the number of training samples. However, porting Transformers to sample-efficient visual control remains a challenging, unsolved problem. To this end, we propose a novel Control Transformer (CtrlFormer) with appealing benefits that prior work lacks. First, CtrlFormer jointly learns self-attention between visual tokens and policy tokens across different control tasks, so that a multitask representation can be learned and transferred without catastrophic forgetting. Second, we carefully design a contrastive reinforcement learning paradigm to train CtrlFormer, enabling the high sample efficiency that control problems demand. For example, on the DMControl benchmark, whereas recent advanced methods fail (scoring zero) on the "Cartpole" task after transfer learning with 100k samples, CtrlFormer achieves a state-of-the-art score with only 100k samples while maintaining its performance on previous tasks. The code and models are released on our project homepage.
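The token layout the abstract describes is easiest to see in code. Below is a minimal PyTorch sketch, assuming a standard ViT-style patch embedding: image patches become visual tokens, each control task owns a learnable policy token, and a shared Transformer encoder lets all tokens attend to one another. Every name and hyperparameter here (CtrlFormerSketch, patch_size=12, depth=4, and so on) is an illustrative assumption, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CtrlFormerSketch(nn.Module):
    """Hypothetical sketch: per-task policy tokens + shared self-attention."""
    def __init__(self, image_size=84, patch_size=12, dim=128,
                 depth=4, heads=8, num_tasks=2, action_dim=6):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding turns the pixel observation into visual tokens.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        # One learnable policy token per control task; the attention weights
        # are shared, so a new task can reuse the representation without
        # overwriting another task's token.
        self.policy_tokens = nn.Parameter(torch.randn(1, num_tasks, dim) * 0.02)
        self.pos_embed = nn.Parameter(
            torch.randn(1, num_patches + num_tasks, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # A lightweight readout head per task maps its policy token to actions.
        self.heads = nn.ModuleList(
            nn.Linear(dim, action_dim) for _ in range(num_tasks))

    def forward(self, obs, task_id):
        # obs: (B, 3, image_size, image_size) pixel observation.
        b = obs.size(0)
        visual = self.patch_embed(obs).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = torch.cat([self.policy_tokens.expand(b, -1, -1), visual], dim=1)
        tokens = self.encoder(tokens + self.pos_embed)
        # Read the state representation out of this task's own policy token.
        return self.heads[task_id](tokens[:, task_id])
```

Because each task reads out only its own token while the attention weights are shared, fine-tuning on a new task can refine the shared representation without erasing another task's readout, which is the mechanism the abstract credits for transfer without catastrophic forgetting.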
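The abstract leaves the contrastive objective unspecified, so the following is only a plausible sketch of an InfoNCE-style loss over two augmented views of the same observations, in the spirit of contrastive RL methods such as CURL; the function name and temperature are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(query, key, temperature=0.1):
    """Assumed InfoNCE loss: query/key are (B, dim) embeddings of two
    augmented views of the same batch of frames."""
    query = F.normalize(query, dim=-1)
    key = F.normalize(key, dim=-1)
    logits = query @ key.t() / temperature  # (B, B) similarity matrix
    # The diagonal entries are the positive pairs; all others are negatives.
    labels = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, labels)
```

In training, such a term would typically be added alongside the RL loss so the encoder learns view-invariant state features from limited interaction, which is how contrastive objectives generally buy sample efficiency.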

Related research

03/03/2022 · Multi-Tailed Vision Transformer for Efficient Inference
Recently, Vision Transformer (ViT) has achieved promising performance in...

10/02/2022 · EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model
Unsupervised reinforcement learning (URL) poses a promising paradigm to...

06/07/2021 · ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias
Transformers have shown great potential in various computer vision tasks...

11/19/2022 · Rethinking Batch Sample Relationships for Data Representation: A Batch-Graph Transformer based Approach
Exploring sample relationships within each mini-batch has shown great po...

07/16/2021 · Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
Large-scale vision and language representation learning has shown promis...

05/20/2022 · Visual Concepts Tokenization
Obtaining the human-like perception ability of abstracting visual concep...

12/26/2022 · Toward Efficient Automated Feature Engineering
Automated Feature Engineering (AFE) refers to automatically generating and...
