Transformer in Transformer as Backbone for Deep Reinforcement Learning

12/30/2022
by Hangyu Mao et al.

Designing better deep networks and designing better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network from modules such as CNNs, LSTMs, and attention. Recent methods combine the Transformer with these modules for better performance. However, training a network composed of mixed modules requires tedious optimization tricks, which makes these methods inconvenient to use in practice. In this paper, we propose pure Transformer-based networks for deep RL, aiming to provide off-the-shelf backbones for both the online and offline settings. Specifically, we propose the Transformer in Transformer (TIT) backbone, which cascades two Transformers in a natural way: the inner one processes a single observation, while the outer one processes the observation history; combining the two is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT consistently achieves satisfactory performance across different settings.
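
To make the cascaded design concrete, here is a minimal PyTorch sketch of a TIT-style backbone. It is an illustrative reconstruction from the description above, not the paper's implementation: the module sizes, the per-observation tokenization (`obs_tokens`, `token_dim`), and the class-token pooling are all assumptions.

```python
import torch
import torch.nn as nn

class TIT(nn.Module):
    """Sketch of a Transformer-in-Transformer (TIT) backbone.

    The inner Transformer encodes each single observation (split into
    tokens); the outer Transformer processes the resulting sequence of
    observation embeddings over the history. Sizes are illustrative.
    """
    def __init__(self, token_dim=32, embed_dim=64, n_heads=4,
                 depth=2, history_len=8):
        super().__init__()
        self.token_proj = nn.Linear(token_dim, embed_dim)
        inner_layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.inner = nn.TransformerEncoder(inner_layer, depth)
        outer_layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.outer = nn.TransformerEncoder(outer_layer, depth)
        # Learned class token for pooling each observation, and learned
        # positional embeddings for the temporal (outer) sequence.
        self.cls = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos = nn.Parameter(torch.zeros(1, history_len, embed_dim))

    def forward(self, obs_history):
        # obs_history: (batch, history_len, obs_tokens, token_dim)
        B, T, N, D = obs_history.shape
        x = self.token_proj(obs_history.reshape(B * T, N, D))
        cls = self.cls.expand(B * T, -1, -1)
        x = torch.cat([cls, x], dim=1)   # prepend class token per observation
        x = self.inner(x)[:, 0]          # inner Transformer -> one embedding each
        x = x.reshape(B, T, -1) + self.pos[:, :T]
        h = self.outer(x)                # outer Transformer over the history
        return h[:, -1]                  # spatial-temporal feature, current step
```

A policy or value head would then consume the returned feature. In an RL setting, the outer Transformer would typically also apply a causal attention mask so that the representation at step t cannot attend to future observations; that mask is omitted here for brevity.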
