Deep Reinforcement Learning with Swin Transformer

06/30/2022
by Li Meng, et al.

Transformers are neural network models that employ multiple layers of multi-head self-attention. Attention is computed in transformers from the contextual embeddings of 'key' and 'query' tokens. Transformers allow attention information to be recombined across layers and all inputs to be processed at once, which makes them more convenient than recurrent neural networks when dealing with large amounts of data. Transformers have exhibited strong performance on natural language processing tasks in recent years, and there have been tremendous efforts to adapt them to other fields of machine learning, such as Swin Transformer and Decision Transformer. Swin Transformer is a promising neural network architecture that splits image pixels into small patches and applies local self-attention within fixed-size (shifted) windows. Decision Transformer has successfully applied transformers to offline reinforcement learning and showed that random-walk samples from Atari games are sufficient to let an agent learn optimized behaviors. However, combining online reinforcement learning with transformers is considerably more challenging. In this article, we further explore the possibility of leaving the reinforcement learning policy unchanged and only replacing the convolutional neural network architecture with the self-attention architecture of Swin Transformer. That is, we aim to change how an agent views the world, not how it plans about the world. We conduct our experiments on 49 games in the Arcade Learning Environment. The results show that using Swin Transformer in reinforcement learning achieves significantly higher evaluation scores across the majority of games in the Arcade Learning Environment. Thus, we conclude that online reinforcement learning can benefit from exploiting self-attention with spatial token embeddings.
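The architectural swap described above can be illustrated with a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: the class names, the hyperparameters, and the single attention block are simplifying assumptions. It shows the two ideas the abstract highlights, namely self-attention restricted to fixed-size windows over patch tokens, and a Q-network whose convolutional encoder is replaced by such an attention encoder while the reinforcement learning algorithm on top stays untouched.

```python
import torch
import torch.nn as nn


class WindowAttention(nn.Module):
    # Multi-head self-attention restricted to non-overlapping,
    # fixed-size windows (the Swin Transformer idea; hyperparameters
    # here are illustrative, not the paper's).
    def __init__(self, dim, window_size, num_heads):
        super().__init__()
        self.ws = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, H, W, C), with H and W assumed divisible by the window size.
        B, H, W, C = x.shape
        ws = self.ws
        # Partition the feature map into (ws x ws) windows.
        x = x.view(B, H // ws, ws, W // ws, ws, C)
        win = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)
        # Self-attention runs independently inside each window.
        out, _ = self.attn(win, win, win)
        # Reverse the window partition back to a feature map.
        out = out.view(B, H // ws, W // ws, ws, ws, C)
        return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)


class SwinQNetwork(nn.Module):
    # Q-network whose convolutional encoder is swapped for patch
    # embedding + window attention; the RL update rule is unchanged.
    def __init__(self, num_actions, dim=96, patch=4, window=7, heads=3):
        super().__init__()
        # Patch embedding: project each (patch x patch) pixel block to
        # a spatial token, implemented as a strided convolution.
        self.embed = nn.Conv2d(4, dim, kernel_size=patch, stride=patch)
        self.block = WindowAttention(dim, window, heads)
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(num_actions))

    def forward(self, frames):
        # frames: (B, 4, 84, 84) stack of preprocessed Atari frames.
        x = self.embed(frames)        # (B, dim, 21, 21) token grid
        x = x.permute(0, 2, 3, 1)     # to (B, H, W, C) for windowing
        x = self.block(x)             # local self-attention
        return self.head(x.permute(0, 3, 1, 2))  # one Q-value per action


q = SwinQNetwork(num_actions=18)
print(q(torch.zeros(2, 4, 84, 84)).shape)  # torch.Size([2, 18])
```

A real Swin encoder stacks several such blocks, alternates regular and shifted windows, and merges patches between stages; the sketch keeps only the window-partition idea that the abstract emphasizes.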
