Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers

02/01/2022
by   Amir Ardalan Kalantari, et al.

Much of the recent success of deep reinforcement learning owes to the neural architecture's ability to learn and exploit effective internal representations of the world. While many current algorithms rely on a simulator to train with large amounts of data, in realistic settings, such as games played against humans, collecting experience can be quite costly. In this paper, we introduce a deep reinforcement learning architecture designed to improve sample efficiency without sacrificing performance. We build this architecture on advances from recent years in natural language processing and computer vision. Specifically, we propose a visually attentive model that uses transformers to learn a self-attention mechanism over the feature maps of the state representation, while simultaneously optimizing return. We demonstrate empirically that this architecture improves sample complexity on several Atari environments, while also achieving better performance in some of the games.
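The architecture the abstract describes, a convolutional encoder whose feature maps are treated as tokens for transformer-style self-attention before a value head, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, layer sizes, and pooling choice are assumptions, with only the overall pattern (self-attention over feature-map positions feeding a Q-value output) taken from the text.

```python
import torch
import torch.nn as nn

class AttentiveQNetwork(nn.Module):
    """Hypothetical sketch: CNN encoder -> self-attention over the
    flattened feature map -> Q-value head. Sizes are illustrative."""

    def __init__(self, n_actions, in_channels=4, embed_dim=64, n_heads=4):
        super().__init__()
        # Convolutional encoder produces a spatial grid of feature vectors.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, embed_dim, kernel_size=4, stride=2), nn.ReLU(),
        )
        # Transformer-style self-attention over feature-map positions.
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.q_head = nn.Linear(embed_dim, n_actions)

    def forward(self, x):
        feats = x
        feats = self.encoder(feats)                 # (B, C, H, W)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H*W, C) tokens
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)       # residual + layer norm
        pooled = tokens.mean(dim=1)                 # average over positions
        return self.q_head(pooled)                  # one Q-value per action

# Example: a stack of 4 grayscale 84x84 Atari frames, 6 discrete actions.
net = AttentiveQNetwork(n_actions=6)
q = net(torch.zeros(1, 4, 84, 84))
print(q.shape)  # torch.Size([1, 6])
```

Such a network can be dropped into a standard value-based training loop (e.g. DQN-style temporal-difference updates), so the attention weights and the return objective are optimized jointly, as the abstract describes.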

Related research

- Deep Reinforcement Learning with Swin Transformer (06/30/2022)
- Armour: Generalizable Compact Self-Attention for Vision Transformers (08/03/2021)
- Transformers are Meta-Reinforcement Learners (06/14/2022)
- Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning (09/22/2022)
- Leveraging Transformers for StarCraft Macromanagement Prediction (10/11/2021)
- An Initial Attempt of Combining Visual Selective Attention with Deep Reinforcement Learning (11/11/2018)
- A 23 MW data centre is all you need (03/31/2022)
