Multi-Game Decision Transformers

05/30/2022
by Kuang-Huei Lee, et al.

A longstanding goal of the field of AI is a strategy for compiling diverse experience into a highly capable, generalist agent. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model (with a single set of weights) trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction. Additional information, videos, and code are available at sites.google.com/view/multi-game-transformers.
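For context, Multi-Game Decision Transformers follow the decision-transformer recipe: offline trajectories are recast as sequences of (return-to-go, observation, action) tokens, and a causal transformer is trained to predict the action at each step, so a single set of weights can absorb data of mixed quality across many games. The snippet below is a minimal sketch of that recipe in PyTorch, not the authors' released code: the class name, the flat 64-dimensional state vector, the linear encoders, and all hyperparameters are illustrative assumptions (the actual model operates on tokenized Atari image observations and discretized returns).

```python
# Minimal decision-transformer-style sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    def __init__(self, state_dim=64, n_actions=18, d_model=128, n_layers=2, max_len=20):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)         # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Embedding(n_actions, d_model)
        self.pos = nn.Embedding(3 * max_len, d_model)  # one position per interleaved token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_actions)      # action logits

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T) int64
        B, T, _ = states.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)                        # interleave (rtg_t, s_t, a_t)
        tokens = tokens + self.pos(torch.arange(3 * T, device=tokens.device))
        # Causal mask: the state token at step t cannot attend to a_t or anything later.
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T).to(tokens.device)
        h = self.encoder(tokens, mask=mask)
        return self.head(h[:, 1::3])                   # predict a_t from each state token

# Training step on random stand-in data (offline, behavioral-cloning-style loss).
model = TinyDecisionTransformer()
B, T = 8, 20
rtg = torch.randn(B, T, 1)
states = torch.randn(B, T, 64)
actions = torch.randint(0, 18, (B, T))
logits = model(rtg, states, actions)
loss = nn.functional.cross_entropy(logits.reshape(-1, 18), actions.reshape(-1))
loss.backward()
```

The return-to-go token is what lets one model represent both mediocre and expert play learned from mixed-quality offline data: at evaluation time the model can be conditioned on a high target return to steer it toward expert behavior (the paper goes further and infers expert-level returns rather than fixing them by hand).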
