Representation Matters: The Game of Chess Poses a Challenge to Vision Transformers

04/28/2023
by Johannes Czech, et al.

While transformers have gained a reputation as the "Swiss army knife of AI", no one has challenged them to master the game of chess, one of the classical AI benchmarks. Simply plugging vision transformers (ViTs) into AlphaZero does not master the game of chess, mainly because ViTs are too slow. Even making them more efficient with a combination of MobileNet and NextViT is outperformed by what actually matters: a simple change of the input representation and the value loss, yielding a boost of up to 180 Elo points over AlphaZero.
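The abstract names the decisive levers, input representation and value loss, without detailing them. Purely as an illustration, and as an assumption here rather than the paper's confirmed design, one common way to change AlphaZero's value loss is to replace the tanh-bounded scalar head trained with mean-squared error by a win/draw/loss (WDL) classification head trained with cross-entropy. A minimal PyTorch sketch of both variants:

    import torch
    import torch.nn as nn

    class ScalarValueHead(nn.Module):
        """AlphaZero-style value head: one tanh-bounded scalar trained with MSE."""
        def __init__(self, in_features: int):
            super().__init__()
            self.fc = nn.Linear(in_features, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.tanh(self.fc(x)).squeeze(-1)

    class WDLValueHead(nn.Module):
        """Win/draw/loss head: three logits trained with cross-entropy.
        An assumed stand-in for the abstract's 'change of the value loss'."""
        def __init__(self, in_features: int):
            super().__init__()
            self.fc = nn.Linear(in_features, 3)  # logits for [loss, draw, win]

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.fc(x)

    # Hypothetical feature dimension of 256, batch of 32, for illustration only.
    features = torch.randn(32, 256)

    scalar_head = ScalarValueHead(256)
    z = torch.empty(32).uniform_(-1, 1)      # game outcome target in [-1, 1]
    mse = nn.functional.mse_loss(scalar_head(features), z)

    wdl_head = WDLValueHead(256)
    wdl_target = torch.randint(0, 3, (32,))  # class index: 0=loss, 1=draw, 2=win
    ce = nn.functional.cross_entropy(wdl_head(features), wdl_target)

    # At search time the WDL distribution collapses back to a scalar expectation:
    probs = torch.softmax(wdl_head(features), dim=-1)
    value = probs[:, 2] - probs[:, 0]        # E[outcome] with loss=-1, draw=0, win=+1

Because the WDL head still reduces to a scalar expected outcome, the MCTS backup in the surrounding AlphaZero pipeline can remain unchanged; only the training target and loss differ.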

