Three things everyone should know about Vision Transformers

03/18/2022

by Hugo Touvron, et al.

After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy-to-implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting accuracy. (2) Fine-tuning the weights of the attention layers alone is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces peak memory consumption at fine-tuning time, and allows the majority of weights to be shared across tasks. (3) Adding MLP-based patch pre-processing layers improves BERT-like self-supervised training based on patch masking. We evaluate the impact of these design choices on the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.
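To make insight (1) concrete, the sketch below contrasts a standard sequential transformer block with a parallelized variant in which the attention and MLP branches read the same input and are summed into the residual stream. This is a minimal toy model using linear maps as stand-ins for the attention and MLP sub-blocks (real ViT blocks include LayerNorm, multi-head attention, and a two-layer MLP); the function names and dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension (hypothetical)

# Linear stand-ins for the attention and MLP sub-blocks.
W_attn = rng.normal(scale=0.1, size=(d, d))
W_mlp = rng.normal(scale=0.1, size=(d, d))

def attn(x):
    return x @ W_attn

def mlp(x):
    return x @ W_mlp

def sequential_block(x):
    # Standard ViT block: residual sublayers applied one after another.
    x = x + attn(x)
    x = x + mlp(x)
    return x

def parallel_block(x):
    # Parallel variant: both branches see the same input, so they can be
    # computed concurrently and summed into the residual stream.
    return x + attn(x) + mlp(x)

x = rng.normal(size=(4, d))  # 4 toy tokens
y_seq = sequential_block(x)
y_par = parallel_block(x)

# With these linear stand-ins, the two differ only by the second-order
# cross term mlp(attn(x)); the parallel form drops the dependency of the
# MLP branch on the attention output, which is what enables parallelism.
```

Because the branches of `parallel_block` are independent, they can be dispatched to separate devices or fused into wider matrix multiplications, which is the efficiency argument the abstract alludes to.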


