Efficient Training of Visual Transformers with Small-Size Datasets

06/07/2021
by   Yahui Liu, et al.

Visual Transformers (VTs) are emerging as an architectural paradigm alternative to Convolutional Neural Networks (CNNs). Unlike CNNs, VTs can capture global relations between image elements, and they potentially have a larger representation capacity. However, the lack of the typical convolutional inductive bias makes these models more data-hungry than common CNNs: local properties of the visual domain that are embedded in the CNN architectural design must, in VTs, be learned from samples. In this paper, we empirically analyse different VTs, comparing their robustness in a small training-set regime, and we show that, despite having comparable accuracy when trained on ImageNet, their performance on smaller datasets can differ widely. Moreover, we propose a self-supervised task which extracts additional information from images with only a negligible computational overhead. This task encourages the VTs to learn spatial relations within an image and makes VT training much more robust when training data are scarce. Our task is used jointly with the standard (supervised) training and does not depend on specific architectural choices, so it can easily be plugged into existing VTs. Using an extensive evaluation with different VTs and datasets, we show that our method can improve (sometimes dramatically) the final accuracy of the VTs. The code will be available upon acceptance.
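To make the idea of an auxiliary spatial-relation task concrete, here is a minimal sketch (an illustrative assumption, not the authors' exact implementation): sample pairs of positions from the final token grid of a VT and train a small head to regress their normalized 2D offset, adding this loss to the supervised objective. The function below only builds the sampled pairs and their regression targets.

```python
import numpy as np

def relative_localization_targets(h, w, num_pairs, rng):
    """Sample pairs of token positions on an h x w grid and return their
    normalized (dy, dx) offsets -- the regression targets of a
    spatial-relation auxiliary task (illustrative sketch, not the
    paper's exact formulation)."""
    ij = rng.integers(0, h * w, size=(num_pairs, 2))   # flat index of each token in a pair
    y, x = ij // w, ij % w                             # grid coordinates
    dy = (y[:, 0] - y[:, 1]) / (h - 1)                 # vertical offset, normalized to [-1, 1]
    dx = (x[:, 0] - x[:, 1]) / (w - 1)                 # horizontal offset, normalized to [-1, 1]
    return ij, np.stack([dy, dx], axis=1)

rng = np.random.default_rng(0)
pairs, targets = relative_localization_targets(14, 14, num_pairs=256, rng=rng)
# In training, an MLP head on the paired token embeddings would regress
# these targets (e.g. with an L1 loss) alongside the supervised loss.
```

Because the targets are derived from the image grid itself, no extra labels are needed, which is what keeps the computational and annotation overhead negligible.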

Related research

01/26/2022: Training Vision Transformers with Only 2040 Images
Vision Transformers (ViTs) are emerging as an alternative to convolutiona...

12/12/2022: Masked autoencoders are effective solution to transformer data-hungry
Vision Transformers (ViTs) outperform convolutional neural networks (CN...

06/08/2022: CASS: Cross Architectural Self-Supervision for Medical Image Analysis
Recent advances in Deep Learning and Computer Vision have alleviated man...

05/28/2022: WaveMix-Lite: A Resource-efficient Neural Network for Image Analysis
Gains in the ability to generalize on image analysis tasks for neural ne...

10/12/2021: Trivial or impossible – dichotomous data difficulty masks model differences (on ImageNet and beyond)
"The power of a generalization system follows directly from its biases" ...

06/28/2021: Early Convolutions Help Transformers See Better
Vision transformer (ViT) models exhibit substandard optimizability. In p...

10/05/2020: Mind the Pad – CNNs can Develop Blind Spots
We show how feature maps in convolutional networks are susceptible to sp...
