Understanding Robustness of Transformers for Image Classification

03/26/2021
by Srinadh Bhojanapalli, et al.

Deep Convolutional Neural Networks (CNNs) have long been the architecture of choice for computer vision tasks. Recently, Transformer-based architectures like the Vision Transformer (ViT) have matched or even surpassed ResNets for image classification. However, details of the Transformer architecture – such as the use of non-overlapping patches – lead one to wonder whether these networks are as robust as their convolutional counterparts. In this paper, we perform an extensive study of a variety of different measures of robustness of ViT models and compare the findings to ResNet baselines. We investigate robustness to input perturbations as well as robustness to model perturbations. We find that, when pre-trained with a sufficient amount of data, ViT models are at least as robust as their ResNet counterparts on a broad range of perturbations. We also find that Transformers are robust to the removal of almost any single layer, and that while activations from later layers are highly correlated with each other, they nevertheless play an important role in classification.
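The single-layer-removal finding can be illustrated with a short lesion probe. Below is a minimal sketch, assuming torchvision's vit_b_16 as a stand-in for the paper's ViT models and a placeholder batch in place of the ImageNet evaluation set; it is not the authors' implementation, only an approximation of the kind of ablation described.

# Sketch of a single-layer-removal ("lesion") probe for a ViT classifier.
# Assumes torchvision's vit_b_16; the paper's own checkpoints and evaluation
# protocol are not reproduced here. Random tensors stand in for real data.
import copy
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

def accuracy_without_layer(model: nn.Module, layer_idx: int,
                           images: torch.Tensor, labels: torch.Tensor) -> float:
    """Replace one Transformer encoder block with an identity map and measure accuracy."""
    ablated = copy.deepcopy(model)
    ablated.encoder.layers[layer_idx] = nn.Identity()  # skip this block entirely
    ablated.eval()
    with torch.no_grad():
        preds = ablated(images).argmax(dim=-1)
    return (preds == labels).float().mean().item()

if __name__ == "__main__":
    model = vit_b_16(weights=None)            # use pretrained weights for a meaningful probe
    images = torch.randn(8, 3, 224, 224)      # placeholder evaluation batch
    labels = torch.randint(0, 1000, (8,))     # placeholder labels
    for i in range(len(model.encoder.layers)):
        acc = accuracy_without_layer(model, i, images, labels)
        print(f"accuracy with block {i} removed: {acc:.3f}")

Running this probe over every block, with real pretrained weights and a held-out evaluation set, would reproduce the style of experiment summarized above: if accuracy stays close to the unablated baseline for most choices of layer_idx, the model is robust to the removal of that layer.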


Related research

05/04/2021 – MLP-Mixer: An all-MLP Architecture for Vision
Convolutional Neural Networks (CNNs) are the go-to model for computer vi...

03/17/2022 – Are Vision Transformers Robust to Spurious Correlations?
Deep neural networks may be susceptible to learning spurious correlation...

04/25/2023 – iMixer: hierarchical Hopfield network implies an invertible, implicit and iterative MLP-Mixer
In the last few years, the success of Transformers in computer vision ha...

06/24/2021 – Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers
Recently, vision transformers and MLP-based models have been developed i...

01/25/2019 – Equivariant Transformer Networks
How can prior knowledge on the transformation invariances of a domain be...

01/27/2023 – Robust Transformer with Locality Inductive Bias and Feature Normalization
Vision transformers have been demonstrated to yield state-of-the-art res...

04/02/2021 – LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
We design a family of image classification architectures that optimize t...
