On the Adversarial Robustness of Visual Transformers

03/29/2021
by Rulin Shao, et al.

Following their success in natural language processing and understanding, transformers are expected to bring revolutionary changes to computer vision. This work provides the first comprehensive study of the robustness of vision transformers (ViTs) against adversarial perturbations. Evaluated under a variety of white-box and transfer attack settings, ViTs turn out to be more adversarially robust than convolutional neural networks (CNNs). We summarize the main observations contributing to the improved robustness of ViTs as follows:

1) Features learned by ViTs contain less low-level information and are more generalizable, which contributes to superior robustness against adversarial perturbations.

2) Introducing convolutional or tokens-to-token blocks to learn low-level features in ViTs can improve classification accuracy, but at the cost of adversarial robustness.

3) Increasing the proportion of transformer blocks in hybrid models (those combining transformer and CNN blocks) leads to better robustness; for a pure transformer model, however, simply increasing the size or adding layers does not guarantee a similar effect.

4) Pre-training on larger datasets does not significantly improve adversarial robustness, even though it is critical for training ViTs.

5) Adversarial training is also applicable to ViTs for training robust models.

Furthermore, feature visualization and frequency analysis are conducted to explain these findings. The results show that ViTs are less sensitive to high-frequency perturbations than CNNs, and that there is a strong correlation between how well a model learns low-level features and its robustness against perturbations at different frequencies. Minimal code sketches of a white-box evaluation and a frequency probe follow below.
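To make the white-box setting concrete, here is a minimal sketch of a PGD (projected gradient descent) robustness evaluation in PyTorch. The paper evaluates several attacks; PGD is one representative white-box choice, and the `timm` model name, perturbation budget, and step count below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumptions: timm is installed, inputs are in [0, 1],
# and normalization is folded into the model or skipped for illustration).
import torch
import torch.nn.functional as F
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-inf PGD: ascend the loss, then project back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # stay a valid image
    return x_adv.detach()

# Robust accuracy is then the usual accuracy computation run on
# pgd_attack(model, images, labels) instead of the clean images.
```

The same attack routine can be dropped into a Madry-style training loop (generate adversarial examples on the fly and train on them), which is how observation 5) on adversarial training for ViTs is typically realized.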
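The frequency analysis can likewise be illustrated with a simple probe: perturb inputs with Gaussian noise confined to a low- or high-frequency band and compare the resulting accuracy drops. The sketch below is one such probe under assumed parameters (cutoff, noise scale); the paper's exact protocol may differ.

```python
# Hypothetical frequency probe: band-limit Gaussian noise in the Fourier domain.
import torch

def band_limited_noise(shape, low_pass=True, cutoff=0.25, std=0.1):
    """Gaussian noise filtered in frequency space; cutoff is a fraction of the max radius."""
    noise = torch.randn(shape) * std
    freq = torch.fft.fftshift(torch.fft.fft2(noise), dim=(-2, -1))
    h, w = shape[-2], shape[-1]
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    radius = (xx**2 + yy**2).sqrt()
    mask = radius <= cutoff if low_pass else radius > cutoff  # keep one band only
    filtered = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1)))
    return filtered.real

# Per the paper's finding, a CNN's accuracy should drop more than a ViT's under
# high-frequency noise, while the gap narrows under low-frequency noise:
# x_high = (x + band_limited_noise(x.shape, low_pass=False)).clamp(0, 1)
# x_low  = (x + band_limited_noise(x.shape, low_pass=True)).clamp(0, 1)
```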


Related research

- When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture (10/14/2022)
  Vision Transformers (ViTs) have recently achieved competitive performanc...

- Incorporating Convolution Designs into Visual Transformers (03/22/2021)
  Motivated by the success of Transformers in natural language processing ...

- Robust Transformer with Locality Inductive Bias and Feature Normalization (01/27/2023)
  Vision transformers have been demonstrated to yield state-of-the-art res...

- Measuring the tendency of CNNs to Learn Surface Statistical Regularities (11/30/2017)
  Deep CNNs are known to exhibit the following peculiarity: on the one han...

- Impact of Spatial Frequency Based Constraints on Adversarial Robustness (04/26/2021)
  Adversarial examples mainly exploit changes to input pixels to which hum...

- Stochastic Layers in Vision Transformers (12/30/2021)
  We introduce fully stochastic layers in vision transformers, without cau...

- Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting (08/20/2023)
  In this paper, we investigate the adversarial robustness of vision trans...
