A Light Recipe to Train Robust Vision Transformers

09/15/2022
by   Edoardo Debenedetti, et al.

In this paper, we ask whether Vision Transformers (ViTs) can serve as an underlying architecture for improving the adversarial robustness of machine learning models against evasion attacks. While earlier works focused on improving Convolutional Neural Networks, we show that ViTs are also highly suitable for adversarial training and can achieve competitive performance. We reach this objective with a custom adversarial training recipe, discovered through rigorous ablation studies on a subset of the ImageNet dataset. The canonical training recipe for ViTs recommends strong data augmentation, in part to compensate for the lack of vision inductive bias in attention modules compared to convolutions. We show that this recipe yields suboptimal performance when used for adversarial training. In contrast, we find that omitting all heavy data augmentation and adding a few additional tricks (ε-warmup and larger weight decay) significantly boosts the performance of robust ViTs. We show that our recipe generalizes to different classes of ViT architectures and to large-scale models on full ImageNet-1k. Additionally, investigating the reasons for the robustness of our models, we show that our recipe makes it easier to generate strong attacks during training, and that this leads to better robustness at test time. Finally, we study one consequence of adversarial training by proposing a way to quantify the semantic nature of adversarial perturbations and highlighting its correlation with model robustness. Overall, we recommend that the community avoid carrying over canonical ViT training recipes to robust training, and instead rethink common training choices in the context of adversarial training.
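To make the two recipe ingredients concrete, here is a minimal NumPy sketch of the ideas the abstract names: a linear ε-warmup schedule for the perturbation budget, and an L∞ PGD attack of the kind typically used in adversarial training. The function names, signatures, and parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def eps_warmup(step, warmup_steps, eps_max):
    """Linearly ramp the perturbation budget from 0 to eps_max over
    warmup_steps training steps (illustrative schedule, not the paper's code)."""
    return eps_max * min(1.0, step / warmup_steps)

def pgd_attack(x, grad_fn, eps, alpha, n_iter):
    """L-inf PGD: repeatedly step along the sign of the loss gradient
    (grad_fn returns dLoss/dx), projecting back into the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(n_iter):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)          # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into the eps-ball
    return x_adv
```

During adversarial training, each batch would be perturbed with `pgd_attack` using the current `eps_warmup(step, ...)` budget before the usual loss/backward pass; in a real pipeline, `grad_fn` would compute the model's loss gradient with respect to the input.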


