When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture

10/14/2022
by Yichuan Mo, et al.

Vision Transformers (ViTs) have recently achieved competitive performance on a broad range of vision tasks. Unfortunately, under popular threat models, naturally trained ViTs have been shown to provide no more adversarial robustness than convolutional neural networks (CNNs), so adversarial training is still required for ViTs to defend against adversarial attacks. In this paper, we provide the first comprehensive study of adversarial training recipes for ViTs, via extensive evaluation of various training techniques across benchmark datasets. We find that pre-training and the SGD optimizer are necessary for the adversarial training of ViTs. Considering ViT as a new type of model architecture, we further investigate its adversarial robustness from the perspective of its unique architectural components. We find that randomly masking gradients from some attention blocks, or masking perturbations on some patches, during adversarial training remarkably improves the adversarial robustness of ViTs, which may open up a line of work that explores the architectural information inside newly designed models like ViTs. Our code is available at https://github.com/mo666666/When-Adversarial-Training-Meets-Vision-Transformers.
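To make the patch-masking idea concrete, below is a minimal PyTorch sketch of one adversarial-training step in which the PGD perturbation is zeroed out on a random subset of patches before the model update. The function names, the 16-pixel patch size, and the keep probability are illustrative assumptions, not the authors' exact implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn.functional as F


def random_patch_mask(x, patch=16, keep_prob=0.5):
    # Binary mask that keeps the perturbation on a random subset of
    # (patch x patch) regions. Assumes H and W are divisible by `patch`.
    b, _, h, w = x.shape
    m = (torch.rand(b, 1, h // patch, w // patch, device=x.device) < keep_prob).float()
    return m.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)


def adv_train_step(model, x, y, optimizer, eps=8 / 255, alpha=2 / 255, steps=10):
    # Standard L_inf PGD to craft the perturbation.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = torch.clamp(delta.detach() + alpha * grad.sign(), -eps, eps)
    # Mask the perturbation on randomly chosen patches before the update
    # (the patch-masking idea from the abstract; hyperparameters are illustrative).
    x_adv = torch.clamp(x + delta * random_patch_mask(x), 0, 1)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```

Following the recipe findings above, `optimizer` here would typically be `torch.optim.SGD` applied to a pre-trained ViT. The abstract's other trick, randomly masking gradients from some attention blocks, would analogously drop the gradient contribution of a random subset of blocks during the inner PGD loop; it is not shown in this sketch.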


