
Understanding and Improving Fast Adversarial Training

by Maksym Andriushchenko et al.

A recent line of work has focused on making adversarial training computationally efficient for deep learning models. In particular, Wong et al. (2020) showed that ℓ_∞-adversarial training with the fast gradient sign method (FGSM) can fail due to a phenomenon called "catastrophic overfitting", in which the model quickly loses its robustness over a single epoch of training. We show that adding a random step to FGSM, as proposed in Wong et al. (2020), does not prevent catastrophic overfitting, and that randomness is not important per se; its main role is simply to reduce the magnitude of the perturbation. Moreover, we show that catastrophic overfitting is not inherent to deep and overparametrized networks: it can occur even in a single-layer convolutional network with a few filters. In an extreme case, even a single filter can make the network highly non-linear locally, which is the main reason why FGSM training fails. Based on this observation, we propose a new regularization method, GradAlign, that prevents catastrophic overfitting by explicitly maximizing the gradient alignment inside the perturbation set, thereby improving the quality of the FGSM solution. As a result, GradAlign makes it possible to apply FGSM training successfully to larger ℓ_∞-perturbations and to reduce the gap to multi-step adversarial training. The code of our experiments is available online.
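To make the gradient-alignment idea concrete, here is a minimal NumPy sketch (not the paper's implementation, which uses deep networks and autograd) of the quantity GradAlign penalizes: one minus the cosine similarity between the input gradients of the loss at a point x and at a randomly perturbed point x + η, with η drawn uniformly from the ℓ_∞-ball of radius ε. The logistic-regression model, the helper names, and all parameter values below are illustrative assumptions; for a purely linear model the input gradient always points along w, so the alignment is perfect and the regularizer is (up to floating point) zero, matching the paper's observation that non-linearity is what breaks alignment.

```python
import numpy as np

def grad_x_logistic(w, x, y):
    # Analytic input-gradient of the logistic loss log(1 + exp(-y * w.x))
    # for a linear model: grad_x = -y * sigmoid(-y * w.x) * w.
    s = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))
    return -y * s * w

def grad_align_reg(w, x, y, eps, rng):
    # GradAlign-style penalty: 1 - cos(grad at x, grad at x + eta),
    # with eta uniform in the l_inf ball of radius eps (as in the paper).
    eta = rng.uniform(-eps, eps, size=x.shape)
    g1 = grad_x_logistic(w, x, y)
    g2 = grad_x_logistic(w, x + eta, y)
    cos = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return 1.0 - cos

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # hypothetical linear model weights
x = rng.normal(size=5)   # a single input point
y = 1.0                  # its label in {-1, +1}
reg = grad_align_reg(w, x, y, eps=0.1, rng=rng)
# For a linear model both gradients are positive multiples of w,
# so cos = 1 and the regularizer vanishes.
```

In actual FGSM + GradAlign training, this penalty (scaled by a coefficient λ) is added to the adversarial training loss, with the gradients computed by automatic differentiation through the network.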



Related Papers

Understanding Catastrophic Overfitting in Single-step Adversarial Training
Make Some Noise: Reliable and Efficient Single-Step Adversarial Training
Bag of Tricks for FGSM Adversarial Training
Local Linearity and Double Descent in Catastrophic Overfitting
Catastrophic overfitting is a bug but also a feature
Subspace Adversarial Training
Prior-Guided Adversarial Initialization for Fast Adversarial Training

Code Repositories


Understanding and Improving Fast Adversarial Training
