Noise Augmentation Is All You Need For FGSM Fast Adversarial Training: Catastrophic Overfitting And Robust Overfitting Require Different Augmentation

02/11/2022
by   Chaoning Zhang, et al.

Adversarial training (AT) and its variants are the most effective approaches for obtaining adversarially robust models. A distinctive property of AT is that an inner maximization problem must be solved repeatedly before the model weights can be updated, which makes training slow. Single-step FGSM AT greatly improves efficiency, but it fails when the step size grows large. The SOTA method GradAlign makes FGSM AT compatible with a larger step size; however, its regularization of the input gradient makes it 3 to 4 times slower than plain FGSM AT. Our proposed NoiseAug removes this extra computational overhead by regularizing the input itself instead of its gradient. The key contribution of this work is an empirical finding that single-step FGSM AT is not as hard as suggested in past work: noise augmentation is all you need for (FGSM) fast AT. To understand the success of NoiseAug, we perform an extensive analysis and find that mitigating Catastrophic Overfitting (CO) and Robust Overfitting (RO) requires different augmentations. Rather than the effectively larger sample set produced by data augmentation, we identify improved local linearity as the likely reason NoiseAug prevents CO.
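To make the idea concrete, here is a minimal NumPy sketch of the core step: injecting random noise directly into the input before taking a single FGSM step. It uses a toy logistic-regression model with an analytic input gradient; the function name `fgsm_noiseaug` and the parameters `eps` (noise magnitude) and `alpha` (FGSM step size) are illustrative assumptions, not the paper's exact recipe or schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    # binary cross-entropy for a logistic-regression model
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm_noiseaug(x, y, w, b, eps=0.1, alpha=0.1):
    """Single-step FGSM preceded by noise augmentation on the input.

    Illustrative sketch: `eps` and `alpha` are hypothetical settings,
    and the model is a toy logistic regression, not a deep network.
    """
    # NoiseAug-style regularization: perturb the input itself with
    # uniform noise before crafting the adversarial example
    x_start = x + rng.uniform(-eps, eps, size=x.shape)
    # analytic gradient of the BCE loss w.r.t. the input
    p = sigmoid(x_start @ w + b)
    grad_x = (p - y) * w
    # single FGSM step along the sign of the input gradient
    x_adv = x_start + alpha * np.sign(grad_x)
    # keep the total perturbation inside the noise + attack budget
    return np.clip(x_adv, x - eps - alpha, x + eps + alpha)

# toy example: craft an adversarial input for one sample
w = np.array([1.0, -2.0]); b = 0.1
x = np.array([0.5, 0.3]); y = 1.0
x_adv = fgsm_noiseaug(x, y, w, b)
```

In a training loop, `x_adv` would replace `x` for the weight update, so the only extra cost over plain FGSM AT is drawing the noise, unlike GradAlign, which needs a second backward pass for its gradient-alignment penalty.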

