Make Some Noise: Reliable and Efficient Single-Step Adversarial Training

02/02/2022
by Pau de Jorge, et al.

Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model suddenly becomes vulnerable to multi-step attacks. They also showed that adding a random perturbation prior to FGSM (RS-FGSM) appeared sufficient to prevent CO. However, Andriushchenko and Flammarion observed that RS-FGSM still leads to CO for larger perturbations and proposed a computationally expensive regularizer (GradAlign) to avoid it. In this work, we methodically revisit the role of noise and clipping in single-step adversarial training. Contrary to previous intuitions, we find that using stronger noise around the clean sample, combined with not clipping, is highly effective in avoiding CO for large perturbation radii. Based on these observations, we propose Noise-FGSM (N-FGSM) which, while providing the benefits of single-step adversarial training, does not suffer from CO. Empirical analyses on a large suite of experiments show that N-FGSM is able to match or surpass the performance of previous single-step methods while achieving a 3× speed-up.
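
The recipe described in the abstract (stronger noise around the clean sample, a single FGSM step, no clipping of the combined perturbation back to the ε-ball) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' reference implementation: the noise scale k, the step size alpha, and the final pixel-range clamp below are assumptions.

```python
import torch
import torch.nn.functional as F

def n_fgsm_perturb(model, x, y, eps, k=2.0, alpha=None):
    """Craft N-FGSM adversarial training examples (minimal sketch).

    Per the abstract: sample noise from a ball *larger* than the
    eps-ball, take one FGSM step from the noisy point, and do NOT
    project the combined perturbation back onto the eps-ball.
    """
    if alpha is None:
        alpha = eps  # assumed step size, not a published default
    # 1) Stronger random initialization: uniform noise in [-k*eps, k*eps].
    eta = torch.empty_like(x).uniform_(-k * eps, k * eps)
    x_noisy = (x + eta).detach().requires_grad_(True)
    # 2) Single FGSM gradient step computed at the noisy point.
    loss = F.cross_entropy(model(x_noisy), y)
    grad = torch.autograd.grad(loss, x_noisy)[0]
    # 3) Combine noise and step without clipping to the eps-ball
    #    (the key difference from RS-FGSM, per the abstract).
    delta = eta + alpha * grad.sign()
    # Clamp only to the valid pixel range (an assumption for image data).
    return (x + delta).clamp(0.0, 1.0).detach()
```

In a training loop, the returned batch simply replaces the clean batch in the standard cross-entropy update; crafting the perturbation costs one extra forward-backward pass, which is what preserves the single-step efficiency highlighted in the abstract.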

Related research:

07/06/2020 · Understanding and Improving Fast Adversarial Training
A recent line of work focused on making adversarial training computation...

05/06/2021 · Understanding Catastrophic Overfitting in Adversarial Training
Recently, FGSM adversarial training is found to be able to train a robus...

06/26/2021 · Multi-stage Optimization based Adversarial Training
In the field of adversarial robustness, there is a common practice that ...

10/11/2022 · Stable and Efficient Adversarial Training through Local Linearization
There has been a recent surge in single-step adversarial training as it ...

04/04/2021 · Reliably fast adversarial training via latent adversarial perturbation
While multi-step adversarial training is widely popular as an effective ...

10/15/2020 · Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training
Our goal is to understand why the robustness drops after conducting adve...

02/23/2023 · Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective
Although fast adversarial training provides an efficient approach for bu...
