Catastrophic overfitting is a bug but also a feature

Despite clear computational advantages in building robust neural networks, adversarial training (AT) using single-step methods is unstable: it suffers from catastrophic overfitting (CO), in which networks gain non-trivial robustness during the first stages of adversarial training but suddenly reach a breaking point where they lose all robustness in just a few iterations. Although some works have succeeded in preventing CO, the mechanisms that lead to this remarkable failure mode are still poorly understood. In this work, we find that the interplay between the structure of the data and the dynamics of AT plays a fundamental role in CO. Specifically, through active interventions on typical datasets of natural images, we establish a causal link between the structure of the data and the onset of CO in single-step AT methods. This new perspective provides important insights into the mechanisms that lead to CO and paves the way towards a better understanding of the general dynamics of robust model construction. The code to reproduce the experiments of this paper can be found at https://github.com/gortizji/co_features.
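To make the setting concrete, the single-step AT referred to above is typically FGSM-based: each training step perturbs the inputs once, in the direction of the sign of the input gradient of the loss, and then updates the model on those perturbed inputs. CO is then diagnosed by tracking robust accuracy, which collapses abruptly while clean accuracy keeps improving. The sketch below illustrates this loop on a toy linear (logistic) model in NumPy; it is a minimal illustration of FGSM adversarial training, not the paper's experimental setup, and all function names, the toy data, and the hyperparameters are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Single-step FGSM attack: move each input by eps in the sign of the
    input gradient of the logistic loss (labels y in {-1, +1})."""
    margin = y * (x @ w)
    # d/dx of log(1 + exp(-y * w.x)) = -y * sigmoid(-margin) * w
    grad_x = (-y * sigmoid(-margin))[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def fgsm_adversarial_training(x, y, eps=0.3, lr=0.1, steps=200, seed=0):
    """Minimal single-step AT loop: at every iteration, craft FGSM
    adversarial examples with the current weights, then take one
    gradient step on the perturbed batch."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=x.shape[1])
    for _ in range(steps):
        x_adv = fgsm_perturb(x, y, w, eps)
        margin = y * (x_adv @ w)
        grad_w = (x_adv * (-y * sigmoid(-margin))[:, None]).mean(axis=0)
        w -= lr * grad_w
    return w

def robust_accuracy(x, y, w, eps):
    """Exact robust accuracy for a linear model under l_inf perturbations:
    the margin must survive the worst-case shift eps * ||w||_1."""
    margin = y * (x @ w)
    return float(np.mean(margin - eps * np.sum(np.abs(w)) > 0.0))
```

In practice one would log `robust_accuracy` (or a PGD estimate of it, for deep networks) after every epoch: the signature of CO is this curve dropping towards zero within a few iterations, while clean accuracy does not.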
