Stable and Efficient Adversarial Training through Local Linearization

10/11/2022
by   Zhuorong Li, et al.

There has been a recent surge of interest in single-step adversarial training, as it combines robustness with efficiency. However, a phenomenon referred to as "catastrophic overfitting" has been observed, which is prevalent in single-step defenses and may frustrate attempts to use FGSM adversarial training. To address this issue, we propose a novel method, Stable and Efficient Adversarial Training (SEAT), which mitigates catastrophic overfitting by harnessing local properties that distinguish a robust model from a catastrophically overfitted one. The proposed SEAT has strong theoretical justification, in that minimizing the SEAT loss can be shown to favour a smooth empirical risk, thereby leading to robustness. Experimental results demonstrate that the proposed method successfully mitigates catastrophic overfitting, yielding superior performance among efficient defenses. Our single-step method reaches 51% accuracy on CIFAR-10 with l_∞ perturbations of radius 8/255 under a strong PGD-50 attack, matching the performance of 10-step iterative adversarial training at merely 3% of the computational cost.
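The abstract builds on single-step (FGSM) adversarial training, which crafts each perturbation with one gradient step inside an l_∞ ball of radius eps. The following is a minimal sketch of that single-step attack on a toy linear classifier; it is an illustration of the general FGSM mechanism, not the SEAT method itself, and all names (w, b, x, eps) are illustrative placeholders.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Single-step FGSM: move eps along the sign of the input-gradient
    of the loss, then clip back to the valid input range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

def logistic_loss_input_grad(w, b, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a linear model z = w.x + b (dL/dx = (sigmoid(z) - y) * w)."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    return (p - y) * w

# Toy example: a correctly classified point, perturbed with eps = 8/255
# (the same radius quoted in the abstract).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.2])
y = 1.0

grad = logistic_loss_input_grad(w, b, x, y)
x_adv = fgsm_perturb(x, grad, eps=8 / 255)
```

Because the step uses only the gradient's sign, the perturbation always sits on the boundary of the l_∞ ball (before clipping), which is what makes the attack cheap but also what can drive the catastrophic overfitting the paper addresses.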

Related research:

- Understanding Catastrophic Overfitting in Single-step Adversarial Training (10/05/2020): Adversarial examples are perturbed inputs that are designed to deceive m...
- Subspace Adversarial Training (11/24/2021): Single-step adversarial training (AT) has received wide attention as it ...
- Fast Adversarial Training with Smooth Convergence (08/24/2023): Fast adversarial training (FAT) is beneficial for improving the adversar...
- ZeroGrad: Mitigating and Explaining Catastrophic Overfitting in FGSM Adversarial Training (03/29/2021): Making deep neural networks robust to small adversarial noises has recen...
- Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective (02/23/2023): Although fast adversarial training provides an efficient approach for bu...
- Make Some Noise: Reliable and Efficient Single-Step Adversarial Training (02/02/2022): Recently, Wong et al. showed that adversarial training with single-step ...
- Catastrophic overfitting is a bug but also a feature (06/16/2022): Despite clear computational advantages in building robust neural network...
