Robust Weight Perturbation for Adversarial Training

05/30/2022
by   Chaojian Yu, et al.

Overfitting widely exists in adversarial training of deep networks. An effective remedy is adversarial weight perturbation, which injects the worst-case weight perturbation during training by maximizing the classification loss on adversarial examples. Adversarial weight perturbation helps reduce the robust generalization gap; however, it can also undermine the robustness improvement. A criterion that regulates the weight perturbation is therefore crucial for adversarial training. In this paper, we propose such a criterion, namely the Loss Stationary Condition (LSC), for constrained perturbation. Using the LSC, we find that it is essential to conduct weight perturbation on adversarial data with small classification loss in order to eliminate robust overfitting. Weight perturbation on adversarial data with large classification loss is unnecessary and may even degrade robustness. Based on these observations, we propose a robust perturbation strategy that constrains the extent of weight perturbation. This strategy prevents deep networks from overfitting while avoiding the side effects of excessive weight perturbation, significantly improving the robustness of adversarial training. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art adversarial training methods.
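To make the idea concrete, here is a minimal NumPy sketch of weight perturbation restricted to low-loss adversarial examples, in the spirit of the LSC described above. It uses a toy logistic-regression model, a single FGSM-style inner attack step, a median loss threshold, and an L2 perturbation budget; the names `epsilon`, `gamma`, and `c` and their values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: p(y=1 | x) = sigmoid(x @ w)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_per_example(w, X, y):
    # Per-example binary cross-entropy loss
    p = sigmoid(X @ w)
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def grad_w(w, X, y):
    # Gradient of the mean loss w.r.t. the weights
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def grad_x(w, X, y):
    # Per-example gradient of the loss w.r.t. the inputs
    p = sigmoid(X @ w)
    return np.outer(p - y, w)

# Synthetic data and an initial weight vector
X = rng.normal(size=(64, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)
w = 0.1 * rng.normal(size=5)

# 1) Inner maximization over inputs: one FGSM-style step (budget assumed)
epsilon = 0.1
X_adv = X + epsilon * np.sign(grad_x(w, X, y))

# 2) LSC-style selection: perturb weights only on adversarial examples
#    whose classification loss is small (median threshold is illustrative)
losses = loss_per_example(w, X_adv, y)
c = np.quantile(losses, 0.5)
mask = losses <= c

# 3) Weight perturbation that maximizes loss on the selected subset,
#    scaled to an L2 budget gamma (value is illustrative)
gamma = 0.05
g = grad_w(w, X_adv[mask], y[mask])
v = gamma * g / (np.linalg.norm(g) + 1e-12)

# 4) Outer minimization: descend on the perturbed weights, then
#    remove the perturbation, as in adversarial weight perturbation
lr = 0.1
w = (w + v) - lr * grad_w(w + v, X_adv, y) - v
```

The key difference from unconstrained adversarial weight perturbation is step 2: the mask excludes high-loss adversarial examples, so the weight perturbation in step 3 is computed only where, per the LSC observation, it actually helps against robust overfitting.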


Related research

- 06/17/2022, Understanding Robust Overfitting of Adversarial Training and Beyond: Robust overfitting widely exists in adversarial training of deep network...
- 04/13/2020, Revisiting Loss Landscape for Adversarial Robustness: The study on improving the robustness of deep neural networks against ad...
- 10/04/2022, Strength-Adaptive Adversarial Training: Adversarial training (AT) is proved to reliably improve network's robust...
- 10/28/2021, CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks: Despite the recent advances of graph neural networks (GNNs) in modeling ...
- 12/16/2021, δ-SAM: Sharpness-Aware Minimization with Dynamic Reweighting: Deep neural networks are often overparameterized and may not easily achi...
- 04/09/2021, Relating Adversarially Robust Generalization to Flat Minima: Adversarial training (AT) has become the de-facto standard to obtain mod...
- 05/06/2021, Understanding Catastrophic Overfitting in Adversarial Training: Recently, FGSM adversarial training is found to be able to train a robus...
