
Robust Weight Perturbation for Adversarial Training

05/30/2022
by Chaojian Yu, et al.
iie.ac.cn
The University of Sydney
Hong Kong Baptist University
The University of Melbourne
NetEase, Inc

Overfitting widely exists in the adversarial training of deep networks. An effective remedy is adversarial weight perturbation, which injects the worst-case weight perturbation during network training by maximizing the classification loss on adversarial examples. Adversarial weight perturbation helps reduce the robust generalization gap; however, it also undermines the robustness improvement. A criterion that regulates the weight perturbation is therefore crucial for adversarial training. In this paper, we propose such a criterion, namely the Loss Stationary Condition (LSC), for constrained perturbation. With LSC, we find that it is essential to conduct weight perturbation on adversarial data with small classification loss to eliminate robust overfitting. Weight perturbation on adversarial data with large classification loss is not necessary and may even lead to poor robustness. Based on these observations, we propose a robust perturbation strategy to constrain the extent of weight perturbation. This strategy prevents deep networks from overfitting while avoiding the side effects of excessive weight perturbation, significantly improving the robustness of adversarial training. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art adversarial training methods.
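The abstract describes the method only at a high level. The sketch below is one way to read it in a PyTorch setting: the weight perturbation is computed from only those adversarial examples whose classification loss is small, then applied, trained on, and removed. The function name constrained_weight_perturbation, the threshold c, the budget gamma, and the weight-norm-relative scaling are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' released code) of weight perturbation restricted
# to adversarial examples with small classification loss, in the spirit of the
# Loss Stationary Condition described above. The threshold `c` and budget `gamma`
# are assumed hyperparameters for illustration.
import torch
import torch.nn.functional as F


def constrained_weight_perturbation(model, x_adv, y, c=1.5, gamma=0.005):
    """Compute a per-parameter perturbation using only low-loss adversarial data."""
    per_example_loss = F.cross_entropy(model(x_adv), y, reduction="none")
    selected = per_example_loss < c            # assumed form of the LSC-style filter
    if selected.sum() == 0:                    # no example satisfies the condition
        return {name: torch.zeros_like(p) for name, p in model.named_parameters()}

    # One gradient-ascent step on the weights, taken w.r.t. the selected subset only.
    loss = per_example_loss[selected].mean()
    grads = torch.autograd.grad(
        loss, [p for _, p in model.named_parameters()], allow_unused=True
    )

    perturbation = {}
    with torch.no_grad():
        for (name, p), g in zip(model.named_parameters(), grads):
            if g is None:
                perturbation[name] = torch.zeros_like(p)
            else:
                # Scale the ascent direction relative to the weight norm
                # (a common choice in adversarial weight perturbation).
                perturbation[name] = gamma * p.norm() * g / (g.norm() + 1e-12)
    return perturbation


# Illustrative use inside one adversarial training step:
#   delta = constrained_weight_perturbation(model, x_adv, y)
#   with torch.no_grad():
#       for name, p in model.named_parameters():
#           p.add_(delta[name])                     # perturb weights
#   F.cross_entropy(model(x_adv), y).backward()     # train on perturbed weights
#   with torch.no_grad():
#       for name, p in model.named_parameters():
#           p.sub_(delta[name])                     # restore weights before the update
#   optimizer.step(); optimizer.zero_grad()
```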


Related research:

Understanding Robust Overfitting of Adversarial Training and Beyond (06/17/2022)
Robust overfitting widely exists in adversarial training of deep network...

Revisiting Loss Landscape for Adversarial Robustness (04/13/2020)
The study on improving the robustness of deep neural networks against ad...

Strength-Adaptive Adversarial Training (10/04/2022)
Adversarial training (AT) is proved to reliably improve network's robust...

CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks (10/28/2021)
Despite the recent advances of graph neural networks (GNNs) in modeling ...

Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training (10/15/2020)
Our goal is to understand why the robustness drops after conducting adve...

δ-SAM: Sharpness-Aware Minimization with Dynamic Reweighting (12/16/2021)
Deep neural networks are often overparameterized and may not easily achi...

Stability Analysis and Generalization Bounds of Adversarial Training (10/03/2022)
In adversarial machine learning, deep neural networks can fit the advers...