Randomized Adversarial Training via Taylor Expansion

03/19/2023
by   Gaojie Jin, et al.

In recent years, there has been an explosion of research into making deep neural networks more robust to adversarial examples, and adversarial training stands out as one of the most successful methods. To balance robustness against adversarial examples with accuracy on clean examples, many works extend adversarial training to achieve various trade-offs between the two. Building on studies showing that smoothed weight updates during training can help find flat minima and improve generalization, we approach the robustness-accuracy trade-off from another perspective: adding random noise to deterministic weights. The randomized weights enable a novel adversarial training method based on a Taylor expansion over small Gaussian weight noise, and we show that this method flattens the loss landscape and finds flat minima. Under PGD, CW, and Auto Attacks, an extensive set of experiments demonstrates that our method improves on state-of-the-art adversarial training methods, boosting both robustness and clean accuracy. The code is available at https://github.com/Alexkael/Randomized-Adversarial-Training.
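The abstract describes the objective only at a high level. The snippet below is one way to make the idea concrete: wrap a standard PGD adversarial loss with a first-order Taylor term taken along a sampled Gaussian weight perturbation, penalizing its magnitude so it acts as a flatness regularizer. This is a minimal sketch under stated assumptions (PyTorch, a single noise sample per step, an absolute-value penalty on the first-order term), not the authors' released implementation; the names pgd_attack, taylor_adv_loss, sigma, and lam are illustrative, not identifiers from their code.

```python
# Minimal sketch (not the authors' reference implementation) of adversarial
# training whose loss adds a first-order Taylor term under small Gaussian
# weight noise. All names and hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD adversary on the inputs (not the novel part of the method)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def taylor_adv_loss(model, x_adv, y, sigma=0.01, lam=1.0):
    """Zeroth-order adversarial loss f(w) plus |eps^T grad f(w)| for a sampled
    eps ~ N(0, sigma^2 I); in expectation this scales with sigma * ||grad f(w)||,
    i.e. it penalizes sharp directions of the loss landscape."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss0 = F.cross_entropy(model(x_adv), y)                  # f(w)
    grads = torch.autograd.grad(loss0, params, create_graph=True)
    noise = [sigma * torch.randn_like(p) for p in params]     # sampled eps
    first_order = sum((n * g).sum() for n, g in zip(noise, grads))
    return loss0 + lam * first_order.abs()

def train_epoch(model, loader, optimizer, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        taylor_adv_loss(model, x_adv, y).backward()
        optimizer.step()
```

Backpropagating through the first-order term requires a Hessian-vector product, which is why the gradients are computed with create_graph=True; the paper's exact expansion and weighting of the Taylor terms may differ from this sketch.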

Related research

Revisiting Loss Landscape for Adversarial Robustness (04/13/2020)
The study on improving the robustness of deep neural networks against ad...

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness (12/01/2021)
In response to the threat of adversarial examples, adversarial training ...

Enhancing Adversarial Training with Second-Order Statistics of Weights (03/11/2022)
Adversarial training has been shown to be one of the most effective appr...

Generalist: Decoupling Natural and Robust Generalization (03/24/2023)
Deep neural networks obtained by standard training have been constantly ...

Omnipotent Adversarial Training for Unknown Label-noisy and Imbalanced Datasets (07/14/2023)
Adversarial training is an important topic in robust deep learning, but ...

Towards Improving Adversarial Training of NLP Models (09/01/2021)
Adversarial training, a method for learning robust deep neural networks,...

Robustness, Privacy, and Generalization of Adversarial Training (12/25/2020)
Adversarial training can considerably robustify deep neural networks to ...
