Understanding the Interaction of Adversarial Training with Noisy Labels

02/06/2021
by Jianing Zhu, et al.

Noisy labels (NL) and adversarial examples both undermine trained models, but interestingly they have hitherto been studied independently. A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps required to successfully attack a point (i.e., to find an adversarial example in its proximity) is an effective measure of the robustness of that point. Given that natural data are clean, this measure reveals an intrinsic geometric property: how far a point is from its class boundary. Building on this finding, in this paper we investigate how AT interacts with NL. First, we find that if a point lies too close to its noisy-class boundary (e.g., a single PGD step is enough to attack it), it is likely to be mislabeled, which suggests adopting the number of PGD steps as a new sample-selection criterion for correcting NL. Second, we confirm that AT, with its strong smoothing effect, suffers less from NL (even without NL correction) than standard training (ST), which suggests that AT itself acts as an NL correction. Hence, AT with NL is helpful for improving even the natural accuracy, which again illustrates the superiority of AT as a general-purpose robust learning criterion.
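The first finding lends itself to a short illustration. Below is a minimal PyTorch sketch of the step-count criterion (not the authors' released code): run an L-infinity PGD attack on each training point, count the steps until the model's prediction flips, and flag points that flip within very few steps as likely mislabeled. The helper names pgd_steps_to_attack and flag_noisy, the attack budget (eps, alpha, max_steps), and the threshold tau are all illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the PGD-steps criterion for spotting mislabeled points.
# Assumptions (not from the paper): an L-inf budget of eps=8/255 with step
# size alpha=2/255, at most max_steps iterations, inputs in [0, 1], and a
# threshold tau on the step count for flagging a sample as noisy.
import torch
import torch.nn.functional as F

def pgd_steps_to_attack(model, x, y, eps=8 / 255, alpha=2 / 255, max_steps=10):
    """Return, per sample, the number of PGD steps until the model's
    prediction flips (max_steps if the attack never succeeds)."""
    model.eval()
    x_adv = x.clone().detach()
    steps = torch.full((x.size(0),), max_steps, dtype=torch.long, device=x.device)
    done = torch.zeros(x.size(0), dtype=torch.bool, device=x.device)
    for t in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
            flipped = (model(x_adv).argmax(dim=1) != y) & ~done
            steps[flipped] = t                        # record first success
            done |= flipped
        if done.all():
            break
    return steps

def flag_noisy(model, x, y, tau=1):
    """Flag samples attacked within tau steps as likely mislabeled."""
    return pgd_steps_to_attack(model, x, y) <= tau
```

Setting tau=1 mirrors the one-step example in the abstract: a point whose prediction flips after a single PGD step sits so close to its (possibly noisy) class boundary that its label deserves suspicion.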

Related research

10/05/2020
Geometry-aware Instance-reweighted Adversarial Training
In adversarial machine learning, there was a common belief that robustne...

11/23/2020
Learnable Boundary Guided Adversarial Training
Previous adversarial training raises model robustness under the compromi...

10/13/2020
Toward Few-step Adversarial Training from a Frequency Perspective
We investigate adversarial-sample generation methods from a frequency do...

11/24/2021
Subspace Adversarial Training
Single-step adversarial training (AT) has received wide attention as it ...

05/31/2021
NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?
Adversarial training (AT) based on minimax optimization is a popular lea...

05/23/2023
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning
In real life, adversarial attack to deep learning models is a fatal secu...

04/28/2022
Improving robustness of language models from a geometry-aware perspective
Recent studies have found that removing the norm-bounded projection and ...
