NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?

05/31/2021
by Jingfeng Zhang, et al.

Adversarial training (AT) based on minimax optimization is a popular learning style that enhances a model's adversarial robustness. Noisy labels (NL) commonly undermine learning and hurt a model's performance. Interestingly, these two research directions have rarely intersected. In this paper, we raise an intriguing question: does NL always hurt AT? Firstly, we find that injecting NL into the inner maximization that generates adversarial data implicitly augments the natural data, which benefits AT's generalization. Secondly, we find that injecting NL into the outer minimization that learns the model serves as a regularization that alleviates robust overfitting, which benefits AT's robustness. To enhance AT's adversarial robustness, we propose "NoiLIn", which gradually increases Noisy Labels Injection over the course of AT. Empirically, NoiLIn answers the question above in the negative: adversarial robustness can indeed be enhanced by NL injection. Philosophically, we offer a new perspective on learning with NL: NL should not always be deemed detrimental, and even when the training set contains no NL, we may consider injecting it deliberately.
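The sketch below illustrates the NoiLIn idea as described in the abstract: standard PGD-based adversarial training in which a fraction of training labels is randomly flipped each epoch and that fraction grows over training. The linear noise schedule, the PGD hyperparameters, and the helper names (symmetric_label_noise, pgd_attack, noilin_adversarial_training) are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of NoiLIn-style adversarial training (assumptions noted above):
# noisy labels are used both in the inner maximization (attack generation)
# and in the outer minimization (parameter update), with the injection rate
# increased gradually over epochs.
import torch
import torch.nn.functional as F


def symmetric_label_noise(labels, noise_rate, num_classes):
    """Randomly flip a `noise_rate` fraction of labels to other classes."""
    flip_mask = torch.rand_like(labels, dtype=torch.float) < noise_rate
    random_labels = torch.randint_like(labels, num_classes)
    return torch.where(flip_mask, random_labels, labels)


def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Inner maximization: PGD adversarial examples w.r.t. the (noisy) labels y."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def noilin_adversarial_training(model, loader, optimizer, num_classes,
                                epochs=100, max_noise=0.4):
    for epoch in range(epochs):
        # Assumed schedule: linearly increase the injected label-noise rate.
        noise_rate = max_noise * epoch / max(epochs - 1, 1)
        for x, y in loader:
            y_noisy = symmetric_label_noise(y, noise_rate, num_classes)
            x_adv = pgd_attack(model, x, y_noisy)            # NL in inner maximization
            loss = F.cross_entropy(model(x_adv), y_noisy)    # NL in outer minimization
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In this sketch the same flipped labels drive both the attack and the update; the paper's adaptive schedule for the injection rate is replaced here by a simple linear ramp for brevity.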

Related research

12/15/2021 | On the Convergence and Robustness of Adversarial Training
Improving the robustness of deep neural networks (DNNs) to adversarial e...

04/25/2020 | Improved Adversarial Training via Learned Optimizer
Adversarial attack has recently become a tremendous threat to deep learn...

02/26/2020 | Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Adversarial training based on the minimax formulation is necessary for o...

01/12/2022 | Towards Adversarially Robust Deep Image Denoising
This work systematically investigates the adversarial robustness of deep...

02/06/2021 | Understanding the Interaction of Adversarial Training with Noisy Labels
Noisy labels (NL) and adversarial examples both undermine trained models...

07/14/2023 | Omnipotent Adversarial Training for Unknown Label-noisy and Imbalanced Datasets
Adversarial training is an important topic in robust deep learning, but ...

06/15/2023 | Exact Count of Boundary Pieces of ReLU Classifiers: Towards the Proper Complexity Measure for Classification
Classic learning theory suggests that proper regularization is the key t...
