Towards Noise-Robust Neural Networks via Progressive Adversarial Training

09/11/2019
by Hang Yu, et al.

Adversarial examples, inputs intentionally designed to mislead deep neural networks, have attracted great attention in recent years. Although a series of defense strategies have been developed and achieve encouraging adversarial robustness, most of the resulting models remain vulnerable to the corruptions more commonly encountered in the real world, e.g., Gaussian noise and blur. In this paper, we show both theoretically and empirically that there is an inherent connection between adversarial robustness and corruption robustness. Building on this finding, we propose a more powerful training method, Progressive Adversarial Training (PAT), which progressively adds diversified adversarial noise during training and thereby, through higher training data complexity, obtains models that are robust to both adversarial examples and common corruptions. We also show theoretically that PAT promises better generalization ability. Experiments on MNIST, CIFAR-10, and SVHN show that PAT enhances the robustness and generalization of state-of-the-art network architectures, performing well across the board compared with various augmentation methods. Moreover, we propose Mixed Test to evaluate model generalization ability more fairly.
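
The abstract gives no pseudocode, but its core idea, a perturbation budget that grows as training progresses while mixing diversified noise sources, can be sketched. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the linear epsilon schedule, the single FGSM-style gradient step, and the 50/50 adversarial/Gaussian mix are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def pat_batch(model, x, y, epoch, total_epochs, eps_max=8 / 255):
    """Hypothetical PAT step: the noise budget eps grows linearly with
    training progress, and each batch mixes adversarial (FGSM-style) and
    Gaussian noise to diversify the training distribution."""
    eps = eps_max * (epoch + 1) / total_epochs  # assumed linear schedule

    # Adversarial noise at the current budget (single gradient-sign step).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    # Random Gaussian corruption at a matching scale.
    x_rnd = (x + eps * torch.randn_like(x)).clamp(0.0, 1.0)

    # Assumed 50/50 per-example mix of the two noise sources.
    mask = (torch.rand(x.size(0), 1, 1, 1, device=x.device) < 0.5).float()
    return mask * x_adv + (1.0 - mask) * x_rnd
```

A training loop would call this once per batch and optimize the usual cross-entropy loss on the returned examples, so the model sees both noise families at every stage of the growing budget.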

Related research

Training Robust Deep Neural Networks via Adversarial Noise Propagation (09/19/2019)
Deep neural networks have been found vulnerable to noises like adversari...

AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance (07/21/2022)
Deep neural networks are powerful, but they also have shortcomings such ...

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization (03/05/2020)
Adversarial examples cause neural networks to produce incorrect outputs ...

RANDOM MASK: Towards Robust Convolutional Neural Networks (07/27/2020)
Robustness of neural networks has recently been highlighted by the adver...

Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks (06/09/2022)
Adversarial examples, which are usually generated for specific inputs wi...

Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier (06/21/2021)
In the last decade, motivated by the success of Deep Learning, the scien...

A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies (03/26/2022)
In the last few decades, deep neural networks have achieved remarkable...
