Training Robust Deep Neural Networks via Adversarial Noise Propagation

09/19/2019
by Aishan Liu, et al.

Deep neural networks have been found vulnerable in practice to noise such as adversarial examples and corruptions. A number of adversarial defense methods have been developed, and they do improve model robustness against adversarial examples. However, because most of them rely solely on training with data mixed with noise, they still fail to defend against more general types of noise. Motivated by the fact that hidden layers play a very important role in maintaining a robust model, this paper proposes a simple yet powerful training algorithm named Adversarial Noise Propagation (ANP), which injects diversified noise into the hidden layers in a layer-wise manner. We show that ANP can be implemented efficiently by exploiting the backward-forward style in which deep models are commonly trained. To comprehensively understand the behaviors and contributions of hidden layers, we further examine the insensitivity of hidden representations and their alignment with human visual perception. Extensive experiments on MNIST, CIFAR-10, CIFAR-10-C, CIFAR-10-P and ImageNet demonstrate that ANP gives deep models strong robustness against generalized noise, including both adversarial and corrupted inputs, and significantly outperforms various adversarial defense methods.
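To make the idea concrete, here is a minimal PyTorch sketch of layer-wise adversarial noise injection in the spirit of ANP: noise tensors are added to hidden activations through forward hooks, and the gradients needed to update that noise come from the backward passes that training already performs. The hook-based injection, the step size eta, the number of inner rounds k, and the helper anp_training_step are all illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F


def anp_training_step(model, layers, x, y, optimizer, eta=0.1, k=2):
    """One training step with layer-wise adversarial noise on hidden activations.

    layers: modules whose outputs receive injected noise (hypothetical
    interface; eta and k are assumed hyperparameters, not from the paper).
    """
    noises = [None] * len(layers)  # one noise tensor per chosen hidden layer

    def make_hook(i):
        def hook(module, inputs, output):
            if noises[i] is None:
                # Lazily create zero noise matching the activation's shape.
                noises[i] = torch.zeros_like(output, requires_grad=True)
            return output + noises[i]
        return hook

    handles = [layer.register_forward_hook(make_hook(i))
               for i, layer in enumerate(layers)]

    # k backward-forward rounds: each backward pass yields gradients w.r.t.
    # the hidden activations, which update the layer-wise noise by gradient
    # ascent on the loss (a sign step is assumed here for simplicity).
    for _ in range(k):
        loss = F.cross_entropy(model(x), y)
        grads = torch.autograd.grad(loss, noises)
        with torch.no_grad():
            for n, g in zip(noises, grads):
                n += eta * g.sign()

    # Final forward-backward pass with the accumulated noise updates the
    # model parameters as usual.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

    for h in handles:
        h.remove()  # leave the model clean for evaluation
    return loss.item()
```

A typical call might pass, say, layers=[model.layer1, model.layer2] for a torchvision ResNet; which layers to perturb, and with how much budget, is exactly the layer-wise design choice the paper studies.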

Related research

09/11/2019
Towards Noise-Robust Neural Networks via Progressive Adversarial Training
Adversarial examples, intentionally designed inputs tending to mislead d...

08/05/2019
Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve
Deep neural networks are being applied in many tasks with encouraging re...

06/09/2022
Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks
Adversarial examples, which are usually generated for specific inputs wi...

10/25/2017
mixup: Beyond Empirical Risk Minimization
Large deep neural networks are powerful, but exhibit undesirable behavio...

05/24/2018
Laplacian Power Networks: Bounding Indicator Function Smoothness for Adversarial Defense
Deep Neural Networks often suffer from lack of robustness to adversarial...

04/28/2017
Parseval Networks: Improving Robustness to Adversarial Examples
We introduce Parseval networks, a form of deep neural networks in which ...

04/13/2022
Defensive Patches for Robust Recognition in the Physical World
To operate in real-world high-stakes environments, deep learning systems...
