Adversarial Noise Layer: Regularize Neural Network By Adding Noise

05/21/2018
by Zhonghui You, et al.

In this paper, we introduce a novel regularization method called Adversarial Noise Layer (ANL), which significantly improves a CNN's generalization ability by adding adversarial noise to the hidden layers. ANL is easy to implement and can be integrated with most CNN-based models. We compare the impact of different types of noise and visually demonstrate that adversarial noise guides CNNs to learn to extract cleaner feature maps, further reducing the risk of over-fitting. We also find that models trained with ANL are more robust to FGSM and IFGSM attacks. Code is available at: https://github.com/youzhonghui/ANL
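The abstract states that ANL is easy to integrate into existing CNNs. As a rough sketch of the idea only (the authors' actual implementation lives in the repository linked above; the class name AdversarialNoiseLayer and the epsilon value below are illustrative assumptions), a hidden-layer adversarial-noise module in PyTorch might look like this:

```python
import torch
import torch.nn as nn

class AdversarialNoiseLayer(nn.Module):
    """Injects gradient-sign noise into a hidden activation during training.

    The noise direction is recorded from the previous backward pass, so each
    training step perturbs the feature map in the direction of higher loss.
    Illustrative sketch, not the paper's exact implementation.
    """
    def __init__(self, epsilon=0.1):
        super().__init__()
        self.epsilon = epsilon
        self._noise = None  # adversarial noise saved at the last backward pass

    def forward(self, x):
        if not self.training:
            return x  # identity mapping at evaluation time
        if self._noise is None or self._noise.shape != x.shape:
            self._noise = torch.zeros_like(x)  # first step / batch-size change
        out = x + self._noise
        if out.requires_grad:
            # Record the adversarial direction for the next training step.
            out.register_hook(self._update_noise)
        return out

    def _update_noise(self, grad):
        self._noise = self.epsilon * grad.detach().sign()
```

A layer like this could be dropped between a convolution and its activation, e.g. `nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), AdversarialNoiseLayer(0.1), nn.ReLU())`, and behaves as an identity at test time.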

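For context, FGSM and IFGSM are the standard gradient-sign attacks of Goodfellow et al. and Kurakin et al., not methods introduced in this paper. A minimal sketch of how a trained model would be probed with them (generic formulations, not the authors' evaluation code; inputs assumed normalized to [0, 1]):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def ifgsm(model, x, y, eps, steps=10):
    """Iterative FGSM: small FGSM steps, clipped to the eps-ball around x."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    alpha = eps / steps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project to ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```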
Related research

08/29/2021
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks
Adversarial training has been proven to be a powerful regularization met...

09/17/2020
Large Norms of CNN Layers Do Not Hurt Adversarial Robustness
Since the Lipschitz properties of convolutional neural network (CNN) are...

07/14/2020
Patch-wise Attack for Fooling Deep Neural Network
By adding human-imperceptible noise to clean images, the resultant adver...

07/22/2017
PatchShuffle Regularization
This paper focuses on regularizing the training of the convolutional neu...

06/12/2018
Convolutional Neural Networks for Aircraft Noise Monitoring
Air travel is one of the fastest growing modes of transportation, howeve...

05/16/2022
Robust Representation via Dynamic Feature Aggregation
Deep convolutional neural network (CNN) based models are vulnerable to t...

03/29/2023
ALUM: Adversarial Data Uncertainty Modeling from Latent Model Uncertainty Compensation
It is critical that the models pay attention not only to accuracy but al...
