Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness

03/02/2020
by Ahmadreza Jeddi, et al.

While deep neural networks have achieved state-of-the-art performance across a wide variety of applications, their vulnerability to adversarial attacks limits their deployment in safety-critical applications. Alongside other adversarial defense approaches being investigated, there has been recent interest in improving the adversarial robustness of deep neural networks by introducing perturbations during the training process. However, such methods rely on fixed, pre-defined perturbations and require significant hyper-parameter tuning, which makes them difficult to apply in a general fashion. In this study, we introduce Learn2Perturb, an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks. More specifically, we introduce novel perturbation-injection modules that are incorporated at each layer to perturb the feature space and increase uncertainty in the network. This feature perturbation is performed at both the training and the inference stages. Furthermore, inspired by the Expectation-Maximization algorithm, an alternating back-propagation training procedure is introduced to train the network and noise parameters consecutively. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed Learn2Perturb method yields deep neural networks that are 4-7% more robust against l_∞ FGSM and PGD adversarial attacks and that significantly outperform the state of the art against the l_2 C&W attack and a wide range of well-known black-box attacks.
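The abstract describes two mechanisms: perturbation-injection modules with learnable noise parameters added at each layer, and an EM-inspired alternating schedule that updates the network weights and the noise parameters in turn. As a rough illustration only, here is a minimal PyTorch sketch of what these could look like; the names (PerturbationInjection, alternating_step), the per-channel Gaussian parameterization, and the optimizer split are all assumptions of the sketch, which also omits details of the paper's actual objective (e.g., any regularization on the noise scales).

```python
import torch
import torch.nn as nn

class PerturbationInjection(nn.Module):
    """Hypothetical perturbation-injection module: adds zero-mean Gaussian
    noise with a learnable per-channel scale. Unlike dropout, it stays
    active at inference time, since the paper perturbs features at both
    the training and inference stages."""

    def __init__(self, num_channels: int, init_scale: float = 0.1):
        super().__init__()
        # one learnable standard deviation per feature-map channel (assumed shape)
        self.sigma = nn.Parameter(torch.full((1, num_channels, 1, 1), init_scale))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # sample fresh noise on every forward pass; .abs() keeps the scale non-negative
        return x + torch.randn_like(x) * self.sigma.abs()

def alternating_step(model, x, y, weight_opt, sigma_opt):
    """One EM-inspired alternation: update the network weights with the
    noise parameters held fixed, then update the noise parameters."""
    criterion = nn.CrossEntropyLoss()

    weight_opt.zero_grad()
    criterion(model(x), y).backward()
    weight_opt.step()  # steps only the network-weight parameter group

    sigma_opt.zero_grad()
    criterion(model(x), y).backward()
    sigma_opt.step()   # steps only the sigma (noise) parameter group
```

The two optimizers would be built over disjoint parameter groups, e.g. `sigma_params = [p for n, p in model.named_parameters() if "sigma" in n]` for the noise scales and the remaining parameters for the network weights; this split is likewise an assumption of the sketch, not the authors' code.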


Related research

04/12/2021 - Sparse Coding Frontend for Robust Neural Networks
Deep Neural Networks are known to be vulnerable to small, adversarially ...

11/18/2020 - Self-Gradient Networks
The incredible effectiveness of adversarial attacks on fooling deep neur...

07/05/2018 - Explainable Learning: Implicit Generative Modelling during Training for Adversarial Robustness
We introduce Explainable Learning (ExL), an approach for training neural ...

04/16/2021 - Uncertainty Surrogates for Deep Learning
In this paper we introduce a novel way of estimating prediction uncertai...

03/04/2020 - Colored Noise Injection for Training Adversarially Robust Neural Networks
Even though deep learning has shown unmatched performance on various ta...

01/01/2022 - Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness
It is well-known that deep neural networks (DNNs) have shown remarkable ...

05/31/2021 - Adaptive Feature Alignment for Adversarial Training
Recent studies reveal that Convolutional Neural Networks (CNNs) are typi...
