Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization

12/03/2020
by Aishan Liu, et al.

There is now extensive evidence that deep neural networks are vulnerable to adversarial examples, motivating the development of defenses against adversarial attacks. However, existing adversarial defenses typically improve model robustness against a single, specific perturbation type. Some recent methods improve robustness against attacks in multiple ℓ_p balls, but their performance against each individual perturbation type remains far from satisfactory. To better understand this phenomenon, we propose the multi-domain hypothesis: different types of adversarial perturbations are drawn from different domains. Guided by this hypothesis, we propose Gated Batch Normalization (GBN), a novel building block for deep neural networks that improves robustness against multiple perturbation types. GBN consists of a gated sub-network and a multi-branch batch normalization (BN) layer: the gated sub-network separates different perturbation types, and each BN branch handles a single perturbation type, learning domain-specific statistics for input transformation. Features from the different branches are then aligned as domain-invariant representations for the subsequent layers. We perform extensive evaluations of our approach on MNIST, CIFAR-10, and Tiny-ImageNet, and demonstrate that GBN outperforms previous defenses against multiple perturbation types, i.e., ℓ_1, ℓ_2, and ℓ_∞ perturbations, by large margins of 10-20%.
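For concreteness, the block described above can be sketched in a few lines of PyTorch. This is only an illustration assembled from the abstract, not the authors' released code: the class name GatedBatchNorm2d, the pooling-based gate, the soft-mixture forward pass, and the default of three domains are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBatchNorm2d(nn.Module):
    """GBN-style block (illustrative sketch): a gating sub-network assigns
    each input to one of K domain-specific BatchNorm branches (e.g., one per
    perturbation type), and branch outputs are mixed so that subsequent
    layers see domain-invariant features."""

    def __init__(self, num_features: int, num_domains: int = 3):
        super().__init__()
        # One BN branch per perturbation domain; each branch keeps its own
        # running statistics and affine parameters.
        self.branches = nn.ModuleList(
            [nn.BatchNorm2d(num_features) for _ in range(num_domains)]
        )
        # Lightweight gate: pool the feature map and predict a distribution
        # over the K domains.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(num_features, num_domains),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(x), dim=1)                # (B, K)
        outs = torch.stack([bn(x) for bn in self.branches], 1)  # (B, K, C, H, W)
        # Weight each branch's normalized features by the gate's soft domain
        # assignment and sum, yielding a single (B, C, H, W) output.
        return (weights.view(x.size(0), -1, 1, 1, 1) * outs).sum(dim=1)
```

In the paper's setting, the gate could plausibly be supervised during training with the known perturbation type of each adversarial example (ℓ_1, ℓ_2, or ℓ_∞); the soft mixture shown here is one reasonable way to combine branches at test time, when the perturbation type is unknown.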


