Towards an Adversarially Robust Normalization Approach

06/19/2020
by Muhammad Awais, et al.

Batch Normalization (BatchNorm) is effective for improving the performance and accelerating the training of deep neural networks. However, it has also been shown to be a cause of adversarial vulnerability: networks trained without it are more robust to adversarial attacks. In this paper, we investigate how BatchNorm causes this vulnerability and propose a new normalization technique that is robust to adversarial attacks. We first observe that adversarial images tend to shift the distribution of BatchNorm's input, and this shift makes the population statistics estimated at train time inaccurate. We hypothesize that these inaccurate statistics make models with BatchNorm more vulnerable to adversarial attacks. We test this hypothesis by replacing the train-time estimated statistics with statistics computed from the inference-time batch, and find that the adversarial vulnerability of BatchNorm disappears when these statistics are used. However, without the estimated statistics, BatchNorm cannot be used in practice when large batches of input are unavailable at inference. To mitigate this, we propose Robust Normalization (RobustNorm), an adversarially robust version of BatchNorm. We show experimentally that models trained with RobustNorm perform better in adversarial settings while retaining all the benefits of BatchNorm. Code is available at <https://github.com/awaisrauf/RobustNorm>.
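
To make the abstract's diagnostic concrete, below is a minimal PyTorch sketch (PyTorch is an assumption based on the linked repository; this is not the authors' code, and `BatchStatsNorm2d` / `use_batch_stats` are hypothetical names). It swaps each BatchNorm layer for a variant that normalizes with the inference-time batch statistics instead of the train-time running estimates, which is the replacement the abstract describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BatchStatsNorm2d(nn.BatchNorm2d):
    """BatchNorm2d variant that always normalizes with the mean/variance
    of the current batch, ignoring the stored running statistics."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # training=True makes F.batch_norm use the current batch's
        # statistics; passing None for the running buffers means the
        # train-time estimates are neither read nor updated.
        return F.batch_norm(
            x, None, None,
            weight=self.weight, bias=self.bias,
            training=True, momentum=0.0, eps=self.eps,
        )

def use_batch_stats(module: nn.Module) -> nn.Module:
    """Recursively replace each BatchNorm2d with the batch-stats variant,
    reusing the learned affine parameters (gamma/beta)."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            bn = BatchStatsNorm2d(child.num_features, eps=child.eps,
                                  affine=child.affine)
            if child.affine:
                bn.weight, bn.bias = child.weight, child.bias
            setattr(module, name, bn)
        else:
            use_batch_stats(child)
    return module
```

Note that this diagnostic is only usable when a reasonably large batch is available at inference, which is exactly the practical limitation the abstract says motivates RobustNorm. In PyTorch, a similar effect can also be obtained by constructing `BatchNorm2d` with `track_running_stats=False`.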

Related Research

Batch Normalization is a Cause of Adversarial Vulnerability (05/06/2019)
Batch normalization (batch norm) is often used in an attempt to stabiliz...

Delving into the Estimation Shift of Batch Normalization in a Network (03/21/2022)
Batch normalization (BN) is a milestone technique in deep learning. It n...

Removing Batch Normalization Boosts Adversarial Training (07/04/2022)
Adversarial training (AT) defends deep neural networks against adversari...

Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features (10/07/2020)
Batch normalization (BN) has been widely used in modern deep neural netw...

Understanding the Failure of Batch Normalization for Transformers in NLP (10/11/2022)
Batch Normalization (BN) is a core and prevalent technique in accelerati...

Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization (12/03/2020)
There is now extensive evidence demonstrating that deep neural networks ...

Revisiting Batch Normalization for Improving Corruption Robustness (10/07/2020)
Modern deep neural networks (DNN) have demonstrated remarkable success i...
