Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features

10/07/2020
by   Philipp Benz, et al.

Batch normalization (BN) has been widely used in modern deep neural networks (DNNs) due to its fast convergence. BN is observed to increase model accuracy, but at the cost of adversarial robustness. We conjecture that this increased adversarial vulnerability is caused by BN shifting the model to rely more on non-robust features (NRFs). Our exploration finds that other normalization techniques also increase adversarial vulnerability, and our conjecture is further supported by analyses of model corruption robustness and feature transferability. Defining a classifier DNN as a feature set F, we propose a framework for disentangling the robust usefulness of F into F's usefulness and F's robustness. We adopt a local-linearity-based metric, termed LIGS, to define and quantify F's robustness. Measuring F's robustness with LIGS provides direct insight into shifts in feature robustness, independent of usefulness. Moreover, the trend of LIGS over the whole training stage sheds light on the order in which features are learned, i.e., from RFs (robust features) to NRFs, or vice versa. Our work analyzes how BN and other factors influence a DNN from the feature perspective. Prior works mainly adopt accuracy to evaluate such influence, which captures only F's usefulness; we argue that evaluating F's robustness is equally important, and our work fills this gap.
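For intuition, a local-linearity metric of this kind can be sketched as the cosine similarity between a model's input gradient at a point and its input gradients at randomly perturbed neighbors: the closer the similarity stays to 1, the more locally linear (and typically more robust) the loss surface. The abstract does not spell out the exact LIGS definition, so the PyTorch sketch below is one plausible instantiation under stated assumptions; the function names, the perturbation budget eps, and the sample count are illustrative, not the paper's method.

```python
# Illustrative sketch of a local-linearity style metric in the spirit of LIGS.
# Everything here (helper names, eps, n_samples) is an assumption for clarity,
# not the paper's exact definition.
import torch
import torch.nn.functional as F


def input_gradient(model, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input batch x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return grad


def local_linearity_score(model, x, y, eps=8 / 255, n_samples=4):
    """Average cosine similarity between the input gradient at x and the
    input gradients at random points in an eps-ball around x.
    Values near 1 suggest a locally linear, smoother loss surface."""
    g0 = input_gradient(model, x, y).flatten(1)
    sims = []
    for _ in range(n_samples):
        delta = torch.empty_like(x).uniform_(-eps, eps)
        g = input_gradient(model, x + delta, y).flatten(1)
        sims.append(F.cosine_similarity(g0, g, dim=1))
    return torch.stack(sims).mean()


# Hypothetical usage: compare a classifier trained with BN against a BN-free
# one (both assumed pre-trained and switched to eval mode so BN uses its
# running statistics):
#   model_bn.eval(); model_no_bn.eval()
#   x, y = next(iter(test_loader))
#   print(local_linearity_score(model_bn, x, y),
#         local_linearity_score(model_no_bn, x, y))
```

Tracking such a score over training epochs, rather than at a single checkpoint, is what would reveal the order in which robust and non-robust features are learned.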


