Removing Batch Normalization Boosts Adversarial Training

07/04/2022
by Haotao Wang, et al.

Adversarial training (AT) defends deep neural networks against adversarial attacks. One challenge that limits its practical application is the performance degradation on clean samples. A major bottleneck identified by previous works is the widely used batch normalization (BN), which struggles to model the different statistics of clean and adversarial training samples in AT. Although the dominant approach is to extend BN to capture this mixture of distributions, we propose to eliminate this bottleneck entirely by removing all BN layers from AT. Our normalizer-free robust training (NoFrost) method extends recent advances in normalizer-free networks to AT, exploiting their previously unexplored advantage in handling the mixture-distribution challenge. We show that NoFrost achieves adversarial robustness with only a minor sacrifice in clean-sample accuracy. On ImageNet with ResNet50, NoFrost achieves 74.06% clean accuracy, a drop of merely 2.00% from standard training. In contrast, BN-based AT obtains 59.28% clean accuracy, a significant 16.78% drop from standard training. In addition, NoFrost achieves 23.56% adversarial robustness against the PGD attack, improving on the 13.57% robustness of BN-based AT. We observe better model smoothness and larger decision margins from NoFrost, which make its models less sensitive to input perturbations and thus more robust. Moreover, when more data augmentations are incorporated into NoFrost, it achieves comprehensive robustness against multiple distribution shifts. Code and pre-trained models are publicly available at https://github.com/amazon-research/normalizer-free-robust-training.
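The recipe the abstract describes can be sketched concretely: replace every BN layer with a normalizer-free building block (e.g., scaled weight standardization, as in the normalizer-free networks NoFrost builds on) and run adversarial training on clean plus PGD-perturbed samples. Below is a minimal PyTorch sketch of that idea; the names (`WSConv2d`, `pgd_attack`, `at_step`), the attack hyperparameters, and the equal clean/adversarial loss weighting are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Convolution with Scaled Weight Standardization, a BN-free way to
    keep activations well-scaled: weights are standardized per output
    channel and rescaled by a learnable gain, so no batch statistics
    are ever computed or stored."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))

    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), unbiased=False, keepdim=True)
        fan_in = w[0].numel()
        w = (w - mean) * torch.rsqrt(var * fan_in + 1e-4) * self.gain
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


def pgd_attack(model, x, y, eps=4 / 255, alpha=1 / 255, steps=5):
    """L-infinity PGD: repeatedly ascend the loss, then project back
    into the eps-ball around the clean input (hyperparameters here are
    illustrative, not the paper's settings)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()


def at_step(model, opt, x, y):
    """One adversarial-training step on clean + adversarial samples.
    With no BN anywhere in the model, both halves of the batch share
    the same parameters and there are no running batch statistics to
    be corrupted by the mixed clean/adversarial distribution."""
    x_adv = pgd_attack(model, x, y)
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()


# Tiny usage example: a BN-free toy classifier on random data.
model = nn.Sequential(
    WSConv2d(3, 16, 3, padding=1), nn.ReLU(),
    WSConv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(at_step(model, opt, x, y))
```

The key design point in this sketch is that there are simply no normalization statistics to mix: the two-distribution problem that motivates BN extensions in prior AT work never arises, and the robustness/accuracy trade-off is governed by the attack strength and loss weighting instead.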
