Adversarially Robust Classifier with Covariate Shift Adaptation

02/09/2021
by Jay Nandy, et al.

Existing adversarially trained models typically perform inference on test examples independently of each other. This mode of testing cannot handle covariate shift in the test samples, and the performance of these models often degrades significantly as a result. In this paper, we show that a simple adaptive batch normalization (BN) technique, which re-estimates the batch-normalization statistics during inference, can significantly improve the robustness of these models to random perturbations, including Gaussian noise. This simple finding enables us to transform adversarially trained models into randomized smoothing classifiers that produce certified robustness to ℓ_2-bounded noise. We show that ℓ_2 certified robustness can be achieved even for models adversarially trained with ℓ_∞-bounded adversarial examples. We further demonstrate that the adaptive BN technique significantly improves robustness against common corruptions, while often enhancing performance against adversarial attacks. This enables us to achieve both adversarial and corruption robustness with the same classifier.
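Both steps are simple to implement. The sketch below is a minimal, hypothetical PyTorch illustration, not the authors' released code: enable_adaptive_bn applies prediction-time batch normalization by letting every BatchNorm layer normalize with the statistics of the incoming test batch, and smoothed_predict shows how such a model can then be queried as a randomized-smoothing classifier via a majority vote over Gaussian-perturbed copies of the input (the confidence-bound machinery required for a formal ℓ_2 certificate is omitted).

```python
import torch
import torch.nn as nn


def enable_adaptive_bn(model: nn.Module) -> nn.Module:
    """Prediction-time BN: keep the model in eval mode but switch every
    BatchNorm layer back to train mode, so each test batch is normalized
    with its own mean/variance rather than the running statistics
    estimated on the (unshifted) training distribution."""
    model.eval()
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.train()  # use per-batch statistics at inference time
    return model


@torch.no_grad()
def smoothed_predict(model: nn.Module, x: torch.Tensor,
                     sigma: float = 0.25, n_samples: int = 100) -> int:
    """Monte Carlo estimate of the randomized-smoothing classifier
    g(x) = argmax_c P[f(x + eps) = c],  eps ~ N(0, sigma^2 I).

    Returns the majority-vote class over Gaussian-perturbed copies of x.
    A formal ell_2 certificate would additionally require a lower
    confidence bound on the top-class probability, omitted here."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
    votes = model(noisy).argmax(dim=1)
    return torch.bincount(votes).argmax().item()
```

Note that because the BN layers stay in training mode, the noisy copies inside smoothed_predict are themselves normalized with batch statistics, which is what lets the adapted model tolerate the injected Gaussian noise.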


