Adversarially Robust Classifier with Covariate Shift Adaptation

02/09/2021 · by Jay Nandy, et al.

Existing adversarially trained models typically perform inference on test examples independently of one another. This mode of testing cannot handle covariate shift in the test samples, and as a result the performance of these models often degrades significantly. In this paper, we show that a simple adaptive batch normalization (BN) technique, which re-estimates the batch-normalization statistics during inference, can significantly improve the robustness of these models against random perturbations, including Gaussian noise. This finding enables us to transform adversarially trained models into randomized smoothing classifiers that produce certified robustness to ℓ_2 noise. We show that we can achieve ℓ_2 certified robustness even for models adversarially trained with ℓ_∞-bounded adversarial examples. We further demonstrate that the adaptive BN technique significantly improves robustness against common corruptions, while often enhancing performance against adversarial attacks. This allows a single classifier to achieve both adversarial and corruption robustness.
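The core idea the abstract describes, re-estimating BN statistics from the (possibly shifted) test batch at inference time, can be sketched in PyTorch as follows. This is an illustrative sketch, not the authors' reference implementation; the function name `adapt_bn` and the use of a cumulative moving average are assumptions.

```python
import torch
import torch.nn as nn

def adapt_bn(model, test_batch):
    # Hypothetical helper (not from the paper's code): re-estimate the
    # running mean/variance of every BatchNorm layer from the test batch,
    # so the normalization statistics match the shifted test distribution.
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # None => cumulative (exact) moving average
    # One forward pass in train mode recomputes the stats from this batch;
    # no_grad() because we only update BN buffers, not model weights.
    model.train()
    with torch.no_grad():
        model(test_batch)
    model.eval()
    return model
```

After adaptation, the model is evaluated as usual in `eval()` mode, but with BN statistics that reflect the test-time covariate shift rather than the clean training distribution.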
