Adversarially Robust Classifier with Covariate Shift Adaptation

02/09/2021
by Jay Nandy, et al.

Existing adversarially trained models typically perform inference on test examples independently of one another. This mode of testing cannot handle covariate shift in the test samples, and the performance of these models often degrades significantly as a result. In this paper, we show that a simple adaptive batch normalization (BN) technique, which re-estimates the batch-normalization parameters during inference, can significantly improve the robustness of these models to random perturbations, including Gaussian noise. This simple finding enables us to transform adversarially trained models into randomized smoothing classifiers that produce certified robustness to ℓ_2-bounded noise. We show that we can achieve ℓ_2 certified robustness even for models adversarially trained with ℓ_∞-bounded adversarial examples. We further demonstrate that the adaptive BN technique significantly improves robustness against common corruptions, while often enhancing performance against adversarial attacks. This allows a single classifier to achieve both adversarial and corruption robustness.
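The adaptive BN step described above amounts to refreshing the batch-norm running statistics on a batch of (possibly shifted) test inputs before prediction. Below is a minimal PyTorch-style sketch of that idea; the function and variable names are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

def adapt_bn_statistics(model: nn.Module, test_batch: torch.Tensor) -> nn.Module:
    """Re-estimate BatchNorm running statistics from a batch of test inputs."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()   # discard training-time statistics
            m.momentum = None         # use a cumulative moving average instead
    model.train()                     # BN layers only update stats in train mode
    with torch.no_grad():
        model(test_batch)             # forward pass solely to refresh BN stats
    model.eval()                      # freeze the adapted statistics
    return model

# Hypothetical usage: adapt on a batch of corrupted or noisy test images,
# then run standard inference with the adapted statistics.
# model = adapt_bn_statistics(adv_trained_model, corrupted_images)
# logits = model(corrupted_images)
```

Re-estimating statistics on Gaussian-noised inputs in this way is what lets an adversarially trained model be treated as the base classifier of a randomized smoothing scheme for ℓ_2 certification.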
