Prepare for the Worst: Generalizing across Domain Shifts with Adversarial Batch Normalization

by Manli Shu et al.

Adversarial training is the industry standard for producing models that are robust to small adversarial perturbations. However, machine learning practitioners need models that are robust to domain shifts that occur naturally, such as changes in the style or illumination of input images. Such changes in input distribution have been effectively modeled as shifts in the mean and variance of deep image features. We adapt adversarial training by adversarially perturbing these feature statistics, rather than image pixels, to produce models that are robust to domain shift. We also visualize images from adversarially crafted distributions. Our method, Adversarial Batch Normalization (AdvBN), significantly improves the performance of ResNet-50 on ImageNet-C (+8.1) over standard training practices. In addition, we demonstrate that AdvBN can also improve generalization on semantic segmentation.
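The core idea, perturbing the per-channel mean and variance of deep features rather than input pixels, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the multiplicative form of the perturbation, and the epsilon clipping are assumptions standing in for the adversarially optimized directions used in AdvBN.

```python
import numpy as np

def perturb_feature_stats(features, delta_mean, delta_std, eps=0.1):
    """Shift per-channel feature statistics, mimicking a domain shift.

    features: array of shape (N, C, H, W).
    delta_mean, delta_std: per-channel perturbation directions of shape (C,)
    (hypothetical; in adversarial training these would be chosen by gradient
    ascent on the loss, here they are just clipped to an eps-ball).
    """
    # Per-channel statistics over batch and spatial dimensions.
    mu = features.mean(axis=(0, 2, 3), keepdims=True)
    sigma = features.std(axis=(0, 2, 3), keepdims=True) + 1e-5
    normalized = (features - mu) / sigma
    # Bound the perturbation, then re-denormalize with shifted statistics.
    dm = np.clip(delta_mean, -eps, eps).reshape(1, -1, 1, 1)
    ds = np.clip(delta_std, -eps, eps).reshape(1, -1, 1, 1)
    return normalized * (sigma * (1.0 + ds)) + mu * (1.0 + dm)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))
shifted = perturb_feature_stats(x, rng.normal(size=3), rng.normal(size=3))
```

In the full method, such a perturbation layer is placed inside the network and the deltas are optimized to maximize the training loss, so the classifier downstream learns to cope with worst-case distribution shifts in feature space.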




