Prepare for the Worst: Generalizing across Domain Shifts with Adversarial Batch Normalization

09/18/2020
by Manli Shu, et al.

Adversarial training is the industry standard for producing models that are robust to small adversarial perturbations. However, machine learning practitioners need models that are robust to domain shifts that occur naturally, such as changes in the style or illumination of input images. Such changes in input distribution have been effectively modeled as shifts in the mean and variance of deep image features. We adapt adversarial training by adversarially perturbing these feature statistics, rather than image pixels, to produce models that are robust to domain shift. We also visualize images from adversarially crafted distributions. Our method, Adversarial Batch Normalization (AdvBN), significantly improves the performance of ResNet-50 on ImageNet-C (+8.1% over standard training practices). In addition, we demonstrate that AdvBN can also improve generalization on semantic segmentation.
