Robustness properties of Facebook's ResNeXt WSL models

07/17/2019
by A. Emin Orhan

We investigate the robustness properties of ResNeXt image recognition models trained with billion-scale weakly supervised data (ResNeXt WSL models). These models, recently made public by Facebook AI, were trained on 1B images from Instagram and fine-tuned on ImageNet. We show that these models display an unprecedented degree of robustness against common image corruptions and perturbations, as measured by the ImageNet-C and ImageNet-P benchmarks. The largest of the released models, in particular, achieves state-of-the-art results on both ImageNet-C and ImageNet-P by a large margin. The gains on ImageNet-C and ImageNet-P far outpace the gains on ImageNet validation accuracy, suggesting the former as more useful benchmarks for measuring further progress in image recognition. Remarkably, the ResNeXt WSL models even achieve a limited degree of adversarial robustness against state-of-the-art white-box attacks (10-step PGD attacks). However, in contrast to adversarially trained models, the robustness of the ResNeXt WSL models declines rapidly with the number of PGD steps, suggesting that these models do not achieve genuine adversarial robustness. Visualization of the learned features confirms this conclusion. Finally, we show that although the ResNeXt WSL models are more shape-biased in their predictions than comparable ImageNet-trained models, they remain much more texture-biased than humans.
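To make the attack referenced above concrete, here is a minimal sketch of a k-step L∞ projected gradient descent (PGD) attack. It is not the paper's implementation; for self-containment it attacks a toy logistic classifier with an analytic gradient rather than a ResNeXt network, but the loop structure (gradient ascent on the loss, signed step, projection back into the ε-ball) is the standard PGD procedure, and the `eps`, `alpha`, and `steps` parameters are illustrative choices.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD against a logistic classifier p = sigmoid(w.x + b).

    Performs `steps` rounds of signed gradient ascent on the cross-entropy
    loss, projecting back into the eps-ball around x after every step.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
        grad = (p - y) * w                     # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)  # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv
```

On a linear model like this toy example, the attack saturates at a corner of the ε-ball after roughly eps/alpha steps, so 10 and 50 steps give the same loss. On deep networks the loss surface is non-linear, and additional steps keep finding stronger perturbations; the gap between few-step and many-step PGD is exactly the diagnostic the paper uses to argue that the WSL models' apparent robustness is not genuine adversarial robustness.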
