Does enhanced shape bias improve neural network robustness to common corruptions?

04/20/2021
by Chaithanya Kumar Mummadi, et al.

Convolutional neural networks (CNNs) learn to extract representations of complex features, such as object shapes and textures, to solve image recognition tasks. Recent work indicates that CNNs trained on ImageNet are biased towards features that encode textures, and that these alone are sufficient to generalize to unseen test data from the same distribution as the training data, but often fail to generalize to out-of-distribution data. It has been shown that augmenting the training data with different image styles decreases this texture bias in favor of an increased shape bias while at the same time improving robustness to common corruptions, such as noise and blur. Commonly, this is interpreted as shape bias increasing corruption robustness. However, this relationship is only hypothesized. We perform a systematic study of different ways of composing inputs based on natural images, explicit edge information, and stylization. While stylization is essential for achieving high corruption robustness, we do not find a clear correlation between shape bias and robustness. We conclude that the data augmentation caused by style variation accounts for the improved corruption robustness, and the increased shape bias is only a byproduct.
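The stylization referred to above is commonly implemented in this line of work via adaptive instance normalization (AdaIN): content features are re-normalized so their per-channel statistics match those of a style image. The sketch below is illustrative, not the paper's pipeline; it shows only the core statistic-matching step on raw NumPy arrays, with shapes and names chosen for the example.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: shift/scale each channel of the
    content features so its mean and std match the style features.
    Both inputs are arrays of shape (channels, height, width)."""
    c_mu = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps  # avoid /0
    s_mu = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return s_std * (content_feat - c_mu) / c_std + s_mu

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 8, 8))   # stand-in content features
style = rng.normal(2.0, 0.5, size=(3, 8, 8))     # stand-in style features
stylized = adain(content, style)
# stylized keeps the content's spatial structure but carries the style's
# per-channel first- and second-order statistics
```

In stylization-based augmentation this operation is applied to intermediate encoder features rather than pixels, and a decoder then reconstructs the stylized training image.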


Related research:

11/29/2018: ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
08/24/2021: StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures
11/30/2020: Reducing Textural Bias Improves Robustness of Deep Segmentation CNNs
01/27/2021: Shape or Texture: Understanding Discriminative Features in CNNs
11/14/2022: Robustifying Deep Vision Models Through Shape Sensitization
09/15/2018: Neural Networks and Quantifier Conservativity: Does Data Distribution Affect Learnability?
06/19/2020: Frustratingly Simple Domain Generalization via Image Stylization
