BiasBed – Rigorous Texture Bias Evaluation

11/23/2022
by Nikolai Kalischek, et al.

The well-documented presence of texture bias in modern convolutional neural networks has led to a plethora of algorithms that promote an emphasis on shape cues, often to support generalization to new domains. Yet common datasets, benchmarks and general model selection strategies are missing, and there is no agreed, rigorous evaluation protocol. In this paper, we investigate the difficulties and limitations of training networks with reduced texture bias. In particular, we show that proper evaluation and meaningful comparison between methods are not trivial. We introduce BiasBed, a testbed for texture- and style-biased training that includes multiple datasets and a range of existing algorithms. It comes with an extensive evaluation protocol that includes rigorous hypothesis testing to gauge the significance of the results, despite the considerable training instability of some style-bias methods. Our extensive experiments shed new light on the need for careful, statistically founded evaluation protocols for style bias (and beyond). For example, we find that some algorithms proposed in the literature do not significantly mitigate the impact of style bias at all. With the release of BiasBed, we hope to foster a common understanding of consistent and meaningful comparisons, and consequently faster progress towards learning methods free of texture bias. Code is available at https://github.com/D1noFuzi/BiasBed
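The abstract's point about statistically founded comparisons can be illustrated with a minimal sketch: instead of declaring a winner from a single run, compare per-seed scores of two methods with a paired significance test. The sketch below is illustrative only, it is not BiasBed's actual protocol, and the helper function and accuracy numbers are hypothetical.

```python
import random
import statistics

def paired_permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided paired sign-flip permutation test on per-run score
    differences. Returns an estimated p-value for the null hypothesis
    that the two methods perform identically."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(statistics.mean(diffs))
    hits = 0
    for _ in range(n_perm):
        # Under the null, the sign of each paired difference is arbitrary.
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(statistics.mean(flipped)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical per-seed accuracies (made-up numbers, not BiasBed results)
baseline = [61.2, 60.8, 61.5, 60.9, 61.1]
debiased = [62.0, 61.7, 62.3, 61.9, 62.1]

p = paired_permutation_test(debiased, baseline)
```

With only five seeds, even a consistent ~0.9-point gap may not clear a 0.05 threshold, which is exactly why single-run comparisons of unstable style-bias methods can be misleading.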


Related research

- Shape-Texture Debiased Neural Network Training (10/12/2020)
  Shape and texture are two prominent and complementary cues for recognizi...
- Informative Dropout for Robust Representation Learning: A Shape-bias Perspective (08/10/2020)
  Convolutional Neural Networks (CNNs) are known to rely more on local tex...
- StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis (11/04/2021)
  Generating images that fit a given text description using machine learni...
- InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness (06/12/2022)
  Humans rely less on spurious correlations and trivial cues, such as text...
- An Investigation of Critical Issues in Bias Mitigation Techniques (04/01/2021)
  A critical problem in deep learning is that systems learn inappropriate ...
- In Search of Lost Domain Generalization (07/02/2020)
  The goal of domain generalization algorithms is to predict well on distr...
- Signal Strength and Noise Drive Feature Preference in CNN Image Classifiers (01/19/2022)
  Feature preference in Convolutional Neural Network (CNN) image classifie...
