BiasEnsemble: Revisiting the Importance of Amplifying Bias for Debiasing

05/29/2022
by Jungsoo Lee, et al.

In image classification, "debiasing" aims to train a classifier to be less susceptible to dataset bias, the strong correlation between peripheral attributes of data samples and a target class. For example, even if the frog class in the dataset mainly consists of frog images with a swamp background (i.e., bias-aligned samples), a debiased classifier should still correctly classify a frog at a beach (i.e., a bias-conflicting sample). Recent debiasing approaches commonly use two components: a biased model f_B and a debiased model f_D. f_B is trained to focus on bias-aligned samples, while f_D is trained mainly on bias-conflicting samples by concentrating on the samples that f_B fails to learn, which makes f_D less susceptible to the dataset bias. While state-of-the-art debiasing techniques have aimed to better train f_D, we focus on training f_B, a component overlooked until now. Our empirical analysis reveals that removing bias-conflicting samples from the training set of f_B is important for improving the debiasing performance of f_D, because bias-conflicting samples act as noisy samples when amplifying the bias of f_B. To this end, we propose BiasEnsemble, a novel biased-sample selection method that removes bias-conflicting samples by leveraging additional biased models to construct a bias-amplified dataset for training f_B. Our simple yet effective approach can be directly applied to existing reweighting-based debiasing approaches, yielding a consistent performance boost and achieving state-of-the-art performance on both synthetic and real-world datasets.
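To make the idea concrete, below is a minimal, illustrative sketch of agreement-based biased-sample selection in the spirit of the abstract: several auxiliary biased models are trained briefly, and only the samples that most of them already classify correctly (used here as a proxy for bias-aligned samples) are kept as the bias-amplified set for training f_B. This is not the authors' implementation; the toy data, model architecture, number of auxiliary models, training length, loss choice, and majority-vote threshold are all assumed placeholder choices.

```python
# Minimal sketch of BiasEnsemble-style biased-sample selection (illustrative only).
# Assumptions not taken from the paper: the toy data, model architecture, number
# of auxiliary models, training epochs, loss, and agreement threshold are all
# placeholder choices for demonstration.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)

# Toy dataset: 1000 samples, 16-dim features, 2 classes (stand-in for images).
X = torch.randn(1000, 16)
y = torch.randint(0, 2, (1000,))
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

def make_model():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

def train_briefly(model, loader, epochs=2):
    # Auxiliary biased models are trained for only a few epochs so that they
    # mostly fit the "easy" bias-aligned samples rather than the hard ones.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            ce(model(xb), yb).backward()
            opt.step()
    return model

# 1) Train several auxiliary biased models independently.
n_bias_models = 5
bias_models = [train_briefly(make_model(), loader) for _ in range(n_bias_models)]

# 2) Treat a sample as bias-aligned if most auxiliary models already classify
#    it correctly; bias-conflicting samples (which the biased models fail on)
#    are filtered out of the set used to train f_B.
with torch.no_grad():
    votes = torch.stack([(m(X).argmax(1) == y).long() for m in bias_models]).sum(0)
agreement_threshold = n_bias_models // 2 + 1   # majority vote (assumed rule)
aligned_mask = votes >= agreement_threshold

bias_amplified_set = TensorDataset(X[aligned_mask], y[aligned_mask])
print(f"kept {aligned_mask.sum().item()} / {len(X)} samples for training f_B")
```

In the reweighting-based pipelines the abstract refers to, this filtered set would be used to amplify bias in f_B, and f_D would then be trained with larger weights on the samples f_B still fails on; that reweighting step is omitted from the sketch for brevity.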

Related research

Learning Debiased Representation via Disentangled Feature Augmentation (07/03/2021)
Image classification models tend to make decisions based on peripheral a...

Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo Chamber (05/06/2023)
Neural networks often learn spurious correlations when exposed to biased...

Mitigating Dataset Bias by Using Per-sample Gradient (05/31/2022)
The performance of deep neural networks is strongly influenced by the tr...

Denoising after Entropy-based Debiasing: A Robust Training Method for Dataset Bias with Noisy Labels (12/01/2022)
Improperly constructed datasets can result in inaccurate inferences. For...

Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation (12/02/2021)
Despite their remarkable ability to generalize with over-capacity networ...

Stereotypical Bias Removal for Hate Speech Detection Task using Knowledge-based Generalizations (01/15/2020)
With the ever-increasing cases of hate spread on social media platforms,...

Dataset Bias Mitigation Through Analysis of CNN Training Scores (06/28/2021)
Training datasets are crucial for convolutional neural network-based alg...
