Mitigating Dataset Bias by Using Per-sample Gradient

05/31/2022
by   Sumyeong Ahn, et al.

The performance of deep neural networks is strongly influenced by how the training dataset is constructed. In particular, when attributes that are strongly correlated with the target attribute are present, the trained model can make unintended prejudgments and exhibit significant inference errors (i.e., the dataset bias problem). Various methods have been proposed to mitigate dataset bias, with emphasis on the weakly correlated samples, called bias-conflicting samples. These methods rely either on explicit bias labels provided by humans or on empirical correlation metrics (e.g., training loss). However, such labels incur human annotation costs, and such metrics lack sufficient theoretical justification. In this study, we propose a debiasing algorithm, called PGD (Per-sample Gradient-based Debiasing), that comprises three steps: (1) training a model with uniform batch sampling, (2) setting the importance of each sample in proportion to the norm of its per-sample gradient, and (3) training the model using importance-batch sampling with the probabilities obtained in step (2). Compared with existing baselines on various synthetic and real-world datasets, the proposed method achieves state-of-the-art accuracy on the classification task. Furthermore, we provide a theoretical understanding of how PGD mitigates dataset bias.
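The three-step recipe described above can be sketched in PyTorch as follows. This is a minimal illustration based only on the abstract, not the authors' released implementation: the helper names (compute_grad_norms, train_one_stage, pgd_debias), the use of WeightedRandomSampler, and the choice to keep training the same model in step (3) are assumptions made for the example.

```python
# Hypothetical sketch of the PGD recipe (not the official code).
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, WeightedRandomSampler

def compute_grad_norms(model, dataset, device="cpu"):
    """Step (2): per-sample gradient norm of the loss w.r.t. model parameters."""
    model.eval()
    norms = []
    for x, y in DataLoader(dataset, batch_size=1, shuffle=False):
        loss = F.cross_entropy(model(x.to(device)), y.to(device))
        grads = torch.autograd.grad(
            loss, [p for p in model.parameters() if p.requires_grad]
        )
        norms.append(torch.sqrt(sum(g.pow(2).sum() for g in grads)).item())
    return torch.tensor(norms)

def train_one_stage(model, loader, epochs, lr=1e-3, device="cpu"):
    """Plain SGD training loop, shared by steps (1) and (3)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x.to(device)), y.to(device)).backward()
            opt.step()

def pgd_debias(model, train_set, epochs=10, batch_size=128, device="cpu"):
    # (1) Train with uniform batch sampling.
    uniform_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    train_one_stage(model, uniform_loader, epochs, device=device)
    # (2) Sampling weights proportional to per-sample gradient norms.
    weights = compute_grad_norms(model, train_set, device)
    # (3) Retrain with importance-batch sampling driven by those weights.
    sampler = WeightedRandomSampler(weights, num_samples=len(train_set), replacement=True)
    weighted_loader = DataLoader(train_set, batch_size=batch_size, sampler=sampler)
    train_one_stage(model, weighted_loader, epochs, device=device)
    return model
```

Note that computing exact per-sample gradients one sample at a time, as in this sketch, is expensive for large models; a practical implementation would likely batch or approximate this computation.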


Related research

05/29/2022 · BiasEnsemble: Revisiting the Importance of Amplifying Bias for Debiasing
In image classification, "debiasing" aims to train a classifier to be le...

11/19/2019 · Carpe Diem, Seize the Samples Uncertain "At the Moment" for Adaptive Batch Selection
The performance of deep neural networks is significantly affected by how...

12/01/2022 · Denoising after Entropy-based Debiasing: A Robust Training Method for Dataset Bias with Noisy Labels
Improperly constructed datasets can result in inaccurate inferences. For...

07/03/2021 · Learning Debiased Representation via Disentangled Feature Augmentation
Image classification models tend to make decisions based on peripheral a...

04/26/2022 · Unsupervised Learning of Unbiased Visual Representations
Deep neural networks are known for their inability to learn robust repre...

12/05/2022 · Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling
Trying to capture the sample-label relationship, conditional generative ...

09/19/2022 · UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup
Subpopulation shift wildly exists in many real-world machine learning ap...
