Bias Mimicking: A Simple Sampling Approach for Bias Mitigation

09/30/2022
by Maan Qraitem, et al.

Prior work has shown that visual recognition datasets frequently under-represent sensitive groups (e.g., female) within a category (e.g., programmers). This dataset bias can lead to models that learn spurious correlations between class labels and sensitive attributes such as age, gender, or race. Most recent methods that address this problem require significant architectural changes or expensive hyperparameter tuning. Alternatively, data re-sampling baselines from the class-imbalance literature (undersampling, upweighting), which can often be implemented in a single line of code and typically have no hyperparameters, offer a cheaper and more efficient solution. However, we found that some of these baselines were missing from recent bias mitigation benchmarks. In this paper, we show that these simple methods are strikingly competitive with state-of-the-art bias mitigation methods on many datasets. Furthermore, we improve on these methods by introducing a new class-conditioned sampling method: Bias Mimicking. In cases where the baseline dataset re-sampling methods do not perform well, Bias Mimicking effectively bridges the performance gap and improves the average accuracy of under-represented subgroups by over 3% compared to prior work.
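To make the re-sampling baselines mentioned above concrete, the sketch below shows one plausible NumPy implementation of subgroup upweighting and subgroup undersampling over (class label, bias attribute) pairs. The function names and exact normalization are illustrative assumptions, not the paper's code: upweighting assigns each sample a weight inversely proportional to its subgroup size, and undersampling cuts every subgroup within a class down to that class's smallest subgroup.

```python
import numpy as np

def upweight(y, b):
    """Per-sample weights inversely proportional to (class, bias-attribute)
    subgroup frequency, so each subgroup contributes equally in expectation.
    Weights are normalized to have mean 1 (an illustrative choice)."""
    y, b = np.asarray(y), np.asarray(b)
    weights = np.zeros(len(y), dtype=float)
    for cls in np.unique(y):
        for attr in np.unique(b):
            mask = (y == cls) & (b == attr)
            if mask.any():
                weights[mask] = 1.0 / mask.sum()
    return weights / weights.sum() * len(y)

def undersample(y, b, seed=0):
    """Indices of a subset where, within each class, every bias-attribute
    subgroup is randomly cut down to the size of the class's smallest
    (non-empty) subgroup, removing the class/attribute correlation."""
    rng = np.random.default_rng(seed)
    y, b = np.asarray(y), np.asarray(b)
    keep = []
    for cls in np.unique(y):
        sizes = [((y == cls) & (b == attr)).sum() for attr in np.unique(b)]
        n_min = min(s for s in sizes if s > 0)
        for attr in np.unique(b):
            idx = np.flatnonzero((y == cls) & (b == attr))
            if len(idx):
                keep.extend(rng.choice(idx, size=n_min, replace=False))
    return np.sort(np.array(keep))
```

Either output plugs into a standard training loop: the weights can be passed to a weighted loss or a weighted sampler, and the undersampled indices simply select a balanced subset of the dataset, which is what makes these baselines nearly free to implement.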

Related research

- Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation (11/26/2019)
- From Fake to Real (FFR): A two-stage training pipeline for mitigating spurious correlations with synthetic data (08/08/2023)
- Less is Better: Recovering Intended-Feature Subspace to Robustify NLU Models (09/16/2022)
- Gradient Based Activations for Accurate Bias-Free Learning (02/17/2022)
- Implicit Visual Bias Mitigation by Posterior Estimate Sharpening of a Bayesian Neural Network (03/29/2023)
- Bias in Machine Learning Software: Why? How? What to do? (05/25/2021)
- Balancing out Bias: Achieving Fairness Through Training Reweighting (09/16/2021)
