Fair Visual Recognition in Limited Data Regime using Self-Supervision and Self-Distillation

06/30/2021
by Pratik Mazumder, et al.

Deep learning models generally learn the biases present in the training data. Researchers have proposed several approaches to mitigate such biases and make models fair. These bias mitigation techniques assume that a sufficiently large number of training examples is available. However, we observe that when the training data is limited, the effectiveness of bias mitigation methods degrades severely. In this paper, we propose a novel approach to address this problem. Specifically, we adapt self-supervision and self-distillation to reduce the impact of biases on the model in this setting. Self-supervision and self-distillation have not previously been used for bias mitigation; through this work, we demonstrate for the first time that these techniques are very effective at it. We empirically show that our approach can significantly reduce the biases learned by the model. Further, we experimentally demonstrate that our approach is complementary to other bias mitigation strategies: it significantly improves their performance and further reduces model biases in the limited data regime. Specifically, on the L-CIFAR-10S skewed dataset, our approach reduces the bias score of the baseline model by 78.22% and improves its accuracy by an absolute margin of 8.89%; it also reduces the bias score of the domain-independent bias mitigation method by 59.26% and improves its performance by a significant absolute margin of 7.08%.
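The abstract combines a standard classification objective with a self-supervised auxiliary loss and a self-distillation term. The sketch below is a minimal, hypothetical illustration of such a combined objective (not the paper's actual implementation): the weights `lam_ssl`, `lam_kd`, the `temperature`, and the idea of rotation prediction as the self-supervised task are assumptions for illustration. Self-distillation is shown in the common form of a KL divergence between the softened outputs of a teacher (an earlier generation of the same model) and the student.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling; higher temperature softens
    the distribution, as is typical in distillation."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def combined_loss(cls_loss, ssl_loss, teacher_logits, student_logits,
                  lam_ssl=0.5, lam_kd=1.0, temperature=2.0):
    """Hypothetical combined objective:
    classification loss
    + lam_ssl * self-supervised loss (e.g. predicting image rotations)
    + lam_kd  * self-distillation loss (KL between the teacher's and
                student's temperature-softened output distributions,
                rescaled by temperature**2 as is conventional)."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kd_loss = kl_divergence(p_teacher, p_student) * temperature ** 2
    return cls_loss + lam_ssl * ssl_loss + lam_kd * kd_loss
```

When the student's logits match the teacher's, the distillation term vanishes and the objective reduces to the classification loss plus the weighted self-supervised loss; tuning `lam_ssl` and `lam_kd` trades off these regularizers against the main task.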


Related research

- Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation (11/26/2019)
- Probabilistic Bias Mitigation in Word Embeddings (10/31/2019)
- Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts (08/19/2023)
- What is a Fair Diffusion Model? Designing Generative Text-To-Image Models to Incorporate Various Worldviews (09/18/2023)
- Predictive Uncertainty-based Bias Mitigation in Ranking (09/18/2023)
- Less is Better: Recovering Intended-Feature Subspace to Robustify NLU Models (09/16/2022)
- Targeted Data Augmentation for bias mitigation (08/22/2023)
