
Contrastive Learning for Fair Representations

by Aili Shen et al.
The University of Melbourne

Trained classification models can unintentionally produce biased representations and predictions, reinforcing societal preconceptions and stereotypes. Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise. In this paper, we propose a method for mitigating bias in classifier training by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations, while instances sharing a protected attribute are pushed further apart. In this way, our method learns representations that capture the task label in focused regions while spreading the protected attribute diversely, limiting its impact on prediction and thereby yielding fairer models. Extensive experimental results across four tasks in NLP and computer vision show that our proposed method (a) achieves fairer representations and greater bias reduction than competitive baselines; (b) does so without sacrificing main task performance; and (c) sets a new state-of-the-art on one task despite reducing bias. Finally, our method is conceptually simple, agnostic to network architecture, and incurs minimal additional compute cost.
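The objective described above can be sketched as a batch-wise contrastive loss: an attraction term that pulls together instances with the same class label (a supervised contrastive term) and a repulsion term that penalises similarity between instances sharing the protected attribute. The function name, the temperature, and the weighting coefficient `lam` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fair_contrastive_loss(z, y, a, temperature=0.1, lam=1.0):
    """Sketch of a fairness-aware contrastive loss (illustrative, not the
    paper's exact objective).

    z : (n, d) array of instance representations
    y : (n,) array of class labels       -> same-label pairs attracted
    a : (n,) array of protected attrs    -> same-attribute pairs repelled
    """
    # L2-normalise representations and build a scaled similarity matrix
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(y)
    mask_self = np.eye(n, dtype=bool)
    same_y = (y[:, None] == y[None, :]) & ~mask_self
    same_a = (a[:, None] == a[None, :]) & ~mask_self

    # Attraction: supervised contrastive term, averaging the negative
    # log-softmax of each instance's same-label (positive) pairs
    exp_sim = np.exp(sim) * ~mask_self
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos_counts = np.maximum(same_y.sum(axis=1), 1)
    attract = -(log_prob * same_y).sum(axis=1) / pos_counts

    # Repulsion: mean similarity to instances sharing the protected
    # attribute; minimising it spreads the attribute across the space
    att_counts = np.maximum(same_a.sum(axis=1), 1)
    repel = (sim * same_a).sum(axis=1) / att_counts

    return (attract + lam * repel).mean()
```

In a full training loop this loss would typically be added to the standard cross-entropy objective, so the classifier still fits the task while the representation geometry is regularised for fairness.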



