Contrastive Learning for Fair Representations

09/22/2021
by Aili Shen, et al.

Trained classification models can unintentionally lead to biased representations and predictions, which can reinforce societal preconceptions and stereotypes. Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise. In this paper, we propose a method for mitigating bias in classifier training by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations, while instances sharing a protected attribute are forced further apart. In this way, our method learns representations in which the task label occupies focused regions while the protected attribute is spread diffusely, limiting its impact on prediction and thereby yielding fairer models. Extensive experimental results across four tasks in NLP and computer vision show that our proposed method (a) achieves fairer representations and reduces bias relative to competitive baselines; (b) does so without sacrificing main-task performance; and (c) sets a new state of the art on one task despite reducing bias. Finally, our method is conceptually simple, agnostic to network architecture, and incurs minimal additional compute cost.
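
To make the objective concrete, below is a minimal sketch of such a loss in NumPy. It is an illustration only, not the paper's released implementation: the function names, the use of cosine similarity with a temperature, and the weighting hyperparameter alpha on the protected-attribute repulsion term are all assumptions. The loss pulls together representations of instances that share a task label and pushes apart representations of instances that share a protected attribute.

```python
# Hypothetical sketch of a contrastive fairness loss (not the authors'
# released code): same-task-label pairs are attracted, while pairs sharing
# the protected attribute are repelled. Written in NumPy for clarity.
import numpy as np

def pairwise_cosine(z):
    """Cosine-similarity matrix for a batch of representations z of shape (n, d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z @ z.T

def fair_contrastive_loss(z, y, a, temperature=0.1, alpha=1.0):
    """z: (n, d) representations; y: (n,) task labels; a: (n,) protected attribute.
    Returns a scalar: supervised-contrastive attraction over y plus an
    alpha-weighted repulsion over the protected attribute a."""
    n = len(y)
    sim = pairwise_cosine(z) / temperature
    np.fill_diagonal(sim, -np.inf)                     # ignore self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    eye = np.eye(n, dtype=bool)
    same_y = (y[:, None] == y[None, :]) & ~eye         # same task label
    same_a = (a[:, None] == a[None, :]) & ~eye         # same protected attribute

    # Attraction: reward high log-probability on same-label pairs (pull together).
    attract = -np.mean([log_prob[i, same_y[i]].mean()
                        for i in range(n) if same_y[i].any()])
    # Repulsion: penalise high log-probability on same-protected-attribute pairs (push apart).
    repel = np.mean([log_prob[i, same_a[i]].mean()
                     for i in range(n) if same_a[i].any()])
    return attract + alpha * repel

# Example usage on random data (shapes only; a real model supplies z).
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
y = rng.integers(0, 2, size=8)
a = rng.integers(0, 2, size=8)
print(fair_contrastive_loss(z, y, a))
```

In practice, a term of this kind would be added to the standard cross-entropy objective and computed per mini-batch on the encoder's representations; alpha then trades off bias reduction against main-task performance.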


research · 04/27/2023
FLAC: Fairness-Aware Representation Learning by Suppressing Attribute-Class Associations
Bias in computer vision systems can perpetuate or even amplify discrimin...

research · 01/31/2022
Learning Fair Representations via Rate-Distortion Maximization
Text representations learned by machine learning models often encode und...

research · 09/18/2022
Through a fair looking-glass: mitigating bias in image datasets
With the recent growth in computer vision applications, the question of ...

research · 10/20/2021
Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias
Contextual information is a valuable cue for Deep Neural Networks (DNNs)...

research · 10/27/2021
Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias
This paper strives to address image classifier bias, with a focus on bot...

research · 07/02/2018
Debiasing representations by removing unwanted variation due to protected attributes
We propose a regression-based approach to removing implicit biases in re...

research · 07/15/2019
AugLabel: Exploiting Word Representations to Augment Labels for Face Attribute Classification
Augmenting data in image space (eg. flipping, cropping etc) and activati...