FLAC: Fairness-Aware Representation Learning by Suppressing Attribute-Class Associations

04/27/2023
by Ioannis Sarridis, et al.

Bias in computer vision systems can perpetuate or even amplify discrimination against certain populations. Considering that bias is often introduced by biased visual datasets, many recent research efforts focus on training fair models using such data. However, most of them rely heavily on the availability of protected attribute labels in the dataset, which limits their applicability, while label-unaware approaches, i.e., approaches operating without such labels, exhibit considerably lower performance. To overcome these limitations, this work introduces FLAC, a methodology that minimizes the mutual information between the features extracted by the model and a protected attribute, without using attribute labels. To this end, FLAC proposes a sampling strategy that highlights underrepresented samples in the dataset, and casts the problem of learning fair representations as a probability matching problem that leverages representations extracted by a bias-capturing classifier. It is theoretically shown that FLAC can indeed lead to fair representations that are independent of the protected attributes. FLAC surpasses the current state-of-the-art on Biased MNIST, CelebA, and UTKFace by 29.1%. Additionally, FLAC exhibits 2.2% increased accuracy on ImageNet-A, which consists of the most challenging samples of ImageNet. Finally, in most experiments, FLAC even outperforms the bias label-aware state-of-the-art methods.
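To make the probability-matching idea concrete, below is a minimal PyTorch-style sketch, not the paper's exact objective: it converts pairwise feature similarities in each representation space into probabilities and pulls the trained model's pair probabilities toward the complement of those produced by a frozen bias-capturing classifier, so that pairs that look alike only through the protected attribute are pushed apart. The function name, the sigmoid/temperature parameterization, and the binary cross-entropy used as a stand-in for the paper's matching divergence are all illustrative assumptions, and the sketch omits FLAC's sampling strategy for highlighting underrepresented samples.

```python
import torch
import torch.nn.functional as F

def probability_matching_loss(feats, bias_feats, temperature=0.5):
    """Illustrative sketch of the probability-matching idea
    (not the authors' exact FLAC objective).

    feats:      (N, D)  features from the model being debiased
    bias_feats: (N, D') features from a frozen bias-capturing classifier
    """
    f = F.normalize(feats, dim=1)
    b = F.normalize(bias_feats, dim=1)

    n = f.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=f.device)

    # Pairwise "match" probabilities within each representation space.
    p_model = torch.sigmoid((f @ f.t()) / temperature)[off_diag]
    p_bias = torch.sigmoid((b @ b.t()) / temperature)[off_diag]

    # Pull the model's pair probabilities toward the complement of the
    # bias-capturing ones; BCE stands in for the matching divergence.
    target = 1.0 - p_bias.detach()
    return F.binary_cross_entropy(p_model, target)
```

In training, such a term would typically be added to the task loss, e.g. `total = task_loss + alpha * probability_matching_loss(model_feats, bias_feats)`, where `alpha` is a hypothetical weighting hyperparameter.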

Related research

- 07/07/2020: README: REpresentation learning by fairness-Aware Disentangling MEthod. Fair representation learning aims to encode invariant representation wit...
- 08/01/2022: De-biased Representation Learning for Fairness with Unreliable Labels. Removing bias while keeping all task-relevant information is challenging...
- 09/22/2021: Contrastive Learning for Fair Representations. Trained classification models can unintentionally lead to biased represe...
- 10/20/2021: Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias. Contextual information is a valuable cue for Deep Neural Networks (DNNs)...
- 01/31/2022: Learning Fair Representations via Rate-Distortion Maximization. Text representations learned by machine learning models often encode und...
- 09/18/2022: Through a fair looking-glass: mitigating bias in image datasets. With the recent growth in computer vision applications, the question of ...
- 08/12/2020: Null-sampling for Interpretable and Fair Representations. We propose to learn invariant representations, in the data domain, to ac...
