Fairness without the sensitive attribute via Causal Variational Autoencoder

09/10/2021
by Vincent Grari, et al.

In recent years, most fairness strategies in machine learning have focused on mitigating unwanted bias under the assumption that the sensitive attribute is observed. In practice, however, this is not always possible: for privacy reasons and under regulations such as the GDPR in the EU, many sensitive personal attributes are simply not collected. Few approaches address bias mitigation in this difficult setting, in particular for classical fairness objectives such as Demographic Parity and Equalized Odds. Leveraging recent developments in approximate inference, we propose an approach to fill this gap. Based on a causal graph, we introduce a new variational auto-encoding framework, SRCVAE, that infers a proxy for the sensitive information, which then serves for bias mitigation in an adversarial fairness approach. We empirically demonstrate significant improvements over existing work in the field: the generated proxy's latent space recovers sensitive information, and our approach achieves higher accuracy at the same level of fairness on two real datasets, as measured by common fairness definitions.
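The abstract measures fairness with Demographic Parity (equal positive-prediction rates across groups) and Equalized Odds (equal true- and false-positive rates across groups). A minimal sketch of these two gaps, with illustrative function names (not part of the paper's code):

```python
def demographic_parity_gap(y_pred, s):
    """Absolute difference in positive-prediction rates between groups s=0 and s=1."""
    def rate(g):
        preds = [p for p, a in zip(y_pred, s) if a == g]
        return sum(preds) / len(preds)
    return abs(rate(1) - rate(0))

def equalized_odds_gap(y_true, y_pred, s):
    """Largest group gap in P(pred=1 | Y=y), over y in {0, 1}.

    y=1 compares true-positive rates; y=0 compares false-positive rates.
    """
    def rate(g, y):
        preds = [p for yt, p, a in zip(y_true, y_pred, s) if a == g and yt == y]
        return sum(preds) / len(preds) if preds else 0.0
    return max(abs(rate(1, y) - rate(0, y)) for y in (0, 1))
```

A classifier that predicts 1 only for group s=1 has a demographic parity gap of 1.0; a perfect classifier whose errors are independent of s has an equalized odds gap of 0.0. The paper's setting is harder than this sketch suggests: s is unobserved, so SRCVAE must first infer a proxy before such gaps can be estimated or penalized.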


Related research

07/25/2022 · Estimating and Controlling for Fairness via Sensitive Attribute Predictors
07/24/2023 · Fairness Under Demographic Scarce Regime
09/07/2020 · Learning Unbiased Representations via Rényi Minimization
11/02/2022 · Fair Visual Recognition via Intervention with Proxy Features
07/07/2022 · Enhancing Fairness of Visual Attribute Predictors
12/12/2019 · Awareness in Practice: Tensions in Access to Sensitive Attribute Data for Antidiscrimination
10/12/2018 · Interpretable Fairness via Target Labels in Gaussian Process Models
