Fair Group-Shared Representations with Normalizing Flows

by Mattia Cerrato, et al.

The issue of fairness in machine learning stems from the fact that historical data often display biases against specific groups of people who have been, or still are, underprivileged. In this context, one possible approach is to employ fair representation learning algorithms, which remove biases from data and make groups statistically indistinguishable. In this paper, we instead develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group. This is made possible by training a pair of Normalizing Flow models while constraining them not to discard information about the ground truth, which is achieved by training a ranking or classification model on top of them. The overall, “chained” model is invertible and has a tractable Jacobian, which allows us to relate the probability densities of the different groups and “translate” individuals from one group to another. We show experimentally that our methodology is competitive with other fair representation learning algorithms. Furthermore, our algorithm achieves stronger invariance with respect to the sensitive attribute.
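The core mechanism described above, two invertible maps into a shared latent space, composed to move an individual from one group to the other, can be illustrated with a minimal sketch. This is a hypothetical toy (simple affine flows, not the paper's trained Normalizing Flow architecture): `f_A` and `f_B` are assumed group-specific invertible maps, and translation is `f_B⁻¹(f_A(x))`.

```python
import numpy as np

class AffineFlow:
    """A trivially invertible flow z = scale * x + shift with a tractable Jacobian."""
    def __init__(self, scale, shift):
        self.scale = np.asarray(scale, dtype=float)
        self.shift = np.asarray(shift, dtype=float)

    def forward(self, x):
        return self.scale * x + self.shift

    def inverse(self, z):
        return (z - self.shift) / self.scale

    def log_abs_det_jacobian(self):
        # log|det J| of the forward map; used to relate densities via the
        # change-of-variables formula p_x(x) = p_z(f(x)) * |det J|.
        return np.sum(np.log(np.abs(self.scale)))

# Hypothetical group-specific flows into a shared latent space.
f_A = AffineFlow(scale=[2.0, 0.5], shift=[1.0, -1.0])
f_B = AffineFlow(scale=[1.0, 4.0], shift=[0.0, 2.0])

x_A = np.array([0.5, 3.0])      # an individual from group A
z = f_A.forward(x_A)            # shared latent representation
x_as_B = f_B.inverse(z)         # the same individual "translated" to group B

# Invertibility means the round trip through the shared space is exact.
assert np.allclose(f_A.inverse(f_B.forward(x_as_B)), x_A)
```

In the paper, the affine maps would be replaced by learned Normalizing Flows, but the translation and density-relation logic follows the same change-of-variables structure.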
