Fair Group-Shared Representations with Normalizing Flows

01/17/2022
by   Mattia Cerrato, et al.

The issue of fairness in machine learning stems from the fact that historical data often displays biases against specific groups of people who were underprivileged in the recent past, or still are. In this context, one possible approach is to employ fair representation learning algorithms that remove biases from data, making groups statistically indistinguishable. In this paper, we instead develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group. This is made possible by training a pair of Normalizing Flow models and constraining them not to remove information about the ground truth, by training a ranking or classification model on top of them. The overall, “chained” model is invertible and has a tractable Jacobian, which allows us to relate the probability densities of different groups and “translate” individuals from one group to another. We show experimentally that our methodology is competitive with other fair representation learning algorithms. Furthermore, our algorithm achieves stronger invariance w.r.t. the sensitive attribute.
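The core mechanism described above can be illustrated with a minimal sketch (this is an assumed toy example, not the authors' implementation): each group g gets its own invertible flow f_g into a shared representation, and an individual from group A can be "translated" to group B by composing f_A with the inverse of f_B. A toy diagonal affine map stands in for a learned flow; its scale and shift parameters here are hypothetical placeholders.

```python
import numpy as np

class AffineFlow:
    """Toy invertible flow z = s * x + t with a tractable log|det J|.

    A real Normalizing Flow would stack learned coupling layers; this
    diagonal affine map is only a stand-in to show the mechanics.
    """
    def __init__(self, scale, shift):
        self.s = np.asarray(scale, dtype=float)
        self.t = np.asarray(shift, dtype=float)

    def forward(self, x):
        return self.s * x + self.t

    def inverse(self, z):
        return (z - self.t) / self.s

    def log_abs_det_jacobian(self):
        # For a diagonal affine map: log|det J| = sum_i log|s_i|
        return float(np.sum(np.log(np.abs(self.s))))

# Hypothetical per-group flows (parameters would normally be learned).
f_A = AffineFlow(scale=[2.0, 0.5], shift=[1.0, -1.0])
f_B = AffineFlow(scale=[1.5, 1.0], shift=[0.0, 2.0])

x_A = np.array([0.3, -0.7])      # an individual from group A
z = f_A.forward(x_A)             # group-shared representation
x_translated = f_B.inverse(z)    # the same individual, mapped into group B

# Invertibility: decoding the shared code with f_A recovers the original.
assert np.allclose(f_A.inverse(z), x_A)
```

Because both flows are invertible with tractable Jacobians, the chained map f_B^{-1} ∘ f_A is itself invertible, which is what lets the densities of the two groups be related through the change-of-variables formula.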


Related research

- Within-group fairness: A guidance for more sound between-group fairness (01/20/2023). As they have a vital effect on social decision-making, AI algorithms not...
- Data Augmentation via Subgroup Mixup for Improving Fairness (09/13/2023). In this work, we propose data augmentation via pairwise mixup across sub...
- Fair Interpretable Representation Learning with Correction Vectors (02/07/2022). Neural network architectures have been extensively employed in the fair ...
- Fair Normalizing Flows (06/10/2021). Fair representation learning is an attractive approach that promises fai...
- Fair Rank Aggregation (08/21/2023). Ranking algorithms find extensive usage in diverse areas such as web sea...
- Learning Smooth and Fair Representations (06/15/2020). Organizations that own data face increasing legal liability for its disc...
- Quota-based debiasing can decrease representation of already underrepresented groups (06/13/2020). Many important decisions in societies such as school admissions, hiring,...
