Inherent Tradeoffs in Learning Fair Representations

by Han Zhao et al.

With the prevalence of machine learning in high-stakes applications, especially those regulated by anti-discrimination laws or societal norms, it is crucial to ensure that predictive models do not propagate any existing bias or discrimination. Due to the ability of deep neural nets to learn rich representations, recent advances in algorithmic fairness have focused on learning fair representations with adversarial techniques that simultaneously reduce bias in the data and preserve utility. In this paper, through the lens of information theory, we provide the first result that quantitatively characterizes the tradeoff between demographic parity and the joint utility across different population groups. Specifically, when the base rates differ between groups, we show that any method aiming to learn fair representations admits an information-theoretic lower bound on the joint error across these groups. To complement our negative results, we also prove that if the optimal decision functions across different groups are close, then learning fair representations leads to an alternative notion of fairness, known as accuracy parity, which states that the error rates are close between groups. Our theoretical findings are also confirmed empirically on real-world datasets. We believe our insights contribute to a better understanding of the tradeoff between utility and different notions of fairness.
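The lower bound described in the abstract can be probed numerically. Below is a minimal sketch, not the paper's experiments: the group sizes, base rates, and the family of randomized constant predictors are all illustrative assumptions. A predictor that ignores group membership entirely satisfies exact demographic parity, and for such predictors the sum of the two group error rates stays at or above the base-rate gap between the groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups with different base rates Pr(Y = 1) -- the setting in which
# the paper's lower bound applies (numbers chosen for illustration only).
p0, p1 = 0.8, 0.3          # base rates for group 0 and group 1
n = 10_000
y0 = rng.random(n) < p0    # labels in group 0
y1 = rng.random(n) < p1    # labels in group 1

delta_br = abs(p0 - p1)    # base-rate gap between the groups

# Any rule that ignores the group attribute satisfies exact demographic
# parity; here we sweep over "predict 1 with probability q" rules.
for q in np.linspace(0.0, 1.0, 11):
    yhat0 = rng.random(n) < q
    yhat1 = rng.random(n) < q
    err0 = np.mean(yhat0 != y0)
    err1 = np.mean(yhat1 != y1)
    # The bound: err0 + err1 >= |p0 - p1| for parity-satisfying predictors
    # (small slack added for sampling noise in this finite-sample check).
    assert err0 + err1 >= delta_br - 0.05
```

In expectation the joint error of the rule above is `(p0 + p1)(1 - q) + (2 - p0 - p1) * q`, which never drops below `|p0 - p1|` for any `q`, matching the information-theoretic bound for this simple predictor family.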




