Fair Bayes-Optimal Classifiers Under Predictive Parity

05/15/2022
by Xianli Zeng, et al.

Increasing concern about the disparate effects of AI has motivated a great deal of work on fair machine learning. Existing work focuses mainly on independence- and separation-based measures (e.g., demographic parity, equality of opportunity, equalized odds), while sufficiency-based measures such as predictive parity are much less studied. This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction across protected groups. We prove that, when the overall performance levels of the groups differ only moderately, all fair Bayes-optimal classifiers under predictive parity are group-wise thresholding rules. Perhaps surprisingly, this may fail when group performance levels differ widely; in that case, enforcing predictive parity across protected groups can induce within-group unfairness. We then propose FairBayes-DPP, an adaptive thresholding algorithm that aims to achieve predictive parity when our condition is satisfied while also maximizing test accuracy. We provide supporting experiments on synthetic and empirical data.
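As a minimal illustration of the two ideas in the abstract (not the authors' FairBayes-DPP implementation), the sketch below measures the predictive parity gap as the spread in group-wise precision, P(Y = 1 | Ŷ = 1), and applies per-group decision thresholds, the form the paper shows fair Bayes-optimal classifiers take under its moderate-disparity condition. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def predictive_parity_gap(y_true, y_pred, groups):
    """Largest difference in precision, P(Y = 1 | Yhat = 1), across
    protected groups; zero means exact predictive parity."""
    precisions = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_pred == 1)
        if positives.any():  # skip groups with no positive predictions
            precisions.append(y_true[positives].mean())
    return max(precisions) - min(precisions)

def groupwise_threshold(scores, groups, thresholds):
    """Apply a separate decision threshold to each protected group
    (a group-wise thresholding rule). `thresholds` maps group label
    to its cutoff; an algorithm like FairBayes-DPP would tune these
    cutoffs adaptively rather than fix them by hand as here."""
    y_pred = np.zeros_like(scores, dtype=int)
    for g, t in thresholds.items():
        y_pred[groups == g] = (scores[groups == g] >= t).astype(int)
    return y_pred

# Toy example: among positive predictions, group 0's precision is 1/2
# and group 1's is 2/3, so the predictive parity gap is 1/6.
y_true = np.array([1, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = predictive_parity_gap(y_true, y_pred, groups)  # 2/3 - 1/2 = 1/6
```

In practice one would search over the per-group thresholds to shrink this gap while tracking accuracy, which is the trade-off the paper's adaptive procedure targets.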


Related research

- 03/14/2023 · Demographic Parity Inspector: Fairness Audits via the Explanation Space
- 06/05/2022 · Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency
- 04/12/2023 · Maximal Fairness
- 05/21/2020 · Fair Classification via Unconstrained Optimization
- 09/01/2022 · Fair learning with Wasserstein barycenters for non-decomposable performance measures
- 02/14/2023 · Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection
- 01/11/2021 · Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation
