Ensuring Fairness under Prior Probability Shifts

05/06/2020
by Arpita Biswas, et al.

In this paper, we study the problem of fair classification in the presence of prior probability shifts, where the training set distribution differs from that of the test set. This phenomenon can be observed in the yearly records of several real-world datasets, such as recidivism records and medical expenditure surveys. If unaccounted for, such shifts can cause the predictions of a classifier to become unfair towards specific population subgroups. While the fairness notion called Proportional Equality (PE) accounts for such shifts, a procedure to ensure PE-fairness was unknown. In this work, we propose a method, called CAPE, which provides a comprehensive solution to this problem. CAPE makes novel use of prevalence estimation techniques, sampling, and an ensemble of classifiers to ensure fair predictions under prior probability shifts. We introduce a metric, called prevalence difference (PD), which CAPE attempts to minimize in order to ensure PE-fairness, and we theoretically establish that this metric exhibits several desirable properties. We evaluate the efficacy of CAPE via a thorough empirical study on synthetic datasets, and we compare its performance with several popular fair classifiers on real-world datasets such as COMPAS (criminal risk assessment) and MEPS (medical expenditure panel survey). The results indicate that CAPE ensures PE-fair predictions while performing well on other performance metrics.
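The abstract does not give the formula for the prevalence difference (PD) metric, but one plausible reading is a per-subgroup gap between the prevalence implied by the classifier's predictions and the true prevalence in the test data. The sketch below is an illustration under that assumption, not the paper's actual definition; the function name and inputs are hypothetical.

```python
# Hypothetical sketch of a prevalence-difference-style metric.
# Assumption (not from the paper): PD for a subgroup is the absolute gap
# between predicted prevalence (fraction of positive predictions) and
# true prevalence (fraction of positive labels) within that subgroup.

def prevalence_difference(y_true, y_pred, groups):
    """Return {group: |predicted prevalence - true prevalence|} per subgroup."""
    pd_by_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        true_prev = sum(y_true[i] for i in idx) / len(idx)
        pred_prev = sum(y_pred[i] for i in idx) / len(idx)
        pd_by_group[g] = abs(pred_prev - true_prev)
    return pd_by_group

# Toy example with two subgroups 'a' and 'b':
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
pd = prevalence_difference(y_true, y_pred, groups)
# Group 'a': predicted and true prevalence both 0.75, so PD = 0.0;
# group 'b': predicted 0.0 vs. true 0.25, so PD = 0.25.
```

A method like CAPE would then try to drive these per-group gaps toward zero, for instance by estimating test-set prevalences and adjusting decision thresholds accordingly.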


