A Statistical Test for Probabilistic Fairness

12/09/2020
by Bahar Taskesen et al.

Algorithms are now routinely used to make consequential decisions that affect human lives, for example in college admissions, medical interventions, and law enforcement. While algorithms empower us to harness the information hidden in vast amounts of data, they may inadvertently amplify existing biases in the available datasets. This concern has sparked increasing interest in fair machine learning, which aims to quantify and mitigate algorithmic discrimination. Indeed, machine learning models should undergo intensive testing to detect algorithmic biases before being deployed at scale. In this paper, we use ideas from the theory of optimal transport to propose a statistical hypothesis test for detecting unfair classifiers. Leveraging the geometry of the feature space, the test statistic quantifies the distance from the empirical distribution of the test samples to the manifold of distributions that render a pre-trained classifier fair. We develop a rigorous hypothesis testing mechanism for assessing the probabilistic fairness of any pre-trained logistic classifier, and we show both theoretically and empirically that the proposed test is asymptotically correct. In addition, the proposed framework offers interpretability by identifying the most favorable perturbation of the data so that the given classifier becomes fair.
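To make the idea of a probabilistic fairness test concrete, the sketch below tests a much simpler surrogate of the paper's statistic: it measures the gap in mean predicted probabilities of a pre-trained logistic classifier across two protected groups and obtains a p-value by permuting the protected attribute. This is not the authors' method; the paper's test statistic is the optimal-transport (Wasserstein) projection distance to the manifold of fair distributions, with a derived limiting distribution. The helper names parity_gap and permutation_pvalue, as well as the synthetic data and coefficients, are hypothetical and for illustration only.

```python
# Illustrative sketch only: a permutation test on the probabilistic
# statistical-parity gap of a pre-trained logistic classifier. The paper's
# actual statistic is an optimal-transport projection distance with an
# asymptotic distribution; this is a simplified stand-in.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def parity_gap(theta, X, a):
    """Probabilistic statistical-parity gap: absolute difference in mean
    predicted probabilities between the two protected groups (a in {0, 1})."""
    p = sigmoid(X @ theta)
    return abs(p[a == 1].mean() - p[a == 0].mean())

def permutation_pvalue(theta, X, a, n_perm=2000, seed=0):
    """p-value for H0 'the classifier is probabilistically fair', estimated by
    permuting the protected attribute to simulate the null distribution."""
    rng = np.random.default_rng(seed)
    observed = parity_gap(theta, X, a)
    null_stats = np.array([
        parity_gap(theta, X, rng.permutation(a)) for _ in range(n_perm)
    ])
    # Proportion of permuted gaps at least as large as the observed gap
    return (1 + np.sum(null_stats >= observed)) / (n_perm + 1)

if __name__ == "__main__":
    # Synthetic data and a hypothetical pre-trained coefficient vector theta
    rng = np.random.default_rng(1)
    n, d = 500, 3
    X = rng.normal(size=(n, d))
    a = rng.integers(0, 2, size=n)       # protected attribute
    theta = np.array([1.0, -0.5, 0.2])   # pre-trained logistic coefficients
    print("parity gap:", parity_gap(theta, X, a))
    print("p-value   :", permutation_pvalue(theta, X, a))
```

A small p-value under this surrogate test indicates that the observed gap in predicted probabilities is unlikely under the fairness null hypothesis; the paper's optimal-transport formulation additionally yields the interpretable data perturbation described in the abstract.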

Related research

- Testing Group Fairness via Optimal Transport Projections (06/02/2021): We present a statistical testing framework to detect if a given machine ...
- Evaluating Fairness Using Permutation Tests (07/10/2020): Machine learning models are central to people's lives and impact society...
- Everything is Relative: Understanding Fairness with Optimal Transport (02/20/2021): To study discrimination in automated decision-making systems, scholars h...
- Identifying biases in legal data: An algorithmic fairness perspective (09/21/2021): The need to address representation biases and sentencing disparities in ...
- Coping with Mistreatment in Fair Algorithms (02/22/2021): Machine learning actively impacts our everyday life in almost all endeav...
- Bayes-Optimal Classifiers under Group Fairness (02/20/2022): Machine learning algorithms are becoming integrated into more and more h...
- Covariance-Robust Dynamic Watermarking (03/31/2020): Attack detection and mitigation strategies for cyberphysical systems (CP...