Statistical inference for individual fairness

03/30/2021
by Subha Maity, et al.

As we rely on machine learning (ML) models to make more consequential decisions, the issue of ML models perpetuating or even exacerbating undesirable historical biases (e.g., gender and racial biases) has come to the fore of public attention. In this paper, we focus on the problem of detecting violations of individual fairness in ML models. We formalize the problem as measuring the susceptibility of ML models to a form of adversarial attack and develop a suite of inference tools for the adversarial cost function. The tools allow auditors to assess the individual fairness of ML models in a statistically principled way: they can form confidence intervals for the worst-case performance differential between similar individuals and test hypotheses of model fairness with (asymptotic) non-coverage/Type I error rate control. We demonstrate the utility of our tools in a real-world case study.
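To make the auditing idea concrete, here is a minimal sketch in the spirit of the abstract, not the paper's actual estimator: it compares a model's outputs on pairs of "similar" individuals and forms an asymptotic 95% confidence interval for the mean performance differential. The model, the similarity pairing, and the squared-gap differential are all hypothetical simplifications (the paper searches for the adversarial worst case rather than using fixed pairs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scorer: a fixed linear model whose last feature is a proxy
# the auditor deems irrelevant to the prediction task.
def model(x):
    w = np.array([1.0, 1.0, 0.5])
    return x @ w

# Construct pairs of "similar" individuals that differ only in the
# irrelevant coordinate (a stand-in for the paper's similarity metric).
n = 500
x = rng.normal(size=(n, 3))
x_similar = x.copy()
x_similar[:, 2] = -x[:, 2]

# Per-pair performance differential: squared gap in model outputs.
diff = (model(x) - model(x_similar)) ** 2

# Asymptotic 95% confidence interval for the mean differential.
# An interval well above 0 is evidence of an individual-fairness violation.
mean = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)
ci = (mean - 1.96 * se, mean + 1.96 * se)
```

Under these assumptions the interval sits well above zero, flagging the model's dependence on the irrelevant coordinate; a fair model (weight 0 on that coordinate) would yield an interval at zero.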


Related research

03/11/2020 · Auditing ML Models for Individual Bias and Unfairness
We consider the task of auditing ML models for individual bias/unfairnes...

08/05/2021 · Reducing Unintended Bias of ML Models on Tabular and Textual Data
Unintended biases in machine learning (ML) models are among the major co...

11/06/2020 · There is no trade-off: enforcing fairness can improve accuracy
One of the main barriers to the broader adoption of algorithmic fairness...

11/18/2019 · Towards Quantification of Bias in Machine Learning for Healthcare: A Case Study of Renal Failure Prediction
As machine learning (ML) models, trained on real-world datasets, become ...

10/04/2021 · Fairness and underspecification in acoustic scene classification: The case for disaggregated evaluations
Underspecification and fairness in machine learning (ML) applications ha...

05/14/2020 · Statistical Equity: A Fairness Classification Objective
Machine learning systems have been shown to propagate the societal error...

04/21/2022 · A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms
Motivated by the growing importance of reducing unfairness in ML predict...