Auditing ML Models for Individual Bias and Unfairness

03/11/2020
by Songkai Xue, et al.

We consider the task of auditing ML models for individual bias/unfairness. We formalize the task as an optimization problem and develop a suite of inferential tools for its optimal value. Our tools permit us to obtain asymptotic confidence intervals that cover the target exactly and hypothesis tests that control the Type I error rate exactly. To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe's COMPAS recidivism prediction instrument.
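
The optimization-based audit statistic and its exact asymptotic inference are developed in the full paper; the sketch below is only a loose illustration of the general auditing idea, not the authors' method. It flips a binary protected feature, measures the resulting change in a model's score, and forms a normal-approximation (CLT) confidence interval and test for the mean gap. All names here (audit_flip_gap, predict_proba, the toy score function) are hypothetical.

```python
# Illustrative individual-bias audit sketch (NOT the paper's statistic):
# flip a binary protected feature, measure the score gap, and build a
# CLT-based confidence interval and two-sided test for the mean gap.
import numpy as np
from scipy.stats import norm

def audit_flip_gap(predict_proba, X, protected_col, alpha=0.05):
    """Mean score gap under a flip of the protected attribute, with a
    (1 - alpha) asymptotic confidence interval and a two-sided p-value
    for the null hypothesis of zero mean gap."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1.0 - X_flipped[:, protected_col]

    gaps = predict_proba(X_flipped) - predict_proba(X)  # per-point gap
    n = gaps.shape[0]
    mean = gaps.mean()
    se = gaps.std(ddof=1) / np.sqrt(n)                  # standard error

    z = norm.ppf(1.0 - alpha / 2.0)                     # normal quantile
    ci = (mean - z * se, mean + z * se)
    p_value = 2.0 * norm.sf(abs(mean) / se)
    return mean, ci, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    X[:, 0] = rng.integers(0, 2, size=500)  # binary "protected" feature

    # Toy score function that deliberately leaks the protected attribute.
    def predict_proba(Z):
        return 1.0 / (1.0 + np.exp(-(0.8 * Z[:, 0] + Z[:, 1])))

    mean, (lo, hi), p = audit_flip_gap(predict_proba, X, protected_col=0)
    print(f"mean gap {mean:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), p = {p:.2e}")
```

If the flip does not systematically move the model's score, the interval should cover zero; on the toy model above, which leaks the protected feature by construction, the estimated gap should sit well away from zero.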

Related research

03/30/2021 · Statistical inference for individual fairness
As we rely on machine learning (ML) models to make more consequential de...

12/04/2020 · MCMC Confidence Intervals and Biases
The recent paper "Simple confidence intervals for MCMC without CLTs" by ...

06/02/2019 · Confidence Intervals for Selected Parameters
Practical or scientific considerations often lead to selecting a subset ...

07/16/2021 · Uncertainty Prediction for Machine Learning Models of Material Properties
Uncertainty quantification in Artificial Intelligence (AI)-based predict...

07/30/2016 · Double/Debiased Machine Learning for Treatment and Causal Parameters
Most modern supervised statistical/machine learning (ML) methods are exp...

11/19/2016 · A Bayesian approach to type-specific conic fitting
A perturbative approach is used to quantify the effect of noise in data ...

08/17/2022 · Privacy Aware Experimentation over Sensitive Groups: A General Chi Square Approach
We study a new privacy model where users belong to certain sensitive gro...
