Fairness in Risk Assessment Instruments: Post-Processing to Achieve Counterfactual Equalized Odds

09/07/2020
by Alan Mishler, et al.

Algorithmic fairness is a topic of increasing concern both within research communities and among the general public. Conventional fairness criteria place restrictions on the joint distribution of a sensitive feature A, an outcome Y, and a predictor S. For example, the criterion of equalized odds requires that S be conditionally independent of A given Y, or equivalently, when all three variables are binary, that the false positive and false negative rates of the predictor be the same across the two levels of A. However, fairness criteria based on the observable outcome Y are misleading when applied to Risk Assessment Instruments (RAIs), such as predictors designed to estimate the risk of recidivism or child neglect. It has been argued instead that RAIs ought to be trained and evaluated with respect to potential outcomes Y^0. Here, Y^0 represents the outcome that would be observed under no intervention: for example, whether recidivism would occur if a defendant were released pretrial. In this paper, we develop a method to post-process an existing binary predictor to satisfy approximate counterfactual equalized odds, which requires S to be nearly conditionally independent of A given Y^0, within a tolerance specified by the user. Our predictor converges to an optimal fair predictor at √(n) rates under appropriate assumptions. We propose doubly robust estimators of the risk and fairness properties of a fixed post-processed predictor, and we show that they are √(n)-consistent and asymptotically normal, again under appropriate assumptions.
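To make the two criteria concrete, here is a minimal sketch (not the authors' implementation) of how the observable equalized-odds gaps and a doubly robust estimate of a counterfactual error rate might be computed. The array names (s, a, d, y) and the nuisance estimates mu0 and pi are illustrative assumptions: mu0 stands for an estimate of E[Y | D=0, X] and pi for an estimate of P(D=0 | X), where D is a binary treatment indicator and D=0 means no intervention, so the observed Y equals Y^0.

```python
# Minimal sketch of the two fairness notions in the abstract (illustrative,
# not the authors' code). s: binary predictions, a: binary sensitive feature,
# d: binary treatment indicator (d = 0 means Y = Y^0 is observed),
# y: observed binary outcome.
import numpy as np

def equalized_odds_gaps(s, a, y):
    """Observable equalized odds: differences in false positive and false
    negative rates of the predictor s between the two levels of a."""
    fpr = [s[(a == g) & (y == 0)].mean() for g in (0, 1)]
    fnr = [(1 - s[(a == g) & (y == 1)]).mean() for g in (0, 1)]
    return fpr[1] - fpr[0], fnr[1] - fnr[0]

def dr_counterfactual_fpr(s, a, d, y, mu0, pi, group):
    """Doubly robust (AIPW-style) estimate of the counterfactual false
    positive rate P(S = 1 | A = group, Y^0 = 0). mu0 estimates
    E[Y | D = 0, X] and pi estimates P(D = 0 | X); the estimate is
    consistent if either nuisance estimate is correct."""
    # Pseudo-outcome whose conditional mean is P(Y^0 = 0 | X).
    phi = (1 - d) / pi * ((1 - y) - (1 - mu0)) + (1 - mu0)
    g = a == group
    return np.mean(s[g] * phi[g]) / np.mean(phi[g])
```

A post-processing step targeting approximate counterfactual equalized odds would then constrain such doubly robust error-rate estimates to differ across groups by no more than the user-specified tolerance.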

Related research

02/10/2022
Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings
Many popular algorithmic fairness measures depend on the joint distribut...

03/01/2020
When the Oracle Misleads: Modeling the Consequences of Using Observable Rather than Potential Outcomes in Risk Assessment Instruments
Machine learning-based Risk Assessment Instruments are increasingly wide...

02/28/2017
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Recidivism prediction instruments (RPI's) provide decision makers with a...

09/01/2021
FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes
Methods for building fair predictors often involve tradeoffs between fai...

02/16/2023
Counterfactual Reasoning for Bias Evaluation and Detection in a Fairness under Unawareness setting
Current AI regulations require discarding sensitive features (e.g., gend...

06/19/2023
Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
In this paper, we develop a new criterion, "insufficiently justified dis...

11/19/2021
Model-agnostic bias mitigation methods with regressor distribution control for Wasserstein-based fairness metrics
This article is a companion paper to our earlier work Miroshnikov et al....
