
Fairness in Risk Assessment Instruments: Post-Processing to Achieve Counterfactual Equalized Odds
Algorithmic fairness is a topic of increasing concern both within research communities and among the general public. Conventional fairness criteria place restrictions on the joint distribution of a sensitive feature A, an outcome Y, and a predictor S. For example, the criterion of equalized odds requires that S be conditionally independent of A given Y, or equivalently, when all three variables are binary, that the false positive and false negative rates of the predictor be the same for two levels of A. However, fairness criteria based around observable Y are misleading when applied to Risk Assessment Instruments (RAIs), such as predictors designed to estimate the risk of recidivism or child neglect. It has been argued instead that RAIs ought to be trained and evaluated with respect to potential outcomes Y^0. Here, Y^0 represents the outcome that would be observed under no intervention, for example, whether recidivism would occur if a defendant were to be released pretrial. In this paper, we develop a method to post-process an existing binary predictor to satisfy approximate counterfactual equalized odds, which requires S to be nearly conditionally independent of A given Y^0, within a tolerance specified by the user. Our predictor converges to an optimal fair predictor at √n rates under appropriate assumptions. We propose doubly robust estimators of the risk and fairness properties of a fixed post-processed predictor, and we show that they are √n-consistent and asymptotically normal under appropriate assumptions.
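To make the equalized-odds idea concrete, the following is a minimal sketch of post-processing for the conventional, observable-outcome criterion: it measures group-specific false positive rates and randomly flips positive predictions in the higher-error group until the rates match in expectation. This is an illustration of the general post-processing strategy, not the paper's method, which instead conditions on the potential outcome Y^0 and uses doubly robust estimation; the function names here are hypothetical.

```python
import numpy as np

def error_rates(s, y, a, group):
    """False positive and false negative rates of binary predictor s
    within one level of the sensitive feature a."""
    mask = a == group
    fpr = np.mean(s[mask & (y == 0)])       # P(S=1 | Y=0, A=group)
    fnr = np.mean(1 - s[mask & (y == 1)])   # P(S=0 | Y=1, A=group)
    return fpr, fnr

def postprocess_equal_fpr(s, y, a, rng=None):
    """Randomized post-processing: flip positive predictions in the
    higher-FPR group with probability p so that, in expectation, both
    groups' false positive rates agree. A simplified observable-outcome
    sketch; the paper's counterfactual version targets Y^0 instead of Y."""
    if rng is None:
        rng = np.random.default_rng(0)
    fpr0, _ = error_rates(s, y, a, 0)
    fpr1, _ = error_rates(s, y, a, 1)
    if max(fpr0, fpr1) == 0:
        return s.copy()
    hi = 0 if fpr0 > fpr1 else 1
    # Flipping a positive to 0 with probability p scales that group's FPR
    # by (1 - p); choose p so fpr_hi * (1 - p) = fpr_lo.
    p = 1 - min(fpr0, fpr1) / max(fpr0, fpr1)
    s_new = s.copy()
    flip = (a == hi) & (s == 1) & (rng.random(len(s)) < p)
    s_new[flip] = 0
    return s_new
```

The same flipping trick applied in both error directions (positives and negatives, with two probabilities per group) recovers the full equalized-odds post-processing; the single-rate version above keeps the arithmetic visible.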