Walk a Mile in Their Shoes: a New Fairness Criterion for Machine Learning

10/13/2022
by Norman Matloff, et al.

The old empathetic adage, “Walk a mile in their shoes,” asks that one imagine the difficulties others may face. This suggests a new counterfactual fairness criterion for machine learning, framed at the group level: how would members of a nonprotected group fare if their group were subject to the conditions faced by a protected group? Instead of asking what sentence a particular Caucasian convict would receive if he were Black, we extend the notion to entire groups: e.g., how would the average sentence for all White convicts change if they were Black but retained their other characteristics, such as the same number of prior convictions? We frame the problem and study it empirically on several datasets. Our approach also offers a solution to the problem of covariates correlated with sensitive attributes.
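As a rough illustration of the group-level counterfactual idea, the following is a minimal sketch (ours, not the authors' code; the toy data and the column names race, priors, and sentence are hypothetical). It fits a model that includes the sensitive attribute, then compares the nonprotected group's mean prediction with the mean prediction for the same individuals after flipping only their group membership:

```python
# Minimal sketch of a group-level counterfactual comparison.
# Toy data and column names are hypothetical, not from the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "race": rng.choice(["Black", "White"], size=n),
    "priors": rng.poisson(2.0, size=n),  # prior convictions
})
# Simulated sentences with a built-in racial disparity.
df["sentence"] = (12 + 3 * df["priors"]
                  + 4 * (df["race"] == "Black")
                  + rng.normal(0, 2, size=n))

# One-hot encode race; "race_White" = 1 means White.
X = pd.get_dummies(df[["race", "priors"]], drop_first=True)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, df["sentence"])

# Factual: mean predicted sentence for the White (nonprotected) group.
white = X.loc[df["race"] == "White"]
factual = model.predict(white).mean()

# Counterfactual: same individuals, same priors, group label flipped.
cf = white.copy()
cf["race_White"] = 0
counterfactual = model.predict(cf).mean()

print(f"factual mean sentence:        {factual:.2f}")
print(f"counterfactual mean sentence: {counterfactual:.2f}")
```

Note that this naive flip leaves all other covariates at their White-group values; covariates correlated with race (here, priors) are exactly the complication the paper's criterion is designed to handle.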

Related research

research · 02/24/2023
Intersectional Fairness: A Fractal Approach
The issue of fairness in AI has received an increasing amount of attenti...

research · 02/21/2020
Robust Optimization for Fairness with Noisy Protected Groups
Many existing fairness criteria for machine learning involve equalizing ...

research · 09/27/2018
Counterfactual Fairness in Text Classification through Robustness
In this paper, we study counterfactual fairness in text classification, ...

research · 02/09/2022
Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers
As AI-based systems increasingly impact many areas of our lives, auditin...

research · 03/15/2023
DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision
Algorithmic fairness has become an important machine learning problem, e...

research · 10/24/2019
Almost Politically Acceptable Criminal Justice Risk Assessment
In criminal justice risk forecasting, one can prove that it is impossibl...

research · 10/24/2020
Fair Hate Speech Detection through Evaluation of Social Group Counterfactuals
Approaches for mitigating bias in supervised models are designed to redu...
