Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers

02/09/2022
by Krystal Maughan, et al.

As AI-based systems impact more and more areas of our lives, auditing them for fairness becomes an increasingly high-stakes problem. Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment. Counterfactual fairness describes an individualized notion of fairness but is even more challenging to evaluate after deployment. We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers. For every prediction made by the deployed model, prediction sensitivity helps answer the question: would this prediction have been different if this individual had belonged to a different demographic group? Prediction sensitivity can leverage correlations between protected status and other features and does not require protected status information at prediction time. Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness.
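The abstract does not spell out the computation, but a gradient-based reading is natural: score each prediction by how strongly the model's output moves along feature directions correlated with protected status, so no protected attribute is needed at prediction time. The sketch below is a minimal illustration under that assumption, not the paper's exact formulation; the toy model, the `protected_corr` vector (per-feature correlation with protected status, estimated offline from historical data), the `prediction_sensitivity` function, and the audit threshold are all hypothetical stand-ins.

```python
import torch

def prediction_sensitivity(model, x, protected_corr):
    """Hypothetical sketch: score one prediction's sensitivity to protected status.

    model          -- trained classifier mapping features to a score in [0, 1]
    x              -- 1-D feature tensor for a single individual
    protected_corr -- 1-D tensor: correlation of each feature with protected
                      status, estimated offline (e.g., from training data)
    """
    x = x.clone().detach().requires_grad_(True)
    output = model(x).squeeze()   # scalar prediction for this individual
    output.backward()             # compute d(output)/d(x)
    # Weight each feature's gradient by its correlation with protected status;
    # a large magnitude suggests the prediction hinges on proxies for the
    # protected attribute, flagging a possible counterfactual fairness issue.
    return torch.abs(x.grad @ protected_corr).item()

# Usage: flag any deployed-model prediction whose sensitivity exceeds a
# threshold chosen by the auditor (both model and threshold are toy values).
model = torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Sigmoid())
protected_corr = torch.tensor([0.9, 0.1, 0.0, 0.4])  # toy correlation estimates
x = torch.tensor([0.5, -1.2, 3.0, 0.7])

score = prediction_sensitivity(model, x, protected_corr)
if score > 0.1:
    print(f"possible counterfactual fairness violation (score={score:.3f})")
else:
    print(f"prediction appears stable (score={score:.3f})")
```

Because the correlation vector is estimated ahead of time, a check like this can run continually on every prediction the deployed classifier makes, matching the "continual audit" framing of the abstract.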


Related research

03/16/2022
Measuring Fairness of Text Classifiers via Prediction Sensitivity
With the rapid growth in language processing applications, fairness has ...

02/23/2023
Counterfactual Situation Testing: Uncovering Discrimination under Fairness given the Difference
We present counterfactual situation testing (CST), a causal data mining ...

08/07/2022
Counterfactual Fairness Is Basically Demographic Parity
Making fair decisions is crucial to ethically implementing machine learn...

11/30/2020
Towards Auditability for Fairness in Deep Learning
Group fairness metrics can detect when a deep learning model behaves dif...

09/28/2020
Towards a Measure of Individual Fairness for Deep Learning
Deep learning has produced big advances in artificial intelligence, but ...

10/13/2022
Walk a Mile in Their Shoes: a New Fairness Criterion for Machine Learning
The old empathetic adage, “Walk a mile in their shoes,” asks that one im...

04/09/2023
Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks
The deep feedforward neural networks (DNNs) are increasingly deployed in...
