Towards a Measure of Individual Fairness for Deep Learning

09/28/2020
by Krystal Maughan, et al.

Deep learning has produced big advances in artificial intelligence, but trained neural networks often reflect and amplify bias in their training data, and thus produce unfair predictions. We propose a novel measure of individual fairness, called prediction sensitivity, that approximates the extent to which a particular prediction is dependent on a protected attribute. We show how to compute prediction sensitivity using standard automatic differentiation capabilities present in modern deep learning frameworks, and present preliminary empirical results suggesting that prediction sensitivity may be effective for measuring bias in individual predictions.
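The abstract does not spell out the computation, but one plausible reading is that prediction sensitivity is approximated as the magnitude of the partial derivative of the model's output with respect to the protected attribute, obtained with the framework's automatic differentiation. Below is a minimal, hypothetical sketch in PyTorch; the function name prediction_sensitivity, the assumption that the protected attribute appears as a single input feature at index protected_idx, and the use of the absolute gradient are illustrative assumptions, not the authors' exact formulation.

import torch

def prediction_sensitivity(model, x, protected_idx):
    # Hypothetical sketch: estimate how strongly a single prediction depends
    # on the protected attribute by differentiating the model's scalar output
    # with respect to that input feature.
    x = x.clone().detach().requires_grad_(True)
    score = model(x).squeeze()            # assumes a scalar prediction (logit or probability)
    grad, = torch.autograd.grad(score, x)  # standard reverse-mode autodiff
    return grad[protected_idx].abs().item()

# Toy usage with a small feed-forward network; feature 0 stands in for the
# protected attribute purely for illustration.
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
x = torch.randn(4)
print(prediction_sensitivity(model, x, protected_idx=0))

Because the measure only requires a gradient of the prediction with respect to the inputs, it can be computed per example at inference time without retraining, which is consistent with the abstract's claim that standard autodiff capabilities suffice.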

