Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks

10/08/2021
by Cody Blakeney, et al.

Algorithmic bias is of increasing concern, both to the research community and to society at large. Bias in AI is more abstract and less intuitive than traditional forms of discrimination, and can be more difficult to detect and mitigate. A clear gap exists in the current literature on evaluating the relative bias in the performance of multi-class classifiers. In this work, we propose two simple yet effective metrics, Combined Error Variance (CEV) and Symmetric Distance Error (SDE), to quantitatively evaluate the class-wise bias of two models in comparison to one another. By evaluating the performance of these new metrics and by demonstrating their practical application, we show that they can be used to measure fairness as well as bias. These demonstrations show that our metrics can address specific needs for measuring bias in multi-class classification.
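To make the idea concrete, the sketch below compares two classifiers by the per-class changes in their false positive and false negative rates, then summarizes those changes with a variance-style score (in the spirit of CEV) and an asymmetry score (in the spirit of SDE). This is an illustrative reconstruction, not the paper's exact definitions: the relative-change normalization, the `eps` guard, and the function names are assumptions.

```python
import numpy as np

def per_class_rates(y_true, y_pred, n_classes):
    """One-vs-rest false positive and false negative rates per class."""
    fpr = np.zeros(n_classes)
    fnr = np.zeros(n_classes)
    for c in range(n_classes):
        pos = y_true == c
        neg = ~pos
        # FPR: fraction of non-class-c samples predicted as c
        fpr[c] = np.mean(y_pred[neg] == c) if neg.any() else 0.0
        # FNR: fraction of class-c samples predicted as something else
        fnr[c] = np.mean(y_pred[pos] != c) if pos.any() else 0.0
    return fpr, fnr

def cev_sde(y_true, pred_a, pred_b, n_classes, eps=1e-8):
    """Illustrative CEV/SDE-style scores comparing model B against model A.

    CEV-style score: mean squared distance of the per-class change
    points (dFPR, dFNR) from their centroid -- high when the error
    changes are spread unevenly across classes.
    SDE-style score: mean absolute gap |dFPR - dFNR| -- high when a
    class's errors shift asymmetrically toward false positives or
    false negatives. Exact normalization in the paper may differ.
    """
    fpr_a, fnr_a = per_class_rates(y_true, pred_a, n_classes)
    fpr_b, fnr_b = per_class_rates(y_true, pred_b, n_classes)
    d_fpr = (fpr_b - fpr_a) / (fpr_a + eps)  # relative change per class
    d_fnr = (fnr_b - fnr_a) / (fnr_a + eps)
    pts = np.stack([d_fpr, d_fnr], axis=1)
    cev = np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1))
    sde = np.mean(np.abs(d_fpr - d_fnr))
    return cev, sde
```

Both scores are zero when the two models make identical predictions, and grow as the second model's per-class error profile diverges from the first's, which is the sense in which a pair of models can be ranked as more or less class-wise biased relative to one another.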


