
Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks

by Cody Blakeney et al.
Illinois Institute of Technology
Texas State University

Algorithmic bias is of increasing concern, both to the research community and to society at large. Bias in AI is more abstract and less intuitive than traditional forms of discrimination, which can make it more difficult to detect and mitigate. A clear gap exists in the current literature on evaluating the relative bias in the performance of multi-class classifiers. In this work, we propose two simple yet effective metrics, Combined Error Variance (CEV) and Symmetric Distance Error (SDE), to quantitatively evaluate the class-wise bias of two models in comparison to one another. By evaluating the performance of these new metrics and by demonstrating their practical application, we show that they can be used to measure fairness as well as bias. These demonstrations show that our metrics can address specific needs for measuring bias in multi-class classification.
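As an illustration of the kind of class-wise, model-versus-model comparison the abstract describes, the sketch below computes per-class false-negative and false-positive rates for a baseline and a test model, then summarizes how the per-class error changes spread out (a CEV-like variance score) and how asymmetric they are (an SDE-like distance score). This is a hedged sketch under assumed definitions, not the paper's exact CEV/SDE formulas; all function names here are hypothetical.

```python
import numpy as np

def per_class_error_rates(y_true, y_pred, num_classes):
    """Per-class false-negative rate (FNR) and false-positive rate (FPR)."""
    fnr = np.zeros(num_classes)
    fpr = np.zeros(num_classes)
    for c in range(num_classes):
        pos = y_true == c          # samples that truly belong to class c
        neg = ~pos                 # all other samples
        # Fraction of true class-c samples missed by the model.
        fnr[c] = np.mean(y_pred[pos] != c) if pos.any() else 0.0
        # Fraction of non-class-c samples wrongly labeled as c.
        fpr[c] = np.mean(y_pred[neg] == c) if neg.any() else 0.0
    return fnr, fpr

def cev_sde_sketch(y_true, base_pred, test_pred, num_classes, eps=1e-8):
    """Illustrative CEV/SDE-style scores (assumed, not the paper's formulas).

    Each class contributes a 2-D point of relative (FNR, FPR) change
    between the test model and the baseline. The CEV-like score is the
    mean squared distance of those points from their centroid; the
    SDE-like score is the mean absolute asymmetry between the FNR and
    FPR changes.
    """
    fnr_b, fpr_b = per_class_error_rates(y_true, base_pred, num_classes)
    fnr_t, fpr_t = per_class_error_rates(y_true, test_pred, num_classes)
    d_fnr = (fnr_t - fnr_b) / (fnr_b + eps)   # relative FNR change per class
    d_fpr = (fpr_t - fpr_b) / (fpr_b + eps)   # relative FPR change per class
    pts = np.stack([d_fnr, d_fpr], axis=1)
    cev_like = np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1))
    sde_like = np.mean(np.abs(d_fnr - d_fpr))
    return cev_like, sde_like
```

Comparing a model against itself yields zero for both scores, which matches the intuition that the metrics measure *relative* class-wise bias between two models rather than absolute accuracy.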

