Fair Enough: Searching for Sufficient Measures of Fairness

10/25/2021
by Suvodeep Majumder, et al.

Testing machine learning software for ethical bias has become a pressing concern. In response, recent research has proposed a plethora of new fairness metrics, for example, the dozens of fairness metrics in the IBM AIF360 toolkit. This raises the question: How can any fairness tool satisfy such a diverse range of goals? While we cannot completely simplify the task of fairness testing, we can certainly reduce the problem. This paper shows that many of those fairness metrics effectively measure the same thing. Based on experiments using seven real-world datasets, we find that (a) 26 classification metrics can be clustered into seven groups, and (b) four dataset metrics can be clustered into three groups. Further, each reduced set may actually predict different things. Hence, it is no longer necessary (or even possible) to satisfy all fairness metrics. In summary, to simplify the fairness testing problem, we recommend the following steps: (1) determine what type of fairness is desirable (and we offer a handful of such types); then (2) look up those types in our clusters; then (3) test for just one item per cluster. To support this process, all our scripts (and example datasets) are available at https://github.com/Repoanonymous/Fairness_Metrics.
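The clustering idea in the abstract can be illustrated with a small sketch. This is not the authors' code (their scripts are in the linked repository); it is a minimal, hypothetical Python example using synthetic scores, which groups metrics whose values rise and fall together across datasets and then keeps one representative metric per group:

```python
# Hypothetical sketch: cluster fairness metrics by how strongly they
# correlate across datasets, then keep one metric per cluster.
# The data here is synthetic; in practice each column would hold a real
# fairness metric (e.g. from AIF360) scored over many dataset/model runs.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Toy stand-in: 30 "runs" x 9 metrics, built from 3 underlying signals,
# so metrics 0/3/6, 1/4/7, and 2/5/8 should land in the same clusters.
base = rng.normal(size=(30, 3))
scores = np.hstack(
    [base + 0.05 * rng.normal(size=(30, 3)) for _ in range(3)]
)

# Distance = 1 - |correlation|: metrics that move together are treated
# as "measuring the same thing".
corr = np.corrcoef(scores, rowvar=False)
dist = 1.0 - np.abs(corr)

# Hierarchical (average-linkage) clustering on the condensed distances.
condensed = squareform(dist, checks=False)
labels = fcluster(linkage(condensed, method="average"),
                  t=0.5, criterion="distance")

# One representative metric per cluster: test only these.
representatives = sorted(
    int(np.flatnonzero(labels == c)[0]) for c in set(labels)
)
print("clusters:", labels.tolist())
print("test only metrics:", representatives)
```

The 0.5 distance threshold and average linkage are illustrative choices, not the paper's; the point is only that once metrics cluster, step (3) of the recommendation reduces to evaluating one metric per cluster.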

Related research

- 06/15/2023: Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization
  "Fairness in machine learning is important for societal well-being, but l..."
- 06/07/2018: Residual Unfairness in Fair Machine Learning from Prejudiced Data
  "Recent work in fairness in machine learning has proposed adjusting for f..."
- 10/03/2018: AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
  "Fairness is an increasingly important concern as machine learning models..."
- 06/29/2017: New Fairness Metrics for Recommendation that Embrace Differences
  "We study fairness in collaborative-filtering recommender systems, which ..."
- 02/16/2023: Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking
  "The increasing use of Machine Learning (ML) software can lead to unfair ..."
- 03/25/2023: Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics
  "Deep learning-based recognition systems are deployed at scale for severa..."
- 01/27/2023: Variance, Self-Consistency, and Arbitrariness in Fair Classification
  "In fair classification, it is common to train a model, and to compare an..."
