Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms

10/08/2020
by Gareth P. Jones, et al.

Understanding and removing bias from the decisions made by machine learning models is essential to avoid discrimination against unprivileged groups. Despite recent progress in algorithmic fairness, there is still no clear answer as to which bias-mitigation approaches are most effective. Evaluation strategies are typically use-case specific, rely on data with unclear bias, and employ a fixed policy for converting model outputs to decision outcomes. To address these problems, we performed a systematic comparison of a number of popular fairness algorithms applicable to supervised classification. Our study is the most comprehensive of its kind: it uses three real and four synthetic datasets and two different ways of converting model outputs to decisions, and it considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines, spanning both fairness-unaware and fairness-aware algorithms. We found that fairness-unaware algorithms typically fail to produce adequately fair models, and that the simplest algorithms are not necessarily the fairest. We also found that fairness-aware algorithms can induce fairness without a material drop in predictive power. Finally, we found that dataset idiosyncrasies (e.g., the degree of intrinsic unfairness and the nature of the correlations present) affect the performance of fairness-aware approaches. Our results allow practitioners to narrow down the approaches they may wish to adopt without having to know their fairness requirements in advance.
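The abstract refers to converting model outputs into decisions via a policy and then evaluating fairness of those decisions. As a minimal sketch only (the paper's exact metrics and policies are not reproduced here; the function names and the choice of demographic parity as the example metric are my own illustrative assumptions), a fixed-threshold decision policy and one common group-fairness measure might look like:

```python
# Illustrative sketch, not the paper's implementation. Function names and
# the choice of metric (demographic parity difference) are assumptions.

def threshold_policy(scores, threshold=0.5):
    """A fixed decision policy: convert model scores to 0/1 decisions."""
    return [1 if s >= threshold else 0 for s in scores]

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 decisions
    groups:    list of group labels (exactly two distinct values expected)
    """
    labels = sorted(set(groups))
    if len(labels) != 2:
        raise ValueError("expected exactly two groups")
    rates = []
    for g in labels:
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates.append(sum(members) / len(members))
    return abs(rates[0] - rates[1])
```

A value of 0 means both groups receive positive decisions at the same rate; how close to 0 counts as "adequately fair" is exactly the kind of policy question the paper's comparison framework is designed to leave open.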


Related research

- Modeling Techniques for Machine Learning Fairness: A Survey (11/04/2021)
  Machine learning models are becoming pervasive in high-stakes applicatio...
- Auditing and Achieving Intersectional Fairness in Classification Problems (11/04/2019)
  Machine learning algorithms are extensively used to make increasingly mo...
- k-Anonymity in Practice: How Generalisation and Suppression Affect Machine Learning Classifiers (02/09/2021)
  The protection of private information is a crucial issue in data-driven ...
- A survey on datasets for fairness-aware machine learning (10/01/2021)
  As decision-making increasingly relies on machine learning and (big) dat...
- FairXGBoost: Fairness-aware Classification in XGBoost (09/03/2020)
  Highly regulated domains such as finance have long favoured the use of m...
- A benchmark study on methods to ensure fair algorithmic decisions for credit scoring (09/16/2022)
  The utility of machine learning in evaluating the creditworthiness of lo...
- Navigating Ensemble Configurations for Algorithmic Fairness (10/11/2022)
  Bias mitigators can improve algorithmic fairness in machine learning mod...
