A Systematic Approach to Group Fairness in Automated Decision Making

09/09/2021
by Corinna Hertweck, et al.

While the field of algorithmic fairness has brought forth many ways to measure and improve the fairness of machine learning models, these findings are still not widely used in practice. We suspect that one reason for this is that the field of algorithmic fairness has produced a large number of fairness definitions, which are difficult to navigate. The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics and to give some insight into the philosophical reasoning for caring about these metrics. We do this by considering the sense in which socio-demographic groups are compared when making a statement about fairness.
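To make the idea of comparing socio-demographic groups concrete, here is a minimal sketch of one common group fairness metric, demographic (statistical) parity, which compares the rate of positive decisions across two groups. The function name, data, and two-group assumption are illustrative, not taken from the paper.

```python
# Sketch: demographic parity difference between two socio-demographic groups.
# A value of 0 means both groups receive positive decisions at the same rate.

def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-decision rates between the two groups.

    y_pred: list of binary decisions (1 = positive outcome, 0 = negative)
    groups: list of group labels, same length as y_pred (exactly two groups)
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)  # positive-decision rate per group
    a, b = rates.values()
    return abs(a - b)

# Illustrative data: group "a" gets positive decisions 75% of the time,
# group "b" only 25% of the time.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

Other group fairness metrics surveyed in this line of work (e.g. equalized odds) follow the same pattern but condition the comparison on the true outcome rather than comparing raw decision rates.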
