Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There?

05/04/2021
by Matthias Kuppler, et al.

The advent of powerful prediction algorithms has led to increased automation of high-stakes decisions regarding the allocation of scarce resources such as government spending and welfare support. This automation bears the risk of perpetuating unwanted discrimination against vulnerable and historically disadvantaged groups. Research on algorithmic discrimination in computer science and other disciplines has developed a plethora of fairness metrics to detect and correct discriminatory algorithms. Drawing on the robust sociological and philosophical discourse on distributive justice, we identify the limitations and problematic implications of prominent fairness metrics. We show that metrics implementing equality of opportunity only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority. We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
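The abstract refers to fairness metrics that implement equality of opportunity. As an illustration only (not taken from the paper), the sketch below computes one common operationalization of that metric: the gap in true positive rates between two demographic groups. All function names and the synthetic data are hypothetical.

```python
# Minimal sketch of an equality-of-opportunity check: compare true positive
# rates (TPR) across two groups. A gap of 0 means the metric is satisfied.
import numpy as np

def true_positive_rate(y_true, y_pred, mask):
    """TPR among the actual positives of the subgroup selected by `mask`."""
    positives = mask & (y_true == 1)
    if positives.sum() == 0:
        return np.nan  # no positives in this group; TPR undefined
    return (y_pred[positives] == 1).mean()

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR difference between group 0 and group 1."""
    tpr_a = true_positive_rate(y_true, y_pred, group == 0)
    tpr_b = true_positive_rate(y_true, y_pred, group == 1)
    return abs(tpr_a - tpr_b)

# Toy example with synthetic binary labels, predictions, and group membership.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
print(equal_opportunity_gap(y_true, y_pred, group))
```

As the paper argues, a check of this kind presupposes that allocations should track deservingness (here, the positive label); it says nothing about whether an egalitarian, sufficientarian, or prioritarian allocation would be more appropriate.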


Related research

02/26/2018
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
As algorithms are increasingly used to make important decisions that aff...

10/12/2020
Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness
Decision-making systems increasingly orchestrate our world: how to inter...

03/10/2020
Addressing multiple metrics of group fairness in data-driven decision making
The Fairness, Accountability, and Transparency in Machine Learning (FAT-...

01/14/2019
Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements
As more researchers have become aware of and passionate about algorithmi...

08/04/2021
Fairness in Algorithmic Profiling: A German Case Study
Algorithmic profiling is increasingly used in the public sector as a mea...

09/12/2018
Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability
Algorithmic predictions are increasingly used to aid, or in some cases s...

08/02/2017
Fairness-aware machine learning: a perspective
Algorithms learned from data are increasingly used for deciding many asp...