Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics

by Joachim Baumann et al.

Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems. However, these metrics are still insufficiently linked to philosophical theories, and their moral meaning is often unclear. We propose a general framework for analyzing the fairness of decision systems based on theories of distributive justice, encompassing different established "patterns of justice" that correspond to different normative positions. We show that the most popular group fairness metrics can be interpreted as special cases of our approach. We thus provide a unifying, interpretative framework for group fairness metrics that reveals the normative choices associated with each of them and makes their moral substance intelligible. At the same time, we extend the space of possible fairness metrics beyond those currently discussed in the fair ML literature. Our framework also overcomes several limitations of group fairness metrics that have been criticized in the literature, most notably that (1) they are parity-based, i.e., they demand some form of equality between groups, which can sometimes harm marginalized groups; (2) they compare only decisions across groups, not the resulting consequences for those groups; and (3) they do not sufficiently represent the full breadth of the distributive justice literature.
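To make the "parity-based" criticism in the abstract concrete, here is a minimal, illustrative sketch (not taken from the paper) of one widely used group fairness metric, demographic parity, which demands equal positive-decision rates across groups. All function names and data are hypothetical and assume binary decisions encoded as 0/1.

```python
# Illustrative sketch: parity-based group fairness metrics compare a
# statistic of the decisions across protected groups. Demographic
# (statistical) parity compares positive-decision rates.

def positive_rate(decisions):
    """Share of positive (d = 1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in positive-decision rates between two groups.

    A value of 0 means the parity criterion is exactly satisfied;
    larger values indicate a larger disparity in decisions.
    """
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical decision vectors for two groups
group_a = [1, 1, 0, 1]   # 75% positive decisions
group_b = [1, 0, 0, 1]   # 50% positive decisions
gap = demographic_parity_difference(group_a, group_b)  # 0.25
```

Note that this metric compares only the decisions themselves, not their consequences for each group, which is exactly limitation (2) the paper's framework aims to address.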

