The Flawed Foundations of Fair Machine Learning

06/02/2023
by Robert Lee Poe, et al.

The definition and implementation of fairness in automated decisions have been extensively studied by the research community. Yet fallacious reasoning, misleading assertions, and questionable practices hide at the foundations of the current fair machine learning paradigm. These flaws result from a failure to understand that the trade-off between statistically accurate outcomes and group-similar outcomes exists as an independent, external constraint rather than as a subjective manifestation, as has been commonly argued. First, we explain that only one conception of fairness is present in the fair machine learning literature: group similarity of outcomes based on a sensitive attribute, where the similarity benefits an underprivileged group. Second, we show that there is, in fact, a trade-off between statistically accurate outcomes and group-similar outcomes in any data setting where group disparities exist, and that this trade-off presents an existential threat to the equitable fair machine learning approach. Third, we introduce a proof-of-concept evaluation to help researchers and designers understand the relationship between statistically accurate outcomes and group-similar outcomes. Finally, we provide suggestions for future work, aimed at data scientists, legal scholars, and data ethicists, that utilize the conceptual and experimental framework described throughout this article.
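The trade-off the abstract describes can be illustrated with a minimal sketch (the group names and base rates below are invented for illustration, not taken from the paper): when two groups have different base rates of the positive outcome, even a perfectly accurate predictor exhibits a demographic-parity gap, so any predictor enforcing group-similar outcomes must overturn some statistically correct decisions.

```python
import random

random.seed(0)

# Hypothetical toy population: two groups with different base rates of
# the positive outcome (e.g. loan repayment). Rates are illustrative.
base_rate = {"A": 0.7, "B": 0.4}
population = [(g, 1 if random.random() < base_rate[g] else 0)
              for g in ("A", "B") for _ in range(10_000)]

def selection_rate(preds, group):
    # Fraction of members of `group` receiving a positive prediction.
    rows = [p for (g, _), p in zip(population, preds) if g == group]
    return sum(rows) / len(rows)

# A perfectly accurate predictor reproduces the true labels exactly...
perfect = [y for _, y in population]
accuracy = sum(p == y for p, (_, y) in zip(perfect, population)) / len(population)
gap = abs(selection_rate(perfect, "A") - selection_rate(perfect, "B"))
print(f"accuracy={accuracy:.2f}, demographic-parity gap={gap:.2f}")

# ...yet its parity gap equals the base-rate difference, so closing the
# gap requires changing statistically correct decisions, costing accuracy.
```

With the rates above, the perfectly accurate predictor attains a parity gap of roughly 0.3, matching the base-rate difference between the groups: the constraint is a property of the data, not of the model.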


