On the Apparent Conflict Between Individual and Group Fairness

12/14/2019
by Reuben Binns, et al.

A distinction has been drawn in fair machine learning research between 'group' and 'individual' fairness measures. Many technical research papers assume that both are important but conflicting, and propose ways to minimise the trade-offs between these measures. This paper argues that this apparent conflict is based on a misconception. It draws on theoretical discussions from within fair machine learning research, and from political and legal philosophy, to argue that individual and group fairness are not fundamentally in conflict. First, it outlines accounts of egalitarian fairness which encompass plausible motivations for both group and individual fairness, thereby suggesting that there need be no conflict in principle. Second, it considers the concept of individual justice from legal philosophy and jurisprudence, which seems similar to, but in fact contradicts, the notion of individual fairness as proposed in the fair machine learning literature. The conclusion is that the apparent conflict between individual and group fairness is more an artifact of the blunt application of fairness measures than a matter of conflicting principles. In practice, this conflict may be resolved by a nuanced consideration of the sources of 'unfairness' in a particular deployment context, and the carefully justified application of measures to mitigate it.
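
To make the two families of measures concrete, the sketch below (not taken from the paper) illustrates a common group fairness measure, demographic parity, alongside a Lipschitz-style individual fairness check in the spirit of Dwork et al.'s "fairness through awareness". The synthetic data, similarity metric, and Lipschitz constant are illustrative assumptions, not choices made by the authors.

# Minimal sketch, assuming a binary protected attribute and Euclidean distance
# as the task-relevant similarity metric. All data here is synthetic.

import numpy as np

def demographic_parity_gap(y_pred, group):
    """Group fairness: largest gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def individual_fairness_violations(scores, features, lipschitz_const=1.0):
    """Individual fairness: count pairs whose score difference exceeds
    lipschitz_const times the distance between their feature vectors."""
    n = len(scores)
    violations = 0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(features[i] - features[j])
            if abs(scores[i] - scores[j]) > lipschitz_const * dist:
                violations += 1
    return violations

# Toy example with synthetic data and an arbitrary scoring rule.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 3))
group = rng.integers(0, 2, size=100)                      # protected attribute
scores = 1 / (1 + np.exp(-features @ np.array([0.5, -0.3, 0.2])))
y_pred = (scores > 0.5).astype(int)

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Individual fairness violations:", individual_fairness_violations(scores, features))

The point of the contrast is that the first measure compares aggregate statistics across groups, while the second constrains treatment of individuals pairwise; whether these pull in different directions depends on the data and the chosen similarity metric, which is precisely the kind of context-dependence the abstract highlights.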
