Subjective fairness: Fairness is in the eye of the beholder

by Christos Dimitrakakis, et al.

We analyze different notions of fairness in decision making when the underlying model is not known with certainty. We argue that recent notions of fairness in machine learning need to be modified to incorporate uncertainty about model parameters. We introduce the notion of subjective fairness as a suitable candidate for fair Bayesian decision rules, relate this definition to existing ones, and experimentally demonstrate the inherent accuracy-fairness tradeoff under this definition.
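The core idea of evaluating fairness under parameter uncertainty can be illustrated with a minimal sketch. This is not the paper's method; it is a hypothetical toy example in which each group's acceptance rate is an unknown parameter with a Beta posterior, and a "subjective" disparity is computed as the expected demographic-parity gap over the posterior, rather than the gap at a single point estimate (all names and pseudo-counts below are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior over each group's unknown acceptance rate,
# encoded as Beta(alpha, beta) pseudo-counts (numbers are illustrative).
posterior = {
    "group_a": (12.0, 8.0),
    "group_b": (9.0, 11.0),
}

def point_estimate_disparity(posterior):
    """Demographic-parity gap evaluated at the posterior-mean parameters."""
    means = {g: a / (a + b) for g, (a, b) in posterior.items()}
    return abs(means["group_a"] - means["group_b"])

def subjective_disparity(posterior, n_samples=100_000):
    """Expected demographic-parity gap under the posterior: the violation
    a Bayesian decision maker believes holds on average over all
    plausible models, not just the single best-guess model."""
    a1, b1 = posterior["group_a"]
    a2, b2 = posterior["group_b"]
    t1 = rng.beta(a1, b1, n_samples)
    t2 = rng.beta(a2, b2, n_samples)
    return float(np.mean(np.abs(t1 - t2)))

print(point_estimate_disparity(posterior))  # gap at the point estimate
print(subjective_disparity(posterior))      # expected gap under uncertainty
```

By Jensen's inequality the expectation of the absolute gap is at least the gap of the expectations, so accounting for parameter uncertainty can only make the perceived violation larger in this toy setup, which is one intuition for why point-estimate fairness notions may understate unfairness.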





Related papers

Fairness as a Program Property

On Fairness, Diversity and Randomness in Algorithmic Decision Making

Statistical Equity: A Fairness Classification Objective

Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning

Group Fairness Is Not Derivable From Justice: a Mathematical Proof

Comparing Fairness Criteria Based on Social Outcome

Who Gets What, According to Whom? An Analysis of Fairness Perceptions in Service Allocation