Subjective fairness: Fairness is in the eye of the beholder

05/31/2017
by Christos Dimitrakakis et al.

We analyze different notions of fairness in decision making when the underlying model is not known with certainty. We argue that recent notions of fairness in machine learning need to be modified to incorporate uncertainty about model parameters. We introduce the notion of subjective fairness as a suitable candidate for fair Bayesian decision-making rules, relate this definition to existing ones, and experimentally demonstrate the inherent accuracy-fairness tradeoff under this definition.
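To give a rough sense of the kind of accuracy-fairness tradeoff the abstract refers to, the following is a minimal, hypothetical Python sketch of a decision rule that averages over posterior uncertainty in each group's base rate while penalizing a demographic-parity gap. It is not the paper's subjective-fairness criterion: the group names, Beta posteriors, penalty form, and penalty weights are assumptions chosen only for illustration.

```python
# Minimal, hypothetical sketch: a Bayesian decision rule that averages over
# posterior uncertainty in each group's base rate and penalizes the
# acceptance-rate gap between groups. All quantities here (group names,
# Beta posteriors, penalty weights) are illustrative assumptions, not the
# paper's method.
import numpy as np

rng = np.random.default_rng(0)
N_DRAWS = 20_000

# Hypothetical posteriors over each group's probability of a positive outcome.
theta = {
    "group_a": rng.beta(40, 10, N_DRAWS),  # posterior mean ~0.8
    "group_b": rng.beta(15, 35, N_DRAWS),  # posterior mean ~0.3
}

def objective(d_a, d_b, lam):
    """Posterior-expected accuracy minus lam * |acceptance-rate gap|.

    d_g is the probability of accepting a member of group g; for a draw
    theta_g, accuracy is d_g * theta_g + (1 - d_g) * (1 - theta_g).
    """
    acc_a = np.mean(d_a * theta["group_a"] + (1 - d_a) * (1 - theta["group_a"]))
    acc_b = np.mean(d_b * theta["group_b"] + (1 - d_b) * (1 - theta["group_b"]))
    accuracy = 0.5 * (acc_a + acc_b)
    disparity = abs(d_a - d_b)
    return accuracy - lam * disparity, accuracy, disparity

grid = [i / 20 for i in range(21)]   # candidate acceptance probabilities
for lam in (0.0, 0.1, 1.0):          # fairness penalty weights
    d_a, d_b = max(((a, b) for a in grid for b in grid),
                   key=lambda d: objective(d[0], d[1], lam)[0])
    _, acc, disp = objective(d_a, d_b, lam)
    print(f"lambda={lam:<4} decisions=({d_a:.2f}, {d_b:.2f}) "
          f"accuracy={acc:.3f} disparity={disp:.3f}")
```

In this toy, small penalty weights leave the two groups with different decisions and a large acceptance-rate gap; once the penalty outweighs the accuracy loss, the decisions equalize and posterior-expected accuracy drops, which is the general shape of the tradeoff the abstract says is demonstrated experimentally.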
