On the Moral Justification of Statistical Parity

by Corinna Hertweck et al.

A crucial but often neglected aspect of algorithmic fairness is the question of how we justify enforcing a certain fairness metric from a moral perspective. When fairness metrics are defined, they are typically argued for by highlighting their mathematical properties. Rarely are the moral assumptions beneath the metric explained. Our aim in this paper is to consider the moral aspects associated with the statistical fairness criterion of independence (statistical parity). To this end, we consider previous work, which discusses the two worldviews "What You See Is What You Get" (WYSIWYG) and "We're All Equal" (WAE) and by doing so provides some guidance for clarifying the possible assumptions in the design of algorithms. We present an extension of this work, which centers on morality. The most natural moral extension is that independence needs to be fulfilled if and only if differences in predictive features (e.g., ability to perform well on a job, propensity to commit a crime, etc.) between socio-demographic groups are caused by unjust social disparities and measurement errors. Through two counterexamples, we demonstrate that this extension is not universally true. This means that the question of whether independence should be used or not cannot be satisfactorily answered by only considering the justness of differences in the predictive features.
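The independence criterion discussed in the abstract can be made concrete with a short sketch. The snippet below is illustrative only (the data and function names are not from the paper): it computes per-group selection rates P(Ŷ = 1 | A = a) and the largest gap between them, which is zero exactly when statistical parity holds.

```python
# Minimal sketch of the independence (statistical parity) criterion:
# a predictor Y_hat satisfies independence w.r.t. a group attribute A
# when P(Y_hat = 1 | A = a) is equal across all groups a.
# All names and data below are hypothetical, not taken from the paper.

from collections import defaultdict

def selection_rates(y_hat, groups):
    """Return P(Y_hat = 1 | A = a) for each group a."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, g in zip(y_hat, groups):
        counts[g][0] += pred
        counts[g][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(y_hat, groups):
    """Largest difference in selection rates; 0 means exact parity."""
    rates = selection_rates(y_hat, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" is selected at rate 0.5, group "b" at 0.25.
y_hat  = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(y_hat, groups))  # prints 0.25
```

Whether closing such a gap is morally required is, per the paper's argument, not settled by the gap itself: it depends on whether the underlying group differences stem from unjust disparities or measurement error.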


