
On the Moral Justification of Statistical Parity
A crucial but often neglected aspect of algorithmic fairness is the question of how we justify enforcing a certain fairness metric from a moral perspective. When fairness metrics are defined, they are typically argued for by highlighting their mathematical properties. Rarely are the moral assumptions beneath the metric explained. Our aim in this paper is to consider the moral aspects associated with the statistical fairness criterion of independence (statistical parity). To this end, we consider previous work, which discusses the two worldviews "What You See Is What You Get" (WYSIWYG) and "We're All Equal" (WAE) and by doing so provides some guidance for clarifying the possible assumptions in the design of algorithms. We present an extension of this work, which centers on morality. The most natural moral extension is that independence needs to be fulfilled if and only if differences in predictive features (e.g., ability to perform well on a job, propensity to commit a crime, etc.) between sociodemographic groups are caused by unjust social disparities and measurement errors. Through two counterexamples, we demonstrate that this extension is not universally true. This means that the question of whether independence should be used or not cannot be satisfactorily answered by only considering the justness of differences in the predictive features.
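The statistical fairness criterion discussed above, independence (statistical parity), requires predictions to be identically distributed across sociodemographic groups, i.e. P(Ŷ = 1 | S = 1) = P(Ŷ = 1 | S = 2). As a minimal illustration (not code from the paper; the function name and data are hypothetical), the gap between groupwise positive-prediction rates can be computed as:

```python
import numpy as np

def statistical_parity_gap(y_pred, s):
    """Absolute difference in positive-prediction rates between groups 1 and 2.

    Independence (statistical parity) holds exactly when this gap is 0,
    i.e. P(Yhat = 1 | S = 1) = P(Yhat = 1 | S = 2).
    """
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    rate_group_1 = y_pred[s == 1].mean()  # positive rate in group S = 1
    rate_group_2 = y_pred[s == 2].mean()  # positive rate in group S = 2
    return abs(rate_group_1 - rate_group_2)

# Hypothetical binary predictions for eight individuals, four per group.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
s      = [1, 1, 1, 1, 2, 2, 2, 2]
gap = statistical_parity_gap(y_pred, s)  # 0.75 vs. 0.25 -> gap of 0.5
```

Whether a nonzero gap like this one signals injustice is exactly what the paper argues cannot be settled by the metric alone: it depends on whether the underlying differences in predictive features are themselves just.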