Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

05/12/2020
by   Sandra Wachter, et al.

This article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Second, we show how the legal protection offered by non-discrimination law is challenged when it is AI, not humans, that discriminates. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes), which can act as a signal to victims that discrimination has occurred. Equivalent signalling mechanisms and agency do not exist in algorithmic systems. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement that aligns with the European Court of Justice's "gold standard." Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law.

N.B. This is an abridged abstract.
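As a rough illustration of the kind of summary statistic the abstract refers to, the sketch below computes a conditional demographic disparity in the commonly used formulation: within each stratum defined by a legitimate conditioning attribute, demographic disparity is taken as the protected group's share of negative outcomes minus its share of positive outcomes, and CDD is the average of these values across strata weighted by stratum size. This is a minimal, hedged sketch under those assumptions, not the authors' reference implementation; the column names and toy data are hypothetical.

```python
import pandas as pd

def conditional_demographic_disparity(df, group_col, outcome_col, strata_col,
                                      protected_value, positive_outcome=1):
    """Sketch of conditional demographic disparity (CDD).

    Per stratum, demographic disparity (DD) = protected group's share of
    negative outcomes minus its share of positive outcomes. CDD is the
    stratum-size-weighted average of DD over all strata where both outcome
    classes are present.
    """
    disparities, weights = [], []
    for _, stratum in df.groupby(strata_col):
        positive = stratum[stratum[outcome_col] == positive_outcome]
        negative = stratum[stratum[outcome_col] != positive_outcome]
        if len(positive) == 0 or len(negative) == 0:
            continue  # DD is undefined if one outcome class is empty in this stratum
        share_of_negative = (negative[group_col] == protected_value).mean()
        share_of_positive = (positive[group_col] == protected_value).mean()
        disparities.append(share_of_negative - share_of_positive)
        weights.append(len(stratum))
    total = sum(weights)
    if total == 0:
        raise ValueError("No stratum contains both outcome classes.")
    return sum(w * d for w, d in zip(weights, disparities)) / total

# Hypothetical admissions data; 'dept' is the legitimate conditioning factor.
data = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "f", "m", "f", "m"],
    "admitted": [0,    1,   1,   1,   0,   0,   1,   1],
    "dept":     ["A",  "A", "A", "A", "B", "B", "B", "B"],
})
print(conditional_demographic_disparity(data, "gender", "admitted", "dept", "f"))
```

On this toy data the disparity in department A (2/3) is diluted by department B (0), giving a CDD of 1/3; conditioning on the legitimate attribute is what distinguishes CDD from an unconditional demographic disparity computed over the whole dataset.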

