Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

05/12/2020
by Sandra Wachter, et al.

This article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Because algorithmic discrimination differs in nature from human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminates. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes), which can act as a signal to victims that discrimination has occurred; equivalent signalling mechanisms and agency do not exist in algorithmic systems. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement that aligns with the European Court of Justice's "gold standard". Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for the assessment, but not the judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practised under EU non-discrimination law. N.B. Abridged abstract
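For illustration, the sketch below shows one way conditional demographic disparity might be computed in practice. It follows a common formalisation in which demographic disparity is the protected group's share of negative outcomes minus its share of positive outcomes, and CDD is the stratum-size-weighted average of that disparity across the levels of a legitimate conditioning attribute (e.g. the department applied to). The column names, toy data, and exact formalisation here are illustrative assumptions, not the article's prescription.

```python
import pandas as pd

def demographic_disparity(outcome: pd.Series, protected: pd.Series) -> float:
    """Demographic disparity (DD): the protected group's share of negative
    outcomes (outcome == 0) minus its share of positive outcomes (outcome == 1).
    DD > 0 means the group is over-represented among rejections."""
    rejected = outcome == 0
    accepted = outcome == 1
    if rejected.sum() == 0 or accepted.sum() == 0:
        return float("nan")  # disparity undefined if one outcome class is empty
    return ((protected & rejected).sum() / rejected.sum()
            - (protected & accepted).sum() / accepted.sum())

def conditional_demographic_disparity(df: pd.DataFrame, outcome_col: str,
                                      protected_col: str, condition_col: str) -> float:
    """CDD: per-stratum demographic disparities averaged with weights
    proportional to stratum size, where strata are the levels of a
    'legitimate' conditioning attribute (assumed here, e.g. department)."""
    total, cdd = len(df), 0.0
    for _, stratum in df.groupby(condition_col):
        dd = demographic_disparity(stratum[outcome_col],
                                   stratum[protected_col].astype(bool))
        cdd += (len(stratum) / total) * dd
    return cdd

# Hypothetical admissions data: outcome 1 = admitted, 0 = rejected.
applications = pd.DataFrame({
    "admitted":   [1, 0, 0, 1, 1, 0, 0, 0, 1, 1],
    "protected":  [1, 1, 0, 0, 1, 1, 1, 0, 0, 0],
    "department": ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
})
print(conditional_demographic_disparity(applications, "admitted", "protected", "department"))
```

On this toy data the disparity is driven entirely by one department; conditioning on a legitimate attribute before comparing outcomes is what distinguishes CDD from unconditional demographic parity.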
