Using sensitive data to prevent discrimination by AI: Does the GDPR need a new exception?

05/17/2022
by Marvin van Bekkum, et al.

Organisations can use artificial intelligence to make decisions about people for a variety of reasons, for instance, to select the best candidates from many job applications. However, AI systems can have discriminatory effects when used for decision-making. To illustrate, an AI system could reject applications from people with a certain ethnicity, even though the organisation did not intend such ethnicity discrimination. But in Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally leads to ethnicity discrimination: the organisation may not know the applicants' ethnicity. In principle, the GDPR bans the use of certain 'special categories of data' (sometimes called 'sensitive data'), which include data on ethnicity, religion, and sexual preference. The European Commission's proposal for an AI Act includes a provision that would enable organisations to use special categories of data for auditing their AI systems. This paper asks whether the GDPR's rules on special categories of personal data hinder the prevention of AI-driven discrimination. We argue that the GDPR does prohibit such use of special category data in many circumstances. We also map out the arguments for and against creating an exception to the GDPR's ban on using special categories of personal data, to enable the prevention of discrimination by AI systems. The paper discusses European law, but it can be relevant outside Europe too, as policymakers around the world grapple with the tension between privacy and non-discrimination policy.

