The AI Act proposal: a new right to technical interpretability?

03/14/2023
by Chiara Gallese et al.

The debate about the concept of the so-called right to explanation in AI has been the subject of a wealth of literature. In the legal scholarship it has focused on Art. 22 GDPR, and in the technical scholarship on techniques that help explain the output of a given model (explainable AI, XAI). The purpose of this work is to investigate whether the new provisions introduced by the proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act), in combination with Convention 108+ and the GDPR, are sufficient to indicate the existence of a right to technical explainability in the EU legal framework and, if not, whether the EU should include such a right in its current legislation. This is a preliminary work submitted to the online event organised by the Information Society Law Center; it will later be developed into a full paper.
