The Case Against Explainability

05/20/2023
by Hofit Wasserman Rozen, et al.

As artificial intelligence (AI) becomes more prevalent, there is a growing demand from regulators to accompany decisions made by such systems with explanations. However, a persistent gap exists between the need to execute a meaningful right to explanation and the ability of Machine Learning systems to deliver on such a legal requirement. The regulatory appeal toward "a right to explanation" of AI systems can be attributed to the significant role that explanations, as part of the notion called reason-giving, play in law. Therefore, in this work we examine the purposes of reason-giving in law to analyze whether the reasons provided by end-user Explainability can adequately fulfill them. We find that reason-giving's legal purposes include: (a) making a better and more just decision, (b) facilitating due process, (c) authenticating human agency, and (d) enhancing the decision maker's authority. Using this methodology, we demonstrate end-user Explainability's inadequacy to fulfill reason-giving's role in law, given that reason-giving's functions rely on its impact on a human decision maker. Thus, end-user Explainability fails, or is unsuitable, to fulfill the first, second, and third legal functions. In contrast, we find that end-user Explainability excels at the fourth function, a quality which raises serious risks considering recent end-user Explainability research trends, the capabilities of Large Language Models, and the ability of both humans and machines to manipulate end users. Hence, we suggest that in some cases the right to explanation of AI systems could bring end users more harm than good. Accordingly, this study carries important policy ramifications, as it calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability and a right to explanation of AI systems.

Related research

Explanation Ontology: A Model of Explanations for User-Centered AI (10/04/2020)
Explainability has been a goal for Artificial Intelligence (AI) systems ...

Accountability of AI Under the Law: The Role of Explanation (11/03/2017)
The ubiquity of systems using artificial intelligence or "AI" has brough...

Meaningful XAI Based on User-Centric Design Methodology (08/25/2023)
This report first takes stock of XAI-related requirements appearing in v...

Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach (03/13/2020)
The recent enthusiasm for artificial intelligence (AI) is due principall...

Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"? (03/20/2018)
As concerns about unfairness and discrimination in "black box" machine l...

What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components (09/08/2022)
Explainability techniques for data-driven predictive models based on art...

Creation of User Friendly Datasets: Insights from a Case Study concerning Explanations of Loan Denials (06/11/2019)
Most explainable AI (XAI) techniques are concerned with the design of al...