Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach

03/13/2020
by Valérie Beaudouin, et al.

The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain the inner workings, the results, and the causes of failures of algorithms to users, regulators, and citizens. The originality of this paper is to combine the technical, legal, and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context. We propose three logical steps: first, define the main contextual factors, such as who the audience of the explanation is, the operational context, the level of harm that the system could cause, and the legal/regulatory framework. This step helps characterize the operational and legal needs for explanation, and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (e.g., input perturbation, saliency maps) and hybrid AI approaches. Third, as a function of the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed total costs.
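As a rough illustration of the post hoc, perturbation-based tools the abstract mentions, the sketch below ranks the features of a single input by how much random perturbations of each feature shift a model's output. This is a minimal sketch under stated assumptions, not the paper's implementation: the predict function and the toy linear model are hypothetical stand-ins.

# Minimal sketch of a post hoc, input-perturbation explanation:
# perturb each feature of one input and measure how much the
# model's scalar output moves. Hypothetical model and data.
import numpy as np

def perturbation_importance(predict, x, noise=0.1, trials=100, seed=0):
    """Score each feature of input x by the mean absolute change in
    predict(x) when that feature is perturbed with Gaussian noise."""
    rng = np.random.default_rng(seed)
    base = predict(x)
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        deltas = []
        for _ in range(trials):
            x_pert = x.copy()
            x_pert[j] += rng.normal(0.0, noise)  # perturb feature j only
            deltas.append(abs(predict(x_pert) - base))
        scores[j] = np.mean(deltas)
    return scores

if __name__ == "__main__":
    # Toy linear "model": the scores should recover the weight magnitudes.
    w = np.array([2.0, 0.0, -1.0])
    predict = lambda x: float(w @ x)
    x = np.array([1.0, 1.0, 1.0])
    print(perturbation_importance(predict, x))  # feature 0 scores highest

Such a local explanation is one candidate output in the paper's third step; whether producing it is worthwhile depends, per the framework, on the contextual factors and on whether its social benefits exceed the costs involved.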

