Explaining Explanations to Society

01/19/2019
by Leilani H. Gilpin, et al.

There is a disconnect between explanatory artificial intelligence (XAI) methods and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.). The questions that artificial intelligence (AI) experts ask of opaque systems yield inside explanations, focused on debugging, reliability, and validation. These differ from the questions that society will ask of these systems in order to build trust and confidence in their decisions. Although explanatory AI systems can answer many of the questions that experts desire, they often do not explain why they made their decisions in a way that is both precise (true to the model) and understandable to humans. Such outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we focus on XAI methods for deep neural networks (DNNs) because of DNNs' widespread use in decision-making and their inherent opacity. We explore the types of questions that explanatory DNN systems can answer and discuss the challenges of building explanatory systems that provide outside explanations for societal requirements and benefit.
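To make the inside/outside distinction concrete, the sketch below shows one common kind of inside explanation for a DNN: a vanilla-gradient saliency map. This is an illustrative example only, not a method from the paper; the torchvision model and random input are hypothetical stand-ins.

```python
# Minimal sketch of an "inside explanation" for an opaque DNN:
# a vanilla-gradient saliency map. Illustrative only; the model
# and input are placeholders, not the paper's method.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # untrained stand-in for an opaque DNN
model.eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy RGB image
logits = model(x)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels:
# large magnitudes mark pixels the model is locally sensitive to.
logits[0, top_class].backward()
saliency = x.grad.abs().max(dim=1).values  # (1, 224, 224) importance map
```

A map like this answers an expert's question ("which pixels moved the score?") and is useful for debugging and validation, but it is not an outside explanation: it does not tell a policy maker why the decision was justified.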

