Explaining Explanations to Society

by Leilani H. Gilpin, et al.

There is a disconnect between explanatory artificial intelligence (XAI) methods and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.). The questions that artificial intelligence (AI) experts ask of opaque systems elicit inside explanations, focused on debugging, reliability, and validation. These differ from the questions that society will ask of these systems in order to build trust and confidence in their decisions. Although explanatory AI systems can answer many of the questions that experts desire, they often do not explain why they made a decision in a way that is both precise (true to the model) and understandable to humans. These outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we focus on XAI methods for deep neural networks (DNNs) because of DNNs' widespread use in decision-making and their inherent opacity. We explore the types of questions that explanatory DNN systems can answer and discuss challenges in building explanatory systems that provide outside explanations for societal requirements and benefit.


