Abstraction, Validation, and Generalization for Explainable Artificial Intelligence

05/16/2021
by Scott Cheng-Hsin Yang, et al.

Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision-making must be understandable to a wide range of stakeholders. Methods to explain AI have been proposed to answer this challenge, but a lack of theory impedes the development of the systematic abstractions that are necessary for cumulative knowledge gains. We propose Bayesian Teaching as a framework for unifying explainable AI (XAI) by integrating machine learning and human learning. Bayesian Teaching formalizes explanation as a communicative act by an explainer to shift the beliefs of an explainee. This formalization decomposes any XAI method into four components: (1) the inference to be explained, (2) the explanatory medium, (3) the explainee model, and (4) the explainer model. The abstraction afforded by this decomposition elucidates the invariances among XAI methods. The decomposition also enables modular validation, as each of the first three components can be tested semi-independently. It further promotes generalization through the recombination of components from different XAI systems, which facilitates the generation of novel variants. These new variants need not be evaluated one by one provided that each component has been validated, leading to an exponential decrease in development time. Finally, by making the goal of explanation explicit, Bayesian Teaching helps developers to assess how suitable an XAI system is for its intended real-world use case. Thus, Bayesian Teaching provides a theoretical framework that encourages systematic, scientific investigation of XAI.
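The four-component decomposition can be made concrete with a toy example. The sketch below is a minimal illustration, not the paper's implementation: the coin-bias setup, the three-hypothesis space, and all function names are assumptions. It follows the standard Bayesian Teaching selection rule, under which the explainer chooses an explanation d in proportion to the explainee's posterior belief in the target inference, P_explainer(d | theta*) proportional to P_explainee(theta* | d), with each of the four components appearing as a separate, swappable piece:

```python
# Minimal sketch of the Bayesian Teaching decomposition on a toy problem.
# The coin-bias setup and all names are illustrative assumptions.
import itertools
import numpy as np

# (1) Inference to be explained: the target hypothesis the explainer
# wants the explainee to adopt (a coin bias of 0.8 among three candidates).
hypotheses = np.array([0.2, 0.5, 0.8])   # candidate coin biases
prior = np.full(len(hypotheses), 1 / 3)  # explainee's prior beliefs
target = 2                               # index of the hypothesis to teach

def explainee_posterior(data):
    """(3) Explainee model: a Bayesian learner that updates its beliefs
    over the hypotheses after observing a sequence of coin flips."""
    heads = sum(data)
    tails = len(data) - heads
    likelihood = hypotheses**heads * (1 - hypotheses)**tails
    posterior = prior * likelihood
    return posterior / posterior.sum()

# (2) Explanatory medium: short sequences of example flips (1 = heads).
candidates = list(itertools.product([0, 1], repeat=4))

# (4) Explainer model: select an explanation in proportion to how strongly
# it shifts the explainee's belief toward the target inference, i.e.
# P_explainer(d | theta*) is proportional to P_explainee(theta* | d).
scores = np.array([explainee_posterior(d)[target] for d in candidates])
selection_probs = scores / scores.sum()

best = candidates[int(np.argmax(selection_probs))]
print("Chosen explanation:", best)
print("Explainee posterior after explanation:", explainee_posterior(best))
```

Because each component is a separate piece of the program, one could in principle validate the explainee model against human belief updates, swap in a different explanatory medium, or change the target inference without touching the rest, which is the modular validation and recombination described in the abstract.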


