A general framework for scientifically inspired explanations in AI

03/02/2020
by David Tuckey, et al.

Explainability in AI is gaining attention in the computer science community in response to the increasing success of deep learning and the pressing need to justify how such systems make predictions in life-critical applications. Work on explainability has predominantly focused on gaining insight into how machine learning systems function, either by exploring relationships between input data and predicted outcomes or by extracting simpler interpretable models. Drawing on literature surveys in philosophy and the social sciences, authors have highlighted the sharp difference between these generated explanations and human-made explanations, and have argued that current explanations in AI do not account for the complexity of human interaction needed to pass information effectively to non-expert users. In this paper we instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented. This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations. We illustrate how the framework can be used through two very different examples, an artificial neural network and a Prolog solver, and we provide a possible implementation for both.
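The central idea of an interactive "mental model" that releases explanatory detail on demand could look roughly like the sketch below. All names and the tree structure here are illustrative assumptions, not the authors' actual framework or API: a prediction is wrapped in a tree of explanatory steps, and the user unfolds it only as deep as they ask.

```python
# Hypothetical sketch of an "explanation on demand" interface.
# The class names and the tree representation are assumptions for
# illustration; they are not taken from the paper's implementation.

class ExplanationNode:
    """One step in a structured explanation of a prediction."""
    def __init__(self, claim, children=None):
        self.claim = claim               # human-readable statement
        self.children = children or []   # finer-grained supporting steps

class MentalModel:
    """Wraps an AI system's decision as a tree the user can query on demand."""
    def __init__(self, root):
        self.root = root

    def summary(self):
        # Top-level answer only: what a non-expert user sees first.
        return self.root.claim

    def why(self, depth=1):
        # Unfold the explanation tree to the requested depth, so the
        # user controls how much detail is passed on.
        def unfold(node, d):
            if d == 0 or not node.children:
                return node.claim
            return {node.claim: [unfold(c, d - 1) for c in node.children]}
        return unfold(self.root, depth)

# Toy example: a classifier's decision explained at two granularities.
model = MentalModel(
    ExplanationNode("Image classified as 'cat'", [
        ExplanationNode("Detected pointed ears", [
            ExplanationNode("High activation in early edge filters")]),
        ExplanationNode("Detected whiskers"),
    ]))

print(model.summary())        # coarse answer
print(model.why(depth=1))     # one level of supporting steps
```

The same query interface could in principle sit in front of very different systems, which is the appeal of the approach the abstract describes: a neural network would populate the tree from activations or attributions, while a Prolog solver would populate it from its proof tree.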


