Explaining AI as an Exploratory Process: The Peircean Abduction Model

09/30/2020
by Robert R. Hoffman, et al.

Current discussions of "Explainable AI" (XAI) give little consideration to the role of abduction in explanatory reasoning (see Mueller et al., 2018). It may be worthwhile to pursue this, with the aim of developing intelligent systems that allow abductive reasoning to be observed, analyzed, and assessed as a learnable skill. Abductive inference has been defined in many ways; for example, as the achievement of insight. Most often, abduction is treated as a single, punctuated act of syllogistic reasoning, like drawing a deductive or inductive inference from given premises. In contrast, the originator of the concept, the American scientist and philosopher Charles Sanders Peirce, regarded abduction as an exploratory activity. In this regard, Peirce's insights about reasoning align with conclusions from modern psychological research. Because abduction is often defined as "inferring the best explanation," the challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked. We explore these linkages in this report. The analysis provides a theoretical framework for understanding what XAI researchers are already doing, explains why some XAI projects are succeeding (or might succeed), and leads to design advice.
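
The contrast the abstract draws, between a one-shot "inference to the best explanation" and abduction as an exploratory activity, can be made concrete with a small sketch. The paper itself gives no algorithm, so the Python below is purely illustrative; every name in it (Hypothesis, generate_candidates, plausibility, and so on) is hypothetical. It places a single-step selection of the most plausible candidate next to an iterative loop that re-scores and elaborates the hypothesis set as new evidence arrives.

```python
# A minimal, hypothetical sketch: the report describes abduction conceptually and
# specifies no algorithm. This toy code only contrasts "inference to the best
# explanation" as a single, punctuated act with Peirce's view of abduction as an
# exploratory activity in which candidate hypotheses are generated, re-scored,
# and extended as evidence accumulates. All names here are invented.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Hypothesis:
    description: str
    score: float = 0.0  # plausibility given the evidence seen so far


def generate_candidates(observation: str) -> List[Hypothesis]:
    """Stand-in for hypothesis generation; a real system would draw on domain knowledge."""
    return [Hypothesis(f"{observation} is explained by {cause}") for cause in ("A", "B", "C")]


def plausibility(h: Hypothesis, evidence: List[str]) -> float:
    """Stand-in plausibility measure: count the evidence items the hypothesis mentions."""
    return float(sum(1 for e in evidence if e in h.description))


def abduce_one_shot(observation: str, evidence: List[str]) -> Hypothesis:
    """Abduction treated as a single act: pick the best explanation once and stop."""
    return max(generate_candidates(observation), key=lambda h: plausibility(h, evidence))


def abduce_exploratory(observation: str, evidence_stream: Iterable[str]) -> Hypothesis:
    """Abduction treated as an exploratory process: hypotheses are retained,
    re-scored, and elaborated each time a new piece of evidence arrives."""
    candidates = generate_candidates(observation)
    seen: List[str] = []
    for evidence in evidence_stream:
        seen.append(evidence)
        for h in candidates:
            h.score = plausibility(h, seen)
        # Exploration step: elaborate the current front-runner into a new candidate.
        front_runner = max(candidates, key=lambda h: h.score)
        candidates.extend(generate_candidates(front_runner.description)[:1])
    return max(candidates, key=lambda h: h.score)


if __name__ == "__main__":
    print(abduce_one_shot("the alarm sounded", ["A"]).description)
    print(abduce_exploratory("the alarm sounded", ["A", "B"]).description)
```

The point of the sketch is only that the exploratory version keeps its hypothesis set open and revisable as observations accumulate, whereas the one-shot version commits after a single scoring pass.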

Related research

Alterfactual Explanations – The Relevance of Irrelevance for Explaining AI Systems (07/19/2022)
Abductive Commonsense Reasoning (08/15/2019)
Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive reasoning on hypotheses (02/02/2023)
Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI (05/03/2022)
Explainable Natural Language Reasoning via Conceptual Unification (09/30/2020)
Towards Explainable Inference about Object Motion using Qualitative Reasoning (07/28/2018)
Chaining Simultaneous Thoughts for Numerical Reasoning (11/29/2022)
