Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive reasoning on hypotheses

02/02/2023
by Brian Y. Lim, et al.

Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. We argue that XAI should support abductive reasoning (inference to the best explanation) with diagrammatic reasoning to convey hypothesis generation and evaluation. Inspired by Peircean diagrammatic reasoning and the 5-step abduction process, we propose Diagrammatization, an approach that provides diagrammatic, abductive explanations based on domain hypotheses. We implemented DiagramNet for a clinical application that predicts diagnoses from heart auscultation and explains its predictions with shape-based murmur diagrams. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also achieves better prediction performance than baseline models. We further demonstrate the usefulness of diagrammatic explanations in a qualitative user study with medical students, showing that clinically relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-conventional abductive explanations for user-centric XAI.

