On Interactive Explanations as Non-Monotonic Reasoning

07/30/2022
by Guilherme Paulino-Passos, et al.

Recent work has exposed consistency issues with explanations: methods generate local explanations that seem reasonable instance by instance, yet are inconsistent across instances. This suggests not only that instance-wise explanations can be unreliable, but, more importantly, that a user interacting with a system via multiple inputs may actually lose confidence in it. To analyse this issue, we treat explanations as objects that can themselves be reasoned about, and present a formal model of the interactive scenario between user and system as sequences of inputs, outputs, and explanations. We argue that explanations can be thought of as committing to some model behaviour (even if only prima facie), suggesting a form of entailment which, we argue, should be regarded as non-monotonic. This allows us: 1) to resolve some apparent inconsistencies between explanations, for instance via a specificity relation; 2) to consider properties from the non-monotonic reasoning literature and discuss their desirability, gaining further insight into the interactive explanation scenario.
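To make the idea concrete, here is a minimal sketch (not the paper's formal model; all names and the rule encoding are illustrative assumptions) of how treating explanations as defeasible rules and resolving conflicts by specificity yields non-monotonic behaviour: a conclusion licensed by a general explanation is retracted once a strictly more specific, conflicting explanation applies.

```python
# Illustrative sketch only: explanations encoded as (premise, prediction) pairs,
# where a premise is a frozenset of feature conditions. A more specific
# explanation (strictly larger premise) defeats a conflicting general one.

def more_specific(premise_a, premise_b):
    """premise_a is strictly more specific than premise_b (strict superset)."""
    return premise_b < premise_a

def resolve(explanations, instance):
    """Predictions of matching explanations, discarding any explanation
    defeated by a strictly more specific, conflicting match."""
    matching = [(p, lbl) for p, lbl in explanations if p <= instance]
    undefeated = [(p, lbl) for p, lbl in matching
                  if not any(more_specific(q, p) and other != lbl
                             for q, other in matching)]
    return {lbl for _, lbl in undefeated}

# Two local explanations, each reasonable in isolation but jointly inconsistent:
explanations = [
    (frozenset({"high_income"}), "approve"),                 # general rule
    (frozenset({"high_income", "recent_default"}), "deny"),  # specific exception
]

print(resolve(explanations, frozenset({"high_income"})))
# -> {'approve'}
print(resolve(explanations, frozenset({"high_income", "recent_default"})))
# -> {'deny'}  (adding information retracts 'approve': non-monotonicity)
```

The second call shows the non-monotonic step: learning the additional condition `recent_default` withdraws the earlier conclusion rather than merely adding to it, which is how a specificity relation can dissolve an apparent inconsistency between explanations.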


