On Computing Probabilistic Abductive Explanations

12/12/2022
by Yacine Izza, et al.

The most widely studied explainable AI (XAI) approaches are unsound. This is the case with well-known model-agnostic explanation approaches, and it is also the case with approaches based on saliency maps. One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness. Unfortunately, intrinsic interpretability can display unwieldy explanation redundancy. Formal explainability represents the alternative to these non-rigorous approaches, with one example being PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. Recently, it has been observed that the (absolute) rigor of PI-explanations can be traded off for a smaller explanation size by computing so-called relevant sets. Given some positive δ, a set S of features is δ-relevant if, when the features in S are fixed, the probability of getting the target class exceeds δ. However, even for very simple classifiers, the complexity of computing relevant sets of features is prohibitive, with the decision problem being NP^PP-complete for circuit-based classifiers. In contrast with earlier negative results, this paper investigates practical approaches for computing relevant sets for a number of widely used classifiers, including Decision Trees (DTs), Naive Bayes Classifiers (NBCs), and several families of classifiers obtained from propositional languages. Moreover, the paper shows that, in practice and for these families of classifiers, relevant sets are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained for the families of classifiers considered.
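To make the δ-relevance definition concrete, the sketch below checks it by brute force for a toy classifier. It is a minimal illustration, not the paper's algorithm: the `classify` function and `is_delta_relevant` helper are hypothetical, features are assumed binary, and the unfixed features are assumed uniformly distributed, so the probability of the target class is just the fraction of completions that predict it.

```python
from itertools import product

# Hypothetical toy classifier over three binary features,
# written as a plain function standing in for a small decision tree.
def classify(x):
    # x is a tuple (x0, x1, x2) of 0/1 values
    if x[0] == 1:
        return 1 if x[1] == 1 else 0
    return 1 if x[2] == 1 else 0

def is_delta_relevant(fixed, n_features, target, delta):
    """Check delta-relevance of the fixed feature set by enumeration.

    `fixed` maps feature indices to their fixed values; the remaining
    features range over all 0/1 completions, i.e. a uniform input
    distribution over the free features is assumed.
    """
    free = [i for i in range(n_features) if i not in fixed]
    hits = total = 0
    for assignment in product((0, 1), repeat=len(free)):
        x = [0] * n_features
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(free, assignment):
            x[i] = v
        total += 1
        hits += classify(tuple(x)) == target
    # Probability of the target class is at least delta; some
    # definitions use a strict inequality at the boundary.
    return hits / total >= delta

# Fixing S = {x0, x1} to (1, 1) forces class 1, so S is
# delta-relevant for any delta up to 1.
print(is_delta_relevant({0: 1, 1: 1}, 3, target=1, delta=0.95))  # True
# Fixing only x0 = 1 yields class 1 on half the completions.
print(is_delta_relevant({0: 1}, 3, target=1, delta=0.95))        # False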


