Towards Better Model Understanding with Path-Sufficient Explanations

09/13/2021
by Ronny Luss, et al.

Feature-based local attribution methods are among the most prevalent in the explainable artificial intelligence (XAI) literature. Going beyond standard correlation, methods have recently been proposed that highlight what should be minimally sufficient to justify the classification of an input (viz. pertinent positives). While minimal sufficiency is an attractive property, the resulting explanations are often too sparse for a human to understand and evaluate the local behavior of the model, making it difficult to judge its overall quality. To overcome these limitations, we propose a novel method, the Path-Sufficient Explanations Method (PSEM), that outputs a sequence of sufficient explanations of strictly decreasing size (or value) for a given input, from the original input down to a minimally sufficient explanation. This sequence can be thought of as smoothly tracing the local boundary of the model, thus providing better intuition about its local behavior for the specific input. We validate these claims, both qualitatively and quantitatively, with experiments that show the benefit of PSEM across three modalities (image, tabular, and text). A user study demonstrates the method's strength in communicating local behavior, with many users able to correctly determine the prediction made by the model.
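To make the idea of a path of sufficient explanations concrete, the sketch below is a greedy toy analogue for tabular inputs: it repeatedly replaces one feature with a baseline value while the model's predicted class is preserved, recording each intermediate (still sufficient) explanation until a minimally sufficient one is reached. This is an illustrative simplification, not the authors' PSEM algorithm; the function and model names are hypothetical.

```python
def path_of_sufficient_explanations(model, x, baseline):
    """Greedily mask one feature at a time (replace it with its baseline
    value) while the predicted class stays unchanged, and record the
    sequence of intermediate sufficient explanations.

    NOTE: illustrative toy analogue only, not the PSEM method itself.
    """
    target = model(x)                    # class to be preserved along the path
    current = list(x)
    path = [tuple(current)]              # path starts at the original input
    active = set(range(len(x)))          # indices of features still kept
    while True:
        removable = None
        for i in sorted(active):
            trial = list(current)
            trial[i] = baseline[i]
            if model(trial) == target:   # still sufficient without feature i
                removable = i
                break
        if removable is None:
            break                        # minimally sufficient explanation reached
        current[removable] = baseline[removable]
        active.discard(removable)
        path.append(tuple(current))      # next, smaller sufficient explanation
    return path

# Hypothetical toy model: predicts class 1 when the feature sum exceeds 2.0.
toy_model = lambda v: int(sum(v) > 2.0)
path = path_of_sufficient_explanations(toy_model, [1.0, 2.0, 0.5, 0.1], [0.0] * 4)
# Each element of `path` keeps the prediction; the last is minimally sufficient.
```

Reading the path from front to back shows which features can be dropped, and in what order, before the prediction would change, which is the intuition the abstract describes as tracing the model's local boundary.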


