Human Interpretation of Saliency-based Explanation Over Text

01/27/2022
by Hendrik Schuff, et al.

While much research in explainable AI focuses on producing effective explanations, less work is devoted to the question of how people understand and interpret those explanations. In this work, we address this question through a study of saliency-based explanations over textual data. Feature-attribution explanations of text models aim to communicate which parts of the input text were more influential than others toward the model's decision. Many current explanation methods, such as gradient-based or Shapley value-based methods, provide measures of importance that are well understood mathematically. But how does a person receiving the explanation (the explainee) comprehend it? And does their understanding match what the explanation attempted to communicate? We empirically investigate the effect of various factors of the input, the feature-attribution explanation, and the visualization procedure on laypeople's interpretation of the explanation. We query crowdworkers for their interpretations on tasks in English and German, and fit a generalized additive mixed model (GAMM) to their responses, accounting for the factors of interest. We find that people often misinterpret the explanations: superficial and unrelated factors, such as word length, influence the explainees' importance assignments even though the explanation communicates importance directly. We then show that some of this distortion can be attenuated: we propose a method to adjust saliencies based on model estimates of over- and under-perception, and we explore bar charts as an alternative to heatmap saliency visualization. We find that both approaches can reduce the distorting effect of specific factors, leading to a better-calibrated understanding of the explanation.
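The abstract does not come with code, but the sketch below gives a rough, hypothetical illustration of the kind of analysis it describes: a generalized additive model is fit to simulated importance ratings, with the saliency shown to a participant and a superficial factor (word length) as predictors. It uses the third-party pygam library, which the paper does not mention, and it omits the random-effects component (e.g., per participant) that the paper's GAMM would include; all variable names and data are invented for illustration.

# Hypothetical sketch, not the authors' code: relate the saliency shown to a
# crowdworker and the length of the highlighted word to the importance rating
# the worker reports. The paper fits a GAMM with random effects; pygam only
# fits GAMs, so that component is dropped in this simplified illustration.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 500

displayed_saliency = rng.uniform(0, 1, n)  # importance communicated by the explanation
word_length = rng.integers(1, 15, n)       # characters in the highlighted word (superficial factor)
perceived_importance = (                   # simulated participant rating
    displayed_saliency
    + 0.02 * word_length                   # injected distortion from word length
    + rng.normal(0, 0.1, n)
)

X = np.column_stack([displayed_saliency, word_length])
gam = LinearGAM(s(0) + s(1)).fit(X, perceived_importance)

# A non-flat partial dependence for the word-length term indicates that this
# superficial factor shifts perceived importance even though it carries no
# information about the model's decision.
grid = gam.generate_X_grid(term=1)
print(gam.partial_dependence(term=1, X=grid)[:5])

Under these simulated data, the fitted smooth for word length comes out visibly non-flat, mirroring the kind of distortion the study reports for human explainees.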


Related research

05/04/2023
Neighboring Words Affect Human Interpretation of Saliency Explanations
Word-level saliency explanations ("heat maps over words") are often used...

10/01/2021
LEMON: Explainable Entity Matching
State-of-the-art entity matching (EM) methods are hard to interpret, and...

06/21/2021
Multivariate Data Explanation by Jumping Emerging Patterns Visualization
Visual Analytics (VA) tools and techniques have shown to be instrumental...

03/31/2023
Rethinking interpretation: Input-agnostic saliency mapping of deep visual classifiers
Saliency methods provide post-hoc model interpretation by attributing in...

07/04/2022
Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
In recent years, an abundance of feature attribution methods for explain...

12/13/2022
On the Relationship Between Explanation and Prediction: A Causal View
Explainability has become a central requirement for the development, dep...

04/21/2020
Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling
Recently, state-of-the-art NLP models gained an increasing syntactic and...
