Attention cannot be an Explanation

01/26/2022
by Arjun R Akula, et al.

Attention-based explanations (viz., saliency maps) are assumed to improve human trust and reliance in the underlying models by providing interpretability for black-box models such as deep neural networks. Recently, it has been shown that attention weights are frequently uncorrelated with gradient-based measures of feature importance. Motivated by this, we ask a follow-up question: assuming we consider only tasks where attention weights do correlate well with feature importance, how effective are these attention-based explanations at increasing human trust and reliance in the underlying models? In other words, can we use attention as an explanation? We perform extensive human-study experiments that qualitatively and quantitatively assess the degree to which attention-based explanations are suitable for increasing human trust and reliance. Our experimental results show that attention cannot be used as an explanation.
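To make the premise concrete, below is a minimal, hypothetical sketch (not the authors' experimental code) of the kind of check behind the claim that attention weights are often uncorrelated with gradient-based feature importance: a toy PyTorch attention classifier whose per-token attention weights are compared, via Kendall's tau rank correlation, against the gradient norm of each token's embedding. The model, vocabulary size, dimensions, and input are all illustrative assumptions.

```python
# Hypothetical sketch: compare attention weights against gradient-based
# token importance for a toy attention classifier. Everything here
# (model, sizes, data) is an illustrative placeholder.
import torch
import torch.nn as nn
from scipy.stats import kendalltau

class AttnClassifier(nn.Module):
    """Toy text classifier with additive attention over token embeddings."""
    def __init__(self, vocab_size=1000, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn_score = nn.Linear(embed_dim, 1)   # per-token attention logit
        self.out = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                  # (seq_len, embed_dim)
        emb.retain_grad()                            # keep grads for importance
        alpha = torch.softmax(self.attn_score(emb).squeeze(-1), dim=0)
        context = (alpha.unsqueeze(-1) * emb).sum(dim=0)  # attention-weighted sum
        return self.out(context), alpha, emb

model = AttnClassifier()
tokens = torch.randint(0, 1000, (12,))               # one 12-token "sentence"

logits, alpha, emb = model(tokens)
# Gradient-based importance: L2 norm of d(predicted-class logit)/d(embedding).
logits[logits.argmax()].backward()
grad_importance = emb.grad.norm(dim=-1)

# Rank correlation between the two orderings of token importance.
tau, _ = kendalltau(alpha.detach().numpy(), grad_importance.numpy())
print(f"Kendall tau between attention and gradient importance: {tau:.3f}")
```

A low tau under this kind of comparison is the prior finding the abstract builds on; the paper's own question starts from the complementary case, where the two measures do agree.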


Related research

Attention is not Explanation (02/26/2019)
Attention mechanisms have seen wide adoption in neural NLP models. In ad...

MEGAN: Multi-Explanation Graph Attention Network (11/23/2022)
Explainable artificial intelligence (XAI) methods are expected to improv...

Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification (05/06/2021)
Neural network architectures in natural language processing often use at...

Why is Attention Not So Attentive? (06/10/2020)
Attention-based methods have played an important role in model interpret...

Attention Flows are Shapley Value Explanations (05/31/2021)
Shapley Values, a solution to the credit assignment problem in cooperati...

Attention vs non-attention for a Shapley-based explanation method (04/26/2021)
The field of explainable AI has recently seen an explosion in the number...

Learning to Deceive with Attention-Based Explanations (09/17/2019)
Attention mechanisms are ubiquitous components in neural architectures a...
