Attention is not Explanation

02/26/2019
by Sarthak Jain, et al.

Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work, we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful "explanations" for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code for all experiments is available at https://github.com/successar/AttentionExplanation.
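To make the first of these analyses concrete, the sketch below shows one way to compare attention weights against gradient-based feature importance with a rank correlation, in the spirit of the paper's experiments. It is a minimal illustration, not the authors' implementation: the ToyAttentionClassifier, its dimensions, and the random input are hypothetical stand-ins; the released code at the repository above covers the actual tasks and models.

```python
# Minimal sketch (not the authors' code): correlate attention weights with
# gradient-based token importance for a toy attention classifier.
import torch
import torch.nn as nn
from scipy.stats import kendalltau


class ToyAttentionClassifier(nn.Module):
    """Embedding -> BiLSTM -> additive attention -> linear output head."""

    def __init__(self, vocab_size=1000, emb_dim=32, hid_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid_dim, 1)
        self.out = nn.Linear(2 * hid_dim, 1)

    def forward(self, tokens):
        e = self.emb(tokens)                                       # (B, T, E)
        h, _ = self.rnn(e)                                         # (B, T, 2H)
        alpha = torch.softmax(self.attn(h).squeeze(-1), dim=-1)    # (B, T)
        ctx = (alpha.unsqueeze(-1) * h).sum(dim=1)                 # (B, 2H)
        return self.out(ctx).squeeze(-1), alpha, e


model = ToyAttentionClassifier()
tokens = torch.randint(0, 1000, (1, 20))   # one toy "document" of 20 tokens

logit, alpha, emb = model(tokens)

# Gradient-based importance: gradient of the prediction w.r.t. each token's
# embedding, collapsed to a per-token magnitude.
grads = torch.autograd.grad(logit.sum(), emb)[0]
grad_importance = grads.norm(dim=-1).squeeze(0)    # (T,)

# Rank correlation between the attention distribution and gradient importance;
# the paper reports that such correlations are often weak.
tau, _ = kendalltau(alpha.detach().squeeze(0).numpy(),
                    grad_importance.numpy())
print(f"Kendall tau between attention and gradient importance: {tau:.3f}")
```

In the paper this comparison is run over whole test sets for trained models on real tasks; an untrained toy model as above only demonstrates the mechanics of the measurement.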

Related research

Towards Transparent and Explainable Attention Models (04/29/2020)
Recent studies on interpretability of attention distributions have led t...

Attention cannot be an Explanation (01/26/2022)
Attention based explanations (viz. saliency maps), by providing interpre...

Is Attention Interpretable? (06/09/2019)
Attention mechanisms have recently boosted performance on a range of NLP...

Staying True to Your Word: (How) Can Attention Become Explanation? (05/19/2020)
The attention mechanism has quickly become ubiquitous in NLP. In additio...

Learning to Faithfully Rationalize by Construction (04/30/2020)
In many settings it is important for one to be able to understand why a ...

Attention Flows are Shapley Value Explanations (05/31/2021)
Shapley Values, a solution to the credit assignment problem in cooperati...

Why is Attention Not So Attentive? (06/10/2020)
Attention-based methods have played an important role in model interpret...
