Is Sparse Attention more Interpretable?

06/02/2021
by Clara Meister, et al.

Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet the attention distribution is typically over representations internal to the model rather than the inputs themselves, suggesting this assumption may not have merit. We build on recent work exploring the interpretability of attention; we design a set of experiments to help us understand how sparsity affects our ability to use attention as an explainability tool. On three text classification tasks, we verify that only a weak relationship exists between inputs and their co-indexed intermediate representations, under sparse attention and otherwise. Further, we do not find any plausible mappings from sparse attention distributions to a sparse set of influential inputs through other avenues. Rather, we observe in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for understanding model behavior.
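The sparse attention in question is typically obtained by replacing softmax with a sparse normalizer such as sparsemax (Martins and Astudillo, 2016), which can assign exactly zero weight to some positions. As a minimal NumPy sketch (ours, not the paper's code), the snippet below contrasts the two. Note that in a real model the scores being normalized are computed from hidden states, i.e. representations internal to the model, not from the input tokens themselves, which is exactly the gap the abstract highlights.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the
    score vector z onto the probability simplex. Unlike softmax, it can
    assign exactly zero weight to low-scoring positions."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # scores in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum      # positions kept in the support
    k_z = k[support][-1]                     # support size
    tau = (cumsum[support][-1] - 1) / k_z    # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

# Toy attention scores for 5 time steps. In practice these would be produced
# from contextualized hidden states h_t, not from the raw inputs.
scores = np.array([2.0, 1.5, 0.1, -0.5, -1.0])
print(np.exp(scores) / np.exp(scores).sum())  # softmax: every weight > 0
print(sparsemax(scores))                      # sparsemax: [0.75, 0.25, 0, 0, 0]
```

Even with the exact zeros sparsemax produces, the surviving weights still index intermediate representations, so sparsity alone does not yield a sparse set of influential inputs.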

Related research

04/29/2020 · Towards Transparent and Explainable Attention Models
Recent studies on interpretability of attention distributions have led t...

05/15/2020 · Adaptive Transformers for Learning Multimodal Representations
The usage of transformers has grown from learning about language semanti...

12/30/2022 · On the Interpretability of Attention Networks
Attention mechanisms form a core component of several successful deep le...

09/24/2019 · Attention Interpretability Across NLP Tasks
The attention layer in a neural network model provides insights into the...

11/07/2019 · Transformation of Dense and Sparse Text Representations
Sparsity is regarded as a desirable property of representations, especia...

11/10/2020 · DoLFIn: Distributions over Latent Features for Interpretability
Interpreting the inner workings of neural models is a key step in ensuri...

05/25/2023 · Optimization and Interpretability of Graph Attention Networks for Small Sparse Graph Structures in Automotive Applications
For automotive applications, the Graph Attention Network (GAT) is a prom...
