Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification

05/06/2021
by George Chrysostomou, et al.

Neural network architectures in natural language processing often use attention mechanisms to produce probability distributions over input token representations. Attention has empirically been shown to improve performance on various tasks, and its weights are widely used as explanations for model predictions. However, recent studies (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) have shown that attention cannot generally be considered a faithful explanation (Jacovi and Goldberg, 2020) across encoders and tasks. In this paper, we seek to improve the faithfulness of attention-based explanations for text classification. We achieve this by proposing a new family of Task-Scaling (TaSc) mechanisms that learn task-specific, non-contextualised information to scale the original attention weights. Evaluation tests for explanation faithfulness show that the three proposed TaSc variants improve attention-based explanations across two attention mechanisms, five encoders and five text classification datasets, without sacrificing predictive performance. Finally, we demonstrate that TaSc consistently provides more faithful attention-based explanations than three widely-used interpretability techniques.
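The core idea described above can be sketched in a few lines. The following is a hypothetical, minimal illustration (not the authors' code): each vocabulary id is assigned a learned, non-contextualised scalar score, and a token's original attention weight is multiplied by the score of its vocabulary id. In the paper these scores are learned jointly with the classifier; here they are randomly initialised purely to show the scaling mechanism, and the function names (`softmax`, `tasc_scale`) are illustrative.

```python
import math
import random

random.seed(0)
VOCAB_SIZE = 10

# One learned scalar per vocabulary id. In TaSc these would be trained
# jointly with the task; random values here only illustrate the mechanism.
token_scores = [random.gauss(0.0, 1.0) for _ in range(VOCAB_SIZE)]

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def tasc_scale(attention, token_ids, scores):
    """Scale each attention weight by its token's task-specific score."""
    return [a * scores[t] for a, t in zip(attention, token_ids)]

# Example: a 4-token input with original attention weights.
token_ids = [3, 1, 7, 3]
attention = softmax([0.2, 1.5, -0.3, 0.8])
scaled = tasc_scale(attention, token_ids, token_scores)
```

Because the scores are non-contextualised (tied to vocabulary ids, not to hidden states), the same token receives the same scaling factor wherever it appears in the input, which is what injects task-specific rather than instance-specific information into the explanation.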


