Attention vs non-attention for a Shapley-based explanation method

04/26/2021
by Tom Kersten, et al.

The field of explainable AI has recently seen an explosion in the number of explanation methods for highly non-linear deep neural networks. The extent to which such methods – often proposed and tested in the domain of computer vision – are appropriate for the explainability challenges of NLP remains relatively unexplored. In this work, we consider Contextual Decomposition (CD) – a Shapley-based input feature attribution method that has been shown to work well for recurrent NLP models – and test the extent to which it is useful for models that contain attention operations. To this end, we extend CD to cover the operations necessary for attention-based models. We then compare how long-distance subject-verb relationships are processed by models with and without attention, considering a number of different syntactic structures in two languages: English and Dutch. Our experiments confirm that CD can successfully be applied to attention-based models as well, providing an alternative Shapley-based attribution method for modern neural networks. In particular, using CD, we show that the English and Dutch models exhibit similar processing behaviour, but that under the hood there are consistent differences between our attention and non-attention models.
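To illustrate the core idea behind Contextual Decomposition, the sketch below propagates a hidden state split into a "relevant" part (beta, the features being attributed) and an "irrelevant" part (gamma) through a linear layer and a ReLU. Linear layers decompose exactly; for the nonlinearity, CD uses a Shapley-style average over orderings. This is a minimal illustration, not the paper's implementation: the bias convention, the two-term Shapley average, and all variable names here are simplifying assumptions for a plain feed-forward step.

```python
import numpy as np

def cd_linear(beta, gamma, W, b):
    # Linear maps decompose exactly; by one common convention the bias
    # is assigned to the irrelevant part (other splits are possible).
    return W @ beta, W @ gamma + b

def cd_relu(beta, gamma):
    # Nonlinearities do not decompose exactly; CD approximates the
    # relevant share with a two-term Shapley average over the orderings
    # in which beta and gamma enter the activation.
    z = np.maximum(beta + gamma, 0.0)
    rel = 0.5 * (np.maximum(beta, 0.0) + (z - np.maximum(gamma, 0.0)))
    return rel, z - rel

# Toy forward pass: attribute the first input feature of x.
x = np.array([1.0, -2.0, 0.5])
mask = np.array([1.0, 0.0, 0.0])      # feature set "in focus"
beta, gamma = x * mask, x * (1 - mask)

W = np.array([[0.5, -1.0, 2.0],
              [1.5,  0.3, -0.7]])
b = np.array([0.1, -0.2])

beta, gamma = cd_linear(beta, gamma, W, b)
beta, gamma = cd_relu(beta, gamma)

# Key invariant: the two parts always sum to the ordinary forward pass.
assert np.allclose(beta + gamma, np.maximum(W @ x + b, 0.0))
print("relevant:", beta, "irrelevant:", gamma)
```

Extending CD to attention, as the paper does, amounts to defining analogous decomposition rules for the softmax and the weighted-sum operations, so that the beta/gamma invariant is preserved through an attention block as well.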

