On the Interpretability of Attention Networks

12/30/2022
by Lakshmi Narayan Pandey, et al.

Attention mechanisms form a core component of several successful deep learning architectures and are based on one key idea: "The output depends only on a small (but unknown) segment of the input." In several practical applications, such as image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of input responsible for the output are often used as a way to peek into the "reasoning" of the network. We make this notion precise for a variant of the classification problem, which we term selective dependence classification (SDC), when used with attention model architectures. Under this setting, we demonstrate various error modes in which an attention model can be accurate yet fail to be interpretable, and we show that such models do occur as a result of training. We illustrate situations that can accentuate or mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate several attention model learning algorithms designed to encourage sparsity, and demonstrate that these algorithms help improve interpretability.
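To make the abstract's setup concrete, here is a minimal sketch (not the paper's model, just a standard soft-attention computation with toy random data) of how an attention distribution over input segments is produced, and why its argmax is commonly read as "the segment the model used":

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy example: a single query attends over 4 input segments,
# each represented by an 8-dimensional key and value vector.
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))    # one key per input segment
values = rng.normal(size=(4, 8))  # one value per input segment
query = rng.normal(size=8)

weights = softmax(keys @ query)   # attention distribution over the 4 segments
output = weights @ values         # attended summary passed downstream

# Reading `weights.argmax()` as "the segment responsible for the output"
# is the interpretability claim the paper scrutinizes: the model can be
# accurate while this distribution points at the wrong segment.
print(weights.round(3), weights.argmax())
```

The paper's error modes arise precisely when `weights` concentrates on a segment other than the one the label actually depends on, even though the classifier built on `output` remains accurate.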


Related research

11/29/2021
Neural Attention for Image Captioning: Review of Outstanding Methods
Image captioning is the task of automatically generating sentences that ...

11/13/2018
Image Captioning Based on a Hierarchical Attention Mechanism and Policy Gradient Optimization
Automatically generating the descriptions of an image, i.e., image capti...

09/25/2020
Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training
In recent years, deep learning models have placed more emphasis on the i...

06/02/2021
Is Sparse Attention more Interpretable?
Sparse attention has been claimed to increase model interpretability und...

03/05/2023
Comparative study of Transformer and LSTM Network with attention mechanism on Image Captioning
In a globalized world at the present epoch of generative intelligence, m...

02/25/2021
Retrieval Augmentation to Improve Robustness and Interpretability of Deep Neural Networks
Deep neural network models have achieved state-of-the-art results in var...

08/30/2018
Iterative Recursive Attention Model for Interpretable Sequence Classification
Natural language processing has greatly benefited from the introduction ...
