Self-Attention Attribution: Interpreting Information Interactions Inside Transformer

04/23/2020 · by Yaru Hao, et al.

The great success of Transformer-based models benefits from the powerful multi-head self-attention mechanism, which learns token dependencies and encodes contextual information from the input. Prior work strives to attribute model decisions to individual input features with different saliency measures, but these measures fail to explain how the input features interact with each other to reach predictions. In this paper, we propose a self-attention attribution algorithm to interpret the information interactions inside Transformer. We take BERT as an example and conduct extensive studies. First, we extract the most salient dependencies in each layer to construct an attribution graph, which reveals the hierarchical interactions inside Transformer. Furthermore, we apply self-attention attribution to identify the important attention heads, while the remaining heads can be pruned with only marginal performance degradation. Finally, we show that the attribution results can be used as adversarial patterns to implement non-targeted attacks against BERT.
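For a concrete picture of the attribution step the abstract describes, here is a minimal sketch of the core computation: integrated gradients taken over attention scores, approximated with a Riemann sum. The `forward_fn` interface and the toy forward pass are illustrative assumptions, not the paper's actual implementation; in practice `forward_fn` would recompute a BERT logit from a scaled copy of one layer's attention matrices.

```python
# A minimal sketch of self-attention attribution: integrated gradients
# over attention scores. `forward_fn` and the toy model are assumptions
# for illustration only.
import torch

def attention_attribution(forward_fn, attention, steps=20):
    """Approximate Attr(A) = A * integral_0^1 dF(alpha * A)/dA dalpha
    with a Riemann sum over `steps` interpolation points."""
    total_grad = torch.zeros_like(attention)
    for k in range(1, steps + 1):
        # Interpolate between the zero baseline and the observed attention.
        scaled = (k / steps) * attention.detach()
        scaled.requires_grad_(True)
        output = forward_fn(scaled)  # scalar model output, e.g. a class logit
        grad, = torch.autograd.grad(output, scaled)
        total_grad += grad
    return attention.detach() * total_grad / steps

# Toy usage: one head, a 4x4 attention matrix over random "value" vectors.
torch.manual_seed(0)
values = torch.randn(4, 8)
attn = torch.softmax(torch.randn(4, 4), dim=-1)
forward_fn = lambda A: (A @ values).sum()  # stand-in for the real classifier
scores = attention_attribution(forward_fn, attn)
print(scores)  # large entries mark salient token-to-token interactions
```

High-magnitude entries of `scores` correspond to the salient token-to-token dependencies from which the attribution graph is built, and aggregating the attribution magnitudes per head gives the head-importance estimate used for pruning.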

Related research

03/08/2022 · Measuring the Mixing of Contextual Information in the Transformer
The Transformer architecture aggregates input information through the se...

04/10/2020 · Telling BERT's full story: from Local Attention to Global Aggregation
We take a deep look into the behavior of self-attention heads in the tra...

08/30/2021 · Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning
The highly popular Transformer architecture, based on self-attention, is...

11/03/2021 · PhyloTransformer: A Discriminative Model for Mutation Prediction Based on a Multi-head Self-attention Mechanism
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused ...

07/06/2023 · Generalizing Backpropagation for Gradient-Based Interpretability
Many popular feature-attribution methods for interpreting deep neural ne...

04/04/2019 · Visualizing Attention in Transformer-Based Language Representation Models
We present an open-source tool for visualizing multi-head self-attention...

10/10/2020 · Structured Self-Attention Weights Encode Semantics in Sentiment Analysis
Neural attention, especially the self-attention made popular by the Tran...
