AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation

01/19/2023
by Mayukh Deb, et al.

Generative transformer models have become increasingly complex, with large numbers of parameters and the ability to process multiple input modalities. Current methods for explaining their predictions are resource-intensive. Most crucially, they require prohibitively large amounts of extra memory, since they rely on backpropagation, which allocates almost twice as much GPU memory as the forward pass. This makes it difficult, if not impossible, to use them in production. We present AtMan, which provides explanations of generative transformer models at almost no extra cost. Specifically, AtMan is a modality-agnostic perturbation method that manipulates the attention mechanisms of transformers to produce relevance maps for the input with respect to the output prediction. Instead of using backpropagation, AtMan applies a parallelizable token-based search method based on cosine-similarity neighborhoods in the embedding space. Our exhaustive experiments on text and image-text benchmarks demonstrate that AtMan outperforms current state-of-the-art gradient-based methods on several metrics while being computationally efficient. As such, AtMan is suitable for use in large model inference deployments.
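The core idea of the abstract — suppressing a token's influence inside the attention mechanism and measuring how the output changes, instead of backpropagating gradients — can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the single attention head, the suppression factor of 0.1, and the use of an output-difference norm as the relevance score are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, suppress_idx=None, factor=0.1):
    """Single-head scaled dot-product attention. If suppress_idx is given,
    the pre-softmax scores for that input token are shifted by log(factor),
    which is equivalent to scaling its post-softmax weight down by `factor`
    before renormalization."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if suppress_idx is not None:
        scores[:, suppress_idx] += np.log(factor)
    return softmax(scores) @ v

def relevance(q, k, v, target=-1):
    """Relevance of each input token: how much the target position's
    output moves when attention to that token is suppressed."""
    base = attention(q, k, v)[target]
    rel = []
    for i in range(k.shape[0]):
        perturbed = attention(q, k, v, suppress_idx=i)[target]
        rel.append(np.linalg.norm(base - perturbed))
    return np.array(rel)

# Toy example with 5 tokens and 8-dimensional embeddings.
rng = np.random.default_rng(0)
n, d = 5, 8
q = k = v = rng.normal(size=(n, d))
rel = relevance(q, k, v)
print(rel)  # one score per input token; larger = more influential
```

Each perturbation here is a plain forward pass, which is what makes this family of methods cheap in memory: no activations need to be retained for a backward pass, and the per-token perturbations are independent, so they can run in parallel.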


