Fast Monte-Carlo Approximation of the Attention Mechanism

01/30/2022
by Hyunjun Kim et al.

We introduce Monte-Carlo Attention (MCA), a randomized approximation method for reducing the computational cost of the self-attention mechanism in Transformer architectures. MCA exploits the fact that the importance of each token in an input sequence varies with its attention score; some degree of error is therefore tolerable when encoding tokens with low attention. Using approximate matrix multiplication, MCA applies a different error bound to each input token, so that tokens with low attention scores are computed with relaxed precision while the error on salient elements is minimized. MCA can operate in parallel with other attention optimization schemes and does not require model modification. We derive theoretical error bounds and demonstrate that MCA reduces attention complexity (in FLOPs) for various Transformer models by up to 11× on the GLUE benchmark without compromising model accuracy.
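The primitive underlying this approach is randomized (Monte-Carlo) approximate matrix multiplication. As a rough illustration only, and not the authors' exact algorithm, the sketch below implements the standard column-row sampling estimator, where the sample count c controls the approximation error; in MCA's setting, a smaller sampling budget would be assigned to products associated with low-attention tokens and a larger one to salient tokens. The function name mc_matmul and all parameters are hypothetical.

```python
import numpy as np

def mc_matmul(A, B, c, rng=None):
    """Monte-Carlo approximation of A @ B via column-row sampling.

    Samples c column-row pairs with probabilities proportional to
    ||A[:, i]|| * ||B[i, :]|| and rescales so the estimator is unbiased.
    Larger c -> lower variance (tighter error); smaller c -> cheaper but coarser.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    # Sampling probabilities that minimize the expected Frobenius-norm error.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Unbiased estimator: sum of sampled outer products, each scaled by 1 / (c * p_i).
    scale = 1.0 / (c * p[idx])            # shape (c,)
    A_s = A[:, idx] * scale               # rescale the sampled columns of A
    B_s = B[idx, :]                       # matching rows of B
    return A_s @ B_s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((128, 512))
    B = rng.standard_normal((512, 64))
    exact = A @ B
    for c in (32, 128, 512):              # more samples -> tighter approximation
        approx = mc_matmul(A, B, c, rng)
        err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
        print(f"c={c:4d}  relative Frobenius error = {err:.3f}")
```

The trade-off is visible in the demo: as c grows toward the inner dimension, the estimate approaches the exact product, while small c saves FLOPs at the cost of precision, which is exactly the slack the abstract proposes to spend on low-attention tokens.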
