SimpleTron: Eliminating Softmax from Attention Computation

11/23/2021
by Uladzislau Yorsh, et al.

In this paper, we argue that the pairwise dot-product matching attention layer, widely used in transformer-based models, is redundant for model performance. Attention in its original formulation should rather be seen as a human-level tool for exploring and/or visualizing relevancy scores within sequences. Instead, we present a simple and fast alternative, without any approximation, that, to the best of our knowledge, outperforms existing attention approximations on several tasks from the Long-Range Arena benchmark.
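The contrast drawn in the abstract can be sketched as follows. The snippet below compares standard softmax dot-product attention with a generic softmax-free variant in which the attention product is simply reassociated; the function names and the softmax-free formulation are illustrative assumptions, not the exact SimpleTron layer described in the paper.

import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softmax_attention(Q, K, V):
    # Standard scaled dot-product attention: builds an (n, n) pairwise
    # matching matrix, so cost grows quadratically with sequence length n.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ V

def softmax_free_attention(Q, K, V):
    # Hypothetical softmax-free variant, for illustration only: without the
    # row-wise softmax, the product can be reassociated as Q @ (K^T @ V),
    # which avoids the (n, n) matrix and scales linearly with sequence length.
    d = Q.shape[-1]
    return (Q @ (K.T @ V)) / d

rng = np.random.default_rng(0)
n, d = 8, 16
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape)       # (8, 16)
print(softmax_free_attention(Q, K, V).shape)  # (8, 16)

Dropping the row-wise softmax is what makes the reordering of the matrix product possible, which is the usual source of the speed advantage claimed for softmax-free attention layers.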


