
SimpleTron: Eliminating Softmax from Attention Computation

11/23/2021
by Uladzislau Yorsh, et al.
Czech Technical University in Prague

In this paper, we propose that the dot-product pairwise-matching attention layer, which is widely used in transformer-based models, is redundant for model performance. Attention in its original formulation should rather be seen as a human-level tool for exploring and/or visualizing relevancy scores within sequences. Instead, we present a simple and fast alternative that involves no approximation and, to the best of our knowledge, outperforms existing attention approximations on several tasks from the Long-Range Arena benchmark.
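The abstract does not spell out the replacement mechanism, only that the softmax-normalized pairwise matching is claimed to be redundant. The sketch below is therefore an illustrative assumption, not the paper's exact SimpleTron formulation: it contrasts standard scaled dot-product attention with a generic softmax-free variant in which the keys and values are associated first, so the n-by-n matching matrix is never materialized. Function names and the simple 1/d scaling are placeholders chosen for the example.

```python
# Illustrative sketch only: the "softmax-free" variant below is a generic
# placeholder for dropping the softmax-normalized pairwise matching; it is
# not claimed to be the SimpleTron layer itself.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (n, n) pairwise matching matrix
    return softmax(scores, axis=-1) @ V  # softmax row-normalization

def softmax_free_attention(Q, K, V):
    """Hypothetical softmax-free variant: compute (K^T V) first, so the
    n-by-n pairwise matrix is never formed and the cost is linear in n."""
    d = Q.shape[-1]
    return (Q @ (K.T @ V)) / d           # plain scaling instead of softmax

# Toy usage: sequence length 6, head dimension 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((6, 4)) for _ in range(3))
print(softmax_attention(Q, K, V).shape)       # (6, 4)
print(softmax_free_attention(Q, K, V).shape)  # (6, 4)
```

Note the reassociation (Q(K^T V) instead of (QK^T)V) is only possible once the row-wise softmax is gone, which is what makes softmax-free formulations attractive for long sequences such as those in the Long-Range Arena benchmark.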

