SimpleTron: Eliminating Softmax from Attention Computation

11/23/2021
by Uladzislau Yorsh et al.

In this paper, we argue that the pairwise dot-product attention layer, which is widely used in transformer-based models, is redundant for model performance. Attention in its original formulation should rather be seen as a human-level tool for exploring and visualizing relevancy scores within sequences. Instead, we present a simple and fast alternative that uses no approximation and that, to the best of our knowledge, outperforms existing attention approximations on several tasks from the Long-Range Arena benchmark.
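The abstract does not spell out the replacement layer, so the sketch below only contrasts the standard pairwise dot-product attention it calls redundant with one generic softmax-free variant; the `softmax_free_attention` function is an illustrative assumption, not SimpleTron's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
    The explicit n x n score matrix is the pairwise matching step the
    paper argues is redundant."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # (n, n) pairwise scores
    return softmax(scores, axis=-1) @ V    # (n, d_v)

def softmax_free_attention(Q, K, V):
    """Illustrative softmax-free variant (NOT the paper's exact formula):
    with the softmax removed, (Q K^T) V can be regrouped as Q (K^T V),
    avoiding the n x n matrix and costing O(n d^2) instead of O(n^2 d)."""
    d = Q.shape[-1]
    return (Q @ (K.T @ V)) / np.sqrt(d)    # (n, d_v), no n x n scores

# Tiny usage example on random data.
rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
print(softmax_attention(Q, K, V).shape)       # (8, 4)
print(softmax_free_attention(Q, K, V).shape)  # (8, 4)
```

Dropping the softmax is what makes the regrouping legal, since matrix multiplication is associative while the row-wise softmax is not; this is the usual route by which softmax-free attention avoids the quadratic score matrix.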
