A Cheap Linear Attention Mechanism with Fast Lookups and Fixed-Size Representations

09/19/2016
by Alexandre de Brébisson, et al.

The softmax content-based attention mechanism has proven to be very beneficial in many applications of recurrent neural networks. Nevertheless, it suffers from two major computational limitations. First, its computations for an attention lookup scale linearly with the length of the attended sequence. Second, it does not encode the sequence into a fixed-size representation but instead requires memorizing all of the hidden states. These two limitations restrict the use of the softmax attention mechanism to relatively small-scale applications with short sequences and few lookups per sequence. In this work we introduce a family of linear attention mechanisms designed to overcome these two limitations. We show that removing the softmax non-linearity from the traditional attention formulation yields constant-time attention lookups and fixed-size representations of the attended sequences. These properties make linear attention mechanisms particularly suitable for large-scale applications with extreme query loads, real-time requirements and memory constraints. Early experiments on a question answering task show that these linear mechanisms yield significantly better accuracy than no attention at all, though, as expected, worse than their softmax counterpart.
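The algebraic point behind the constant-time lookups can be illustrated with a short sketch. This is a minimal NumPy illustration under assumed dot-product scoring, not the authors' exact model; the names H, q and S below are introduced here for illustration. Without the softmax, the attended context sum_i (q . h_i) h_i factors as (sum_i h_i h_i^T) q, so the whole sequence is summarized by a fixed-size d x d matrix and each lookup is a single matrix-vector product, independent of the sequence length.

```python
import numpy as np

# Assumed setup for illustration: T hidden states of dimension d
# produced by an RNN, and a single query vector q.
rng = np.random.default_rng(0)
T, d = 1000, 64
H = rng.standard_normal((T, d))   # attended hidden states h_1 ... h_T
q = rng.standard_normal(d)        # query vector

# Standard softmax attention: O(T) work per lookup,
# and all T hidden states must be kept in memory.
scores = H @ q                           # dot-product scores q . h_i
weights = np.exp(scores - scores.max())
weights /= weights.sum()
context_softmax = weights @ H            # sum_i softmax_i * h_i

# Linear attention (softmax removed): the unnormalized context
#   sum_i (q . h_i) h_i  =  (sum_i h_i h_i^T) q
# so the sequence is summarized by the fixed-size d x d matrix S,
# which can be accumulated online as the RNN consumes the sequence.
S = H.T @ H                              # fixed-size representation, O(d^2) memory
context_linear = S @ q                   # O(d^2) per lookup, independent of T

# Sanity check: the factorization matches the direct O(T) computation.
context_direct = (H @ q) @ H
assert np.allclose(context_linear, context_direct)
```

Because S can be updated incrementally with a rank-one term h_t h_t^T at each time step, the attended sequence never needs to be stored in full, which is what makes this family of mechanisms attractive under heavy query loads and tight memory budgets.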


