SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection

03/22/2020
by Xiaoya Li, et al.

While the self-attention mechanism has been widely used across a variety of tasks, it has the unfortunate property of quadratic cost with respect to input length, which makes it difficult to handle long inputs. In this paper, we present a method for accelerating and structuring self-attention: Sparse Adaptive Connection (SAC). In SAC, we regard the input sequence as a graph, and attention operations are performed between linked nodes. In contrast with previous self-attention models that rely on pre-defined structures (edges), our model learns to construct attention edges to improve task-specific performance. In this way, the model is able to select the most salient nodes and reduce the quadratic complexity regardless of the sequence length. Based on SAC, we show that previous variants of self-attention models are its special cases. Through extensive experiments on neural machine translation, language modeling, graph representation learning, and image classification, we demonstrate that SAC is competitive with state-of-the-art models while significantly reducing memory cost.
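The following is a minimal sketch (not the authors' implementation) of the core idea described above: treating the sequence as a graph and computing attention only between linked nodes, so the cost scales with the number of edges rather than with the square of the sequence length. The edge list is a fixed input here; SAC's learned edge construction is not reproduced, and the function name and single-head, projection-free setup are illustrative assumptions.

```python
# Sketch of attention restricted to an explicit edge list instead of all n^2 pairs.
# Hypothetical helper for illustration only; SAC additionally *learns* the edges.
import torch


def sparse_edge_attention(x, edges, d_k=None):
    """Attention over a sparse set of (query, key) edges.

    x:     (n, d) token/node representations
    edges: (m, 2) long tensor of (query_index, key_index) pairs
    """
    n, d = x.shape
    d_k = d_k or d
    q, k, v = x, x, x  # single head, no projections, for brevity

    src, dst = edges[:, 0], edges[:, 1]
    # One score per edge: <q_i, k_j> / sqrt(d_k); only m scores, not n^2.
    scores = (q[src] * k[dst]).sum(-1) / d_k ** 0.5

    # Softmax normalisation per query node, over its linked keys only.
    scores = scores - scores.max()  # numerical stability
    exp = scores.exp()
    denom = torch.zeros(n, dtype=x.dtype).index_add_(0, src, exp)
    weights = exp / denom[src].clamp_min(1e-12)

    # Weighted sum of the linked values for each query node.
    out = torch.zeros_like(x).index_add_(0, src, weights.unsqueeze(-1) * v[dst])
    return out


if __name__ == "__main__":
    x = torch.randn(6, 16)
    # Toy edge set: each token attends to itself and its left neighbour.
    edges = torch.tensor([[i, j] for i in range(6) for j in {i, max(i - 1, 0)}])
    print(sparse_edge_attention(x, edges).shape)  # torch.Size([6, 16])
```

With m edges per sequence the memory cost of the score matrix is O(m) rather than O(n^2); the paper's contribution is choosing those edges adaptively per task rather than from a fixed pattern.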

