
Pay Less Attention with Lightweight and Dynamic Convolutions

01/29/2019
by Felix Wu, et al.

Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively with the best reported self-attention results. Next, we introduce dynamic convolutions, which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set, dynamic convolutions achieve a new state of the art of 29.7 BLEU.
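The abstract contrasts two operations, so a compact sketch may help make them concrete. Below is a minimal PyTorch illustration, not the authors' fairseq implementation, of the two ideas: a lightweight convolution as a depthwise convolution whose kernel is softmax-normalized over its width and shared across channel groups ("heads"), and a dynamic convolution whose kernel is predicted from the current time step by a linear layer. Class and parameter names (LightweightConv, DynamicConv, d, heads, kernel_size) are illustrative assumptions, and the full models in the paper include further components omitted here.

```python
# Minimal sketch of lightweight and dynamic convolutions (illustrative only,
# not the authors' fairseq code). Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from torch import nn


class LightweightConv(nn.Module):
    """Depthwise convolution with kernel weights shared across channel groups
    (heads) and softmax-normalized over the kernel width."""

    def __init__(self, d, kernel_size=3, heads=8):
        super().__init__()
        assert d % heads == 0
        self.d, self.k, self.h = d, kernel_size, heads
        self.weight = nn.Parameter(torch.randn(heads, kernel_size))

    def forward(self, x):                               # x: (batch, time, d)
        b, t, d = x.shape
        w = F.softmax(self.weight, dim=-1)               # normalize over kernel positions
        # expand each head's kernel to every channel in its group: (d, 1, k)
        w = w.repeat_interleave(d // self.h, dim=0).unsqueeze(1)
        x = x.transpose(1, 2)                            # (b, d, t)
        x = F.pad(x, (self.k - 1, 0))                    # causal left padding
        out = F.conv1d(x, w, groups=d)                   # depthwise convolution
        return out.transpose(1, 2)                       # (b, t, d)


class DynamicConv(nn.Module):
    """Convolution whose kernel is predicted from the current time step alone,
    so cost grows linearly with sequence length rather than quadratically."""

    def __init__(self, d, kernel_size=3, heads=8):
        super().__init__()
        assert d % heads == 0
        self.d, self.k, self.h = d, kernel_size, heads
        self.kernel_proj = nn.Linear(d, heads * kernel_size)

    def forward(self, x):                                # x: (batch, time, d)
        b, t, d = x.shape
        # predict a kernel per position and head, normalized over kernel width
        w = F.softmax(self.kernel_proj(x).view(b, t, self.h, self.k), dim=-1)
        # gather a causal sliding window of the input: (b, t, d, k)
        x_pad = F.pad(x, (0, 0, self.k - 1, 0))
        windows = x_pad.unfold(1, self.k, 1)
        windows = windows.reshape(b, t, self.h, d // self.h, self.k)
        # weighted sum over the window, weights shared within each head
        out = torch.einsum('bthk,bthck->bthc', w, windows)
        return out.reshape(b, t, d)


if __name__ == "__main__":
    x = torch.randn(2, 10, 64)                           # (batch, time, channels)
    print(LightweightConv(64)(x).shape)                  # torch.Size([2, 10, 64])
    print(DynamicConv(64)(x).shape)                      # torch.Size([2, 10, 64])
```

In this sketch the dynamic variant touches only a fixed window of k positions per output, so its cost is O(n·k) in the sequence length n, which mirrors the abstract's linear-versus-quadratic comparison with self-attention.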

