Waveformer: Linear-Time Attention with Forward and Backward Wavelet Transform

10/05/2022
by Yufan Zhuang, et al.

We propose Waveformer, which learns the attention mechanism in the wavelet coefficient space, requires only linear time complexity, and enjoys universal approximation power. Specifically, we first apply a forward wavelet transform to project the input sequences onto multi-resolution orthogonal wavelet bases, then conduct nonlinear transformations (in this case, a random feature kernel) in the wavelet coefficient space, and finally reconstruct the representation in the input space via a backward wavelet transform. We note that other nonlinear transformations may be used, hence we name the learning paradigm Wavelet transformatIon for Sequence lEarning (WISE). We emphasize the importance of backward reconstruction in the WISE paradigm: without it, one would be mixing information from the input space and the coefficient space through skip connections, which is not mathematically sound. Compared with the Fourier transform used in recent works, the wavelet transform is more efficient in time complexity and better captures local and positional information; we further support this through our ablation studies. Extensive experiments on seven long-range understanding datasets from the Long Range Arena benchmark and on code understanding tasks demonstrate that (1) Waveformer achieves accuracy competitive with, and often better than, a number of state-of-the-art Transformer variants, and (2) WISE can boost the accuracy of various attention approximation methods without increasing their time complexity. Together, these results showcase the superiority of learning attention in a wavelet coefficient space over the input space.
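
To make the WISE pipeline concrete, below is a minimal NumPy/PyWavelets sketch of the three steps described in the abstract: a forward wavelet transform along the sequence axis, a nonlinear transformation of the coefficients (here a ReLU random-feature linear attention stands in for the paper's random feature kernel), and backward reconstruction in the input space. The wavelet choice ('haar'), decomposition level, feature dimension, and the exact form of the random-feature map are illustrative assumptions, not the authors' configuration.

import numpy as np
import pywt  # PyWavelets


def random_feature_attention(q, k, v, n_features=64, seed=0):
    """Linear-time self-attention via a ReLU random-feature kernel (assumed form)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((q.shape[-1], n_features)) / np.sqrt(q.shape[-1])
    phi_q = np.maximum(q @ w, 0.0)                     # (seq, n_features)
    phi_k = np.maximum(k @ w, 0.0)                     # (seq, n_features)
    kv = phi_k.T @ v                                   # (n_features, d), O(seq) cost
    normalizer = phi_q @ phi_k.sum(axis=0, keepdims=True).T + 1e-6
    return (phi_q @ kv) / normalizer                   # (seq, d)


def wise_block(x, wavelet="haar", level=2):
    """Attention in the wavelet coefficient space, then reconstruction."""
    # Forward transform along the sequence axis: multi-resolution coefficient bands.
    coeffs = pywt.wavedec(x, wavelet, level=level, axis=0)
    # Nonlinear transformation applied to each coefficient band.
    new_coeffs = [random_feature_attention(c, c, c) for c in coeffs]
    # Backward transform reconstructs the representation in the input space.
    return pywt.waverec(new_coeffs, wavelet, axis=0)


if __name__ == "__main__":
    x = np.random.randn(128, 32)                       # (sequence length, model dim)
    y = wise_block(x)
    print(y.shape)                                     # (128, 32)

Because the random-feature attention never materializes a sequence-by-sequence matrix, each band is processed in time linear in its length, matching the linear overall complexity claimed for Waveformer.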


Related research

10/19/2010 · Multiplierless Modules for Forward and Backward Integer Wavelet Transform
This article is about the architecture of a lossless wavelet filter bank...

09/20/2023 · WFTNet: Exploiting Global and Local Periodicity in Long-term Time Series Forecasting
Recent CNN and Transformer-based models tried to utilize frequency and p...

02/17/2022 · cosFormer: Rethinking Softmax in Attention
Transformer has shown great successes in natural language processing, co...

03/23/2022 · Linearizing Transformer with Key-Value Memory Bank
Transformer has brought great success to a wide range of natural languag...

05/09/2021 · FNet: Mixing Tokens with Fourier Transforms
We show that Transformer encoder architectures can be massively sped up,...

12/21/2020 · Sub-Linear Memory: How to Make Performers SLiM
The Transformer architecture has revolutionized deep learning on sequent...

06/13/2020 · Historical traffic flow data reconstruction applying Wavelet Transform
Despite the importance of fundamental parameters (traffic flow, density ...
