FNet: Mixing Tokens with Fourier Transforms

05/09/2021
by James Lee-Thorp, et al.

We show that Transformer encoder architectures can be massively sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear transformations, along with simple nonlinearities in feed-forward layers, are sufficient to model semantic relationships in several text classification tasks. Perhaps most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92% of the accuracy of BERT counterparts on the GLUE benchmark, while training up to seven times faster on GPUs and twice as fast on TPUs. The resulting model, which we name FNet, scales very efficiently to long inputs, matching the accuracy of the most accurate "efficient" Transformers on the Long Range Arena benchmark while training and running faster across all sequence lengths on GPUs and at relatively shorter sequence lengths on TPUs. Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes: for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
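To make the token-mixing idea concrete, below is a minimal NumPy sketch of one FNet-style encoder block. This is not the authors' implementation; the dimensions, the post-norm residual layout, and the tanh-based GELU approximation are assumptions made for illustration. The key step is the unparameterized 2D discrete Fourier transform over the sequence and hidden dimensions, of which only the real part is kept.

```python
import numpy as np

def fourier_mix(x):
    # Token mixing: a 2D DFT over the sequence and hidden dimensions,
    # keeping only the real part. x has shape (seq_len, d_model).
    return np.real(np.fft.fft2(x))

def layer_norm(x, eps=1e-6):
    # Standard layer normalization over the hidden dimension.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def feed_forward(x, w1, b1, w2, b2):
    # Position-wise feed-forward sublayer with a (tanh-approximated) GELU.
    h = x @ w1 + b1
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2 + b2

def fnet_block(x, w1, b1, w2, b2):
    # One encoder block: Fourier mixing stands in for self-attention,
    # followed by the usual residuals, layer norms, and feed-forward sublayer.
    x = layer_norm(x + fourier_mix(x))
    x = layer_norm(x + feed_forward(x, w1, b1, w2, b2))
    return x

# Toy usage with hypothetical sizes (seq_len=8, d_model=16, d_ff=64).
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
w1, b1 = rng.normal(size=(16, 64)) * 0.02, np.zeros(64)
w2, b2 = rng.normal(size=(64, 16)) * 0.02, np.zeros(16)
print(fnet_block(x, w1, b1, w2, b2).shape)  # (8, 16)
```

Because the Fourier transform has no learned parameters and can use FFT routines, this mixing step costs O(n log n) in sequence length rather than the O(n^2) of self-attention, which is the source of the speedups the abstract reports.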


Related research

07/22/2021
FNetAR: Mixing Tokens with Autoregressive Fourier Transforms
In this note we examine the autoregressive generalization of the FNet al...

05/24/2023
Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator
The transformer model is known to be computationally demanding, and proh...

05/22/2023
FIT: Far-reaching Interleaved Transformers
We present FIT: a transformer-based architecture with efficient self-att...

10/06/2021
PoNet: Pooling Network for Efficient Token Mixing in Long Sequences
Transformer-based models have achieved great success in various NLP, vis...

05/31/2023
Recasting Self-Attention with Holographic Reduced Representations
In recent years, self-attention has become the dominant paradigm for seq...

04/13/2020
ProFormer: Towards On-Device LSH Projection Based Transformers
At the heart of text based neural models lay word representations, which...

10/05/2022
Waveformer: Linear-Time Attention with Forward and Backward Wavelet Transform
We propose Waveformer that learns attention mechanism in the wavelet coe...
