
H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences

07/25/2021
by Zhenhai Zhu, et al. (Google)

We describe an efficient hierarchical method to compute attention in the Transformer architecture. The proposed attention mechanism exploits a matrix structure similar to the Hierarchical Matrix (H-Matrix) developed by the numerical analysis community, and has linear run-time and memory complexity. Extensive experiments show that the inductive bias embodied by our hierarchical attention is effective in capturing the hierarchical structure typical of sequences in natural language and vision tasks. Our method outperforms alternative sub-quadratic proposals by more than 6 points on average on the Long Range Arena benchmark. It also sets a new SOTA test perplexity on the One-Billion Word dataset with 5x fewer model parameters than the previous best Transformer-based models.
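To make the idea concrete, below is a minimal NumPy sketch of the near-field/far-field split that motivates this kind of hierarchical attention: each query block attends exactly to its own and neighboring key/value blocks, and only to block-averaged summaries of everything farther away. This is an illustrative two-level simplification under assumed names and shapes (hierarchical_attention, block_size), not the authors' multi-level H-Matrix algorithm:

import numpy as np

def hierarchical_attention(q, k, v, block_size=16):
    """q, k, v: arrays of shape (seq_len, d); assumes seq_len % block_size == 0."""
    n, d = q.shape
    nb = n // block_size
    scale = 1.0 / np.sqrt(d)

    # Block the keys/values and build coarse (block-averaged) summaries
    # that stand in for entire distant blocks.
    kb = k.reshape(nb, block_size, d)
    vb = v.reshape(nb, block_size, d)
    k_coarse = kb.mean(axis=1)   # (nb, d)
    v_coarse = vb.mean(axis=1)   # (nb, d)

    out = np.empty_like(q)
    for b in range(nb):
        qb = q[b * block_size:(b + 1) * block_size]

        # Near field: exact attention over this block and its immediate neighbors.
        lo, hi = max(0, b - 1), min(nb, b + 2)
        k_near = kb[lo:hi].reshape(-1, d)
        v_near = vb[lo:hi].reshape(-1, d)

        # Far field: one averaged key/value per remaining block.
        far = [i for i in range(nb) if i < lo or i >= hi]
        k_all = np.concatenate([k_near, k_coarse[far]]) if far else k_near
        v_all = np.concatenate([v_near, v_coarse[far]]) if far else v_near

        # Ordinary softmax attention over the mixed fine/coarse key set.
        logits = qb @ k_all.T * scale
        logits -= logits.max(axis=-1, keepdims=True)
        w = np.exp(logits)
        w /= w.sum(axis=-1, keepdims=True)
        out[b * block_size:(b + 1) * block_size] = w @ v_all
    return out

# Tiny usage check on random data.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((128, 64)) for _ in range(3))
print(hierarchical_attention(q, k, v).shape)   # (128, 64)

Note that with a fixed block_size this single level of coarsening still scales roughly as O(n^2 / block_size); the paper's linear complexity comes from recursively coarsening the far field over a hierarchy of block sizes rather than the one averaging step shown here.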

Related research:

09/26/2022 · Fast-FNet: Accelerating Transformer Encoder Models via Efficient Fourier Layers
Transformer-based language models utilize the attention mechanism for su...

11/23/2021 · SimpleTron: Eliminating Softmax from Attention Computation
In this paper, we propose that the dot product pairwise matching attenti...

12/15/2022 · Efficient Long Sequence Modeling via State Space Augmented Transformer
Transformer models have achieved superior performance in various natural...

02/17/2022 · cosFormer: Rethinking Softmax in Attention
Transformer has shown great successes in natural language processing, co...

03/30/2020 · A Hierarchical Transformer for Unsupervised Parsing
The underlying structure of natural language is hierarchical; words comb...

09/02/2019 · Logic and the 2-Simplicial Transformer
We introduce the 2-simplicial Transformer, an extension of the Transform...

10/06/2021 · Ripple Attention for Visual Perception with Sub-quadratic Complexity
Transformer architectures are now central to modeling in natural languag...