Big Bird: Transformers for Longer Sequences

07/28/2020
by Manzil Zaheer, et al.

Transformer-based models, such as BERT, have been among the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length, due to their full attention mechanism. To remedy this, we propose BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full-attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS) that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences up to 8x longer than what was previously possible on similar hardware. As a consequence of its ability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
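To make the attention pattern concrete, below is a minimal sketch (not the authors' released code) of how a BigBird-style sparse attention mask can be assembled from the three ingredients the abstract mentions: a sliding local window, a few global tokens such as CLS that attend to and are attended by every position, and a handful of random connections per query. The sequence length, window width, number of global tokens, and number of random keys are illustrative assumptions.

import numpy as np

def bigbird_style_mask(seq_len, window=3, num_global=2, num_random=2, seed=0):
    """Return a boolean (seq_len, seq_len) mask; True means attention is allowed."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)

    # 1) Sliding-window attention: each token sees its local neighborhood.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True

    # 2) Global attention: the first num_global tokens (e.g. CLS) attend to
    #    everything, and every token attends to them.
    mask[:num_global, :] = True
    mask[:, :num_global] = True

    # 3) Random attention: each query additionally sees a few random keys.
    for i in range(seq_len):
        mask[i, rng.choice(seq_len, size=num_random, replace=False)] = True
    return mask

# Each row allows O(window + num_global + num_random) keys, independent of
# seq_len, so the total attention cost grows linearly with sequence length.
print(bigbird_style_mask(seq_len=16).sum(axis=1))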



Related Research

04/17/2020
ETC: Encoding Long and Structured Data in Transformers
Transformer-based models have pushed the state of the art in many natura...

01/27/2022
Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences
Transformers-based models, such as BERT, have dramatically improved the ...

02/13/2022
Flowformer: Linearizing Transformers with Conservation Flows
Transformers based on the attention mechanism have achieved impressive s...

02/28/2022
Dynamic N:M Fine-grained Structured Sparse Attention Mechanism
Transformers are becoming the mainstream solutions for various tasks lik...

08/27/2020
Time-based Sequence Model for Personalization and Recommendation Systems
In this paper we develop a novel recommendation model that explicitly in...

06/16/2020
On the Computational Power of Transformers and Its Implications in Sequence Modeling
Transformers are being used extensively across several sequence modeling...

06/02/2021
On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers
How much information do NLP tasks really need from a transformer's atten...

Code Repositories

chinese-bigbird

Chinese BigBird pre-trained model



minGPT-with-BigBird

dd2412 project at KTH



ko_bigbird

Develop analysis model from bigbird model



CS523-summer2021

Final Project for CS523



minGPT-with-BigBird

An implementation of the minGPT architecture using BigBird masking.

