GMAT: Global Memory Augmentation for Transformers

06/05/2020 ∙ by Ankit Gupta and Jonathan Berant

Transformer-based models have become ubiquitous in natural language processing thanks to their large capacity, innate parallelism and high performance. The contextualizing component of a Transformer block is the pairwise dot-product attention, which has a large Ω(L^2) memory requirement for length-L sequences, limiting its ability to process long documents. This limitation has recently been the subject of substantial interest, and multiple approximations have been proposed to reduce the quadratic memory requirement using sparse attention matrices. In this work, we propose to augment sparse Transformer blocks with a dense attention-based global memory of length M (≪ L) that provides an aggregate global view of the entire input sequence to each position. Our augmentation has a manageable O(M·(L+M)) memory overhead and can be seamlessly integrated with prior sparse solutions. Moreover, the global memory can also be used for sequence compression, by representing a long input sequence with the memory representations only. We empirically show that our method leads to substantial improvement on a range of tasks, including (a) synthetic tasks that require global reasoning, (b) masked language modeling, and (c) reading comprehension.
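To make the idea concrete, the following is a minimal PyTorch sketch, not the authors' released code, of a Transformer block augmented with M learned global memory tokens. The names (`GlobalMemoryBlock`, `mem_len`) and the choice of a simple local-window mask for the sequence-to-sequence part are illustrative assumptions; the point is that memory tokens attend densely to the whole sequence while ordinary tokens attend sparsely to each other plus densely to the memory.

```python
# Hypothetical sketch of global memory augmentation (assumed design, not the paper's exact code).
import torch
import torch.nn as nn


class GlobalMemoryBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, mem_len: int, window: int = 64):
        super().__init__()
        self.mem_len = mem_len
        self.window = window
        # M learned memory vectors, shared across examples (assumption).
        self.memory = nn.Parameter(torch.randn(mem_len, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, L, d_model) -> (batch, M + L, d_model)
        b, L, _ = x.shape
        M = self.mem_len
        mem = self.memory.unsqueeze(0).expand(b, -1, -1)
        h = torch.cat([mem, x], dim=1)                      # prepend memory tokens

        # Boolean attention mask: True = blocked position.
        N = M + L
        mask = torch.ones(N, N, dtype=torch.bool, device=x.device)
        mask[:M, :] = False                                 # memory attends to everything
        mask[:, :M] = False                                 # every token attends to memory
        idx = torch.arange(L, device=x.device)
        local = (idx[:, None] - idx[None, :]).abs() <= self.window
        mask[M:, M:] = ~local                               # sparse (local) seq-to-seq attention

        # For clarity this sketch materializes a full (M+L)x(M+L) mask; a real
        # implementation would compute the dense memory part and the sparse part
        # separately to keep the O(M*(L+M)) memory overhead described above.
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        h = self.norm1(h + attn_out)
        return self.norm2(h + self.ff(h))
```

In this sketch the first M rows of the output are the memory representations, which is also where the sequence-compression use mentioned in the abstract would read from: downstream layers could keep only those M vectors instead of the full length-(M+L) sequence.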




Code Repositories


The accompanying code for the paper "GMAT: Global Memory Augmentation for Transformers" (Ankit Gupta and Jonathan Berant).
