Scaling Transformer to 1M tokens and beyond with RMT

04/19/2023
by Aydar Bulatov, et al.

This technical report presents the application of a recurrent memory to extend the context length of BERT, one of the most effective Transformer-based models in natural language processing. By leveraging the Recurrent Memory Transformer architecture, we have successfully increased the model's effective context length to an unprecedented two million tokens, while maintaining high memory retrieval accuracy. Our method allows for the storage and processing of both local and global information and enables information flow between segments of the input sequence through the use of recurrence. Our experiments demonstrate the effectiveness of our approach, which holds significant potential to enhance long-term dependency handling in natural language understanding and generation tasks as well as enable large-scale context processing for memory-intensive applications.
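
The recurrence at the heart of this result is simple enough to sketch. Below is a minimal, illustrative PyTorch version of RMT-style segment-level recurrence, not the authors' released code: a long input is split into fixed-size segments, learnable memory tokens are prepended to each segment, and the memory state produced by one segment is passed to the next. The names RecurrentMemorySketch, backbone, num_mem, and segment_len are assumptions made for this example.

```python
import torch
import torch.nn as nn

class RecurrentMemorySketch(nn.Module):
    """Segment-level recurrence with learnable memory tokens (RMT-style sketch)."""

    def __init__(self, backbone: nn.Module, hidden: int, num_mem: int = 10):
        super().__init__()
        self.backbone = backbone  # any encoder mapping (B, T, H) -> (B, T, H)
        self.num_mem = num_mem
        # Learnable initial memory tokens, shared across all inputs.
        self.mem = nn.Parameter(torch.randn(num_mem, hidden) * 0.02)

    def forward(self, embeds: torch.Tensor, segment_len: int):
        batch = embeds.size(0)
        memory = self.mem.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        # Process the long input segment by segment; the memory written at
        # the end of one segment is read at the start of the next, so
        # information can flow across an arbitrarily long sequence.
        for seg in embeds.split(segment_len, dim=1):
            h = self.backbone(torch.cat([memory, seg], dim=1))
            memory = h[:, :self.num_mem]        # updated memory state
            outputs.append(h[:, self.num_mem:]) # per-token outputs
        return torch.cat(outputs, dim=1), memory

# Usage: a toy encoder over a 512-token input processed in 128-token segments.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
model = RecurrentMemorySketch(nn.TransformerEncoder(layer, num_layers=2), hidden=64)
out, mem = model(torch.randn(2, 512, 64), segment_len=128)
print(out.shape, mem.shape)  # torch.Size([2, 512, 64]) torch.Size([2, 10, 64])
```

In the paper itself the backbone is a pretrained BERT and the memory tokens are trained with backpropagation through time across segments; the sketch above only shows the recurrence pattern that lets information flow beyond a single segment's context window.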

Related research

07/14/2022 · Recurrent Memory Transformer
Transformer-based models show their effectiveness across multiple domain...

07/30/2023 · LaFiCMIL: Rethinking Large File Classification from the Perspective of Correlated Multiple Instance Learning
Transformer-based models have revolutionized the performance of a wide r...

05/05/2023 · Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation
Unlike recurrent models, conventional wisdom has it that Transformers ca...

06/15/2023 · Recurrent Memory Decision Transformer
Transformer models, originally developed for natural language problem...

05/25/2021 · Context-Sensitive Visualization of Deep Learning Natural Language Processing Models
The introduction of Transformer neural networks has changed the landscap...

05/25/2023 · Landmark Attention: Random-Access Infinite Context Length for Transformers
While transformers have shown remarkable success in natural language pro...

10/06/2022 · ByteTransformer: A High-Performance Transformer Boosted for Variable-Length Inputs
Transformer is the cornerstone model of Natural Language Processing (NLP...
