Adapting Language Models to Compress Contexts

05/24/2023
by Alexis Chevalier et al.

Transformer-based language models (LMs) are powerful and widely applicable tools, but their usefulness is constrained by a finite context window and the high computational cost of processing long text documents. We propose adapting pre-trained LMs into AutoCompressors: models that compress long contexts into compact summary vectors, which are then accessible to the model as soft prompts. Summary vectors are trained with an unsupervised objective, whereby long documents are processed in segments and summary vectors from all previous segments are used in language modeling. We fine-tune OPT models on sequences of up to 30,720 tokens and show that AutoCompressors can utilize long contexts to improve perplexity. We evaluate AutoCompressors on in-context learning by compressing task demonstrations and find that summary vectors are good substitutes for plain-text demonstrations, increasing accuracy while reducing inference cost. Finally, we explore the benefits of pre-computing summary vectors for large corpora by applying them to retrieval-augmented language modeling. Overall, AutoCompressors emerge as a simple and inexpensive way to extend the context window of LMs while speeding up inference over long contexts.
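
To make the segment-by-segment recipe concrete, here is a rough Python (PyTorch) sketch of the idea described in the abstract. This is not the authors' implementation: the class name, the tiny stand-in transformer used in place of a pre-trained OPT model, and all hyperparameters below are illustrative assumptions.

# Minimal sketch of the AutoCompressor recipe described in the abstract.
# Everything here (class name, the tiny stand-in transformer, hyperparameters)
# is an illustrative assumption; the paper fine-tunes pre-trained OPT models.

import torch
import torch.nn as nn


class AutoCompressorSketch(nn.Module):
    """Processes a long document segment by segment; each segment emits compact
    summary vectors that are prepended as soft prompts to later segments."""

    def __init__(self, hidden=256, n_summary=50, n_layers=2, n_heads=4):
        super().__init__()
        # Stand-in for a pre-trained decoder LM (assumption for the sketch).
        layer = nn.TransformerEncoderLayer(hidden, n_heads, batch_first=True)
        self.base_lm = nn.TransformerEncoder(layer, n_layers)
        # Learned embeddings for special summary tokens appended to each segment.
        self.summary_embeds = nn.Parameter(torch.randn(n_summary, hidden) * 0.02)

    def forward(self, segment, past_summaries=None):
        # segment: (batch, seg_len, hidden) token embeddings of one segment
        # past_summaries: (batch, k, hidden) summary vectors from earlier segments
        b = segment.size(0)
        summary_in = self.summary_embeds.unsqueeze(0).expand(b, -1, -1)
        pieces = ([past_summaries] if past_summaries is not None else []) + [segment, summary_in]
        x = torch.cat(pieces, dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        hidden = self.base_lm(x, mask=mask)
        # Hidden states at the summary positions become the new summary vectors.
        new_summaries = hidden[:, -summary_in.size(1):, :]
        # Hidden states over the real tokens are used for the LM loss as usual.
        start = 0 if past_summaries is None else past_summaries.size(1)
        lm_hidden = hidden[:, start:start + segment.size(1), :]
        return lm_hidden, new_summaries


# Unsupervised training loop over one long document split into segments:
# summary vectors from all previous segments condition the next segment's LM loss.
model = AutoCompressorSketch()
segments = [torch.randn(1, 128, 256) for _ in range(4)]  # toy document, 4 segments
summaries = None
for seg in segments:
    lm_hidden, new_sum = model(seg, past_summaries=summaries)
    # ...compute the language-modeling loss from lm_hidden here...
    summaries = new_sum if summaries is None else torch.cat([summaries, new_sum], dim=1)

Because summary vectors are ordinary embedding sequences, they can be pre-computed and cached for a corpus, which is what makes the retrieval-augmented setting mentioned in the abstract cheap at inference time.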
