Investigating Efficiently Extending Transformers for Long Input Summarization

08/08/2022
by Jason Phang, et al.

While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.
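
To make the architectural idea concrete, below is a minimal PyTorch sketch of block-local attention with global encoder tokens, in the spirit of the approach described in the abstract. It is an illustration only, not the PEGASUS-X implementation: the function name `block_local_attention_with_global`, the arguments `block_size` and `stagger`, and the global-token tensor `g` are assumptions made for this example.

```python
# Illustrative sketch (not the authors' code) of block-local attention with
# global encoder tokens. All names here are hypothetical.
import torch
import torch.nn.functional as F


def block_local_attention_with_global(x, g, block_size, stagger=False):
    """x: (batch, seq_len, dim) local tokens; g: (batch, n_global, dim) global tokens.

    Local tokens attend to their own block plus all global tokens; global
    tokens attend to every token. stagger=True shifts block boundaries by
    half a block, as alternating layers might do in a staggered scheme.
    """
    b, n, d = x.shape
    shift = block_size // 2 if stagger else 0
    x_shifted = torch.roll(x, shifts=-shift, dims=1)

    # Pad so the sequence length is a multiple of the block size.
    pad = (-n) % block_size
    x_pad = F.pad(x_shifted, (0, 0, 0, pad))
    blocks = x_pad.view(b, -1, block_size, d)            # (b, n_blocks, block, d)

    # Keys/values for each block: its own tokens plus the shared global tokens.
    g_exp = g.unsqueeze(1).expand(b, blocks.shape[1], g.shape[1], d)
    kv = torch.cat([blocks, g_exp], dim=2)               # (b, n_blocks, block + n_global, d)

    attn = torch.softmax(blocks @ kv.transpose(-1, -2) / d ** 0.5, dim=-1)
    local_out = (attn @ kv).reshape(b, -1, d)[:, :n]
    local_out = torch.roll(local_out, shifts=shift, dims=1)

    # Global tokens attend over the full sequence plus themselves.
    full_kv = torch.cat([g, x], dim=1)
    g_attn = torch.softmax(g @ full_kv.transpose(-1, -2) / d ** 0.5, dim=-1)
    return local_out, g_attn @ full_kv


# Example: 1,024 local tokens, 32 global tokens, blocks of 128.
x = torch.randn(2, 1024, 64)
g = torch.randn(2, 32, 64)
local_out, global_out = block_local_attention_with_global(x, g, block_size=128, stagger=True)
```

The efficiency argument is that each local token attends to block_size + n_global positions (here 128 + 32) rather than to the full sequence, so encoder attention cost grows roughly linearly with input length instead of quadratically, which is what makes inputs of up to 16K tokens practical.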


Related research

BudgetLongformer: Can we Cheaply Pretrain a SotA Legal Language Model From Scratch? (11/30/2022)
Pretrained transformer models have achieved state-of-the-art results in ...

Unlimiformer: Long-Range Transformers with Unlimited Length Input (05/02/2023)
Transformer-based models typically have a predefined bound to their inpu...

Adapting Pretrained Text-to-Text Models for Long Text Sequences (09/21/2022)
We present an empirical study of adapting an existing pretrained text-to...

Multi-stage Pretraining for Abstractive Summarization (09/23/2019)
Neural models for abstractive summarization tend to achieve the best per...

Global memory transformer for processing long documents (12/03/2022)
Transformer variants dominate the state-of-the-art in different natural ...

Prompt Generation Networks for Efficient Adaptation of Frozen Vision Transformers (10/12/2022)
Large-scale pretrained models, especially those trained from vision-lang...

BASS: Block-wise Adaptation for Speech Summarization (07/17/2023)
End-to-end speech summarization has been shown to improve performance ov...
