Adapting Pretrained Text-to-Text Models for Long Text Sequences

09/21/2022
by Wenhan Xiong, et al.

We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe for building long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention and pretrain the model with a masked-span prediction task using spans of varying length. For the pretraining corpus, we find that randomly concatenating short documents from a large open-domain corpus yields better performance than using existing long-document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes.
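As a rough illustration of the architectural change described above, the sketch below combines blockwise local attention with mean-pooled summaries of the sequence that every query block can additionally attend to. It is a minimal, single-head sketch under assumed shapes and pooling choices; the function name pooled_block_attention, the block and pool sizes, and the use of mean pooling are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal single-head sketch of pooling-augmented blockwise attention.
# Shapes, mean pooling, and the function name are illustrative assumptions,
# not the paper's exact implementation.
import torch


def pooled_block_attention(x, block_size=4, pool_size=2):
    """x: (batch, seq_len, dim); seq_len must be divisible by both sizes."""
    b, n, d = x.shape
    blocks = x.view(b, n // block_size, block_size, d)

    # Mean-pooled summaries of the whole sequence, visible to every block.
    pooled = x.view(b, n // pool_size, pool_size, d).mean(dim=2)  # (b, n/pool, d)

    outputs = []
    for i in range(n // block_size):
        q = blocks[:, i]                               # local queries
        kv = torch.cat([blocks[:, i], pooled], dim=1)  # local keys/values + pooled summaries
        attn = torch.softmax(q @ kv.transpose(1, 2) / d ** 0.5, dim=-1)
        outputs.append(attn @ kv)
    return torch.cat(outputs, dim=1)                   # (b, seq_len, dim)


if __name__ == "__main__":
    print(pooled_block_attention(torch.randn(2, 16, 8)).shape)  # torch.Size([2, 16, 8])
```

Because each query attends only to its own block plus a fixed number of pooled summaries, the cost grows roughly linearly with sequence length rather than quadratically, which is what makes the adaptation to long inputs tractable.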
