Leveraging Pre-trained Checkpoints for Sequence Generation Tasks

07/29/2019
by Sascha Rothe, et al.

Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing. Warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on Natural Language Understanding tasks. In this paper, we present an extensive empirical study on the utility of initializing large Transformer-based sequence-to-sequence models with the publicly available pre-trained BERT and GPT-2 checkpoints for sequence generation. We ran over 300 experiments spending thousands of TPU hours to find the recipe that works best, and demonstrate that it achieves new state-of-the-art results on Machine Translation, Summarization, Sentence Splitting and Sentence Fusion.
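
The warm-starting idea described in the abstract can be sketched with the Hugging Face transformers library, which provides an encoder-decoder wrapper for initializing a sequence-to-sequence model from public BERT (or GPT-2) checkpoints. This is a minimal illustrative sketch, not the authors' released code; the checkpoint names, decoding settings, and example input are assumptions made for the example.

    # Minimal sketch of warm-starting a Transformer encoder-decoder from public
    # BERT checkpoints (assumes the Hugging Face `transformers` library; the
    # checkpoint names and decoding settings below are illustrative only).
    from transformers import BertTokenizerFast, EncoderDecoderModel

    # Initialize both encoder and decoder from the same public BERT checkpoint;
    # only the cross-attention weights are created from scratch.
    model = EncoderDecoderModel.from_encoder_decoder_pretrained(
        "bert-base-uncased", "bert-base-uncased"
    )
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    # BERT has no dedicated decoder-start or EOS token, so reuse [CLS] and [SEP].
    model.config.decoder_start_token_id = tokenizer.cls_token_id
    model.config.eos_token_id = tokenizer.sep_token_id
    model.config.pad_token_id = tokenizer.pad_token_id

    # After fine-tuning on a generation task (e.g. summarization or sentence
    # fusion), the model generates with standard beam search.
    inputs = tokenizer(
        "Unsupervised pre-training of large neural models has recently "
        "revolutionized Natural Language Processing.",
        return_tensors="pt",
    )
    output_ids = model.generate(
        inputs.input_ids,
        attention_mask=inputs.attention_mask,
        max_length=32,
        num_beams=4,
    )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

A GPT-2 decoder can be swapped in the same way by passing a GPT-2 checkpoint name as the second argument, loosely mirroring the BERT-encoder/GPT-2-decoder combination evaluated in the paper.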

Related research

03/22/2019  Pre-trained Language Model Representations for Language Generation
Pre-trained language model representations have been successful in a wid...

11/10/2019  INSET: Sentence Infilling with Inter-sentential Generative Pre-training
Missing sentence generation (or sentence infilling) fosters a wide range...

05/08/2019  Unified Language Model Pre-training for Natural Language Understanding and Generation
This paper presents a new Unified pre-trained Language Model (UniLM) tha...

05/19/2022  Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters
The ever-growing model size and scale of compute have attracted increasi...

05/11/2023  Advancing Neural Encoding of Portuguese with Transformer Albertina PT-*
To advance the neural encoding of Portuguese (PT), and a fortiori the te...

01/20/2020  Multi-level Head-wise Match and Aggregation in Transformer for Textual Sequence Matching
Transformer has been successfully applied to many natural language proce...

08/06/2021  Distilling Transformers for Neural Cross-Domain Search
Pre-trained transformers have recently clinched top spots in the gamut o...
