BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

10/29/2019
by Mike Lewis et al.

We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
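
The sentence-shuffling and in-filling noise described in the abstract is straightforward to illustrate. Below is a minimal Python sketch, not the authors' implementation: permute_sentences, text_infilling, and the mask_ratio parameter are illustrative names and choices, with span lengths drawn from a Poisson distribution (lambda = 3) as the paper describes.

import random
import numpy as np

MASK = "<mask>"

def permute_sentences(text: str) -> str:
    """Randomly shuffle sentence order (naive split on '.')."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    random.shuffle(sentences)
    return ". ".join(sentences) + "."

def text_infilling(tokens, mask_ratio=0.3, poisson_lambda=3.0):
    """Replace random spans with a single <mask> until ~mask_ratio of tokens are removed."""
    tokens = list(tokens)
    target = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < target and len(tokens) > 1:
        span = min(int(np.random.poisson(poisson_lambda)), len(tokens) - 1)
        start = random.randrange(len(tokens) - span + 1)
        if span == 0:
            tokens.insert(start, MASK)           # zero-length span: insert a mask
        else:
            tokens[start:start + span] = [MASK]  # whole span collapses to one mask
            masked += span
    return tokens

if __name__ == "__main__":
    doc = ("BART is a denoising autoencoder. It corrupts text with noise. "
           "A sequence-to-sequence model learns to reconstruct the original.")
    print(permute_sentences(doc))
    print(" ".join(text_infilling(doc.split())))

During pretraining, the corrupted sequence is fed to the bidirectional encoder and the original text serves as the reconstruction target for the left-to-right decoder.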

Related Research

07/28/2022
Sequence to sequence pretraining for a less-resourced Slovenian language
Large pretrained language models have recently conquered the area of nat...

11/08/2016
Unsupervised Pretraining for Sequence to Sequence Learning
Sequence to sequence models are successful tools for supervised sequence...

07/23/2019
MacNet: Transferring Knowledge from Machine Comprehension to Sequence-to-Sequence Models
Machine Comprehension (MC) is one of the core problems in natural langua...

11/10/2019
Distilling the Knowledge of BERT for Text Generation
Large-scale pre-trained language model, such as BERT, has recently achie...

12/04/2020
RPT: Relational Pre-trained Transformer Is Almost All You Need towards Democratizing Data Preparation
Can AI help automate human-easy but computer-hard data preparation tasks...

05/18/2021
Representation Learning in Sequence to Sequence Tasks: Multi-filter Gaussian Mixture Autoencoder
Heterogeneity of sentences exists in sequence to sequence tasks such as ...

07/17/2021
Generative Pretraining for Paraphrase Evaluation
We introduce ParaBLEU, a paraphrase representation learning model and ev...

Code Repositories

KQAPro_Baselines
PyTorch implementation of baseline models for KQA Pro, a large-scale dataset of complex question answering over a knowledge base.

BART_on_COVID_dialogue
Fine-tuning BART on the COVID Dialogue Dataset (a minimal fine-tuning sketch follows this list).

i2r-simmc-2020
Code submitted to the SIMMC challenge (https://github.com/facebookresearch/simmc), a track of DSTC 9 (https://dstc9.dstc.community/home).

SemanticSearch
Chromium-based extension that lets you semantically search web pages using a state-of-the-art NLP model.
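
For readers curious what the fine-tuning referenced above might look like in practice, here is a minimal sketch using the HuggingFace transformers library. The facebook/bart-base checkpoint, toy dialogue pairs, and hyperparameters are illustrative assumptions and are not taken from the repositories listed here.

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Toy (source, target) pairs standing in for a real dialogue or summarization dataset.
pairs = [
    ("Patient: I have a dry cough and a mild fever.",
     "Advise rest, fluids, and monitoring of symptoms."),
    ("Patient: Can I take ibuprofen for the fever?",
     "Ibuprofen may be used unless contraindicated."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()

for epoch in range(3):
    for source, target in pairs:
        inputs = tokenizer(source, return_tensors="pt", truncation=True)
        labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
        loss = model(**inputs, labels=labels).loss  # seq2seq cross-entropy
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Generate from the fine-tuned model.
model.eval()
query = tokenizer("Patient: I lost my sense of smell.", return_tensors="pt")
output_ids = model.generate(**query, max_length=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))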
