Sequence-to-sequence Pre-training with Data Augmentation for Sentence Rewriting

09/13/2019
by Yi Zhang, et al.

We study sequence-to-sequence (seq2seq) pre-training with data augmentation for sentence rewriting. Instead of training a seq2seq model on gold training data and augmented data simultaneously, we separate them into different training phases: pre-training on the augmented data and fine-tuning on the gold data. We also introduce multiple data augmentation methods to support model pre-training for sentence rewriting. We evaluate our approach on two typical well-defined sentence rewriting tasks: Grammatical Error Correction (GEC) and Formality Style Transfer (FST). Experiments demonstrate that our approach makes better use of augmented data without undermining the model's trust in gold data, and that our proposed data augmentation methods further improve performance. Our approach substantially advances the state-of-the-art results on well-recognized sentence rewriting benchmarks for both GEC and FST. Specifically, it pushes the CoNLL-2014 benchmark's F_0.5 score and the JFLEG Test GLEU score to 62.61 and 63.54 in the restricted training setting, and to 66.77 and 65.22 respectively in the unrestricted setting, and advances the GYAFC benchmark's BLEU to 74.24 (a 2.23 absolute improvement) in the E&M domain and 77.97 (a 2.64 absolute improvement) in the F&R domain.
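The core idea of the abstract, separating augmented and gold data into distinct training phases rather than mixing them, can be sketched as follows. This is a minimal illustrative skeleton, not the authors' code: the model, training loop, epoch counts, and learning rates are all hypothetical placeholders standing in for a real seq2seq trainer.

```python
# Sketch of the two-phase schedule: pre-train on augmented pairs,
# then fine-tune on gold pairs only (here with a smaller learning
# rate, a common but assumed choice for the fine-tuning phase).

def train(model, data, epochs, lr):
    """Placeholder training loop: one 'update' per (source, target) pair."""
    for _ in range(epochs):
        for src, tgt in data:
            model["updates"] += 1          # stand-in for a gradient step
            model["lr_log"].append(lr)     # record which phase's lr was used
    return model

def two_phase_training(augmented_pairs, gold_pairs):
    model = {"updates": 0, "lr_log": []}
    # Phase 1: pre-training on augmented (noisy, synthetic) data only.
    model = train(model, augmented_pairs, epochs=2, lr=1e-4)
    # Phase 2: fine-tuning on gold (human-annotated) data only.
    model = train(model, gold_pairs, epochs=1, lr=1e-5)
    return model

# Toy GEC-style rewriting pairs (erroneous source -> corrected target).
augmented = [("He go to school", "He goes to school")] * 3
gold = [("she like it .", "She likes it.")]
m = two_phase_training(augmented, gold)
print(m["updates"])       # 2 epochs * 3 augmented pairs + 1 gold pair = 7
print(m["lr_log"][-1])    # final updates use the fine-tuning lr
```

The key property the sketch illustrates is that the gold data is never diluted by augmented data within a phase: the final parameter updates come exclusively from gold pairs.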


