Encode, Tag, Realize: High-Precision Text Editing

09/03/2019
by Eric Malmi, et al.

We propose LaserTagger - a sequence tagging approach that casts text generation as a text editing task. Target texts are reconstructed from the inputs using three main edit operations: keeping a token, deleting it, and adding a phrase before the token. To predict the edit operations, we propose a novel model, which combines a BERT encoder with an autoregressive Transformer decoder. This approach is evaluated on English text on four tasks: sentence fusion, sentence splitting, abstractive summarization, and grammar correction. LaserTagger achieves new state-of-the-art results on three of these tasks, performs comparably to a set of strong seq2seq baselines with a large number of training examples, and outperforms them when the number of examples is limited. Furthermore, we show that at inference time tagging can be more than two orders of magnitude faster than comparable seq2seq models, making it more attractive for running in a live environment.
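To make the tagging scheme concrete, below is a minimal sketch of the "realize" step described above: rebuilding a target text from a source sentence and a sequence of edit tags. The tag encoding used here (a KEEP/DELETE base operation paired with an optional phrase to insert before the token), the function name realize, and the sentence-fusion example are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of realizing a target text from edit tags, assuming a
# simplified tag format: (base_op, phrase_to_add_before_token_or_None).
# This is an illustrative reconstruction, not LaserTagger's actual code.

from typing import List, Optional, Tuple

Tag = Tuple[str, Optional[str]]


def realize(source_tokens: List[str], tags: List[Tag]) -> str:
    """Apply one edit tag per source token and return the realized text."""
    assert len(source_tokens) == len(tags), "one tag per source token"
    output: List[str] = []
    for token, (op, added_phrase) in zip(source_tokens, tags):
        if added_phrase:           # phrase added before the current token
            output.append(added_phrase)
        if op == "KEEP":           # keep the source token
            output.append(token)
        elif op == "DELETE":       # drop the source token
            continue
        else:
            raise ValueError(f"unknown edit operation: {op}")
    return " ".join(output)


# Example (hypothetical): sentence fusion by deleting the period and the
# pronoun, and inserting the connective "and" before the following token.
source = "Dylan won the prize . He was pleased .".split()
tags: List[Tag] = [
    ("KEEP", None), ("KEEP", None), ("KEEP", None), ("KEEP", None),
    ("DELETE", None), ("DELETE", None), ("KEEP", "and"), ("KEEP", None),
    ("KEEP", None),
]
print(realize(source, tags))  # Dylan won the prize and was pleased .
```

Because the output is assembled from kept source tokens plus phrases drawn from a restricted insertion vocabulary, the tagging model only has to predict one tag per input token, which is what makes inference much faster than full seq2seq decoding.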


Related research

Seq2Edits: Sequence Transduction Using Span-level Edit Operations (09/23/2020)
We propose Seq2Edits, an open-vocabulary approach to sequence editing fo...

Felix: Flexible Text Editing Through Tagging and Insertion (03/24/2020)
We present Felix, a flexible text-editing approach for generation, desi...

Text Generation with Text-Editing Models (06/14/2022)
Text-editing models have recently become a prominent alternative to seq2...

GRS: Combining Generation and Revision in Unsupervised Sentence Simplification (03/18/2022)
We propose GRS: an unsupervised approach to sentence simplification that...

High Recall Data-to-text Generation with Progressive Edit (08/09/2022)
Data-to-text (D2T) generation is the task of generating texts from struc...

Recurrent Inference in Text Editing (09/26/2020)
In neural text editing, prevalent sequence-to-sequence based approaches ...

Don't Take It Literally: An Edit-Invariant Sequence Loss for Text Generation (06/29/2021)
Neural text generation models are typically trained by maximizing log-li...
