Insertion Transformer: Flexible Sequence Generation via Insertion Operations

02/08/2019
by Mitchell Stern, et al.

We present the Insertion Transformer, an iterative, partially autoregressive model for sequence generation based on insertion operations. Unlike typical autoregressive models, which rely on a fixed, often left-to-right ordering of the output, our approach accommodates arbitrary orderings by allowing tokens to be inserted anywhere in the sequence during decoding. This flexibility confers a number of advantages: for instance, not only can our model be trained to follow specific orderings such as left-to-right generation or a binary tree traversal, but it can also be trained to maximize entropy over all valid insertions for robustness. In addition, our model seamlessly accommodates both fully autoregressive generation (one insertion at a time) and partially autoregressive generation (simultaneous insertions at multiple locations). We validate our approach by analyzing its performance on the WMT 2014 English-German machine translation task under various settings for training and decoding. We find that the Insertion Transformer outperforms many prior non-autoregressive approaches to translation at comparable or better levels of parallelism, and successfully recovers the performance of the original Transformer while requiring only logarithmically many iterations during decoding.
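The logarithmic iteration count is easiest to see with a toy simulation. The Python sketch below is not from the paper: parallel_insertion_decode and its span bookkeeping are illustrative assumptions, with an oracle standing in for the trained model's per-slot token predictions. It emulates the balanced binary tree order, where every unfinished slot receives the centermost token of its remaining span in the same parallel step, so an n-token output completes in about ceil(log2(n+1)) steps.

```python
import math

def parallel_insertion_decode(target):
    """Oracle simulation of partially autoregressive insertion decoding
    in a balanced-binary-tree order: at every step, one token is inserted
    simultaneously into each slot whose remaining span is non-empty.
    (Hypothetical sketch; a trained model would predict these tokens.)"""
    canvas = []                  # the partial output sequence so far
    slots = [(0, len(target))]   # slots[i] is the gap just before canvas[i];
                                 # each slot covers a span of target tokens
                                 # still to be generated at that position
    steps = 0
    while any(lo < hi for lo, hi in slots):
        steps += 1
        new_canvas, new_slots = [], []
        for i, (lo, hi) in enumerate(slots):
            if lo < hi:
                mid = (lo + hi) // 2        # centermost remaining token
                new_slots.append((lo, mid))         # gap left of the insert
                new_canvas.append(target[mid])      # the inserted token
                new_slots.append((mid + 1, hi))     # gap right of the insert
            else:
                new_slots.append((lo, hi))  # this slot is already finished
            if i < len(canvas):
                new_canvas.append(canvas[i])  # carry over existing tokens
        canvas, slots = new_canvas, new_slots
    return canvas, steps

sentence = "the quick brown fox jumps over the lazy dog".split()
decoded, steps = parallel_insertion_decode(sentence)
assert decoded == sentence
print(f"{len(sentence)} tokens in {steps} parallel steps "
      f"(ceil(log2(n+1)) = {math.ceil(math.log2(len(sentence) + 1))})")
```

For the nine-token example this prints 4 parallel steps, matching ceil(log2(10)). In the actual model, a single Transformer decoder jointly scores token-and-location pairs over the current hypothesis, and the tree-like order is encouraged by the training loss rather than hard-coded as it is in this oracle.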

Related research

- Insertion-based Decoding with Automatically Inferred Generation Order (02/04/2019): Conventional neural autoregressive decoding commonly assumes a left-to-r...
- Levenshtein Transformer (05/27/2019): Modern neural sequence generation models are built to either generate to...
- KERMIT: Generative Insertion-Based Modeling for Sequences (06/04/2019): We present KERMIT, a simple insertion-based approach to generative model...
- Infusing Sequential Information into Conditional Masked Translation Model with Self-Review Mechanism (10/19/2020): Non-autoregressive models generate target words in a parallel way, which...
- Diformer: Directional Transformer for Neural Machine Translation (12/22/2021): Autoregressive (AR) and Non-autoregressive (NAR) models have their own s...
- Beyond Error Propagation in Neural Machine Translation: Characteristics of Language Also Matter (09/01/2018): Neural machine translation usually adopts autoregressive models and suff...
- On the Learning of Non-Autoregressive Transformers (06/13/2022): Non-autoregressive Transformer (NAT) is a family of text generation mode...
