Insertion-based Decoding with Automatically Inferred Generation Order

02/04/2019 ∙ by Jiatao Gu, et al.

Conventional neural autoregressive decoding commonly assumes a left-to-right generation order. In this work, we propose a novel decoding algorithm -- InDIGO -- which supports flexible generation in an arbitrary order with the help of insertion operations. We use Transformer, a state-of-the-art sequence generation model, to efficiently implement the proposed approach, enabling it to be trained with either a pre-defined generation order or an adaptive order searched based on the model's own preference. Experiments on four real-world tasks, including machine translation, word order recovery, code generation and image captioning, demonstrate that our algorithm can generate sequences in an arbitrary order, while achieving competitive or even better performance than conventional left-to-right generation. Case studies show that InDIGO adopts adaptive generation orders based on the input information.


1 Introduction

Neural autoregressive models have become the de facto standard in a wide range of sequence generation tasks, such as machine translation Bahdanau et al. (2014), summarization Rush et al. (2015) and dialogue systems Vinyals and Le (2015). In these studies, a sequence is modeled autoregressively with a left-to-right generation order, which raises the question of whether generation in an arbitrary order is worth considering Vinyals et al. (2015a); Ford et al. (2018). Nevertheless, previous studies on generation orders mostly resort to a fixed set of generation orders, showing that particular choices of ordering are helpful Wu et al. (2018); Ford et al. (2018); Mehri and Sigal (2018), without providing an efficient algorithm for finding adaptive generation orders, or restrict the problem scope to n-gram segment generation Vinyals et al. (2015a).

Figure 1: An example of InDIGO. At each step, we simultaneously predict the next token and its (relative) position to be inserted. The final output sequence is obtained by mapping the words based on their positions.

In this paper, we propose a novel decoding algorithm, Insertion-based Decoding with Inferred Generation Order (InDIGO), which models generation orders as latent variables and automatically infers the generation orders by simultaneously predicting a word and its position to be inserted at each decoding step. Given that absolute positions are unknown before the whole sequence is generated, we use a relative-position-based representation to capture generation orders. Decoding then consists of a series of insertion operations, as illustrated in Fig. 1.

We extend Transformer Vaswani et al. (2017) to support insertion operations, where the generation order is directly captured as relative positions through self-attention, inspired by Shaw et al. (2018). For learning, we maximize the evidence lower-bound (ELBO) of the maximum likelihood objective, and study two approximate posterior distributions of generation orders, based on a pre-defined generation order and on adaptive orders obtained from beam search, respectively.

Experimental results on word order recovery, machine translation, code generation and image captioning demonstrate that our algorithm can generate sequences with arbitrary orders, while achieving competitive or even better performance compared to the conventional left-to-right generation. Case studies show that the proposed method adopts adaptive orders based on input information.

2 Neural Autoregressive Decoding

Let us consider the problem of generating a sequence $y = (y_1, \ldots, y_T)$ conditioned on some inputs, e.g., a source sequence $x = (x_1, \ldots, x_{T'})$. Our goal is to build a model parameterized by $\theta$ that models the conditional probability of $y$ given $x$, which is factorized as:

$$p_\theta(y \mid x) = \prod_{t=0}^{T} p_\theta(y_{t+1} \mid y_{0:t}, x_{1:T'}), \tag{1}$$

where $y_0$ and $y_{T+1}$ are the special tokens $\langle s \rangle$ and $\langle /s \rangle$, respectively. The model sequentially predicts the conditional probability of the next token at each step $t$, which can be implemented by any function approximator such as RNNs Bahdanau et al. (2014) and Transformer Vaswani et al. (2017).

Learning

A neural autoregressive model is commonly learned by maximizing the conditional likelihood $\log p_\theta(y \mid x) = \sum_{t=0}^{T} \log p_\theta(y_{t+1} \mid y_{0:t}, x_{1:T'})$ given a set of parallel examples.

Decoding

A common way to decode a sequence from a trained model is to make use of the autoregressive nature that allows us to predict one word at each step. Given any source $x$, we essentially follow the order of factorization to generate tokens sequentially using heuristic-based algorithms such as greedy decoding and beam search.

3 Insertion-based Decoding with Inferred Generation Order (InDIGO)

Eq. (1) explicitly assumes a left-to-right (L2R) generation order of the sequence $y$. In principle, we can factorize the sequence probability with any permutation and train a model for each permutation separately. As long as we have an infinite amount of data and perform proper optimization, all these models are equivalent. Nevertheless, Vinyals et al. (2015a) have shown that the generation order of a sequence actually matters in many real-world tasks, e.g., language modeling.

Although the L2R order is a strong inductive bias, as it is "natural" for most human beings to read and write sequences from left to right, L2R is not necessarily the optimal option for generating sequences. For instance, people sometimes tend to think of central phrases first before building up a whole sentence; for programming languages, it is beneficial to generate code based on abstract syntax trees Yin and Neubig (2017).

Therefore, a natural question arises: how can we decode a sequence in its best order?

3.1 Orders as Latent Variables

We address this question by modeling generation orders as latent variables. Similar to Vinyals et al. (2015a), we rewrite the target sequence in a particular order $\pi \in \mathcal{P}_T$ (where $\mathcal{P}_T$ is the set of all permutations of $(1, \ldots, T)$) as a set $y_\pi = \{(y_2, z_2), \ldots, (y_{T+1}, z_{T+1})\}$, where $(y_t, z_t)$ represents the $t$-th generated token and its absolute position, respectively. Different from the common notation, the target sequence is 2-step drifted because the two special tokens $(y_0, z_0) = (\langle s \rangle, 0)$ and $(y_1, z_1) = (\langle /s \rangle, T+1)$ are always prepended to represent the left and right boundaries, respectively. Then, we model the conditional probability as the joint distribution of words and positions by marginalizing over all orders:

$$p_\theta(y \mid x) = \sum_{\pi \in \mathcal{P}_T} p_\theta(y_\pi \mid x),$$

where for each element:

$$p_\theta(y_\pi \mid x) = p_\theta(y_{T+2} \mid y_{0:T+1}, z_{0:T+1}, x) \cdot \prod_{t=1}^{T} p_\theta(y_{t+1}, z_{t+1} \mid y_{0:t}, z_{0:t}, x), \tag{2}$$

where the third special token $y_{T+2} = \langle \mathrm{eod} \rangle$ is introduced to signal the end of decoding, and $p_\theta(y_{T+2} \mid y_{0:T+1}, z_{0:T+1}, x)$ is the end-of-decoding probability.

At decoding time, this factorization allows us to decode autoregressively by predicting the next word and its position step by step. The generation order is automatically inferred during decoding.

3.2 Relative Representation of Positions

It is difficult and inefficient to predict the absolute positions $z_t$ without knowing the actual length $T$. One solution is to directly use the absolute positions $z_0^t, \ldots, z_t^t$ of the partial sequence $y_{0:t}$ at each autoregressive step $t$. For example, the absolute positions for the sequence ($\langle s \rangle$, $\langle /s \rangle$, dream, I) are ($z_0^3 = 0$, $z_1^3 = 3$, $z_2^3 = 2$, $z_3^3 = 1$) at step $t = 3$ in Fig. 1. It is, however, inefficient to model such explicit positions using a single neural network without recomputing the hidden states for the entire partial sequence, as some absolute positions are changed at every step (as shown in Fig. 1).

Relative Positions

We propose using relative-position representations $r_{0:t}^t$ instead of the absolute positions $z_{0:t}^t$. We use a ternary vector $r_i^t \in \{-1, 0, 1\}^{t+1}$ as the relative-position representation for $z_i^t$. The $j$-th element of $r_i^t$ is defined as:

$$r_{i,j}^t = \begin{cases} -1 & z_i^t < z_j^t \;(\text{left}) \\ 0 & z_i^t = z_j^t \;(\text{middle}) \\ 1 & z_i^t > z_j^t \;(\text{right}) \end{cases} \tag{3}$$

where the elements of $r_i^t$ show the relative positions of word $i$ with respect to all the other words in the partial sequence at step $t$. We use a matrix $R^t = [r_0^t, \ldots, r_t^t]$ to collect the relative-position representations of all the words in the sequence. The relative-position representation can always be mapped back to the absolute position by:

$$z_i^t = \sum_{j=0}^{t} \max\left(0, r_{i,j}^t\right). \tag{4}$$

One of the biggest advantages of such vector-based representations is that, at each step, updating the relative-position representations simply amounts to extending the relative-position matrix with the next predicted relative position, because the (left, middle, right) relations described in Eq. (3) stay unchanged once they are created. Thus, we update $R^t$ as follows:

$$R^{t+1} = \begin{bmatrix} R^{t} & -r_{t+1}^{\top} \\ r_{t+1} & 0 \end{bmatrix}, \tag{5}$$

where we use $r_{t+1}$ to represent the relative positions of the word inserted at step $t+1$ with respect to all the existing words. This append-only property enables our method to reuse the previous hidden states without recomputing them at each step. For simplicity, the superscript $t$ of $r$ and $R$ is omitted from now on when it causes no confusion.

3.3 Insertion-based Decoding

Given a partial sequence $y_{0:t}$ and its corresponding relative-position representations $R^t$, not every ternary vector is valid as the next relative-position representation $r_{t+1}$: only those vectors corresponding to insertion operations are consistent with Eq. (4). In Algorithm 1, we describe an insertion-based decoding framework based on this observation. The next word $y_{t+1}$ is predicted based on $y_{0:t}$ and $R^t$. We then choose an existing word $y_k$ ($0 \le k \le t$) from $y_{0:t}$ and insert $y_{t+1}$ to its left or right. As a result, the next relative position $r_{t+1}$ is determined by

$$r_{t+1, j} = \begin{cases} s & j = k \\ r_{k, j} & j \ne k \end{cases} \tag{6}$$

where $s = -1$ if $y_{t+1}$ is inserted to the left of $y_k$, and $s = 1$ otherwise. Finally, we use $r_{t+1}$ to update the relative-position matrix $R$ as shown in Eq. (5).
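As a concrete illustration of Eqs. (3)-(6), the snippet below maintains the ternary matrix with plain NumPy. It is a minimal sketch under the convention assumed above ($R_{i,j} = +1$ if word $i$ ends up to the right of word $j$); the function names (`insert`, `absolute_positions`) are ours and not part of the paper's implementation.

```python
import numpy as np

def insert(R, k, s):
    """Append the relative positions of a word inserted to the left (s = -1)
    or right (s = +1) of the existing word k (sketch of Eqs. (5)-(6))."""
    r_new = R[k].copy()
    r_new[k] = s                       # relation to the anchor word itself
    t = R.shape[0]
    R_next = np.zeros((t + 1, t + 1), dtype=int)
    R_next[:t, :t] = R                 # existing relations never change
    R_next[t, :t] = r_new              # new row: the inserted word vs. the others
    R_next[:t, t] = -r_new             # new column: the antisymmetric counterpart
    return R_next

def absolute_positions(R):
    """Eq. (4): a word's absolute position equals the number of words to its left."""
    return np.maximum(R, 0).sum(axis=1)

# Toy run, mirroring the (<s>, </s>, dream, I) example above.
R = np.array([[0, -1],                 # <s>  is left of </s>
              [1,  0]])                # </s> is right of <s>
R = insert(R, k=0, s=+1)               # insert "dream" to the right of <s>
R = insert(R, k=2, s=-1)               # insert "I" to the left of "dream"
print(absolute_positions(R))           # [0 3 2 1] -> surface order: <s> I dream </s>
```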

  Initialize: $y = (\langle s \rangle, \langle /s \rangle)$, $R = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$, $t = 1$
  repeat
     Predict the next word $y_{t+1}$ based on $y_{0:t}$ and $R$.
     if $y_{t+1}$ is $\langle \mathrm{eod} \rangle$ then
         break
     end if
     Choose an existing word $y_k \in y_{0:t}$;
     Choose the left or right ($s$) of $y_k$ to insert $y_{t+1}$;
     Obtain the next relative position $r_{t+1}$ from $k$ and $s$ (Eq. (6)).
     Update $R$ by appending $r_{t+1}$ (Eq. (5)).
     Update $y_{0:t}$ by appending $y_{t+1}$.
     Update $t \leftarrow t + 1$.
  until the maximum length is reached
  Map $R$ back to absolute positions $z$ (Eq. (4)).
  Reorder $y$: $\hat{y}_{z_i} = y_i$ for each $i \in [0, t]$.
Algorithm 1 Insertion-based Decoding
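The control flow of Algorithm 1 can be sketched in a few lines of Python, reusing `insert()` and `absolute_positions()` from the previous snippet; `predict_word_and_position` is a hypothetical stand-in for the trained model (it should return the next token together with an anchor index $k$ and a side $s$), so this is only an illustration of the loop, not the paper's implementation.

```python
def indigo_decode(predict_word_and_position, max_len=50):
    """Sketch of Algorithm 1: grow the sequence by insertion, then reorder."""
    words = ["<s>", "</s>"]
    R = np.array([[0, -1],
                  [1,  0]])
    while len(words) < max_len + 2:
        token, k, s = predict_word_and_position(words, R)
        if token == "<eod>":                # end-of-decoding token
            break
        R = insert(R, k, s)                 # append-only update (Eqs. (5)-(6))
        words.append(token)
    z = absolute_positions(R)               # Eq. (4)
    surface = [w for _, w in sorted(zip(z, words))]
    return surface[1:-1]                    # strip <s> and </s>
```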

4 Model

We present Transformer-InDIGO, an extension of Transformer Vaswani et al. (2017), supporting insertion-based decoding. To the best of our knowledge, Transformer-InDIGO is the first probabilistic model that takes generation orders for autoregressive decoding into account. The overall framework is shown in Fig. 2.

Figure 2: The overall framework of the proposed Transformer-InDIGO, which includes (a) the word & position prediction module; (b) one step of decoding with position updating; (c) the final decoding output obtained by reordering.

4.1 Network Design

We extend the decoder of Transformer with relative-position-based self-attention, joint word & position prediction and position updating modules.

Self-Attention

One of the major challenges that prevent the vanilla Transformer from generating sequences in arbitrary orders is that its absolute-position-based positional encodings are inefficient, as mentioned in Section 3.2: absolute positions change during decoding, invalidating the previously computed hidden states. In contrast, we adapt Shaw et al. (2018) to use relative positions in self-attention. Different from Shaw et al. (2018), in which a clipping distance is set for relative positions, our relative-position representations only preserve the ternary relations of Eq. (3).

Each attention head in a multi-head self-attention module of Transformer-InDIGO takes the hidden states of a partial sequence $y_{0:t}$, denoted as $U = (u_0, \ldots, u_t)$, and its corresponding relative-position matrix $R^t$ as input, where each input state $u_i \in \mathbb{R}^{d_{\mathrm{model}}}$. The logit $e_{i,j}$ for attention is computed as:

$$e_{i,j} = \frac{\left(u_i^{\top} Q\right) \cdot \left(u_j^{\top} K + A_{[r_{j,i}+1]}\right)^{\top}}{\sqrt{d_{\mathrm{model}}}}, \tag{7}$$

where $Q$ and $K$ are parameter matrices and $A \in \mathbb{R}^{3 \times d_{\mathrm{model}}}$. $A_{[r_{j,i}+1]}$ is the row vector of $A$ indexed by $r_{j,i} + 1$, which biases all the input keys based on the relative position $r_{j,i}$.
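The snippet below sketches the relative-position-biased logits in the spirit of Eq. (7); the exact composition of queries, keys and the bias table $A$ is an assumption for illustration, and the variable names are ours.

```python
import numpy as np

def attention_logits(U, R, Q, K, A):
    """Sketch of Eq. (7).
    U: (n, d) hidden states of the partial sequence; R: (n, n) ternary matrix;
    Q, K: (d, d) projections; A: (3, d) bias vectors indexed by r + 1."""
    d = U.shape[1]
    queries = U @ Q                               # (n, d)
    keys = U @ K                                  # (n, d)
    bias = A[R + 1]                               # (n, n, d): bias[a, b] = A[R[a, b] + 1]
    rel = np.einsum("id,jid->ij", queries, bias)  # rel[i, j] = q_i . A[R[j, i] + 1]
    return (queries @ keys.T + rel) / np.sqrt(d)  # key of j is shifted by its relation to i
```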

Word & Position Prediction

Like the vanilla Transformer, we take the representations from the last self-attention layer, $H = (h_0, \ldots, h_t)$, to predict both the next word $y_{t+1}$ and its position vector $r_{t+1}$ in two stages, based on the following factorization:

$$p_\theta(y_{t+1}, r_{t+1} \mid y_{0:t}, R) = p_\theta(y_{t+1} \mid y_{0:t}, R) \cdot p_\theta(r_{t+1} \mid y_{t+1}, y_{0:t}, R).$$

The modules for word & position prediction are shown in Fig. 2(a).

First, we predict the next word $y_{t+1}$ from the categorical distribution:

$$p_\theta(y_{t+1} \mid y_{0:t}, R) = \mathrm{softmax}\!\left(\left(h_t^{\top} F\right) \cdot W^{\top}\right), \tag{8}$$

where $W \in \mathbb{R}^{d_V \times d_{\mathrm{model}}}$ is the embedding matrix and $d_V$ is the size of the vocabulary. We linearly project the last representation $h_t$ using $F \in \mathbb{R}^{d_{\mathrm{model}} \times d_{\mathrm{model}}}$ for querying $W$.
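A minimal numerical sketch of Eq. (8) follows; the names (`h_last`, `F`, `W`) mirror the symbols above and are otherwise hypothetical.

```python
import numpy as np

def word_distribution(h_last, F, W):
    """Sketch of Eq. (8): project the last decoder state with F and score it
    against every row of the embedding matrix W (shape: vocab x d_model)."""
    logits = (h_last @ F) @ W.T            # one logit per vocabulary entry
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    return exp / exp.sum()
```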

Then, as shown in Eq. (6), predicting the next position amounts to choosing an insertion operation relative to an existing word, which can be modeled similarly to Pointer Networks Vinyals et al. (2015b). We predict a pointer $k_{t+1} \in [0, 2t+1]$ based on:

$$p_\theta(k_{t+1} \mid y_{0:t+1}, R) = \mathrm{softmax}\!\left(q_{t+1}^{\top} \left[C^{\top} h_0, \ldots, C^{\top} h_t,\; D^{\top} h_0, \ldots, D^{\top} h_t\right]\right), \quad q_{t+1} = W_{[y_{t+1}]} + E^{\top} h_t, \tag{9}$$

where $C, D, E \in \mathbb{R}^{d_{\mathrm{model}} \times d_{\mathrm{model}}}$ are parameter matrices and $W_{[y_{t+1}]}$ is the embedding of the predicted word. $C$ and $D$ are used to obtain the left and right keys, respectively, considering that each word has two "keys" (its left and right) for inserting the generated word. The query vector $q_{t+1}$ is obtained by adding up the word embedding $W_{[y_{t+1}]}$ and the linearly projected state $E^{\top} h_t$. The resulting relative-position vector $r_{t+1}$ is computed from the predicted pointer $k_{t+1}$ according to Eq. (6). We manually set the logits for the left key of $\langle s \rangle$ and the right key of $\langle /s \rangle$ to $-\infty$ to avoid any word being inserted to the left of $\langle s \rangle$ or to the right of $\langle /s \rangle$.
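The pointer computation of Eq. (9) can be sketched as follows; shapes and names are assumptions, and the masking follows the $-\infty$ trick described above (with $\langle s \rangle$ at index 0 and $\langle /s \rangle$ at index 1 of the partial sequence).

```python
import numpy as np

def position_logits(H, w_emb, C, D, E, neg_inf=-1e9):
    """Sketch of Eq. (9): one left key and one right key per existing word.
    H: (n, d) decoder states; w_emb: (d,) embedding of the predicted word;
    C, D, E: (d, d) projections. Returns 2n logits over insertion slots."""
    query = w_emb + H[-1] @ E              # query mixes the new word and the state
    left_keys = H @ C                      # "insert to my left" keys
    right_keys = H @ D                     # "insert to my right" keys
    logits = np.concatenate([left_keys @ query, right_keys @ query])
    logits[0] = neg_inf                    # never insert to the left of <s>
    logits[len(H) + 1] = neg_inf           # never insert to the right of </s>
    return logits                          # softmax over these gives p(k_{t+1} | .)
```

In this sketch, a pointer $k_{t+1} < n$ means "insert to the left of word $k_{t+1}$" ($s = -1$), and $k_{t+1} \ge n$ means "insert to the right of word $k_{t+1} - n$" ($s = +1$), which plugs directly into Eq. (6).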

Pre-defined Order Descriptions
Left-to-right (L2R) Generate words from left to right. Wu et al. (2018)
Right-to-left (R2L) Generate words from right to left. Wu et al. (2018)
Odd-Even (ODD) Generate words at odd positions from left to right, then generate even positions. Ford et al. (2018)
Balanced-tree (BLT) Generate words with a top-down left-to-right order from a balanced binary tree. Stern et al. (2019)
Syntax-tree (SYN) Generate words with a top-down left-to-right order from the dependency tree. Wang et al. (2018b)
Common-First (CF) Generate all common words first from left to right, and then generate the others. Ford et al. (2018)
Rare-First (RF) Generate all rare words first from left to right, and then generate the remaining. Ford et al. (2018)
Random (RND) Generate words in a random order, shuffled anew every time the example is loaded.
Table 1: Descriptions of the pre-defined orders used in this work. Major references that have explored these generation orders with different models and applications are also marked.

Position Updating

As mentioned in Sec. 3.2, we update the relative-position matrix $R$ with the predicted $r_{t+1}$. Because inserting a new word does not change the pre-computed relative-position representations of the existing words, Transformer-InDIGO can reuse the previous hidden states in the next decoding step, the same as the vanilla Transformer.

4.2 Learning

Training requires maximizing the marginalized likelihood in Eq. (2). This is intractable since we would need to enumerate all $T!$ permutations of tokens. Instead, we maximize the evidence lower-bound (ELBO) of the original objective by introducing an approximate posterior distribution of generation orders $q(\pi \mid x, y)$, which provides the probabilities of latent generation orders based on the ground-truth sequences $x$ and $y$:

$$\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{\pi \sim q} \log p_\theta(y_\pi \mid x) + \mathcal{H}(q) = \mathbb{E}_{r \sim q} \Big[ \underbrace{\sum_{t=1}^{T+1} \log p_\theta(y_{t+1} \mid y_{0:t}, r_{0:t}, x)}_{\text{word objective}} + \underbrace{\sum_{t=1}^{T} \log p_\theta(r_{t+1} \mid y_{0:t+1}, r_{0:t}, x)}_{\text{position objective}} \Big] + \mathcal{H}(q), \tag{10}$$

where the order $\pi$, sampled from $q$, is represented as relative positions $r_{2:T+1}$, and $\mathcal{H}(q)$ is the entropy term, which can be ignored if $q$ is fixed. Eq. (10) shows that, given a sampled order, the learning objective is divided into word and position objectives. For calculating the position prediction loss, we aggregate the two probabilities corresponding to the same position by

$$p_\theta(r_{t+1} \mid y_{0:t+1}, r_{0:t}, x) = p_\theta\left(k^{\mathrm{left}}_{t+1} \mid y_{0:t+1}, r_{0:t}, x\right) + p_\theta\left(k^{\mathrm{right}}_{t+1} \mid y_{0:t+1}, r_{0:t}, x\right), \tag{11}$$

where $p_\theta(k^{\mathrm{left}}_{t+1} \mid \cdot)$ and $p_\theta(k^{\mathrm{right}}_{t+1} \mid \cdot)$ are calculated simultaneously from the same softmax function in Eq. (9); $k^{\mathrm{left}}_{t+1}$ and $k^{\mathrm{right}}_{t+1}$ are the two keys (the right key of the left neighbour and the left key of the right neighbour) that correspond to the same insertion position. Here, we study two types of $q(\pi \mid x, y)$:
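Since the pointer softmax of Eq. (9) exposes two keys per slot, the ground-truth insertion point is reachable via two different pointers; a minimal sketch of the aggregation in Eq. (11), with hypothetical index arguments, is:

```python
import numpy as np

def position_log_prob(pointer_probs, k_left, k_right):
    """Sketch of Eq. (11): sum the two pointer probabilities (from the same
    softmax over 2n slots) that denote the same ground-truth position, then
    take the log for the position objective in Eq. (10)."""
    return np.log(pointer_probs[k_left] + pointer_probs[k_right])
```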

Pre-defined Order

If we already possess some prior knowledge about the sequence, e.g., that the L2R order is a strong baseline in many scenarios, we assume a Dirac-delta distribution $q(\pi \mid x, y) = \delta(\pi = \pi^*(x, y))$, where $\pi^*$ is a pre-defined order. In this work, we study the set of pre-defined orders listed in Table 1 to evaluate their effect on generation.

Searched Adaptive Order (SAO)

We choose the approximate posterior $q$ as the point estimate that maximizes $\log p_\theta(y_\pi \mid x)$. In practice, we approximate these generation orders through beam search (Pal et al., 2006). Unlike the original beam search for autoregressive decoding, which searches in the sequence space for the sequence that maximizes the probability in Eq. (1), we search in the space of all permutations of the target sequence for the order that maximizes Eq. (2), as all the target tokens are known in advance during training.

More specifically, at each step $t$, for every sub-sequence $y_{0:t}^{(b)}$ in the beam, we evaluate the probabilities of every possible next word $y_{t+1}$ chosen from the not-yet-generated target words, together with its corresponding position $r_{t+1}$. We calculate the cumulative likelihood for each candidate, based on which we select the top-$B$ sub-sequences as the new beam for the next step. After obtaining the generation orders, we optimize our objective as an average over these orders:

$$\mathcal{L}_{\mathrm{SAO}} = \frac{1}{B} \sum_{\pi \in \mathcal{B}} \log p_\theta(y_\pi \mid x), \tag{12}$$

where $\mathcal{B}$ is the set of the $B$ generation orders returned by beam search, i.e., we assume $q(\pi \mid x, y) = 1/B$ for $\pi \in \mathcal{B}$ and $0$ otherwise.
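The search over permutations can be sketched as follows; `score_step` is a hypothetical stand-in for the model's log-probability of one (token, slot) insertion given the partial hypothesis, and masking of invalid slots (e.g., left of $\langle s \rangle$) is omitted for brevity.

```python
import heapq

def search_adaptive_orders(target, score_step, beam_size=8):
    """Sketch of the SAO beam search over permutations of a *known* target.
    A hypothesis is (cumulative log-prob, list of (token, slot) insertions,
    frozenset of target indices not yet generated)."""
    beams = [(0.0, [], frozenset(range(len(target))))]
    for _ in range(len(target)):
        candidates = []
        for logp, steps, remaining in beams:
            n_slots = 2 * (len(steps) + 2)        # a left and a right key per word so far
            for i in remaining:
                for slot in range(n_slots):
                    candidates.append((logp + score_step(steps, target[i], slot),
                                       steps + [(target[i], slot)],
                                       remaining - {i}))
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return beams                                   # top-B generation orders of `target`
```

Keeping dropout active inside `score_step` during this search would correspond to the noise-injection strategy described next.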

Beam Search with Dropout

The goal of beam search is to approximately find the most likely generation orders, which limits learning from exploring other generation orders that may not look favourable at the current stage but may ultimately be better. Prior research Vijayakumar et al. (2016) also pointed out that the search space of standard beam search is restricted. We encourage exploration by injecting noise during beam search Cho (2016); in particular, we found it effective to simply keep dropout turned on during the search.

Bootstrapping from a Pre-defined Order

During preliminary experiments, sequences returned by beam search often degenerated into always predicting common or functional words (e.g., "the", ",", etc.) as the first several tokens, leading to inferior performance. We conjecture that this is because the position prediction module learns much faster than the word prediction module, and it quickly captures spurious correlations induced by a poorly initialized model. It is essential to balance the learning progress of these modules. To do so, we bootstrap learning by pre-training the model with a pre-defined order (e.g., L2R) before training with beam-searched orders.

4.3 Decoding

For decoding, we directly follow Algorithm 1 to sample or decode greedily from the proposed model. In practice, however, beam search is important for exploring the output space of neural autoregressive models. In our implementation, we perform beam search for InDIGO as a two-step search. Suppose the beam size is $B$: at each step, we first run beam search over the word prediction, and then, with the searched words, try out all possible positions and select the top-$B$ sub-sequences. In preliminary experiments, we also tried beam search over words and positions simultaneously using their joint probability; however, it did not seem helpful.
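One step of this two-step beam search can be sketched as below; `topk_words` and `position_scores` are hypothetical stand-ins for the model's word softmax (Eq. (8)) and position softmax (Eq. (9)).

```python
def two_step_beam_step(beams, topk_words, position_scores, beam_size):
    """Sketch of one decoding step of the two-step beam search.
    beams: list of (log-prob, hypothesis); topk_words(hyp, B) -> B best (word, log-prob);
    position_scores(hyp, word) -> list of (slot, log-prob) over valid insertion slots."""
    candidates = []
    for logp, hyp in beams:
        for word, w_logp in topk_words(hyp, beam_size):       # step 1: search words
            for slot, p_logp in position_scores(hyp, word):   # step 2: try all positions
                candidates.append((logp + w_logp + p_logp, hyp + [(word, slot)]))
    candidates.sort(key=lambda c: c[0], reverse=True)         # keep the overall top-B
    return candidates[:beam_size]
```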

5 Experiments

We evaluate InDIGO extensively on four challenging sequence generation tasks: word order recovery, machine translation, natural language to code generation (NL2Code, Ling et al., 2016) and image captioning. We compare our model trained with pre-defined orders (the L2R order by default) against the adaptive orders obtained by beam search.

5.1 Experimental Settings

Dataset

The machine translation experiments are conducted on three language pairs to study how the decoding order influences translation quality for languages with diverse characteristics: WMT'16 Romanian-English (Ro-En),222 http://www.statmt.org/wmt16/translation-task.html WMT'18 English-Turkish (En-Tr)333 http://www.statmt.org/wmt18/translation-task.html and KFTT English-Japanese (En-Ja, Neubig, 2011).444http://www.phontron.com/kftt/ The English part of the Ro-En dataset is used for the word order recovery task. We use the Django dataset Oda et al. (2015)555 https://github.com/odashi/ase15-django-dataset for the NL2Code task and MS COCO Lin et al. (2014) with the standard split Karpathy and Fei-Fei (2015) for image captioning. The dataset statistics can be found in Table 2.

Dataset Train Dev Test Length
WMT16 Ro-En 620k 2000 2000 26.48
WMT18 En-Tr 207k 3007 3000 25.81
KFTT En-Ja 405k 1166 1160 27.51
Django 16k 1000 1801 8.87
MS-COCO 567k 5000 5000 12.52
Table 2: Dataset statistics for the machine translation, code generation and image captioning tasks. Length represents the average number of tokens for target sentences of the training set.

Preprocessing

We apply the Moses tokenization666 https://github.com/moses-smt/mosesdecoder and normalization to all the text datasets except for the code data. We perform joint BPE Sennrich et al. (2015) operations for the MT datasets, while using all the unique words as the vocabulary for NL2Code. For image captioning, we follow the same procedure as described by Lee et al. (2018), where we use image feature vectors (extracted from a pretrained ResNet-18 He et al. (2016)) as the input to the Transformer encoder. The image features are fixed during training.

Model WMT16 Ro→En WMT18 En→Tr KFTT En→Ja
BLEU Ribes Meteor TER BLEU Ribes Meteor TER BLEU Ribes Meteor TER
RND 20.20 79.35 41.00 63.20 03.04 55.45 19.12 90.60 17.09 70.89 35.24 70.11
L2R 31.82 83.37 52.19 50.62 14.85 69.20 33.90 71.56 30.87 77.72 48.57 59.92
R2L 31.62 83.18 52.09 50.20 14.38 68.87 33.33 71.91 30.44 77.95 47.91 61.09
ODD 30.11 83.09 50.68 50.79 13.64 68.85 32.48 72.84 28.59 77.01 46.28 60.12
BLT 24.38 81.70 45.67 55.38 08.72 65.70 27.40 77.76 21.50 73.97 40.23 64.39
SYN 29.62 82.65 50.25 52.14 -- -- -- -- -- -- -- --
CF 30.25 83.22 50.71 50.72 12.04 67.61 31.18 74.75 28.91 77.06 46.46 61.56
RF 30.23 83.29 50.72 51.73 12.10 67.44 30.72 73.40 27.35 76.40 45.15 62.14
SAO 32.47 84.10 53.00 49.02 15.18 70.06 34.60 71.56 31.91 77.56 49.66 59.80
Table 3: Results of the translation experiments for three language pairs under different decoding orders. Scores are reported on the test set with four widely used evaluation metrics (BLEU, Ribes, Meteor and TER). We do not report models trained with the SYN order on En-Tr and En-Ja due to the lack of reliable dependency parsers. A statistical significance analysis between the outputs of SAO and L2R, using BLEU as the metric, shows significant improvements for all three language pairs.

Models

We use the same set of model hyper-parameters throughout all the experiments. The source and target embedding matrices are shared except for En-Ja, as our preliminary experiments showed that keeping the embeddings separate significantly improves translation quality for that pair. Both the encoder and decoder use relative positions during self-attention, except for the word order recovery experiments (where the position embedding is removed in the encoder, as there is no ground-truth position information in the input). We do not introduce task-specific modules such as a copying mechanism Gu et al. (2016), for model simplicity.

Figure 3: The BLEU scores on the test set for word order recovery with various decoding beam sizes.

Training

When training with the pre-defined orders, we reorder the words of each training sequence in advance accordingly, which provides supervision for the ground-truth position at which each word should be inserted. We test the pre-defined orders listed in Table 1. The SYN orders were generated according to the dependency parse obtained by a dependency parser777 https://spacy.io/usage/linguistic-features following a parent-to-children, left-to-right traversal. The CF & RF orders are obtained based on a vocabulary cut-off such that the number of common words and the number of rare words are approximately the same Ford et al. (2018). We also consider sampling a random order for each sentence on the fly as a baseline (RND). When using L2R as the pre-defined order, Transformer-InDIGO is almost equivalent to the vanilla Transformer, as the position prediction simply learns to always predict the next position as the left of the $\langle /s \rangle$ symbol. The only difference is that it enhances the vanilla Transformer with a small number of additional parameters for position prediction.
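For concreteness, two of the pre-defined orders in Table 1 can be constructed as follows; this is a sketch, and the vocabulary cut-off set `common_vocab` is an assumed input rather than part of the released code.

```python
def odd_even_order(tokens):
    """ODD order (Table 1, sketch): words at odd positions left-to-right,
    then words at even positions left-to-right."""
    return tokens[0::2] + tokens[1::2]

def common_first_order(tokens, common_vocab):
    """CF order (Table 1, sketch): common words first (left-to-right),
    then the remaining rare words (left-to-right)."""
    common = [w for w in tokens if w in common_vocab]
    rare = [w for w in tokens if w not in common_vocab]
    return common + rare

# e.g. odd_even_order("a b c d e".split()) -> ['a', 'c', 'e', 'b', 'd']
```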

We also train Transformer-InDIGO using the searched adaptive order (SAO), where we set the beam size of the order search to 8. By default, models trained with SAO are bootstrapped from a slightly pre-trained (6,000 steps) model in the L2R order.

Inference

At test time, we perform beam search as described in Sec. 4.3. We observe in preliminary experiments that models trained with different orders (either pre-defined or SAO) have very different optimal decoding beam sizes. Therefore, we perform sensitivity studies in which the beam size is varied, and pick the beam size with the highest BLEU score on the validation set for each particular model.

5.2 Results and Analysis

Word Order Recovery

Word order recovery takes a bag of words as input and recovers its original order, which is challenging because the search space is factorial in the sequence length. We do not restrict the vocabulary of the input words. We compare our model trained with the L2R order and with eight searched adaptive orders (SAO) from beam search for word order recovery. The BLEU scores over various beam sizes are shown in Fig. 3. The model trained with SAO leads to higher BLEU scores than the one trained with L2R. Furthermore, increasing the beam size brings larger improvements for SAO than for L2R, suggesting that InDIGO produces more diverse predictions and therefore has a higher chance of recovering the correct output.

Machine Translation

As shown in Table 3, we compare our model trained with the pre-defined orders and with the searched adaptive orders (SAO) under varying setups. We use four evaluation metrics, BLEU Papineni et al. (2002), Ribes Isozaki et al. (2010), Meteor Banerjee and Lavie (2005) and TER Snover et al. (2006), to avoid relying on a single metric that might favor a particular generation order.

Most of the pre-defined orders (except for the random order and the balanced-tree (BLT) order) perform reasonably well with InDIGO on the three language pairs. Among the pre-defined orders, the best score is reached by the L2R order, except for En-Ja, where the R2L order works slightly better according to Ribes. This indicates that monotonic orders are reasonable choices for machine translation. ODD, CF and RF show similar performance, somewhat below the L2R and R2L orders. The tree-based orders, such as SYN and BLT, do not perform well, indicating that predicting words along a syntactic path is not preferable. On the other hand, Table 3 shows that the model with SAO achieves competitive and even statistically significant improvements over the L2R order. The improvements are larger for Turkish and Japanese, which indicates that a flexible generation order may improve translation quality for languages whose syntactic structures differ substantially from English.

Table 4 shows the ablation study for the searched orders. Removing bootstrapping and beam search with dropout degrades SAO by up to roughly 1 BLEU point on Ro-En, demonstrating the effectiveness of these two techniques.

Model Variants dev test
SAO default 33.60 32.47
    no bootstrap 32.86 31.88
    no bootstrap, no noise 32.64 31.72
Table 4: Ablation Study with SAO variants for machine translation on WMT16 Ro-En
Model Django MS-COCO
BLEU Accuracy BLEU CIDEr-D
L2R 36.74 13.6% 22.12 68.88
SAO 42.33 16.3% 22.58 69.42
Table 5: Results on the official test sets for both code generation and image captioning tasks.

Code Generation

The goal of this task is to generate Python code based on a natural language description, which can be achieved by using a standard sequence-to-sequence generation framework such as the proposed Transformer-InDIGO. As shown in Table 5, SAO works significantly better than the L2R order in terms of both BLEU and accuracy, showing that flexible generation orders are preferable in code generation.

Image Captioning

For the captioning task, one caption is generated per image and compared against five human-created captions during testing. As shown in Table 5, SAO obtains higher BLEU and CIDEr-D Vedantam et al. (2015) scores than the L2R order, implying that better captions are generated with flexible orders.

Figure 4: Examples randomly sampled from the three tasks, decoded using InDIGO with various learned generation orders. Words in red and underlined are the tokens inserted at each step. For visual convenience, we reorder each partial sequence into its correct positions at every decoding step.

5.3 Case Study

We demonstrate how InDIGO works by uniformly sampling examples from the validation sets for machine translation (Ro-En), image captioning and code generation. As shown in Fig. 4, the proposed model generates sequences in different orders depending on the order used for learning (either pre-defined or SAO). For instance, when we use the SYN order for the machine translation task, the model generates tokens approximately following the dependency parse. On the other hand, the model trained with the RF order learns to produce verbs and nouns first, before filling up the sequence with the remaining functional words.

We observe several key characteristics of the orders inferred by SAO by analyzing the model's output for each task: (1) For machine translation, the generation order of an output sequence does not deviate too much from L2R; instead, the sequence is generated in shuffled chunks, and the words within each chunk are generated in an L2R order. (2) In the image captioning and code generation examples, the model tends to generate most of the words in the L2R order and then insert a few words afterwards at certain locations. We provide more examples in the appendix.

6 Related Work

Decoding for Neural Models

Neural autoregressive modelling has become one of the most successful approaches for generating sequences Sutskever et al. (2011); Mikolov (2012), which has been widely used in a range of applications, such as machine translation Sutskever et al. (2014), dialogue response generation Vinyals and Le (2015), image captioning Karpathy and Fei-Fei (2015) and speech recognition Chorowski et al. (2015). Another stream of work focuses on generating a sequence of tokens in a non-autoregressive fashion Gu et al. (2017); Lee et al. (2018); Oord et al. (2017), in which the discrete tokens are generated in parallel. Semi-autoregressive modelling Stern et al. (2018); Wang et al. (2018a) is a mixture of the two approaches, while largely adhering to left-to-right generation. Our method is radically different from these approaches as we support flexible generation orders, while preserving the dependencies among generated tokens.

Generation Orders

Previous studies on the generation order of sequences mostly resort to a fixed set of generation orders. Wu et al. (2018) empirically show that R2L generation outperforms its L2R counterpart in a few tasks. Ford et al. (2018) devise a two-pass approach that first produces partially-filled sentence "templates" and then fills in the missing tokens. Zhu et al. (2019) also propose to generate tokens by first predicting a text template and then infilling the sentence, in a more general formulation. Mehri and Sigal (2018) propose a middle-out decoder that first predicts a middle word and then simultaneously expands the sequence in both directions. Another line of work models the probability of a sequence as a tree or directed graph Zhang et al. (2015); Dyer et al. (2016); Aharoni and Goldberg (2017); Wang et al. (2018b); Eriguchi et al. (2017). In contrast, Transformer-InDIGO supports fully flexible generation orders that are inferred during decoding.

There are two concurrent works, Welleck et al. (2019) and Stern et al. (2019), which study sequence generation in a non-L2R order. Welleck et al. (2019) propose a tree-like generation algorithm. Unlike this work, their tree-based generation order covers only a subset of all possible generation orders compared to our insertion-based model. Further, Welleck et al. (2019) find that L2R is superior to their learned orders on machine translation tasks, while Transformer-InDIGO with searched adaptive orders achieves better performance. Stern et al. (2019) propose a very similar idea of using insertion operations in Transformer for machine translation. The major difference is that they directly use absolute positions, whereas we utilize relative positions. As a result, their model needs to re-encode the partial sequence at every step, which is computationally more expensive, while our approach does not require re-encoding the entire sentence during generation. In addition, knowledge distillation was necessary to achieve good performance in Stern et al. (2019), while our model is able to match the performance of L2R even without bootstrapping.

7 Conclusion

We have presented a novel approach -- InDIGO -- which supports flexible sequence generation. Our model was trained with either pre-defined orders or searched adaptive orders. In contrast to conventional neural autoregressive models which often generate from left to right, our model can flexibly generate a sequence following an arbitrary order. Experiments show that our method achieved competitive or even better performance compared to the conventional left-to-right generation on four tasks, including machine translation, word order recovery, code generation and image captioning.

For future work, it is worth exploring training InDIGO using a trainable inference model to directly predict the permutation Mena et al. (2018) instead of beam-search. Also, the proposed InDIGO could be extended for post-editing tasks such as automatic post-editing for machine translation (APE) and grammatical error correction (GEC) by introducing additional operations such as ‘‘deletion’’ and ‘‘substitution’’.

References

Appendix

Figure 5: Two examples randomly sampled from the En-Tr and En-Ja machine translation tasks, decoded using InDIGO with the L2R and SAO orders. Words in red and underlined are the tokens inserted at each step. For visual convenience, we reorder each partial sequence into its correct positions at every decoding step.

Additional Examples

We present additional examples for the En-Tr and En-Ja translation tasks in Fig. 5.