Semi-Autoregressive Neural Machine Translation

08/26/2018 ∙ by Chunqi Wang, et al.

Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they suffer from low parallelizability and are thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation --- the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property globally but relaxes it locally, and is thus able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves a 5.58× speedup while maintaining 88% of the translation quality, significantly better than previous non-autoregressive methods. When producing two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).

1 Introduction

Neural networks have been successfully applied to a variety of tasks, including machine translation. The encoder-decoder architecture is the central idea of neural machine translation (NMT): the encoder first encodes a source-side sentence into hidden states, and the decoder then generates the target-side sentence from the hidden states according to an autoregressive model.

Recurrent neural networks (RNNs) are inherently well suited to processing sequential data. Sutskever et al. (2014) and Cho et al. (2014) successfully applied RNNs to machine translation. Bahdanau et al. (2014) introduced the attention mechanism into the encoder-decoder architecture and greatly improved NMT. GNMT Wu et al. (2016) further improved NMT with a bundle of tricks including residual connections and reinforcement learning.

Figure 1: The different levels of autoregressive properties. Lines with arrows indicate dependencies. We mark the longest dependency path with bold red lines. The length of the longest dependency path decreases as we relax the autoregressive property. An extreme case is the non-autoregressive model, where there are no dependencies at all.

The sequential nature of RNNs underlies their wide application in language processing. However, it also limits their parallelizability, so RNNs are slow to execute on modern hardware optimized for parallel execution. As a result, a number of more parallelizable sequence models have been proposed, such as ConvS2S Gehring et al. (2017) and the Transformer Vaswani et al. (2017). These models avoid dependencies between different positions in each layer and thus can be trained much faster than RNN-based models. At inference time, however, these models are still slow because of the autoregressive property.

A recent work Gu et al. (2017) proposed a non-autoregressive NMT model that generates all target-side words in parallel. While the parallelizability is greatly improved, the translation quality drops considerably. In this paper, we propose the semi-autoregressive Transformer (SAT) for faster sequence generation. Unlike Gu et al. (2017), the SAT is semi-autoregressive: it keeps the autoregressive property globally but relaxes it locally. As a result, the SAT can produce multiple successive words in parallel at each time step. Figure 1 illustrates the different levels of autoregressive properties.

Experiments conducted on English-German and Chinese-English translation show that, compared with non-autoregressive methods, the SAT achieves a better balance between translation quality and decoding speed. On WMT'14 English-German translation, the proposed SAT is 5.58× faster than the Transformer while maintaining 88% of the translation quality. Moreover, when producing two words at each time step, the SAT is almost lossless.

It is worth noting that although we apply the SAT to machine translation, it is not designed specifically for translation, unlike Gu et al. (2017) and Lee et al. (2018). The SAT can also be applied to any other sequence generation task, such as summarization and image caption generation.

2 Related Work

Almost all state-of-the-art NMT models are autoregressive Sutskever et al. (2014); Bahdanau et al. (2014); Wu et al. (2016); Gehring et al. (2017); Vaswani et al. (2017), meaning that the model generates words one by one, which is not friendly to modern hardware optimized for parallel execution. A recent work Gu et al. (2017) attempts to accelerate generation by introducing a non-autoregressive model. Based on the Transformer Vaswani et al. (2017), they made a number of modifications. The most significant one is that, instead of feeding the previously generated target words to the decoder to predict the next target word, they feed the source words. They also introduced a set of latent variables to model the fertilities of source words to tackle the multimodality problem in translation. Lee et al. (2018) proposed another non-autoregressive sequence model based on iterative refinement. The model can be viewed as both a latent variable model and a conditional denoising autoencoder. They also proposed a learning algorithm that is a hybrid of lower-bound maximization and reconstruction error minimization.

The work most relevant to our proposed semi-autoregressive model is Kaiser et al. (2018). They first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. What we have in common with their idea is that we do not entirely abandon the autoregressive property, but rather shorten the autoregressive path.

A related study on realistic speech synthesis is the parallel WaveNet Oord et al. (2017). The paper introduced probability density distillation, a new method for training a parallel feed-forward network from a trained WaveNet Van Den Oord et al. (2016) with no significant difference in quality.

Some other works share a somewhat similar idea with ours: character-level NMT Chung et al. (2016); Lee et al. (2016) and chunk-based NMT Zhou et al. (2017); Ishiwatari et al. (2017). Unlike the SAT, these models are not able to produce multiple tokens (characters or words) at each time step. Oda et al. (2017) proposed a bit-level decoder, where a word is represented by a binary code and each bit of the code can be predicted in parallel.

3 The Transformer

Since our proposed model is built upon the Transformer Vaswani et al. (2017), we will briefly introduce the Transformer. The Transformer uses an encoder-decoder architecture. We describe the encoder and decoder below.

3.1 The Encoder

From the source tokens, learned embeddings of dimension d_model are generated, which are then modified by an additive positional encoding. The positional encoding is necessary since the network does not otherwise leverage the order of the sequence through recurrence or convolution. The authors use an additive encoding defined as:

PE(pos, 2i) = sin(pos / 10000^{2i/d_model}),    PE(pos, 2i+1) = cos(pos / 10000^{2i/d_model})

where pos is the position of a word in the sentence and i is the dimension. The authors chose this function because they hypothesized it would allow the model to learn to attend by relative positions easily. The encoded word embeddings are then used as input to the encoder, which consists of N blocks, each containing two layers: (1) a multi-head attention layer, and (2) a position-wise feed-forward layer.
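As a concrete illustration, the encoding table can be computed as in the minimal NumPy sketch below (not the authors' TensorFlow code; the sentence length and d_model = 512 are illustrative):

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal positional encoding as defined above.

    Returns an array of shape (max_len, d_model) where even dimensions
    use sine and odd dimensions use cosine of pos / 10000^(2i/d_model).
    """
    pe = np.zeros((max_len, d_model))
    position = np.arange(max_len)[:, None]                      # (max_len, 1)
    div_term = np.power(10000.0, np.arange(0, d_model, 2) / d_model)
    pe[:, 0::2] = np.sin(position / div_term)
    pe[:, 1::2] = np.cos(position / div_term)
    return pe

# Example: encodings for a 10-token sentence with d_model = 512,
# added to the word embeddings before the first encoder block.
pe = positional_encoding(10, 512)
print(pe.shape)  # (10, 512)
```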

Multi-head attention builds upon scaled dot-product attention, which operates on a query Q, key K and value V:

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V

where d_k is the dimension of the key. The authors scale the dot product by 1/sqrt(d_k) to prevent the inputs to the softmax function from growing too large in magnitude. Multi-head attention computes h different queries, keys and values with linear projections, computes scaled dot-product attention for each query, key and value, concatenates the results, and projects the concatenation with another linear projection:

head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O

in which W_i^Q, W_i^K ∈ R^{d_model × d_k}, W_i^V ∈ R^{d_model × d_v} and W^O ∈ R^{h d_v × d_model}. The attention mechanism in the encoder performs attention over itself (Q = K = V), so it is also called self-attention.
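The following is a minimal NumPy sketch of scaled dot-product attention under the definition above (illustrative only; the shapes and the optional mask argument are assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v).
    mask: optional (n_q, n_k) matrix of 0/1; masked (0) positions are
    set to a large negative value before the softmax, as in the decoder.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask == 1, scores, -1e9)
    return softmax(scores, axis=-1) @ V

# Self-attention in the encoder: Q = K = V = the block's input states.
x = np.random.randn(6, 64)          # 6 positions, d_k = d_v = 64
out = scaled_dot_product_attention(x, x, x)
print(out.shape)                    # (6, 64)
```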

The second component in each encoder block is a position-wise feed-forward layer defined as:

FFN(x) = max(0, x W_1 + b_1) W_2 + b_2

where W_1 ∈ R^{d_model × d_ff}, b_1 ∈ R^{d_ff}, W_2 ∈ R^{d_ff × d_model} and b_2 ∈ R^{d_model}.

For more stable and faster convergence, a residual connection He et al. (2016) is applied around each layer, followed by layer normalization Ba et al. (2016). For regularization, dropout Srivastava et al. (2014) is applied before the residual connections.

Figure 2: The architecture of the Transformer, also of the SAT, where the red dashed boxes point out the different parts of these two models.

3.2 The Decoder

The decoder is similar to the encoder and is also composed of N blocks. In addition to the two layers in each encoder block, the decoder inserts a third layer, which performs multi-head attention over the output of the encoder.

It is worth noting that, unlike in the encoder, the self-attention layer in the decoder must be masked with a causal mask, which is a lower triangular matrix, to ensure that during training the prediction for position i can depend only on the known outputs at positions less than i.

4 The Semi-Autoregressive Transformer

We propose a novel NMT model—the Semi-Autoregressive Transformer (SAT)—that can produce multiple successive words in parallel. As shown in Figure 2, the architecture of the SAT is almost the same as that of the Transformer, except for some modifications in the decoder.

4.1 Group-Level Chain Rule

Standard NMT models usually factorize the joint probability of a word sequence y_1 y_2 ... y_n according to the word-level chain rule

p(y_1, ..., y_n | x) = ∏_{t=1}^{n} p(y_t | y_1, ..., y_{t-1}, x)

so that decoding each word depends on all previous decoding results, which hinders parallelizability. In the SAT, we extend the standard word-level chain rule to a group-level chain rule.

We first divide the word sequence y_1 y_2 ... y_n into consecutive groups

G_1, G_2, ..., G_{⌈n/K⌉} = y_1 ... y_K, y_{K+1} ... y_{2K}, ..., y_{⌊n/K⌋×K+1} ... y_n

where ⌊·⌋ denotes the floor operation and K is the group size, which is also the indicator of parallelizability: the larger K is, the higher the parallelizability. Except for the last group, all groups must contain exactly K words. The group-level chain rule is then

p(y_1, ..., y_n | x) = ∏_{t=1}^{⌈n/K⌉} p(G_t | G_1, ..., G_{t-1}, x)

This group-level chain rule removes the dependencies between consecutive words that lie in the same group. With the group-level chain rule, the model no longer produces words one by one as the Transformer does, but rather group by group. In the next subsections, we show how to implement the model in detail.
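As a toy illustration of the grouping (a hedged sketch; the word list and group size are made up):

```python
def split_into_groups(words, K):
    """Divide a target sequence into consecutive groups of size K.

    All groups contain exactly K words except possibly the last one,
    matching the group-level factorization described above.
    """
    return [words[i:i + K] for i in range(0, len(words), K)]

# Example with n = 7 and K = 3: the joint probability factorizes over
# ceil(7 / 3) = 3 groups instead of 7 individual words.
y = ["the", "cat", "sat", "on", "the", "mat", "."]
print(split_into_groups(y, 3))
# [['the', 'cat', 'sat'], ['on', 'the', 'mat'], ['.']]
```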

4.2 Long-Distance Prediction

In autoregressive models, to predict y_t, the model must be fed with the previous word y_{t-1}. We refer to this as short-distance prediction. In the SAT, however, we feed y_{t-K} to predict y_t, which we refer to as long-distance prediction. At the beginning of decoding, we feed the model with K special start symbols to predict y_1, ..., y_K in parallel. Then y_1, ..., y_K are fed to the model to predict y_{K+1}, ..., y_{2K} in parallel. This process continues until a terminator is generated. Figure 3 illustrates both short-distance and long-distance prediction.

Figure 3: Short-distance prediction (top) and long-distance prediction (bottom).
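A schematic sketch of the resulting decoding loop is shown below. Here predict_group is a hypothetical stand-in for one parallel decoder pass; the real decoder also attends to the source sentence and to all previously generated groups, which is omitted for brevity.

```python
def sat_greedy_decode(predict_group, bos, eos, K, max_len=200):
    """Greedy decoding with long-distance prediction (illustrative only).

    `predict_group(inputs)` stands for one decoder pass: given the K
    words fed at the current step (initially K start symbols), it
    returns the next K target words in parallel.
    """
    output = []
    inputs = [bos] * K                 # K special start symbols
    while len(output) < max_len:
        group = predict_group(inputs)  # K words produced in parallel
        for w in group:
            if w == eos:
                return output
            output.append(w)
        inputs = group                 # feed y_t to predict y_{t+K}
    return output
```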

4.3 Relaxed Causal Mask

In the Transformer decoder, the causal mask is a lower triangular matrix, which strictly prevents earlier decoding steps from peeping at information from later steps. We refer to it as the strict causal mask. In the SAT decoder, however, the strict causal mask is not a good choice. As described in the previous subsection, in long-distance prediction the model predicts y_{t+K} by being fed with y_t. With the strict causal mask, the model can only access y_1, ..., y_t when predicting y_{t+K}, which is not reasonable since the remaining words of y_t's group, up to position ⌈t/K⌉ × K, have already been produced. It is better to allow the model to access y_1, ..., y_{⌈t/K⌉×K} rather than only y_1, ..., y_t when predicting y_{t+K}.

Therefore, we use a coarse-grained lower triangular matrix as the causal mask, which allows peeping at later information within the same group. We refer to it as the relaxed causal mask. Given the target length n and the group size K, the relaxed causal mask M ∈ {0, 1}^{n×n} and its elements are defined as

M[i][j] = 1 if j ≤ ⌈i/K⌉ × K, and 0 otherwise.
For a more intuitive understanding, Figure 4 gives a comparison between strict and relaxed causal mask.

Figure 4: Strict causal mask (left) and relaxed causal mask (right) for a given target length n and group size K. We mark their differences in bold.
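The two masks can be constructed as follows (a minimal NumPy sketch using the 1-indexed definition above; not the authors' code):

```python
import numpy as np

def strict_causal_mask(n):
    """Standard lower-triangular mask: position i may attend to j <= i."""
    return np.tril(np.ones((n, n), dtype=int))

def relaxed_causal_mask(n, K):
    """Coarse-grained mask: position i may attend to every position in
    its own group and in all earlier groups, i.e. j <= ceil(i / K) * K
    (1-indexed), as defined above."""
    i = np.arange(1, n + 1)[:, None]   # row index, 1-indexed
    j = np.arange(1, n + 1)[None, :]   # column index, 1-indexed
    return (j <= np.ceil(i / K) * K).astype(int)

print(strict_causal_mask(6))
print(relaxed_causal_mask(6, 2))       # compare the two, as in Figure 4
```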

4.4 The SAT

Using the group-level chain rule instead of the word-level chain rule, long-distance prediction instead of short-distance prediction, and the relaxed causal mask instead of the strict causal mask, we successfully extend the Transformer to the SAT. The Transformer can be viewed as a special case of the SAT with group size K = 1. The non-autoregressive Transformer (NAT) described in Gu et al. (2017) can also be viewed as a special case of the SAT in which the group size K is not less than the maximum target length.

Table 1 gives the theoretical complexity and acceleration of the model. We list two search strategies separately: beam search and greedy search. Beam search is the most prevalent search strategy. However, it requires the decoder states to be updated once every word is generated, which hinders decoding parallelizability. When decoding with greedy search there is no such concern, so the parallelizability of the SAT can be maximized.

Model                  Complexity            Acceleration
Transformer            n (t_a + t_b)         1
SAT (beam search)      (n/K) t_a + n t_b     K (t_a + t_b) / (t_a + K t_b)
SAT (greedy search)    (n/K) (t_a + t_b)     K
Table 1: Theoretical complexity and acceleration of the SAT, where n is the target length, K is the group size, t_a denotes the time consumed by the decoder network (calculating a distribution over the target vocabulary) at each time step, and t_b denotes the time consumed by search (searching for top scores, expanding nodes and pruning). In practice, t_a is usually much larger than t_b since the network is deep.
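To make the acceleration column concrete, the following sketch evaluates the expressions listed in Table 1 for illustrative timings (the timing numbers are made up; only the formulas come from the table):

```python
def sat_speedup(K, t_a, t_b, beam_search=True):
    """Theoretical acceleration over the Transformer, per Table 1.

    t_a: time per decoding step spent in the decoder network.
    t_b: time per decoding step spent on search (top-k, pruning, ...).
    With beam search only the network cost is amortized over groups;
    with greedy search both costs are, giving a speedup of exactly K.
    """
    transformer = t_a + t_b                     # per generated word
    if beam_search:
        sat = t_a / K + t_b                     # network runs once per group
    else:
        sat = (t_a + t_b) / K                   # everything runs per group
    return transformer / sat

# Illustrative numbers only: if the network dominates (t_a = 10 * t_b),
# K = 6 with beam search already gives a sizeable speedup.
print(sat_speedup(6, t_a=10.0, t_b=1.0, beam_search=True))   # ~4.1
print(sat_speedup(6, t_a=10.0, t_b=1.0, beam_search=False))  # 6.0
```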

5 Experiments

We evaluate the proposed SAT on English-German and Chinese-English translation tasks.

5.1 Experimental Settings

Datasets  For English-German translation, we choose the corpora provided by WMT 2014 Bojar et al. (2014). We use the newstest2013 dataset for development and the newstest2014 dataset for test. For Chinese-English translation, the corpora we use are extracted from LDC (LDC2002E18, LDC2003E14, LDC2004T08 and LDC2005T0). We choose the NIST02 dataset for development and the NIST03, NIST04 and NIST05 datasets for test. For English and German, we tokenize the sentences and segment them into subword symbols using byte-pair encoding (BPE) Sennrich et al. (2015) to restrict the vocabulary size. For Chinese, we segment sentences into characters. For English-German translation, we use a shared source and target vocabulary. Table 2 summarizes the two corpora.

         Sentence Number   Vocab Size (Source)   Vocab Size (Target)
EN-DE    4.5M              36K                   36K
ZH-EN    1.8M              9K                    34K
Table 2: Summary of the two corpora.
Model                     Beam Size   BLEU    Degeneration   Latency   Speedup
Transformer               4           27.11   0%             346ms     1.00×
                          1           26.01   4%             283ms     1.22×
Transformer, N=2          4           24.30   10%            163ms     2.12×
                          1           23.37   14%            113ms     3.06×
NAT Gu et al. (2017)      -           17.69   25%            39ms      15.6×
NAT (rescoring 10)        -           18.66   20%            79ms      7.68×
NAT (rescoring 100)       -           19.17   18%            257ms     2.36×
LT Kaiser et al. (2018)   -           19.80   27%            105ms     -
LT (rescoring 10)         -           21.00   23%            -         -
LT (rescoring 100)        -           22.50   18%            -         -
IRNAT Lee et al. (2018)   -           18.91   22%            -         1.98×
This Work
SAT, K=2                  4           26.90   1%             229ms     1.51×
                          1           26.09   4%             167ms     2.07×
SAT, K=4                  4           25.71   5%             149ms     2.32×
                          1           24.67   9%             91ms      3.80×
SAT, K=6                  4           24.83   8%             116ms     2.98×
                          1           23.93   12%            62ms      5.58×
Table 3: Results on English-German translation. Latency is calculated on a single NVIDIA TITAN Xp without batching. For comparison, we also list results reported by Gu et al. (2017), Kaiser et al. (2018) and Lee et al. (2018). Note that Gu et al. (2017) and Lee et al. (2018) used PyTorch as their platform, while we and Kaiser et al. (2018) used TensorFlow. Even on the same platform, implementations and hardware may not be exactly the same. Therefore, it is not fair to directly compare BLEU and latency. A fairer way is to compare performance degradation and speedup, which are calculated against each work's own baseline.

Baseline  We use the base Transformer model described in Vaswani et al. (2017) as the baseline, where N = 6, d_model = 512, d_ff = 2048 and h = 8. In addition, for comparison, we also prepare a lighter Transformer model, in which only two encoder/decoder blocks are used (N = 2), and the other hyper-parameters remain the same.

Hyperparameters  Unless otherwise specified, all hyperparameters are inherited from the base Transformer model. We try three different settings of the group size K: K = 2, K = 4, and K = 6. For English-German translation, we share the same weight matrix between the source and target embedding layers and the pre-softmax linear layer. For Chinese-English translation, we only share the weights of the target embedding layer and the pre-softmax linear layer.

Search Strategies  We use two search strategies: beam search and greedy search. As mentioned in Section 4.4, these two strategies lead to different parallelizability. When the beam size is set to 1, greedy search is used; otherwise, beam search is used.

Knowledge Distillation  Knowledge distillation Hinton et al. (2015); Kim and Rush (2016) describes a class of methods for training a smaller student network to perform better by learning from a larger teacher network. For NMT, Kim and Rush (2016) proposed a sequence-level knowledge distillation method. In this work, we apply this method to train the SAT using a pre-trained autoregressive Transformer network. The method consists of three steps: (1) train an autoregressive Transformer network (the teacher), (2) run beam search over the training set with this model, and (3) train the SAT (the student) on this newly created corpus.
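Step (2) amounts to re-labeling the training data with the teacher's outputs; a minimal sketch follows (teacher_translate is a hypothetical stand-in for beam search with the pre-trained Transformer):

```python
def build_distillation_corpus(teacher_translate, source_sentences):
    """Sequence-level knowledge distillation, step (2): pair each source
    sentence with the teacher's beam-search output.

    The returned (source, teacher output) pairs are then used to train
    the SAT (step 3) instead of the original references.
    """
    return [(src, teacher_translate(src)) for src in source_sentences]
```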

Initialization  Since the SAT and the Transformer have only slight differences in their architecture (see Figure 2), in order to accelerate convergence, we use a pre-trained Transformer model to initialize some parameters in the SAT. These parameters include all parameters in the encoder, source and target word embeddings, and pre-softmax weights. Other parameters are initialized randomly. In addition to accelerating convergence, we find this method also slightly improves the translation quality.

Training  Following Vaswani et al. (2017), we train the SAT by minimizing cross-entropy with label smoothing. The optimizer is Adam Kingma and Ba (2015) with β1 = 0.9, β2 = 0.98 and ε = 10^{-9}. We vary the learning rate during training using the learning rate function described in Vaswani et al. (2017). All models are trained for 10K steps on 8 NVIDIA TITAN Xp GPUs with each minibatch consisting of about 30k tokens. For evaluation, we average the last five checkpoints, saved at intervals of 1000 training steps.
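For reference, the learning rate function of Vaswani et al. (2017) is lr = d_model^{-0.5} · min(step^{-0.5}, step · warmup^{-1.5}); a small sketch follows (the warmup value of 4000 steps is the one used in that paper and is an assumption here):

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Learning-rate schedule of Vaswani et al. (2017):
    lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# Linear warmup followed by inverse-square-root decay:
print(transformer_lr(100), transformer_lr(4000), transformer_lr(100000))
```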

Evaluation Metrics  We evaluate the translation quality of the model using BLEU score Papineni et al. (2002).

Implementation  We implement the proposed SAT with TensorFlow Abadi et al. (2016). The code and resources needed for reproducing the results are released at https://github.com/chqiwang/sa-nmt.

5.2 Results on English-German

Table 3 summarizes the results of English-German translation. According to the results, the translation quality of the SAT gradually decreases as K increases, which is consistent with intuition. When K = 2, the SAT decodes 1.51× faster than the Transformer and is almost lossless in translation quality (only a 0.21 BLEU drop). With K = 6, the SAT achieves a 2.98× speedup while the performance degeneration is only 8%.

When using greedy search, the acceleration becomes much more significant. When K = 6, the decoding speed of the SAT reaches about 5.58× that of the Transformer while maintaining 88% of the translation quality. Compared with Gu et al. (2017), Kaiser et al. (2018) and Lee et al. (2018), the SAT achieves a better balance between translation quality and decoding speed. Compared to the lighter Transformer (N = 2), the SAT with K = 4 achieves a higher speedup with significantly better translation quality.

In a real production environment, sentences are often decoded not one by one but batch by batch. To investigate whether the SAT can accelerate decoding when decoding in batches, we test the decoding latency under different batch size settings. As shown in Table 4, the SAT significantly accelerates decoding even with a large batch size.

Model         b=1     b=16   b=32   b=64
Transformer   346ms   58ms   53ms   56ms
SAT, K=2      229ms   38ms   32ms   32ms
SAT, K=4      149ms   24ms   21ms   20ms
SAT, K=6      116ms   20ms   17ms   16ms
Table 4: Time needed to decode one sentence under various batch size settings (b denotes the batch size). A single NVIDIA TITAN Xp is used in this test.

It is also useful to know whether the SAT can still accelerate decoding on a CPU, which does not support parallel execution as well as a GPU. Results in Table 5 show that even on a CPU the SAT still accelerates decoding significantly.

Model     K=1      K=2     K=4     K=6
Latency   1384ms   607ms   502ms   372ms
Table 5: Time needed to decode one sentence on a CPU. Sentences are decoded one by one without batching. K=1 denotes the Transformer.
Model              Beam Size   BLEU (NIST03 / NIST04 / NIST05 / Averaged)   Degeneration   Latency   Speedup
Transformer        4           40.74 / 40.54 / 40.48 / 40.59                0%             410ms     1.00×
                   1           39.56 / 39.72 / 39.61 / 39.63                2%             302ms     1.36×
Transformer, N=2   4           37.30 / 38.55 / 36.87 / 37.57                7%             169ms     2.43×
                   1           36.26 / 37.19 / 35.50 / 36.32                11%            117ms     3.50×
This Work
SAT, K=2           4           39.13 / 40.04 / 39.55 / 39.57                3%             243ms     1.69×
                   1           37.94 / 38.73 / 38.43 / 38.37                5%             176ms     2.33×
SAT, K=4           4           37.08 / 38.06 / 37.12 / 37.42                8%             152ms     2.70×
                   1           35.77 / 36.43 / 35.04 / 35.75                12%            94ms      4.36×
SAT, K=6           4           34.61 / 36.29 / 35.06 / 35.32                13%            129ms     3.18×
                   1           33.44 / 34.54 / 33.28 / 33.75                17%            64ms      6.41×
Table 6: Results on Chinese-English translation. Latency is calculated on NIST02.

5.3 Results on Chinese-English

Table 6 summarizes the results on Chinese-English translation. With K = 2, the SAT decodes 1.69× faster while maintaining 97% of the translation quality. In an extreme setting where K = 6 and the beam size is 1, the SAT achieves a 6.41× speedup while maintaining 83% of the translation quality.

5.4 Analysis

Effects of Knowledge Distillation  As shown in Figure 5, sequence-level knowledge distillation is very effective for training the SAT. For larger K, the effect is more significant. This phenomenon echoes the observations of Gu et al. (2017), Oord et al. (2017) and Lee et al. (2018). In addition, we tried word-level knowledge distillation Kim and Rush (2016) but observed only a slight improvement.

Figure 5: Performance of the SAT with and without sequence-level knowledge distillation.

Position-Wise Cross-Entropy  In Figure 6, we plot the position-wise cross-entropy for various models. To compare with the baseline model, the results in the figure are from models trained on the original corpora, i.e., without knowledge distillation. As shown in the figure, the position-wise cross-entropy has an apparent periodicity with a period of K. For positions in the same group, the position-wise cross-entropy increases monotonically, which indicates that long-distance dependencies are always more difficult to model than short ones. This suggests that the key to further improving the SAT is to improve its ability to model long-distance dependencies.

Figure 6: Position-wise cross-entropy for various models on English-German translation.
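One way such a curve can be computed is sketched below (the paper does not give its evaluation script; the array layouts are assumptions):

```python
import numpy as np

def position_wise_cross_entropy(probs, targets, max_pos):
    """Average negative log-probability of the reference word at each
    target position, over a set of sentences.

    probs: list of (len_i, vocab) arrays of model output distributions.
    targets: list of length-len_i reference word-id sequences.
    """
    totals, counts = np.zeros(max_pos), np.zeros(max_pos)
    for p, t in zip(probs, targets):
        for pos, (dist, word) in enumerate(zip(p, t)):
            if pos >= max_pos:
                break
            totals[pos] += -np.log(dist[word] + 1e-12)
            counts[pos] += 1
    return totals / np.maximum(counts, 1)
```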
Table 7: Three sample Chinese-English translations by the SAT and the Transformer. We mark repeated words or phrases by red font and underline.

Case Study  Table 7 lists three sample Chinese-English translations from the development set. As shown in the table, even when producing K = 6 words at each time step, the model can still generate fluent sentences. As reported by Gu et al. (2017), instances of repeated words or phrases are most prevalent in their non-autoregressive model. This is also the case in the SAT. This suggests that we may be able to improve the translation quality of the SAT by reducing the similarity of the output distributions at adjacent positions.

6 Conclusion

In this work, we have introduced a novel model for faster sequence generation based on the Transformer Vaswani et al. (2017), which we refer to as the semi-autoregressive Transformer (SAT). Combining the original Transformer with the group-level chain rule, long-distance prediction and the relaxed causal mask, the SAT can produce multiple consecutive words at each time step, thus speeding up decoding significantly. We conducted experiments on English-German and Chinese-English translation. Compared with previously proposed non-autoregressive models Gu et al. (2017); Lee et al. (2018); Kaiser et al. (2018), the SAT achieves a better balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves a 5.58× speedup while maintaining 88% of the translation quality, significantly better than previous methods. When producing two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).

In the future, we plan to investigate better methods for training the SAT to further shrink the performance gap between the SAT and the Transformer. Specifically, we believe that the following two directions are worth studying. First, using objective functions beyond maximum likelihood to improve the modeling of long-distance dependencies. Second, exploring new methods for knowledge distillation. We also plan to extend the SAT to allow different group sizes at different positions, instead of using a fixed value.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments. We also thank Wenfu Wang and Hao Wang for helpful discussion, and Linhao Dong and Jinghao Niu for their help in paper writing.

References