Neural machine translation (NMT) (sutskever2014sequence; bahdanau2014neural) has achieved impressive performance in recent years, but the autoregressive decoding process limits the translation speed and restricts low-latency applications. To mitigate this issue, many non-autoregressive (NAR) translation methods have been proposed, including latent space models (gu2017non; ma2019flowseq; shu2019latent), iterative refinement methods (lee2018deterministic; ghazvininejad2019mask)
, and alternative loss functions (libovicky2018end; wang2019non; wei2019imitation; li2019hint; shao2019minimizing). The decoding speedup for NAR models is typically 2–15× depending on the specific setup (e.g., the number of length candidates, the number of latent samples, etc.), and NAR models can be tuned to achieve different trade-offs between time complexity and decoding quality (gu2017non; wei2019imitation; ghazvininejad2019mask; ma2019flowseq).
Although these methods differ in various aspects, all of them are based on transformer modules (vaswani2017attention) and depend on a well-trained AR model whose output translations serve as targets for NAR model training. This training setup is well-suited to leveraging external monolingual data, since the target side of the NAR training corpus is always generated by an AR model. Techniques like backtranslation (sennrich2015improving) are known to improve MT performance using monolingual data alone. However, to the best of our knowledge, monolingual data augmentation for NAR-MT has not been reported in the literature.
In typical NAR-MT model training, an AR teacher provides a consistent supervision signal for the NAR model; the source text that was used to train the teacher is decoded by the teacher to create synthetic target text. In this work, we use a large amount of source text from monolingual corpora to generate additional teacher outputs for NAR-MT training.
We use a transformer model with minor structural changes to perform NAR generation in a non-iterative way, which establishes stronger baselines than most of the previous methods. We demonstrate that generating additional training data from monolingual corpora consistently improves the translation quality of our baseline NAR system on the WMT14 En-De and WMT16 En-Ro translation tasks. Furthermore, our experiments show that NAR models trained with increasing amounts of extra monolingual data are less prone to overfitting and generalize better on longer sentences.
In addition, we obtain Ro→En and En→De results that are state-of-the-art for non-iterative NAR-MT, simply by using more monolingual data.
2.1 Basic Approach
Most of the previous methods treat the NAR modeling objective as a product of independent token probabilities (gu2017non), but we adopt a different point of view: we simply treat the NAR model as a function approximator of an existing AR model.
Given an AR model and a source sentence, the translation process that produces the greedy output (by 'greedy', we mean decoding with a beam width of 1) of the AR model is a complex but deterministic function. Since neural networks can be near-perfect non-linear function approximators (liang2016deep), we can expect an NAR model to learn the AR translation process quite well, as long as the model has enough capacity. In particular, we first obtain the greedy output of a trained AR model and use the resulting paired data to train the NAR model. Other papers on NAR-MT (gu2017non; lee2018deterministic; ghazvininejad2019mask) have used AR teacher models to generate training data; this is a form of sequence-level knowledge distillation (kim2016sequence).
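This function-approximation view can be sketched as follows: because greedy (beam width 1) decoding is deterministic, the teacher defines a fixed source-to-target mapping for the NAR student to fit. The `translate_greedy` function here is a hypothetical toy stand-in for a trained AR decoder.

```python
def translate_greedy(source_tokens):
    # Toy deterministic "teacher"; a real one would run beam-1 decoding
    # with a trained autoregressive transformer.
    return [tok.upper() for tok in source_tokens]

def build_distilled_corpus(source_corpus, teacher=translate_greedy):
    """Pair each source sentence with the teacher's deterministic output
    (sequence-level knowledge distillation)."""
    return [(src, teacher(src)) for src in source_corpus]
```

Since the teacher is deterministic, calling `build_distilled_corpus` twice on the same source text yields identical training pairs, which is what makes the NAR target a well-defined function to approximate.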
[Table 1: corpus statistics — Parallel | En Mono. | Non-En Mono.]
2.2 Model Structure
Throughout this paper, we focus on non-iterative NAR methods. We use standard transformer structures with a few small changes for NAR-MT, which we describe below.
For the target side input, most of the previous work simply copied the source side as the decoder's input. We propose a soft copying method that uses a Gaussian kernel to smooth the encoded source sentence embeddings $e_1, \ldots, e_{L_s}$. Suppose the source and target lengths are $L_s$ and $L_t$ respectively. Then the $i$-th input token for the decoder is $\sum_{j=1}^{L_s} w_{ij}\, e_j$, where $w_{ij} \propto \mathcal{N}(j;\, i L_s / L_t,\, \sigma^2)$ is the Gaussian density evaluated at $j$ with mean $i L_s / L_t$ and variance $\sigma^2$. ($\sigma$ is a learned parameter.)
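A minimal sketch of this soft copy with NumPy (function names and the row-wise normalization of the kernel weights are our assumptions):

```python
import numpy as np

def soft_copy_weights(src_len, tgt_len, sigma=1.0):
    """Gaussian interpolation weights: row i holds the (normalized) weights
    over source positions for target position i, centered at i*src_len/tgt_len."""
    j = np.arange(src_len)                                   # source positions
    mu = np.arange(tgt_len)[:, None] * src_len / tgt_len     # per-target means
    w = np.exp(-((j[None, :] - mu) ** 2) / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def soft_copy(encoder_outputs, tgt_len, sigma=1.0):
    # encoder_outputs: (src_len, d_model) -> decoder input: (tgt_len, d_model)
    w = soft_copy_weights(encoder_outputs.shape[0], tgt_len, sigma)
    return w @ encoder_outputs
```

Unlike hard copying, this lets the decoder input length differ from the source length while still carrying position-aligned source information.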
We modify the attention mask so that it does not mask out the future tokens, and every token is dependent on both its preceding and succeeding tokens in every layer.
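The difference between the two masks can be sketched in a few lines (1 = may attend, 0 = blocked; the helper name is ours):

```python
import numpy as np

def decoder_attention_mask(length, autoregressive):
    """An AR decoder uses a causal lower-triangular mask; the NAR decoder
    leaves all positions visible, so every token attends to both preceding
    and succeeding tokens in every layer."""
    if autoregressive:
        return np.tril(np.ones((length, length), dtype=int))
    return np.ones((length, length), dtype=int)
```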
gu2017non, lee2018deterministic, li2019hint and wang2019non use an additional positional self-attention module in each of the decoder layers, but we do not apply such a layer. It did not provide a clear performance improvement in our experiments, and we wanted to reduce the number of deviations from the base transformer structure. Instead, we add positional embeddings at each decoder layer.
2.3 Length Prediction
We use a simple method to select the target length for NAR generation at test time (wang2019non; li2019hint): we set the target length to $L_t = L_s + C$, where $C$ is a constant term estimated from the parallel data and $L_s$ is the length of the source sentence. We then create a list of candidate target lengths ranging from $L_t - B$ to $L_t + B$, where $B$ is the half-width of the interval. For example, with a half-width of $B = 2$, we would generate NAR translations of lengths $L_t - 2, \ldots, L_t + 2$, for a total of 5 candidates. These translation candidates are then ranked by the AR teacher, which selects the one with the highest probability. This is referred to as length-parallel decoding in wei2019imitation.
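The candidate enumeration step can be sketched as follows (a minimal illustration; `c` is the length offset estimated from the parallel data):

```python
def candidate_lengths(src_len, c, half_width):
    """Enumerate candidate target lengths around src_len + c for
    length-parallel decoding."""
    center = src_len + c
    return list(range(center - half_width, center + half_width + 1))
```

Each candidate length would then be decoded in parallel by the NAR model, and the AR teacher scores the resulting translations to pick the most probable one.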
3 NAR-MT with Monolingual Data
Augmenting the NAR training corpus with monolingual data provides some potential benefits. Firstly, we allow more data to be translated by the AR teacher, so the NAR model can see more of the AR translation outputs than in the original training data, which helps the NAR model generalize better. Secondly, there is much more monolingual data than parallel data, especially for low-resource languages.
Incorporating monolingual data for NAR-MT is straightforward in our setup. Given an AR model that we want to approximate, we obtain the source-side monolingual text and use the AR model to generate the targets that we can train our NAR model on.
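The augmentation recipe amounts to concatenating the parallel corpus's source side with the monolingual source text before distillation (a sketch; `ar_translate` stands in for the teacher's greedy decoder):

```python
def build_nar_training_set(parallel_sources, mono_sources, ar_translate):
    """Distill AR targets for both the original source sentences and the
    extra monolingual source sentences, then train the NAR model on the union."""
    sources = list(parallel_sources) + list(mono_sources)
    return [(src, ar_translate(src)) for src in sources]
```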
Table 2: Test-set BLEU scores for non-iterative NAR-MT methods on WMT16 En-Ro and WMT14 En-De.

| Model | En→Ro | Ro→En | En→De | De→En |
|---|---|---|---|---|
| NAT-FT (+NPD s=10) (gu2017non) | 29.02 | 30.76 | 18.66 | 22.41 |
| NAT-FT (+NPD s=100) | 29.79 | 31.44 | 19.17 | 23.20 |
| NAT-IR (1 refinement iteration) (lee2018deterministic) | 24.45 | 25.73 | 13.91 | 16.77 |
| FlowSeq (NPD n=30) (ma2019flowseq) | 32.20 | 32.84 | 25.31 | 30.68 |
| Our AR Transformer (beam 1) | 33.56 | 33.68 | 28.84 | 32.77 |
| Our AR Transformer (beam 4) | 34.50 | 34.01 | 29.65 | 33.65 |
| Our NAR baseline (B=5) | 31.21 | 32.06 | 23.57 | 29.01 |
| + monolingual data | 31.91 | 33.46 | 25.53 | 29.96 |
| + monolingual data and de-dup | 31.96 | 33.57 | 25.73 | 30.18 |
4 Experimental Setup
We evaluate NAR-MT training on both the WMT16 En-Ro (around 610k sentence pairs) and the WMT14 En-De (around 4.5M sentence pairs) parallel corpora, along with the associated WMT monolingual corpora for each language. For the parallel data, we use the processed data from lee2018deterministic to be consistent with previous publications. The WMT16 En-Ro task uses newsdev-2016 and newstest-2016 as development and test sets, and the WMT14 En-De task uses newstest-2013 and newstest-2014 as development and test sets. We report all results on test sets. We used the Romanian portion of the News Crawl 2015 corpus and the English portion of the Europarl v7/v8 corpus (http://www.statmt.org/wmt16/translation-task.html) as monolingual text for our En-Ro experiments; both are about 4 times larger than the original paired data. We used the News Crawl 2007/2008 corpora for German and English monolingual text in our En-De experiments, and downsampled them to million sentences per language. The data statistics are summarized in Table 1. The monolingual data are processed following lee2018deterministic: the text is tokenized and segmented into subword units (sennrich2015neural). The vocabulary is shared between the source and target languages and has k units. We use BLEU to evaluate translation quality (we report tokenized BLEU scores in line with prior work (lee2018deterministic; ma2019flowseq), which are case-insensitive for WMT16 En-Ro and case-sensitive for WMT14 En-De, per the data provided by lee2018deterministic).
We use the settings for the base transformer configuration in vaswani2017attention for all the models: 6 layers per stack, 8 attention heads per layer, 512 model dimensions and 2048 hidden dimensions. The AR and NAR model have the same encoder-decoder structure, except for the decoder attention mask and the decoding input for the NAR model as described in Sec. 2.2.
Training and Inference
We initialize the NAR embedding layer and encoder parameters with the AR model’s. The NAR model is trained with the AR model’s greedy outputs as targets. We use the Adam optimizer, with batches of size 64k tokens for one gradient update, and the learning rate schedule is the same as the one in vaswani2017attention
, where we use 4,000 warm-up steps and the maximum learning rate is around 0.0014. We stop training when there is no further improvement over the last 5 epochs; training finishes within 30 epochs for AR models and 50 epochs for NAR models, except for the En-De experiments with monolingual data, where we train for 35 epochs to roughly match the number of parameter update steps used without extra monolingual data (k steps). We average the last 5 checkpoints to obtain the final model. We train the NAR model with cross-entropy loss and label smoothing (). During inference, we use length-parallel decoding with $B = 5$ and evaluate BLEU scores against the reference sentences. All models are implemented with MXNet and GluonNLP (gluoncvnlp2019). We used 4 NVIDIA V100 GPUs for training, which takes about a day for an AR model and up to a week for an NAR model depending on the data size; testing is performed on a single GPU.
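The schedule referenced above can be sketched as the inverse-square-root rule of vaswani2017attention; the `scale` factor here is our assumption, chosen so the peak rate at the end of warm-up comes out near the ~0.0014 stated in the text.

```python
def transformer_lr(step, d_model=512, warmup=4000, scale=2.0):
    """Inverse-square-root learning-rate schedule: linear warm-up for
    `warmup` steps, then decay proportional to step**-0.5.
    NOTE: `scale` is a hypothetical multiplier, not taken from the paper."""
    step = max(step, 1)
    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```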
5 Results and Analysis
We present our BLEU scores alongside the scores of other non-iterative methods in Table 2
. Our baseline results surpass many of the previous results, which we attribute to the way we initialize the decoding process. Instead of directly copying the source embeddings to the decoder input, we use an interpolated version of the encoder outputs, which allows the encoder to transform the source embeddings into a more usable form. A similar technique is adopted in wei2019imitation, but our model structure and optimization are much simpler, as we do not have any imitation module for detailed teacher guidance.
Our results confirm that using monolingual data improves the NAR model's performance. By incorporating all of the monolingual data in the En-Ro NAR-MT task, we gain 0.70 BLEU points in the En→Ro direction and 1.40 in the Ro→En direction. Similarly, we see significant gains in the En-De NAR-MT task, with an increase of 1.96 BLEU points in the En→De direction and 0.95 in the De→En direction.
By removing duplicated output tokens as a simple postprocessing step (following lee2018deterministic), we achieve 33.57 BLEU in the WMT16 Ro→En direction and 25.73 BLEU in the WMT14 En→De direction, which are state-of-the-art among non-iterative NAR-MT results. In addition, our work shrinks the gap between the AR teacher and the NAR model to just 0.11 BLEU points in the Ro→En direction.
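The de-duplication step is simply collapsing consecutive repeated tokens in the NAR output, e.g.:

```python
def remove_repeats(tokens):
    """Collapse consecutive duplicate tokens, a common NAR postprocessing
    step (following lee2018deterministic)."""
    out = []
    for tok in tokens:
        if not out or out[-1] != tok:
            out.append(tok)
    return out
```

This targets the repeated-token artifacts that NAR models produce because output positions are decoded independently.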
Losses in Training and Evaluation
To further investigate how much the monolingual data contributes to the BLEU improvements, we train En-Ro NAR models with 0%, 25%, 50%, and 100% of the monolingual corpora and plot the cross-entropy loss on the training and test data for the converged models. As Figure 1 shows, when no monolingual data is used, the training loss converges to a lower point than the loss on the test set; this is not the case for the AR model, whose validation and test losses are usually lower than its training loss. This indicates that the NAR model overfits to the training data, which hinders its generalization ability. However, as more monolingual data is added to the training recipe, the overfitting problem is reduced and the gap between the evaluation and training losses shrinks.
Effect of Length-Parallel Decoding
To test how the NAR model's performance and the monolingual gains are affected by the number of decoding length candidates, we vary the half-width $B$ (Sec. 2.3) across a range of values and test the NAR models trained with 0%, 50%, and 100% of the monolingual data on the En-Ro task (Table 3). The table shows that having multiple length candidates can increase the BLEU score significantly and can even be better than using the gold target length, but having too many candidates hurts performance and slows down decoding (in our case, the optimal $B$ is 5). Nonetheless, for every value of $B$, the BLEU score consistently increases when monolingual data is used, and more data brings greater gains.
BLEU under Different Sentence Lengths
In Table 4, we present the BLEU scores on WMT16 Ro→En test sentences grouped by source sentence length. The baseline NAR model's performance drops quickly as sentence length increases, whereas the NAR model trained with monolingual data degrades less on longer sentences, demonstrating that external monolingual data improves the NAR model's generalization ability.
We found that monolingual data augmentation reduces overfitting and improves the translation quality of NAR-MT models. We note that the monolingual corpora are derived from domains which may be different from those of the parallel training data or evaluation sets, and a mismatch can affect NAR translation performance. Other work in NMT has examined this issue in the context of backtranslation (e.g., edunov2018understanding), and we expect the conclusions to be similar in the NAR-MT case.
There are several open questions to investigate: Are the benefits of monolingual data orthogonal to other techniques like iterative refinement? Can the NAR model perfectly recover the AR model’s performance with much larger monolingual datasets? Are the observed improvements language-dependent? We will consider these research directions in future work.