1 Introduction
Neural machine translation (NMT) Bahdanau et al. (2014); Vaswani et al. (2017) systems make use of back-translation Sennrich et al. (2016a) to leverage monolingual data during training. Here, an inverse, target-to-source translation model generates synthetic source sentences by translating a target monolingual corpus; the resulting sentence pairs are then used as additional bilingual training data.
Sampling-based synthetic data generation schemes were recently shown to outperform beam search Edunov et al. (2018); Imamura et al. (2018). However, the generated corpora are reported to stray away from the distribution of natural data Edunov et al. (2018). In this work, we investigate why sampling creates better training data by rewriting the loss criterion of an NMT model to include a model-based data generator. By doing so, we obtain a deeper understanding of synthetic data generation methods, identifying their desirable properties and clarifying the practical approximations.
In addition, current state-of-the-art NMT models suffer from probability smearing issues Ott et al. (2018) and are trained using label smoothing Pereyra et al. (2017). These result in low-quality sampled sentences, which degrade the synthetic corpora. We investigate considering only high-quality hypotheses by restricting the search space of the model via (i) ignoring words under a probability threshold during sampling and (ii) N-best list sampling. We validate our claims in experiments on a controlled scenario derived from the WMT 2018 German↔English translation task, which allows us to directly compare the properties of synthetic and natural corpora. Further, we evaluate the proposed sampling techniques on the original WMT German↔English task. The experiments show that our restricted sampling techniques perform comparably or better than other generation methods by imitating human-generated data more closely. In terms of translation quality, however, they do not result in consistent improvements over the typical beam search strategy.
2 Related Work
Sennrich et al. (2016a) introduce the back-translation technique for NMT and show that the quality of the back-translation model, and therefore of the resulting pseudo-corpus, has a positive effect on the quality of the subsequent source-to-target model. These findings are further investigated in Hoang et al. (2018); Burlot and Yvon (2018), where the authors confirm this effect. In our work, we expand upon this concept by arguing that the quality of the resulting model depends not only on the data fitness of the back-translation model but also on how sentences are generated from it.
Cotterell and Kreutzer (2018) frame back-translation as a variational process, with the space of source sentences as the latent space. Their approach argues that the distribution of the synthetic data generator and the true translation probability should match. Thus it is invaluable to clarify and investigate the sampling distributions that current state-of-the-art data generation techniques utilize. A simple property is that a target sentence must be allowed to be aligned to multiple source sentences during the training phase. Several efforts Hoang et al. (2018); Edunov et al. (2018); Imamura et al. (2018) confirm that this is in fact beneficial. Here, we unify these findings by rewriting the optimization criterion of NMT models to depend on a data generator, which we define for beam search, sampling and N-best list sampling approaches.
3 How Back-Translation Fits in NMT
In NMT, one is interested in translating a source sentence x into a target sentence y. For this purpose, the translation process is modelled via a neural model p_θ(y | x) with parameters θ.
The optimal optimization criterion of an NMT model requires access to the true joint distribution Pr(x, y) of source and target sentence pairs. This is approximated by the empirical distribution derived from a bilingual dataset D = {(x_s, y_s)}_{s=1}^{S}. The model parameters θ are trained to minimize the cross-entropy, normalized over the number of target tokens, on this data:

\mathcal{L}(\theta) = -\sum_{x,y} \Pr(x,y)\, \frac{1}{|y|} \log p_\theta(y \mid x)   (1)
\approx -\sum_{x,y} \hat{P}_D(x,y)\, \frac{1}{|y|} \log p_\theta(y \mid x)   (2)
= -\frac{1}{S} \sum_{s=1}^{S} \frac{1}{|y_s|} \log p_\theta(y_s \mid x_s)   (3)
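As a toy illustration, the token-normalized cross-entropy criterion can be sketched as follows; the corpus and the stand-in model below are purely illustrative, not part of the original setup:

```python
import math

def empirical_criterion(log_prob, corpus):
    """Empirical training criterion: cross-entropy of the model on the
    bilingual corpus, normalized by the number of target tokens."""
    total = 0.0
    for x, y in corpus:
        # log_prob(x, y) returns log p_theta(y | x) for the full sentence pair
        total -= log_prob(x, y) / len(y)
    return total / len(corpus)

# Toy stand-in model: every target token gets probability 0.5.
def toy_log_prob(x, y):
    return len(y) * math.log(0.5)

corpus = [(["ein", "Satz"], ["a", "sentence"]),
          (["noch", "einer"], ["another", "one", "here"])]
loss = empirical_criterion(toy_log_prob, corpus)
# every token contributes -log 0.5, so the criterion equals log 2
```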
Target monolingual data can be included by generating a pseudo-parallel source corpus via, e.g., back-translation or sampling-based methods. In this section, we describe such generators as a component of the optimization criterion of NMT models and discuss the approximations made in practice.
3.1 Derivation of the Generation Criterion
Eq. 1 is the starting point of our derivation in Eqs. 4-6. Pr(x, y) can be decomposed into the true language probability Pr(y) and the true translation probability Pr(x | y). These two probabilities highlight the assumptions in the scenario of back-translation: we have access to an empirical target distribution with which Pr(y) is approximated, derived from the monolingual corpus M = {y_m}_{m=1}^{M}. However, one lacks access to Pr(x | y). Generating synthetic data is essentially the approximation of this true probability. It can be described as a sampling distribution^{1} q(x | y; ν), parameterized by the target-to-source model p_ν (^{1}the properties of a probability distribution hold for q):

\mathcal{L}(\theta) = -\sum_{y} \Pr(y) \sum_{x} \Pr(x \mid y)\, \frac{1}{|y|} \log p_\theta(y \mid x)   (4)
\approx -\frac{1}{M} \sum_{m=1}^{M} \sum_{x} \Pr(x \mid y_m)\, \frac{1}{|y_m|} \log p_\theta(y_m \mid x)   (5)
\approx -\frac{1}{M} \sum_{m=1}^{M} \sum_{x} q(x \mid y_m; \nu)\, \frac{1}{|y_m|} \log p_\theta(y_m \mid x)   (6)
This derivation highlights an apparent condition: the generation procedure should result in a distribution of source sentences similar to the true data distribution Pr(x | y). Cotterell and Kreutzer (2018) show a similar derivation hinting towards an iterative wake-sleep variational scheme Hinton et al. (1995), which reaches similar conclusions.
Following this, we formulate two issues with the back-translation approach: (i) the choice of the generation procedure q and (ii) the adequacy of the target-to-source model p_ν. The search method is responsible not only for controlling the output of source sentences but also for offsetting the deficiencies of the target-to-source model.
An implementation for q is, for example, beam search, where q is a deterministic sampling procedure that returns the highest-scoring sentence according to the search criterion:

q(x \mid y; \nu) = \begin{cases} 1 & \text{if } x = \operatorname*{argmax}_{x'} \frac{1}{|x'|} \log p_\nu(x' \mid y) \\ 0 & \text{otherwise} \end{cases}   (7)
Sampling as described by Edunov et al. (2018) would simply be the equality

q(x \mid y; \nu) = p_\nu(x \mid y)   (8)
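The two generators can be contrasted on an explicit toy candidate space; the sentence names and scores below are hypothetical:

```python
import math

def beam_generator(candidates, score):
    """Beam-search-style generator (Eq. 7): a deterministic distribution
    putting all probability mass on the highest-scoring candidate."""
    best = max(candidates, key=score)
    return {x: 1.0 if x == best else 0.0 for x in candidates}

def sampling_generator(candidates, score):
    """Unrestricted sampling generator (Eq. 8): the model distribution
    itself, recovered here from sentence-level log-scores via a softmax."""
    z = sum(math.exp(score(x)) for x in candidates)
    return {x: math.exp(score(x)) / z for x in candidates}

# Hypothetical two-sentence search space with log-probabilities.
scores = {"x1": math.log(0.75), "x2": math.log(0.25)}
cands = list(scores)
beam_dist = beam_generator(cands, scores.get)
sample_dist = sampling_generator(cands, scores.get)
```

Beam search concentrates the generator on a single source sentence, while sampling spreads it according to the model's own probabilities.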
3.2 Approximations
Applications of back-translation and its variants largely follow the initial approach presented in Sennrich et al. (2016a). Each authentic target sentence is aligned to a single synthetic source sentence. This new dataset is then used as if it were bilingual. This section clarifies the effect of such a strategy on the optimization criterion, especially with non-deterministic sampling approaches Edunov et al. (2018); Imamura et al. (2018).
Firstly, the sum over all possible source sentences in Eq. 6 is approximated by a restricted set of k sampled sentences, with k = 1 being a common choice. Yet, the cost of generating the data and training on it scales linearly with k, making higher values unattractive.
Secondly, the pseudo-corpora are static across training, i.e. the synthetic sentences do not change across training epochs, which appears to cancel out the benefits of sampling-based methods. Correcting this behaviour requires on-the-fly sentence generation, which increases the complexity of the implementation and slows down training considerably. Back-translation with beam search is not affected by this approximation, since the target-to-source model always generates the same translation.
The approximations are shown in Eq. 9 with a fixed pseudo-parallel corpus, where each y_m is aligned to k source sentences drawn once from the generator:

\mathcal{L}(\theta) \approx -\frac{1}{M} \sum_{m=1}^{M} \frac{1}{k} \sum_{i=1}^{k} \frac{1}{|y_m|} \log p_\theta(y_m \mid \tilde{x}_{m,i}), \qquad \tilde{x}_{m,i} \sim q(\cdot \mid y_m; \nu)   (9)
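The effect of drawing only k = 1 sample per target-sentence occurrence can be simulated with a toy generator; names and probabilities below are illustrative:

```python
import random
from collections import Counter

def empirical_source_distribution(draw_source, y, repeats, seed=0):
    """With k = 1 sample per occurrence, many repeated occurrences of the
    same target sentence y still yield an empirical distribution over
    source sentences that approaches the generator distribution."""
    rng = random.Random(seed)
    counts = Counter(draw_source(y, rng) for _ in range(repeats))
    return {x: n / repeats for x, n in counts.items()}

# Hypothetical generator q(x | y): two paraphrases with mass 0.8 / 0.2.
def toy_generator(y, rng):
    return "x1" if rng.random() < 0.8 else "x2"

dist = empirical_source_distribution(toy_generator, "y", 20000)
```

With enough draws, the relative frequencies converge to the generator probabilities, which is the intuition behind treating the k = 1 approximation as benign for large monolingual corpora.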
We hypothesize that these conditions become less problematic when large amounts of monolingual data are present: by the law of large numbers, repeated occurrences of the same target sentence y will lead to a representative distribution of source sentences according to q(· | y; ν). In other words, given a high number of representative target samples, Eq. 9 with k = 1 matches Eq. 6. This shifts the focus of the problem to finding an appropriate search method and generator q.

4 Improving Synthetic Data
In this section, we discuss how the known generation methods fail in approximating Pr(x | y) due to modelling issues of the model p_ν, and consider how the generation approach can be adapted to compensate for them.
We base our remaining work on the approximations presented in Section 3.2 and consider k = 1 synthetic sentences per target sentence. The reasoning for this is twofold: (i) it is the most attractive scenario in terms of computational costs and (ii) the approximations lose their influence with large target monolingual corpora.
4.1 Issues in Translation Modelling
With sampling-based approaches, one cares not only about high-quality sentences being assigned a high probability, but also about low-quality sentences being assigned a low probability.
Label smoothing (LS) Pereyra et al. (2017) is a common component of state-of-the-art NMT systems Ott et al. (2018). It teaches the model to (partially) fit a uniform word distribution, causing unrestricted sampling to periodically sample from this uniform component. Even without LS, NMT models tend to smear probability mass onto low-quality hypotheses Ott et al. (2018).
To showcase the extent of this effect, we provide the average cumulative probabilities of the top-scoring words for NMT models (see Section 5.2) trained with and without label smoothing in Figure 1. The distributions are computed on the development corpus. We observe that training a model with label smoothing causes a reallocation of roughly 7% probability mass to all except the top-100 words. This reallocation is not problematic during beam search, since this strategy only looks at the top-scoring candidates. However, when considering sampling for data generation, there is a high likelihood that one will sample from the space of low-probability words, creating non-parallel outputs, see Table 4.
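The statistic behind Figure 1 can be sketched as follows; the two toy distributions stand in for real per-position model outputs:

```python
def average_topk_mass(distributions, k):
    """Average cumulative probability mass of the k most likely words,
    over a list of per-position output distributions."""
    total = 0.0
    for dist in distributions:
        total += sum(sorted(dist, reverse=True)[:k])
    return total / len(distributions)

# Two toy positions: a peaked and a smeared word distribution.
peaked = [0.9, 0.05, 0.03, 0.02]
smeared = [0.4, 0.3, 0.2, 0.1]
mass = average_topk_mass([peaked, smeared], k=2)
```

A low top-k mass indicates that sampling will frequently leave the space of confident predictions.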
4.2 Restricting the Search Space
Changing the search approach is less arduous than changing the model, since it does not involve retraining. Restricting the search space to high-probability sentences avoids the issues highlighted in Section 4.1 and provides a middle ground between unrestricted sampling and beam search.
Edunov et al. (2018) consider top-k sampling to avoid the aforementioned problem; however, there is no guarantee that the candidates are confident predictions. We propose two alternative methods: (i) restricting the sampling outputs to words with a minimum probability τ and (ii) weighted sampling from the N best candidates.
4.2.1 Restricted Sampling
The first approach follows sampling directly from the model at each position i, but only taking words with at least τ probability into account. Afterwards, another softmax activation^{2} is performed over these words only, by masking all the remaining ones with large negative values (^{2}alternatively, an L1 normalization would be sufficient). If no word reaches probability τ, the maximum-probability word is chosen. Note that a large τ gets closer to greedy search (τ → 1) and a lower value gets closer to unrestricted sampling (τ → 0).

q(w \mid w_1^{i-1}, y; \nu) = \begin{cases} \operatorname{softmax}_{V_\tau}\big(\log p_\nu(w \mid w_1^{i-1}, y)\big) & \text{if } w \in V_\tau \\ 0 & \text{otherwise} \end{cases}   (10)

with V_τ being the subset of words of the source vocabulary V with at least τ probability:

V_\tau = \{\, w \in V : p_\nu(w \mid w_1^{i-1}, y) \ge \tau \,\}   (11)

and softmax_{V_τ} being a softmax normalization restricted to the elements in V_τ.
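A minimal sketch of this per-position procedure; the vocabulary and probabilities are illustrative, and the renormalization uses the footnoted L1 alternative to a second softmax:

```python
import random

def restricted_sample(word_probs, tau, rng):
    """Restricted sampling: keep only words whose probability is at least
    tau, renormalize over this subset and sample from it; if no word
    reaches tau, fall back to the maximum-probability word."""
    allowed = {w: p for w, p in word_probs.items() if p >= tau}
    if not allowed:
        return max(word_probs, key=word_probs.get)
    # L1 renormalization over the surviving words.
    z = sum(allowed.values())
    r, acc = rng.random(), 0.0
    for w, p in allowed.items():
        acc += p / z
        if r <= acc:
            return w
    return w  # numerical safety net

rng = random.Random(0)
probs = {"der": 0.6, "die": 0.3, "das": 0.1}
word = restricted_sample(probs, tau=0.5, rng=rng)       # only "der" survives
fallback = restricted_sample(probs, tau=0.95, rng=rng)  # empty set -> argmax
```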
4.2.2 N-best List Sampling
The second approach involves generating a list of the N best candidates, normalizing the output scores with a softmax operation, as in Section 4.2.1, and finally sampling a hypothesis.
The score of a translation x given y is abbreviated by s(x, y) := (1/|x|) log p_ν(x | y).

q(x \mid y; \nu) = \begin{cases} \operatorname{softmax}_{\mathcal{N}(y)}\big(s(x, y)\big) & \text{if } x \in \mathcal{N}(y) \\ 0 & \text{otherwise} \end{cases}   (12)

with N(y) being the set of N best translations found by the target-to-source model and X being the set of all source sentences:

\mathcal{N}(y) = \operatorname*{N\text{-}argmax}_{x \in X} \, s(x, y)   (13)
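A sketch of the procedure, assuming the N-best list is given as (hypothesis, score) pairs such as those returned by beam search; the German hypotheses are made up:

```python
import math
import random

def nbest_sample(nbest, rng):
    """N-best list sampling: softmax over the scores of the N-best
    translations, then draw one hypothesis from that distribution."""
    # nbest: list of (sentence, score) pairs, e.g. length-normalized
    # log-probabilities.
    m = max(s for _, s in nbest)
    weights = [math.exp(s - m) for _, s in nbest]  # numerically stable softmax
    z = sum(weights)
    r, acc = rng.random(), 0.0
    for (sent, _), w in zip(nbest, weights):
        acc += w / z
        if r <= acc:
            return sent
    return nbest[-1][0]  # numerical safety net

rng = random.Random(0)
nbest = [("hypothese eins", -0.2), ("hypothese zwei", -0.9)]
hyp = nbest_sample(nbest, rng)
single = nbest_sample([("einzig", -1.0)], random.Random(1))
```

Unlike unrestricted sampling, every drawable hypothesis here has already survived beam search, so low-probability continuations cannot appear.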
5 Experiments
5.1 Setup
This section makes use of the WMT 2018 German↔English news translation task^{3}, consisting of 5.9M bilingual sentences (^{3}http://www.statmt.org/wmt18/translation-task.html). The German and English monolingual data is subsampled from the deduplicated NewsCrawl 2017 corpus; in total, 4M sentences are used for each language. All data is tokenized, truecased and then preprocessed with joint byte pair encoding Sennrich et al. (2016b)^{4} (^{4}50k merge operations and a vocabulary threshold of 50 are used).
We train Base Transformer Vaswani et al. (2017) models using the Sockeye toolkit Hieber et al. (2017). Optimization is done with Adam Kingma and Ba (2014) with a learning rate of 3e-4, multiplied by 0.7 after every third 20k-update checkpoint without improvement in development set perplexity. In Sections 5.2 and 5.3, word batch sizes of 16k and 4k are used, respectively. Inference uses beam search and applies hypothesis length normalization.
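The plateau-based learning-rate schedule described above can be sketched as follows; the function and variable names are illustrative, and decaying after every single non-improving checkpoint simplifies the three-checkpoint rule used in the actual setup:

```python
def plateau_schedule(lr, best_ppl, checkpoint_ppl, factor=0.7):
    """Sketch of a plateau-based schedule: the learning rate is multiplied
    by `factor` whenever a checkpoint fails to improve the best
    development-set perplexity seen so far."""
    if checkpoint_ppl < best_ppl:
        return lr, checkpoint_ppl  # improvement: keep the learning rate
    return lr * factor, best_ppl   # plateau: decay

lr, best = 3e-4, float("inf")
for ppl in [12.0, 10.0, 10.5, 10.4]:  # hypothetical checkpoint perplexities
    lr, best = plateau_schedule(lr, best, ppl)
# two non-improving checkpoints: lr = 3e-4 * 0.7 ** 2
```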
Case-sensitive Bleu Papineni et al. (2002) is computed using the mteval-v13a.pl script from Moses Koehn et al. (2007). Model selection is performed based on the Bleu performance on newstest2015. All experiments were performed using the workflow manager Sisyphus Peter et al. (2018). We report the statistical significance of our results with MultEval Clark et al. (2011). A low p-value indicates that the performance gap between two systems is likely to hold for a different sample of the random process, e.g. another initialization seed.
Table 1: Bleu [%] of models trained on different synthetic corpora in the controlled scenario. Significance markers denote a low p-value w.r.t. the reference.

|                     | test2015 | test2017 | test2018 |
|---------------------|----------|----------|----------|
| beam search         |          |          | 40.1     |
| sampling            |          |          |          |
| w/o LS              |          |          |          |
| restricted sampling |          |          | 39.8     |
| 50-best sampling    |          |          | 39.8     |
| reference           | 32.6     | 33.5     | 40.0     |
5.2 Controlled Scenario
To compare the performance of each generation method against natural sentences, we shuffle and split the German↔English bilingual data into 1M bilingual sentences and 4.9M monolingual sentences. This gives us a reference translation for each sentence and eliminates domain adaptation effects. The generator model is trained on the smaller corpus until convergence on Bleu, roughly 100k updates. The final source-to-target model is trained from scratch on the concatenated synthetic and natural corpora until convergence on Bleu, roughly 250k updates for all variants.
Table 1 showcases the translation quality of the models trained on different kinds of synthetic corpora. Contrary to the observations of Edunov et al. (2018), unrestricted sampling does not outperform beam search, and once the search space is restricted, all methods perform similarly well.
To further investigate this, we look at other relevant statistics of a generated corpus and the performance of the subsequent models in Table 2. These are the perplexities (Ppl) of the model on the training and development data and the entropy of a target-to-source IBM-1 model Brown et al. (1993) trained with GIZA++ Och and Ney (2003). The training set Ppl varies strongly with each generation method, since each produces hypotheses of differing quality. All methods with a restricted search space have a smaller translation entropy and smaller training Ppl than natural data. This is due to the sentences being less noisy and the translation options being less varied. Unrestricted sampling seems to overshoot the statistics of natural data, attaining higher entropy values.
However, once LS is removed, the best Ppl on the development set is reached and the remaining statistics match the natural data very closely. Nevertheless, the performance in Bleu lags behind the methods that consider high-quality hypotheses, as reported in Table 1. Looking further into the models, we notice that when trained on corpora with more variability, i.e. larger translation entropy, the probability distributions are flatter. We explain the better dev perplexities with unrestricted sampling by the same reason for which label smoothing is helpful: it makes the model less biased towards more common events Ott et al. (2018). This uncertainty is, however, not beneficial for translation performance.
Table 2: Statistics of the generated corpora in the controlled scenario: translation entropy of a target-to-source IBM-1 model (En→De) and perplexity (Ppl) of the subsequent model on the training data and on test2015.

|                     | Entropy (En→De) | Ppl (train) | Ppl (test2015) |
|---------------------|-----------------|-------------|----------------|
| beam search         | 2.60            | 2.74        | 5.77           |
| sampling            | 3.13            | 9.07        | 5.55           |
| w/o LS              | 2.93            | 5.17        | 5.31           |
| restricted sampling | 2.66            | 3.34        | 5.61           |
| 50-best sampling    | 2.62            | 2.84        | 5.70           |
| reference           | 2.91            | 5.18        | 4.50           |
5.3 Real-world Scenario
Previously, we applied different synthetic data generation methods to a controlled scenario for the purpose of investigation. We now extend the experiments to the original WMT 2018 German↔English task and showcase the results in Table 3. In contrast to the experiments of Section 5.2, the distribution of the monolingual data now differs from that of the bilingual data. The models are trained on the bilingual data for 1M updates and then fine-tuned for a further 1M updates on the concatenated bilingual and synthetic corpora.
The restricted sampling techniques perform comparably to or better than the other synthetic data generation methods in all cases. Especially on English→German, unrestricted sampling only produces statistically significant improvements over beam search when LS is removed. Furthermore, restricting the search space via N-best list sampling improves significantly on both test sets.
We observe that on German→English newstest2018 in particular, there is a large drop in performance when using unrestricted sampling. This is slightly alleviated by applying a minimum probability threshold, but a gap remains. This behaviour is investigated in the following section.
5.3.1 Scalability
A benefit of non-deterministic generation methods, in contrast to beam search, is their scalability. Under the assumption of a well-fitting translation model, as argued in Section 3, sampling does appear to be the best option.
We compare different monolingual corpus sizes for the German→English task in Figure 2 on three different test sets. Notably, newstest2018 shows the exact opposite behaviour from the remaining test sets: increasing the amount of data generated via beam search improves the resulting model, whereas sampling improves the system only by a small margin. Unrestricted sampling has a general tendency to perform better with more data, but it saturates on two test sets (newstest2015 and newstest2018). Restricted sampling appears to be the most consistent approach, always outperforming unrestricted sampling and always scaling with a larger set of monolingual data.
These observations are strongly linked to the properties of current state-of-the-art models (see Section 4.1) and to the experimental setup, e.g. the domains of the bilingual, monolingual and test data. Therefore, the strong performance scaling of beam search on newstest2018 might be due to its relatedness to the training data, as indicated by the high Bleu values attained in inference.
Table 3: Bleu [%] results on the original WMT 2018 German↔English task. Significance markers denote a low p-value w.r.t. beam search.

|                     | De→En test2017 | De→En test2018 | En→De test2017 | En→De test2018 |
|---------------------|----------------|----------------|----------------|----------------|
| beam search         | 35.7           | 43.6           | 28.2           | 41.3           |
| sampling            | 35.8           |                | 28.6           | 41.5           |
| w/o LS              | 35.9           |                |                | 41.7           |
| restricted sampling | 35.9           |                |                | 41.6           |
| N-best samp.        | 36.0           | 43.6           |                |                |
5.4 Synthetic Source Examples
To highlight the issues present in unrestricted sampling, we compare the outputs of different generation methods in Table 4. The unrestricted sampling output hypothesizes a second sentence which is not related at all to the input and produces a much longer sequence. The restricted sampling methods and the model trained without label smoothing provide accurate translations of the input sentence. Compared to the beam search hypothesis, they show reasonable variation, which is indeed closer to the human-translated reference.
Table 4: Synthetic source hypotheses produced by the different generation methods ("@@" marks byte-pair-encoding subword boundaries).

| method              | output |
|---------------------|--------|
| source              | it is seen as a long sag@@ a full of surprises . |
| beam search         | es wird als eine lange Geschichte voller Überraschungen angesehen . |
| sampling            | es wird als eine lange S@@ aga voller Überraschungen angesehen . injury , Skepsis , Feuer ) , Duschen verursach@@ ter Körper , Pal@@ ä@@ ste , Gol@@ fen , Flu@@ r und Mu@@ ffen , Diesel@@ Total Bab@@ ylon , der durch@@ s Wasser und Wasser@@ kraft fließt . |
| w/o label smoothing | es wurde als eine lange Geschichte voller Überraschungen gesehen . |
| restricted sampling | es wird als lange S@@ age voller Überraschungen angesehen . |
| 50-best sampling    | es wird als eine lange S@@ age voller Überraschungen gesehen . |
| reference           | er wird als eine lange S@@ aga voller Überraschungen angesehen . |
6 Conclusion
In this work, we link the optimization criterion of an NMT model with a synthetic data generator, defined for beam search as well as sampling-based methods. By doing so, we identify that the search method plays an important role, as it is responsible for offsetting the shortcomings of the generator model. Specifically, label smoothing and probability smearing issues cause sampling-based methods to generate unnatural sentences.
We analyze the performance of our techniques on a closed- and an open-domain variant of the WMT 2018 German↔English news translation task. We provide qualitative and quantitative evidence of the detrimental behaviours and show that these can be mitigated by retraining the generator model without label smoothing or by restricting the search space so that low-probability outputs are not considered. In terms of translation quality, sampling from N-best lists outperforms beam search, albeit at a higher computational cost. Restricted sampling or disabling label smoothing for the generator model are shown to be cost-effective ways of improving upon the unrestricted sampling approach of Edunov et al. (2018).
Acknowledgments
This work has received funding from the European Research Council (ERC) (under the European Union's Horizon 2020 research and innovation programme, grant agreement No 694537, project "SEQCLAS"), the Deutsche Forschungsgemeinschaft (DFG; grant agreement NE 572/81, project "CoreTec"), and eBay Inc. The GPU cluster used for the experiments was partially funded by DFG Grant INST 222/11681. The work reflects only the authors' views and none of the funding agencies is responsible for any use that may be made of the information it contains.
References
 Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Version 4.
 Brown et al. (1993) Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311.
 Burlot and Yvon (2018) Franck Burlot and François Yvon. 2018. Using monolingual data in neural machine translation: a systematic study. In Proceedings of the Third Conference on Machine Translation (WMT 2018), pages 144–155.
 Clark et al. (2011) Jonathan H Clark, Chris Dyer, Alon Lavie, and Noah A Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011), pages 176–181.
 Cotterell and Kreutzer (2018) Ryan Cotterell and Julia Kreutzer. 2018. Explaining and generalizing back-translation through wake-sleep. arXiv preprint arXiv:1806.04402. Version 1.
 Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381. Version 2.
 Hieber et al. (2017) Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. arXiv preprint arXiv:1712.05690. Version 2.

 Hinton et al. (1995) Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. 1995. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158–1161.
 Hoang et al. (2018) Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back-translation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation (WNMT 2018), pages 18–24.
 Imamura et al. (2018) Kenji Imamura, Atsushi Fujita, and Eiichiro Sumita. 2018. Enhancement of encoder and attention using target monolingual corpora in neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation (WNMT 2018), pages 55–63.
 Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Version 9.
 Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris CallisonBurch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), pages 177–180.
 Och and Ney (2003) Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, pages 19–51.
 Ott et al. (2018) Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. arXiv preprint arXiv:1803.00047. Version 4.
 Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL 2002), pages 311–318.
 Pereyra et al. (2017) Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548. Version 1.

 Peter et al. (2018) Jan-Thorsten Peter, Eugen Beck, and Hermann Ney. 2018. Sisyphus, a workflow manager designed for machine translation and automatic speech recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 84–89.
 Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 86–96.
 Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 1715–1725.
 Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS 2017), pages 6000–6010.