Back-translation (Sennrich et al., 2016) makes it possible to naturally exploit monolingual corpora in Neural Machine Translation (NMT) by using a reverse model to generate a synthetic parallel corpus. Despite its simplicity, this technique has become a key component in state-of-the-art NMT systems. For instance, the majority of WMT19 submissions, including the best performing systems, made extensive use of it (Barrault et al., 2019).
While the synthetic parallel corpus generated through back-translation is typically combined with real parallel corpora, iterative or online variants of this technique also play a central role in unsupervised machine translation (Artetxe et al., 2018a,b, 2019; Lample et al., 2018a,b; Marie and Fujita, 2018; Conneau and Lample, 2019; Song et al., 2019; Liu et al., 2020). In iterative back-translation, both NMT models are jointly trained on synthetic parallel data generated on-the-fly by the reverse model, alternating between the two translation directions. While this enables fully unsupervised training without any parallel corpora, some initialization mechanism is still required so that the models can start producing sound translations and provide a meaningful training signal to each other. For that purpose, state-of-the-art approaches rely on either a separately trained unsupervised Statistical Machine Translation (SMT) system, which is used for warmup during the initial back-translation iterations (Marie and Fujita, 2018; Artetxe et al., 2019), or large-scale pre-training through masked denoising, which is used to initialize the weights of the underlying encoder-decoder (Conneau and Lample, 2019; Song et al., 2019; Liu et al., 2020).
In this paper, we aim to understand the role that the initialization mechanism plays in iterative back-translation. For that purpose, we mimic the experimental settings of Artetxe et al. (2019), and measure the effect of using different initial systems for warmup: the unsupervised SMT system proposed by Artetxe et al. (2019) themselves, supervised NMT and SMT systems trained on both small and large parallel corpora, and a commercial Rule-Based Machine Translation (RBMT) system. Despite the fundamentally different nature of these systems, our analysis reveals that iterative back-translation has a strong tendency to converge to a similar solution. Given the relatively small impact of the initial system, we conclude that future research on unsupervised machine translation should focus more on improving the iterative back-translation mechanism itself.
2 Iterative back-translation
We next describe the iterative back-translation implementation used in our experiments, which was proposed by Artetxe et al. (2019). Note, however, that the underlying principles of iterative back-translation are very general, so our conclusions should be valid beyond this particular implementation.
The method in question trains two NMT systems in opposite directions following an iterative process where, at every iteration, each model is updated by performing a single pass over a set of synthetic parallel sentences generated through back-translation. Once the warmup phase is over, the synthetic parallel corpus is entirely generated by the reverse NMT model. However, so as to ensure that the NMT models produce sound translations and provide a meaningful training signal to each other, the first warmup iterations progressively transition from a separate initial system to the reverse NMT model itself: early iterations take a progressively decreasing number of back-translated sentences from the reverse initial system, and the remaining sentences are generated by the reverse NMT model. In the latter case, half of the translations use random sampling (Edunov et al., 2018), which produces more varied translations, whereas the other half are generated through greedy decoding, which produces more fluent and predictable translations. Following Artetxe et al. (2019), we keep their warmup schedule and perform a total of 60 such iterations. Both NMT models use the big transformer implementation from Fairseq (https://github.com/pytorch/fairseq), training with a total batch size of 20,000 tokens and the exact same hyperparameters as Ott et al. (2018). At test time, we use beam search decoding with a beam size of 5.
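The warmup schedule described above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the `make_batch` function, the `translate`/`sample`/`greedy` model interface, and the linear decay of the initial system's share are all illustrative assumptions.

```python
import random

def make_batch(monolingual, model, initial_system, iteration, warmup_iters, batch_size):
    """Build a synthetic parallel batch for one back-translation iteration.

    During the first `warmup_iters` iterations, a (here: linearly)
    decreasing fraction of the batch is back-translated by the separate
    initial system; the rest comes from the reverse NMT model, half via
    random sampling and half via greedy decoding.
    """
    src = random.sample(monolingual, batch_size)
    # Fraction still handled by the initial system (1.0 -> 0.0 over warmup).
    frac_initial = max(0.0, 1.0 - iteration / warmup_iters)
    n_initial = int(batch_size * frac_initial)
    pairs = [(initial_system.translate(s), s) for s in src[:n_initial]]
    rest = src[n_initial:]
    half = len(rest) // 2
    # Random sampling yields more varied translations...
    pairs += [(model.sample(s), s) for s in rest[:half]]
    # ...while greedy decoding yields more fluent, predictable ones.
    pairs += [(model.greedy(s), s) for s in rest[half:]]
    return pairs  # (synthetic source, real target) pairs for training
```

Each model is then updated with a single pass over such batches, and the roles of the two translation directions alternate at every iteration.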
3 Experimental settings
So as to better understand the role of initialization in iterative back-translation, we train different English-German models using the following initial systems for warmup:
RBMT: We use a commercial Rule-Based Machine Translation system, the comprendium translator.
Supervised NMT: We use the Fairseq implementation of the big transformer model using the same hyperparameters as Ott et al. (2018). We train two separate models: one using the concatenation of all parallel corpora from WMT 2014, and another one using a random subset of 100,000 sentences. In both cases, we use early stopping according to the cross-entropy in newstest2013.
Supervised SMT: We use the Moses (Koehn et al., 2007) implementation of phrase-based SMT (Koehn et al., 2003) with default hyperparameters, using FastAlign (Dyer et al., 2013) for word alignment. We train two separate models using the same parallel corpus splits as for NMT. In both cases, we use a 5-gram language model trained with KenLM (Heafield et al., 2013) on News Crawl 2007-2013, and apply MERT tuning (Och, 2003) over newstest2013.
Unsupervised: We use the unsupervised SMT system proposed by Artetxe et al. (2019), which induces an initial phrase-table using cross-lingual word embedding mappings, combines it with an n-gram language model, and further improves the resulting model through unsupervised tuning and joint refinement.
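The phrase-table induction step of that unsupervised system can be sketched as follows. This is an illustrative simplification (the actual system combines several scores and applies further refinement); the `induce_phrase_table` helper and its temperature value are assumptions, with translation scores obtained through a softmax over cosine similarities in the mapped embedding space.

```python
import math

def cosine(u, v):
    """Cosine similarity between two (non-zero) dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def induce_phrase_table(src_emb, tgt_emb, temperature=0.1):
    """Score each source phrase against all target phrases through a
    softmax over cosine similarities in the shared embedding space."""
    table = {}
    for s, sv in src_emb.items():
        sims = {t: cosine(sv, tv) for t, tv in tgt_emb.items()}
        z = sum(math.exp(sim / temperature) for sim in sims.values())
        table[s] = {t: math.exp(sim / temperature) / z for t, sim in sims.items()}
    return table
```

With toy 2-dimensional embeddings where the source phrase lies close to its true translation, the induced distribution concentrates most of its mass on that translation.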
For each initial system, we train a separate NMT model through iterative back-translation as described in Section 2. For that purpose, we use the News Crawl 2007-2013 monolingual corpus as distributed in the WMT 2014 shared task. Note that the final systems do not see any parallel data during training, even if some initial systems are trained on parallel data. Thanks to this, we can measure the impact of the initial system in a controlled environment, which is the goal of this paper. In practical settings, however, better results could likely be obtained by combining real and synthetic parallel corpora. Preprocessing is done using standard Moses tools, and involves punctuation normalization, tokenization with aggressive hyphen splitting, and truecasing.
We evaluate on newstest2014 using tokenized BLEU, and compare the performance of the different final systems after iterative back-translation with that of the initial systems used in their warmup. Note that all systems use the exact same tokenization, so the reported BLEU scores are comparable among them. However, this only provides a measure of the quality of the different systems, not of the similarity of the translations they produce. So as to quantify how similar the translations of two systems are, we compute their corresponding BLEU scores taking one of them as the reference. This way, we report the average similarity of each final system with the rest of the final systems, and analogously for the initial ones. Finally, we also compute the similarity between each initial system and its corresponding final system, which measures how much the final solution found by iterative back-translation differs from the initial one.
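The cross-system similarity measure can be sketched as follows. This is a minimal illustration with a simplified corpus-level BLEU (uniform 4-gram weights plus brevity penalty); in practice the same tokenized BLEU scorer as for evaluation would presumably be used.

```python
import math
from collections import Counter

def _ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hyps, refs, max_n=4):
    """Minimal corpus-level BLEU over whitespace-tokenized sentences.

    Assumes every sentence has at least `max_n` tokens.
    """
    matches = [0] * max_n
    totals = [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            matches[n - 1] += sum((_ngrams(h, n) & _ngrams(r, n)).values())
            totals[n - 1] += max(0, len(h) - n + 1)
    if 0 in matches:
        return 0.0
    log_precision = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    brevity = min(1.0, math.exp(1 - ref_len / hyp_len))
    return 100 * brevity * math.exp(log_precision)

def avg_similarity(system, others):
    """Average BLEU of `system`'s outputs taking each other system's
    outputs (over the same test set) as the reference."""
    return sum(corpus_bleu(system, o) for o in others) / len(others)
```

Two systems that translate the test set identically score 100 against each other, and the score decreases as their outputs diverge, so the average over all other systems quantifies how typical each system's behavior is.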
4 Results
Table 1 reports the test scores of the different initial systems along with their corresponding final systems after iterative back-translation. As can be seen, the standard deviation across final systems is substantially lower than across initial systems (1.7 vs 5.6 in German-to-English and 1.4 vs 4.9 in English-to-German), which shows that iterative back-translation tends to converge to solutions of a similar quality. While the initial system does have a certain influence on final performance, the differences greatly diminish after applying iterative back-translation. For instance, the full NMT system is 13.4 points better than the RBMT system in German-to-English, but this difference goes down to 2.3 points after iterative back-translation.
Interestingly, better initial systems do not always lead to better final systems. For instance, the initial RBMT system is weaker than both the unsupervised system and the small SMT system, yet it leads to a better final system after iterative back-translation. Similarly, the small SMT model is substantially better than the small NMT model in German-to-English (19.6 vs 15.2), yet they both lead to the exact same BLEU score of 25.0 after iterative back-translation. We hypothesize that certain properties of the initial system are more relevant than others and, in particular, our results suggest that the adequacy and lexical coverage of the initial system have a larger impact than its fluency.
At the same time, it is remarkable that iterative back-translation has a generally positive impact, bringing an average improvement of 4.9 BLEU points for German-to-English and 4.3 BLEU points for English-to-German. Nevertheless, the full NMT system is a notable exception, as the final system learned through iterative back-translation is weaker than the initial system used for warmup. This reinforces the idea that iterative back-translation converges to a solution of a similar quality regardless of that of the initial system, to the extent that it can even deteriorate performance when the initial system is very strong.
So as to get a more complete picture of this behavior, Table 2 reports the average similarity between each final system and the rest of the final systems, and analogously for the initial ones. As can be seen, final systems trained through iterative back-translation tend to produce substantially more similar translations than the initial systems used in their warmup (49.3 vs 28.2 for German-to-English and 42.9 vs 24.3 for English-to-German). This suggests that iterative back-translation not only converges to solutions of similar quality, but also to solutions that behave similarly. Interestingly, this also applies to systems that follow a fundamentally different paradigm, as is the case with RBMT. In relation to that, note that the similarity between each final system and its corresponding initial system is rather low, which reinforces the idea that the solution found by iterative back-translation is not heavily dependent on the initial system.
5 Related work
Originally proposed by Sennrich et al. (2016), back-translation has been widely adopted by the machine translation community (Barrault et al., 2019), yet its behavior is still not fully understood. Several authors have studied the optimal balance between real and synthetic parallel data, concluding that using too much synthetic data can be harmful (Poncelas et al., 2018; Fadaee and Monz, 2018; Edunov et al., 2018). In addition, Fadaee and Monz (2018) observe that back-translation is most helpful for tokens with a high prediction loss, and use this insight to design a better selection method for monolingual data. At the same time, Edunov et al. (2018) show that random sampling provides a stronger training signal than beam search or greedy decoding. Closer to our work, the impact of the system used for back-translation has also been explored by some authors (Sennrich et al., 2016; Burlot and Yvon, 2018), although they did not consider the iterative back-translation variant, which jointly trains both systems so they can help each other, and synthetic data was always combined with real parallel data.
While all the previous authors use a fixed system to generate synthetic parallel corpora, Hoang et al. (2018) propose performing a second iteration of back-translation. Iterative back-translation was also explored by Marie and Fujita (2018) and Artetxe et al. (2019) in the context of unsupervised machine translation, relying on an unsupervised SMT system (Lample et al., 2018b; Artetxe et al., 2018b) for warmup. Early work in unsupervised NMT also incorporated the idea of on-the-fly back-translation, which was combined with denoising autoencoding and a shared encoder initialized through unsupervised cross-lingual embeddings (Artetxe et al., 2018a; Lample et al., 2018a). More recently, several authors have performed large-scale unsupervised pre-training through masked denoising to initialize the full model, which is then trained through iterative back-translation (Conneau and Lample, 2019; Song et al., 2019; Liu et al., 2020). Finally, iterative back-translation is also connected to the reconstruction loss in dual learning (He et al., 2016), which incorporates an additional language modeling loss and also requires a warm start.
6 Conclusions and future work
In this paper, we empirically analyze the role that initialization plays in iterative back-translation. For that purpose, we try a diverse set of initial systems for warmup, and analyze the behavior of the resulting systems in relation to them. Our results show that differences between the initial systems greatly diminish after applying iterative back-translation. At the same time, we observe that iterative back-translation has a hard ceiling, to the point that it can even deteriorate performance when the initial system is very strong. As such, we conclude that the margin for improvement left to initialization is rather narrow, encouraging future research to focus more on improving the iterative back-translation mechanism itself.
In the future, we would like to better characterize the specific factors of the initial systems that are most relevant. At the same time, we would like to design a simpler unsupervised system for warmup that is sufficient for iterative back-translation to converge to a good solution. Finally, we would like to incorporate pre-training methods like masked denoising into our analysis.
Acknowledgments
This research was partially funded by a Facebook Fellowship, the Basque Government excellence research group (IT1343-19), the Spanish MINECO (UnsupMT TIN2017-91692-EXP MCIU/AEI/FEDER, UE), Project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018), the NVIDIA GPU grant program, Lucy Software / United Language Group (ULG), and the Catalan Agency for Management of University and Research Grants (AGAUR) through an Industrial Ph.D. Grant.
References
- Juan A. Alonso and Gregor Thurmair. 2003. The comprendium translator system. In Proceedings of the Ninth Machine Translation Summit.
- Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018a. Unsupervised neural machine translation. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018).
- Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642, Brussels, Belgium.
- Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 194–203, Florence, Italy.
- Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy.
- Franck Burlot and François Yvon. 2018. Using monolingual data in neural machine translation: a systematic study. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 144–155, Brussels, Belgium.
- Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems 32, pages 7059–7069.
- Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia.
- Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500, Brussels, Belgium.
- Marzieh Fadaee and Christof Monz. 2018. Back-translation sampling by targeting difficult words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 436–446, Brussels, Belgium.
- Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems 29, pages 820–828.
- Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 690–696, Sofia, Bulgaria.
- Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back-translation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18–24, Melbourne, Australia.
- Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic.
- Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 127–133.
- Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018).
- Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium.
- Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.
- Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703.
- Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan.
- Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium.
- Alberto Poncelas, Dimitar Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018. Investigating backtranslation in neural machine translation. arXiv preprint arXiv:1804.06189.
- Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany.
- Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5926–5936, Long Beach, California, USA.