A multitude of datasets and models have been developed in natural language processing for a wide variety of tasks and applications. However, a large proportion of these have focused on English. While many works have contributed resources for other languages, developing specialized models for each language of interest does not scale, and is especially difficult for low resource languages where labeled data is exceptionally scarce.
Recent work in multilingual NLP shows promise for incorporating many languages into one architecture. For example, the mBART Liu et al. (2020) model trains on twenty-five different languages and can be finetuned for a variety of tasks. For translation, mBART was finetuned on bitext (bilingual finetuning). However, while mBART was trained on a variety of languages, the multilingual nature of the pretraining is not used during finetuning. Finetuning on bitext to translate from one language to another does not leverage the full capacity of the multilingual pretraining. Instead, we propose multilingual finetuning of pretrained models, and we demonstrate large improvements compared to bilingual finetuning.
Previous work Aharoni et al. (2019); Arivazhagan et al. (2019b); Zhang et al. (2020) has explored multilingual translation by training multiple directions within the same model from scratch, but this approach faces challenges for mid and low resource languages. In lower resource scenarios, bitext data is usually unavailable in large quantities, making it difficult to train from scratch. In contrast, monolingual data exists even for low resource languages, particularly in resources such as Wikipedia or CommonCrawl, a snapshot of the web. Thus, leveraging this monolingual data through pretraining can provide a much stronger starting point for low resource machine translation tasks.
However, unlike training a multilingual model from scratch, pretrained models are limited by the choices made during pretraining. For example, mBART was only trained on 25 languages, so finetuning to translate a language outside these 25 is not possible. Thus, practitioners are restricted to the languages selected to train the initial model, as it is incredibly computationally intensive to retrain from scratch. In this work, we show that existing pretrained models, such as mBART Liu et al. (2020), can be extended to additional languages. We demonstrate this by doubling the number of languages supported by mBART to 50, without loss of performance on the original 25 languages and without starting from scratch. This allows languages to be added flexibly, while preserving the broader utility of the pretrained model, as it can be used for tasks beyond translation.
Further, working in a multilingual setting remains challenging, as many different datasets, evaluation settings, and preprocessing schemes (such as tokenization) are in use. Benchmarks exist for sentence embeddings Hu et al. (2020), natural language inference Conneau et al. (2018), and question answering Lewis et al. (2019b), but no comparable setting exists yet for machine translation. To this end, we contribute the ML50 benchmark, a dataset of 50 languages with publicly available training and evaluation sets, including high, mid, and extremely low resource directions. We will open source this benchmark for the community.
We make three main contributions:
An effective and novel approach for multilingual translation models with multilingual pretraining (with monolingual data) followed by multilingual finetuning (with parallel data). In the Many-to-English setting, multilingual finetuning achieves a 3.6 BLEU improvement over bilingual finetuning, and a 2.6 BLEU improvement compared to multilingual models trained from scratch. On average, combining Many-to-English and English-to-Many, multilingual finetuning improves BLEU over the strongest baseline.
We show that existing pretrained models, such as mBART, can be extended to incorporate additional languages without training from scratch and without performance loss on the original languages. We release mBART50 for the community to use, which has double the number of languages of the original mBART.
To facilitate reproducible research on multilingual translation with representative challenges of the real world, we create the ML50 benchmark, covering high, mid, and low resource languages and consisting of 230M sentence pairs of bitext.
2 Related work
2.1 Multilingual Denoising Pretraining
This work is related to recent progress in pretraining techniques for NLP applications Peters et al. (2018); Radford et al. (2018); Devlin et al. (2019); Liu et al. (2019); Song et al. (2019); Lewis et al. (2019a). In particular, recent works explored pretraining on multilingual unlabeled corpora Lample and Conneau (2019); Conneau et al. (2019); Liu et al. (2020); Tran et al. (2020), significantly improving the performance of finetuning for machine translation between two languages. We extend Liu et al. (2020) by allowing finetuning in multilingual settings.
2.2 Multilingual Neural Machine Translation
Training a universal translation system between multiple languages Firat et al. (2016); Johnson et al. (2017) has shown large improvements for translating low-resource languages Gu et al. (2018), even enabling zero-shot translation Gu et al. (2019); Arivazhagan et al. (2019a). Arivazhagan et al. (2019b) indicate that it is essential to train gigantic models with enough capacity to fully leverage massive multilingual corpora.
A closely related concurrent work, Siddhant et al. (2020), shows it is possible to train a multilingual system jointly with monolingual datasets based on Song et al. (2019). This naturally enables translation for languages without parallel data. In contrast, this work focuses on finetuning multilingual translation systems from a pretrained model.
3 Multilingual Translation from Denoising Pretraining
We briefly describe the pretrained multilingual BART model and present multilingual finetuning, a technique to convert pretrained models into multilingual machine translation systems.
Multilingual BART (mBART) Liu et al. (2020) is a sequence-to-sequence generative pretraining scheme. The model incorporates N languages by concatenating their data: D = {D_1, ..., D_N}, where each D_i is a collection of monolingual documents in language i. mBART is trained as a denoising autoencoder, learning to predict the original text x given its corrupted version g(x), where g is a noising function that corrupts text. We maximize the log-likelihood L_theta:

L_theta = sum_{i=1..N} sum_{x in D_i} log P(x | g(x); theta)

where x is an instance in language i and the distribution P is defined by the seq-to-seq model. This model is pretrained using two types of noise in g (random span masking and order permutation), as described in Liu et al. (2020).
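A minimal sketch of such a noising function g, simplified to whitespace tokens with short random spans in place of Poisson-length spans (the real implementation operates on subword ids):

```python
import random

def noise_document(sentences, mask_token="<mask>", mask_ratio=0.35, seed=0):
    """Simplified sketch of mBART's noising function g: permute the
    sentence order, then mask contiguous token spans until roughly
    mask_ratio of all tokens are hidden. Spans may overlap previously
    inserted masks; this is a deliberate simplification."""
    rng = random.Random(seed)
    sentences = sentences[:]
    rng.shuffle(sentences)                       # order permutation
    tokens = [t for s in sentences for t in s.split()]
    n_target = int(len(tokens) * mask_ratio)     # tokens to corrupt
    masked = 0
    while masked < n_target and len(tokens) > 1:
        span = min(rng.randint(1, 3), n_target - masked, len(tokens) - 1)
        start = rng.randrange(len(tokens) - span + 1)
        tokens[start:start + span] = [mask_token]  # one <mask> per span
        masked += span
    return " ".join(tokens)
```

The model is then trained to reconstruct the original, uncorrupted document from this noised input.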
3.1 Multilingual Finetuning
To leverage multilingual pretraining to create translation systems, previous work Liu et al. (2020) used mBART as a starting point and then performed bilingual finetuning. Concretely, the seq-to-seq model was finetuned on a single language pair. However, bilingual finetuning does not leverage the full capacity of multilingual pretraining. Recent work on multilingual translation Aharoni et al. (2019); Arivazhagan et al. (2019b) shows that strong translation models can be created through multilingual training rather than bilingual training. Instead of training a model on one language pair, a single model is trained to translate N languages to N other languages.
Thus, we propose multilingual finetuning (ML-FT) to adapt pretrained models into multilingual models. This procedure creates one model capable of translating many languages to many other languages, which has efficiency and storage maintenance benefits. Further, multilingual finetuning retains several benefits of multilingual translation models in general, for example allowing languages from similar families to benefit each other.
To perform multilingual finetuning, we collect bitexts of different language pairs into one collection, with a subset per direction. Following mBART Liu et al. (2020), we augment each bitext pair by prepending a source language token to the source sentence and a target language token to the target sentence, forming a language-token augmented pair. We then initialize a transformer-based seq-to-seq model with the pretrained mBART weights and finetune it on the combined multilingual bitexts.
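The language-token augmentation can be sketched as follows; the bracketed token format and language codes here are illustrative, not the exact tokens used by the released mBART:

```python
def tag_pair(src, tgt, src_lang, tgt_lang):
    """Prepend a source language token to the source sentence and a
    target language token to the target sentence (mBART-style)."""
    return (f"[{src_lang}] {src}", f"[{tgt_lang}] {tgt}")

def build_multilingual_corpus(bitexts):
    """Flatten per-direction bitexts, keyed by (src_lang, tgt_lang),
    into one tagged training set for multilingual finetuning."""
    corpus = []
    for (src_lang, tgt_lang), pairs in bitexts.items():
        corpus.extend(tag_pair(s, t, src_lang, tgt_lang) for s, t in pairs)
    return corpus
```

With this scheme, a single model can be finetuned on all directions at once, since each example carries its own source and target language identity.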
Multilingual Translation Model Variants
We explore configurations to create different versions of multilingual translation models: Many-to-one (N1), one-to-Many (1N), and Many-to-Many (NN) via a pivot language. Concretely, the Many-to-one model encodes N languages and decodes to English, while the one-to-Many model encodes English and decodes into N languages. Finally, the Many-to-Many model encodes and decodes N languages. We follow Arivazhagan et al. (2019b) and use pivot data through English to create Many-to-Many models.
When training multilingual models with many languages, the training dataset sizes are imbalanced, as different languages have different quantities of bitext. Thus, we train with temperature upsampling, which upsamples lower resource pairs so that the high resource languages do not dominate the training data. Following Arivazhagan et al. (2019b), we sample data for each direction i with probability p_i proportional to (n_i / sum_j n_j)^(1/T), where n_i is the number of training pairs for direction i and T is the sampling temperature.
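The temperature-based sampling can be sketched as below; the sizes and the temperature value are illustrative, with T swept on validation in practice:

```python
def temperature_sampling_probs(sizes, T=1.5):
    """Temperature-based sampling over translation directions
    (after Arivazhagan et al., 2019b): p_i proportional to
    (n_i / sum_j n_j)^(1/T). T=1 keeps the natural data proportions;
    larger T flattens the distribution, upsampling low-resource pairs."""
    total = sum(sizes.values())
    weights = {d: (n / total) ** (1.0 / T) for d, n in sizes.items()}
    z = sum(weights.values())
    return {d: w / z for d, w in weights.items()}
```

For example, with 1M en-de pairs and 1K en-gu pairs, raising T increases the share of en-gu batches far above its natural one-in-a-thousand proportion.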
4 Results from Multilingual Finetuning on 25 Languages
We first examine the impact of multilingual finetuning directly on existing pretrained models, presenting results on the 25 languages included in mBART using the existing mBART model. First, we describe three strong baselines: bilingual finetuning, bilingual translation models from scratch, and multilingual translation models from scratch. Then, we describe our experimental setting. Finally, we present results on 25 languages, showing that on average multilingual finetuning improves BLEU over the strongest baseline: a 1.0 BLEU point improvement over the strongest to-English baseline, with only a small difference to the strongest from-English baseline.
[Table 1: BLEU improvement over bilingual from-scratch baselines for translation to and from English, comparing multilingual finetuning (ML-FT) against bilingual finetuning (BL-FT) and multilingual training from scratch (ML-SC).]
We compare our proposed multilingual finetuning to three strong baselines: bilingual training from scratch, bilingual finetuning, and multilingual models trained from scratch.
Bilingual Trained from Scratch (BL-Scratch)
We train bilingual translation models with standard Transformer Vaswani et al. (2017) models (5 layers, 512 embedding dimension, 2048 FFN embedding dimension, and 8 heads for both encoder and decoder) for translation into and from English. For directions with more than 1 million pairs of bitext training data (de, cs, fr, ja, es, ru, pl, zh, fi, lv, lt, and hi), we train Transformer Big models (6 layers, 1024 embedding dimension, 4096 FFN embedding dimension, and 16 heads for both encoder and decoder), as there is more data to benefit from additional model capacity. For directions with more than 10 million pairs of bitext training data (de, cs, fr, ja, es, ru, pl, and zh), we train Transformer Large models (12 layers, 1024 embedding dimension, 4096 FFN embedding dimension, and 16 heads for both encoder and decoder), as there is even more data to benefit from additional model capacity. The best performing bilingual model is selected as the Bilingual Trained from Scratch baseline.
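The architecture tiers above amount to a simple selection rule by bitext size; the field names below are illustrative, not fairseq arguments:

```python
def transformer_config(n_bitext):
    """Pick a Transformer configuration by number of training pairs,
    following the tiers described above."""
    if n_bitext > 10_000_000:   # de, cs, fr, ja, es, ru, pl, zh
        return dict(layers=12, emb=1024, ffn=4096, heads=16)  # "Large"
    if n_bitext > 1_000_000:    # also fi, lv, lt, hi
        return dict(layers=6, emb=1024, ffn=4096, heads=16)   # "Big"
    return dict(layers=5, emb=512, ffn=2048, heads=8)         # base
```

The rationale is simply that larger datasets can support, and benefit from, additional model capacity.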
Bilingual Finetuning (BL-FT)
Bilingual finetuning adapts the mBART model into bilingual machine translation models by training for longer on translation bitext. For each language direction, we follow Liu et al. (2020) and finetune for a fixed number of updates to obtain the Bilingual Finetuning baseline.
Multilingual Trained from Scratch (ML-SC)
We train different multilingual models from scratch: Many-to-one (N1), one-to-Many (1N), and Many-to-Many (NN) with English as pivot. We sweep through different batch sizes, learning rates, and upsampling temperatures, selecting the best performing multilingual model on validation. Following Arivazhagan et al. (2019b), we train with temperature upsampling.
4.2 Evaluation and Generation
We evaluate performance with tokenized BLEU, following the tokenization in mBART Liu et al. (2020). To generate, we decode using beam search with a length penalty tuned on the validation set. We do not perform checkpoint averaging. To select the best performing model in a sweep, we compare BLEU on the validation set.
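For reference, corpus-level BLEU over pre-tokenized text can be computed as below. This is a minimal sketch (uniform 4-gram weights, standard brevity penalty) over whitespace tokens; the reported numbers follow each evaluation set's own tokenization:

```python
import math
from collections import Counter

def corpus_bleu(hyps, refs, max_n=4):
    """Corpus-level BLEU: geometric mean of clipped n-gram precisions
    up to max_n, multiplied by a brevity penalty."""
    match = [0] * max_n   # clipped n-gram matches, per order
    total = [0] * max_n   # hypothesis n-gram counts, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_ngrams = Counter(tuple(h[i:i + n]) for i in range(len(h) - n + 1))
            r_ngrams = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
            match[n - 1] += sum((h_ngrams & r_ngrams).values())  # clipped
            total[n - 1] += max(len(h) - n + 1, 0)
    if min(match) == 0:
        return 0.0  # some n-gram order has no matches at all
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    brevity = min(1.0, math.exp(1.0 - ref_len / hyp_len))
    return 100.0 * brevity * math.exp(log_prec)
```

Because BLEU is tokenization-sensitive, comparability across papers requires matching each evaluation set's established protocol exactly, which is why we reuse the mBART tokenization.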
4.3 Performance on 25 Languages
We first evaluate our proposed multilingual finetuning technique on 25 languages using the existing mBART model. We compare bilingual finetuning from mBART (BL-FT), multilingual training from scratch (ML-SC), and multilingual finetuning (ML-FT) by quantifying the BLEU improvement over the bilingual training from scratch baseline. Results are displayed in Table 1, separated into three settings: Many-to-one (N1), one-to-Many (1N), and Many-to-Many (NN).
Performance of Multilingual Finetuning
Compared to the BL-FT and ML-SC baselines, multilingual finetuning has consistently stronger results in the Many-to-one setting, translating from 25 different languages into English. It is 7.9 BLEU points stronger than the bilingual from scratch baseline, and 1.0 BLEU points stronger than the strongest baseline, ML-SC.
However, in the one-to-Many setting, the improvement of all multilingual methods over the bilingual baselines is lower across the board. We hypothesize this is due to the challenge of needing to decode into many different languages (additional analysis is presented in Section 6.1). Multilingual finetuning is still stronger than the bilingual from scratch baseline; it is also comparable to the strongest baseline, bilingual finetuning, with only a small BLEU difference on average.
Finally, in the Many-to-Many setting, the improvement of all many-to-many multilingual methods over the bilingual baselines is likewise lower across the board. Again we hypothesize this is due to the challenge of decoding into many different languages including English (additional analysis is presented in Section 6.1). Multilingual finetuning is stronger than the bilingual from scratch baseline for translation from and into English combined. Overall, it falls slightly below the strongest from-English and into-English baselines combined, with a small BLEU difference on average.
Performance by Resource Level
Comparing the languages by resource level, we see that the improvement from multilingual training is more significant as the quantity of training bitext decreases. For example, in the multilingual finetuning (ML-FT) Many-to-one setting, improvement over bilingual from scratch is 4.4 BLEU points for languages with more than 10M bitext, but is 18.0 BLEU points for languages with 7K-30K available bitext. The trend is less consistent in the one-to-Many setting, but low resource languages still see improvements. For example, with multilingual finetuning (ML-FT), improvement over bilingual from scratch is 2.2 BLEU for languages with more than 10M bitext, but 7.6 BLEU for languages with 7K-30K available bitext.
|Training bitext||Languages|
|10M+||German, Czech, French, Japanese, Spanish, Russian, Polish, Chinese|
|1M-10M||Finnish, Latvian, Lithuanian, Hindi, Estonian|
|100K-1M||Tamil, Romanian, Pashto, Sinhala, Malayalam, Dutch, Nepali, Italian, Arabic, Korean, Hebrew, Turkish, Khmer, Farsi, Vietnamese, Croatian, Ukrainian|
|10K-100K||Thai, Indonesian, Swedish, Portuguese, Xhosa, Afrikaans, Kazakh, Urdu, Macedonian, Telugu, Slovenian, Burmese, Georgian|
|<10K||Marathi, Gujarati, Mongolian, Azerbaijani, Bengali|
5 Results from Multilingual Finetuning on 50 Languages
Multilingual finetuning showed strong improvements on 25 languages in the Many-to-one setting, and we subsequently extend it to incorporate a greater number of languages: 50 instead of 25. However, the number of languages possible is limited by the initial selection of languages in mBART. To remedy this, we first show that the number of languages in mBART can be easily extended with additional pretraining. Second, we build the ML50 benchmark to standardize training data, evaluation data, and evaluation procedure across 50 different languages. Finally, we present results of multilingual finetuning from mBART on 50 languages and show strong improvements over the baselines.
5.1 Doubling the Languages in mBART
We describe how we extend existing pretrained models to incorporate a greater number of languages. This technique allows existing models to be used on new languages, rather than needing to restart a computationally intensive pretraining method from scratch.
While multilingual pretrained models have shown strong performance on a variety of tasks Liu et al. (2020); Conneau et al. (2019), they remain limited, as they are trained on a fixed number of languages. For example, mBART was trained on 25 languages, all fairly high resource. Pretraining fully from scratch is computationally intensive: mBART trained for 2.5 weeks on 256 Nvidia V100 GPUs Liu et al. (2020). However, there are hundreds of different languages in the world, so restarting pretraining from scratch to add any of them to mBART would be difficult. Instead, we take the existing mBART model, trained on 25 languages, and show that it can be extended to 50 languages. We take the publicly available pretrained mBART model (https://github.com/pytorch/fairseq/tree/master/examples/mbart) and extend its embedding layers with randomly initialized vectors for an extra set of 25 language tokens. We then combine the monolingual data of the original 25 languages and the 25 new languages to continue pretraining this extended mBART model. We will release the mBART50 model as a general purpose multilingual pretrained model, which will be useful for a variety of generation tasks beyond machine translation.
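Growing the embedding table can be sketched with plain arrays; the initialization scale below is an assumption (transformer embeddings are typically initialized with standard deviation on the order of d^-0.5), not the exact recipe used:

```python
import numpy as np

def extend_embeddings(emb, n_new, rng=None):
    """Extend a pretrained embedding matrix with randomly initialized
    rows for new language tokens, keeping the original rows intact
    (a sketch of growing mBART25's vocabulary toward mBART50)."""
    rng = rng or np.random.default_rng(0)
    dim = emb.shape[1]
    # New rows at a typical transformer initialization scale.
    new_rows = rng.normal(0.0, dim ** -0.5, size=(n_new, dim))
    return np.vstack([emb, new_rows])
```

Because the original rows are untouched, the extended model starts from exactly the pretrained representations for the original languages, and only the new language tokens must be learned during continued pretraining.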
Data and Training Details
We use the mBART.cc25 checkpoint Liu et al. (2020) available in the fairseq library Ott et al. (2019) to continue the pretraining process. We use the monolingual data from XLMR Conneau et al. (2019) to extend the pretraining to a set of 25 languages in addition to the 25 languages of the original mBART model. To be consistent with mBART, we reuse its sentencepiece Kudo and Richardson (2018) model, which was trained using monolingual data for 100 languages from XLMR, and thus already supports languages beyond the original 25 that mBART was trained on. For pretraining, we train mBART50 for additional updates over the combined monolingual data. The sizes of the monolingual data for the 25 additional languages are provided in the appendix.
[Table 4: BLEU improvement over bilingual from-scratch baselines on ML50 for translation to and from English, comparing multilingual finetuning (ML-FT) against bilingual finetuning (BL-FT) and multilingual training from scratch (ML-SC).]
5.2 ML50 Benchmark
To demonstrate the impact of multilingual finetuning on additional languages, we create the ML50 benchmark. ML50 standardizes the training and evaluation schemes across 50 different languages, from extremely low resource languages like Xhosa and Gujarati to high resource languages like French and German. The full list of languages is shown in Table 3. We group the languages into five categories based on the amount of available training data: more than 10M pairs (8 languages), 1M to 10M pairs (5 languages), 100K to 1M pairs (17 languages), 10K to 100K pairs (13 languages), and finally, fewer than 10K pairs of training data (5 languages). ML50 includes languages from several language families, from Germanic and Romance languages to Indic and African ones. Many of the additional languages we contribute are lower resource compared to the languages in the original mBART.
We gather parallel data between English and 49 other languages to form ML50, to enable the training of machine translation models. We select these 49 languages based on the amount of parallel and monolingual data, to cover languages with different amounts of resources across different language families. The quantity of available monolingual data is relevant for pretraining, so we ensure there is a sufficient amount. All of the data is publicly available, from sources such as WMT, IWSLT, WAT, TED, and other published research works. For training data, each language pair can include multiple sources; we concatenate them and remove duplicated source-target sentence pairs for each language pair. We use fasttext Joulin et al. (2017) to perform language identification on both source and target sentences, and we remove sentence pairs if either the source or target sentence is not predicted to be the expected language. We further filter out training data that matches any source or target side sentences in the evaluation datasets. Compared to other datasets such as OPUS-100, the ML50 benchmark contains around 4 times more training data. The full list of languages, data sources, and the amount of resulting data can be found in Table 6 in the Appendix.
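The filtering pipeline can be sketched as follows, with a stand-in `predict_lang` function in place of the fasttext language-ID model:

```python
def clean_bitext(pairs, src_lang, tgt_lang, predict_lang, eval_sentences):
    """Filter a raw bitext: deduplicate pairs, drop pairs whose source
    or target is not identified as the expected language, and remove
    pairs that overlap the evaluation sets. `predict_lang` stands in
    for a fasttext language-identification model."""
    seen, kept = set(), []
    for src, tgt in pairs:
        if (src, tgt) in seen:
            continue                     # duplicate source-target pair
        seen.add((src, tgt))
        if predict_lang(src) != src_lang or predict_lang(tgt) != tgt_lang:
            continue                     # language-ID mismatch
        if src in eval_sentences or tgt in eval_sentences:
            continue                     # would leak into dev/test
        kept.append((src, tgt))
    return kept
```

The evaluation-overlap check matters because concatenating many public sources makes accidental train/test contamination likely.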
To ensure high quality evaluation of the languages covered in ML50, we include publicly available, widely used evaluation sets. We source these evaluation datasets from translation workshops such as WMT, IWSLT, and WAT, and from other published research works. We follow the evaluation protocol, including tokenization, used for each of these evaluation sets, to ensure our results are comparable with existing work, and we release the evaluation scripts to make this easier for others. Unlike datasets such as OPUS-100, we choose high quality existing evaluation datasets rather than holding out part of the training data for evaluation, because training data, particularly for low resource languages, is often very noisy and unreliable.
5.3 Performance on 50 Languages
We evaluate the performance of mBART50 on the ML50 Benchmark. We compare to the same baselines — bilingual finetuning, bilingual training from scratch, and multilingual training from scratch. Results are displayed in Table 4.
In the Many-to-One setting averaged across all languages, multilingual finetuning improves over the strongest baseline, multilingual many-to-many from scratch, by 2.5 BLEU points. For lower resource language pairs, the improvement is much more significant. For example, the improvement for languages with 4K-10K training data is 4.8 BLEU points over the strongest baseline, and the improvement for languages with 10K-100K training data is 4+ BLEU over the strongest baseline.
For One-to-Many, the performance of all methods — bilingual finetuning, multilingual from scratch, and multilingual finetuning — is similar. On average, all models have around 5.7 to 7 BLEU points improvement over bilingual baselines.
Finally, in Many-to-Many, multilingual finetuning achieves a 0.8 BLEU improvement in the to-English direction over the strongest baseline. In the from-English direction, the performance of Many-to-Many multilingual finetuning is similar to multilingual from scratch, both around 5.5 to 6 BLEU improvement over the bilingual baselines.
5.4 Comparison to Bilingual Finetuning
We examine the performance of our proposed multilingual finetuning method compared to bilingual finetuning. Prior work shows that strong translation models can be created by finetuning pretrained models into bilingual translation models. However, this means a separate model must be created for each translation direction of interest, resulting in a large number of finetuned models. In contrast, multilingual finetuning captures a multitude of directions within one model.
However, multilingual finetuning means that the same model capacity must cover many directions rather than just one, which could decrease performance. In Figure 1, we analyze the improvement of multilingual finetuning over bilingual finetuning. On the left, we compare the Many-to-one setting translating into English, and on the right the one-to-Many setting translating out of English into many different languages.
In the Many-to-one setting, every language pair except one is improved by multilingual finetuning. Some low resource languages see substantial improvements of 10+ BLEU points, with the largest being over 15 BLEU points. On average, multilingual finetuning improves BLEU across all directions into English. In the one-to-Many setting, performance is about the same between multilingual finetuning and bilingual finetuning, with only a small average improvement across all directions out of English compared to the bilingual baselines.
6.1 Challenges of one-to-Many
In the Many-to-one setting, where models must encode various different languages and decode into English, large improvements are seen when doing multilingual modeling. Previous work has similarly observed this improvement Arivazhagan et al. (2019b) in multilingual training from scratch, as multilingual modeling increases the quantity of target-side English data seen by the model. For example, compared to bilingual finetuning, our multilingual finetuning model is exposed to English target side data from 50 different language pairs.
However, in the one-to-Many setting and the Many-to-Many setting, models must decode into 50 different languages. This is a difficult decoding challenge, as a strong conditional language model must be learned for each language. While pretraining exposes the model to monolingual data, the quantity of monolingual data varies by language. For lower resource languages, such as Gujarati or Xhosa, the quantity of monolingual data available even through online resources such as CommonCrawl remains limited. Other work Arivazhagan et al. (2019b) observes similar trends in one-to-Many performance.
Overall, we find that multilingual finetuning performs better than any of our assessed baselines — bilingual training from scratch, bilingual finetuning, and multilingual training from scratch — when averaged across the Many-to-one and one-to-Many directions. It is important to note that this effect mainly comes from the strong improvement of the Many-to-one setting, and all approaches have similar performance in the one-to-Many setting.
6.2 Comparison of mBART50 on 25 Languages
We show that the mBART model can be extended from 25 languages to 50 languages without starting from scratch. In this section, we evaluate whether adding the additional languages harms performance on the original 25 languages. As the model remains the same size but must model more languages, it could have reduced capacity for the original 25, but we do not observe any reduction in performance. Results are shown in Figure 2. For each language, we plot the performance of bilingual finetuning with mBART25 and with mBART50. Performance is almost exactly the same with both models, indicating that the number of languages can be doubled without loss of performance.
We demonstrate that multilingual neural machine translation models can be created from pretrained models such as mBART. Previous work using pretrained models focused only on bilingual finetuning, while work in multilingual translation trained models only from scratch. Although using pretrained models could limit the number of languages possible, we show that mBART can be extended to double the number of original languages, without loss of performance on the original languages. We release mBART50 for the community as a strong generative denoising pretrained model covering 50 different languages. Further, to train and evaluate on 50 languages, we develop and release the ML50 benchmark. In conclusion, we show that by performing multilingual finetuning, strong improvements of over 2 BLEU points can be achieved in the Many-to-one setting. Overall, averaging across the Many-to-one and one-to-Many directions, our proposed multilingual finetuning strategy outperforms all baselines.
- Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 3874–3884. External Links: Cited by: §1, §3.1.
- The missing ingredient in zero-shot neural machine translation. arXiv preprint arXiv:1903.07091. Cited by: §2.2.
- Massively multilingual neural machine translation in the wild: findings and challenges. arXiv preprint arXiv:1907.05019. Cited by: §1, §3.1, §3.1, §3.1, §4.1, §6.1, §6.1.
- Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Cited by: §2.1, §5.1, §5.1.
- XNLI: evaluating cross-lingual sentence representations. arXiv preprint arXiv:1809.05053. Cited by: §1.
- BERT: pre-training of deep bidirectional transformers for language understanding. In North American Association for Computational Linguistics (NAACL), Cited by: §2.1.
- Multi-way, multilingual neural machine translation with a shared attention mechanism. In NAACL, Cited by: §2.2.
- Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 344–354. External Links: Cited by: §2.2.
- Improved zero-shot neural machine translation via ignoring spurious correlations. arXiv preprint arXiv:1906.01181. Cited by: §2.2.
- Xtreme: a massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv preprint arXiv:2003.11080. Cited by: §1.
- Google’s multilingual neural machine translation system: enabling zero-shot translation. Transactions of the Association for Computational Linguistics 5, pp. 339–351. Cited by: §2.2.
- Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pp. 427–431. Cited by: §5.2.
- SentencePiece: a simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Brussels, Belgium, pp. 66–71. External Links: Cited by: §5.1.
- Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291. Cited by: §2.1.
- BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Cited by: §2.1.
- MLQA: evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475. Cited by: §1.
- Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210. Cited by: §1, §1, §2.1, §3, §3.1, §3.1, §4.1, §4.2, §5.1, §5.1.
- RoBERTa: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §2.1.
- fairseq: a fast, extensible toolkit for sequence modeling. In North American Association for Computational Linguistics (NAACL): System Demonstrations, Cited by: §5.1.
- Deep contextualized word representations. In North American Association for Computational Linguistics (NAACL), Cited by: §2.1.
- Improving language understanding with unsupervised learning. Technical report, OpenAI. Cited by: §2.1.
- MASS: masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning (ICML), Cited by: §2.1.
- Cross-lingual retrieval for iterative self-supervised training. arXiv preprint arXiv:2006.09526. Cited by: §2.1.
- Attention is all you need. In Advances in neural information processing systems, Cited by: §4.1.
- Improving massively multilingual neural machine translation and zero-shot translation. arXiv preprint arXiv:2004.11867. Cited by: §1.
Appendix A Appendices
|ML50 Train||ML50 Eval|
|ja *||16167141||WMT20||WMT20 dev-split||999||999|
[Table: per-language BLEU for the bilingual baselines (BL-Scratch and BL-FT), translating to and from English.]