
Self-Training for Unsupervised Neural Machine Translation in Unbalanced Training Data Scenarios

by   Haipeng Sun, et al.

Unsupervised neural machine translation (UNMT), which relies solely on massive monolingual corpora, has achieved remarkable results in several translation tasks. However, in real-world scenarios, massive monolingual corpora do not exist for some extremely low-resource languages, such as Estonian, and UNMT systems usually perform poorly when an adequate training corpus is unavailable for one of the two languages. In this paper, we first define and analyze the unbalanced training data scenario for UNMT. Based on this scenario, we propose UNMT self-training mechanisms to train a robust UNMT system and improve its performance in this case. Experimental results on several language pairs show that the proposed methods substantially outperform conventional UNMT systems.
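The core idea of self-training here can be illustrated schematically: the model's own translations of the richer monolingual corpus are treated as pseudo-parallel training pairs for the data-poor direction. The sketch below is a minimal toy, not the paper's implementation; `translate` is a hypothetical stub standing in for a trained UNMT model, and all names are assumptions for illustration.

```python
def translate(model, sentence):
    # Hypothetical stand-in for the current UNMT model's output:
    # here we just reverse the word order to simulate a (noisy)
    # source -> target translation.
    return " ".join(reversed(sentence.split()))

def self_training_round(model, high_resource_mono):
    """Build pseudo-parallel pairs from the richer monolingual corpus.

    In the unbalanced scenario, the high-resource side supplies source
    sentences; the model's own outputs serve as pseudo-targets, which
    are then mixed into the next training round as supervised pairs.
    """
    pseudo_parallel = []
    for src in high_resource_mono:
        tgt = translate(model, src)          # model's own (noisy) output
        pseudo_parallel.append((src, tgt))   # treated as a supervised pair
    return pseudo_parallel

corpus = ["the cat sat", "a dog ran fast"]
pairs = self_training_round(None, corpus)
print(pairs[0])  # ('the cat sat', 'sat cat the')
```

In a real system each round would retrain the model on the union of the pseudo-parallel pairs and the original monolingual objectives, then regenerate the pseudo-targets with the improved model.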




Robust Unsupervised Neural Machine Translation with Adversarial Training

Unsupervised neural machine translation (UNMT) has recently attracted gr...

Pre-training via Leveraging Assisting Languages and Data Selection for Neural Machine Translation

Sequence-to-sequence (S2S) pre-training using large monolingual data is ...

Improving a Multi-Source Neural Machine Translation Model with Corpus Extension for Low-Resource Languages

In machine translation, we often try to collect resources to improve its...

Controlling Utterance Length in NMT-based Word Segmentation with Attention

One of the basic tasks of computational language documentation (CLD) is ...

Meta-Learning for Low-Resource Unsupervised Neural Machine Translation

Unsupervised machine translation, which utilizes unpaired monolingual co...

Neural machine translation, corpus and frugality

In machine translation field, in both academia and industry, there is a ...

Improving Non-autoregressive Neural Machine Translation with Monolingual Data

Non-autoregressive (NAR) neural machine translation is usually done via ...