Unsupervised Training for Large Vocabulary Translation Using Sparse Lexicon and Word Classes

01/06/2019
by Yunsu Kim, et al.

We address for the first time unsupervised training for a translation task with hundreds of thousands of vocabulary words. We scale up the expectation-maximization (EM) algorithm to learn a large translation table without any parallel text or seed lexicon. First, we solve the memory bottleneck and enforce sparsity with a simple thresholding scheme for the lexicon. Second, we initialize the lexicon training with word classes, which efficiently boosts performance. Our methods produced promising results on two large-scale unsupervised translation tasks.
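To make the two ideas concrete, here is a minimal sketch of EM lexicon training with threshold-based pruning and optional word-class initialization. Everything in it is illustrative: for brevity it runs EM over toy sentence pairs in the style of IBM Model 1, whereas the paper's setting uses no parallel text at all; the function names, the threshold value, and the `word_class`/`class_table` arguments are assumptions for this sketch, not the paper's interface.

```python
from collections import defaultdict

def em_lexicon(sentence_pairs, iterations=5, threshold=1e-3,
               word_class=None, class_table=None):
    """EM training of a lexicon p(f|e) with threshold pruning.

    word_class / class_table are optional: if given, entries are
    initialized from class-level probabilities p(C(f)|C(e)) instead of
    uniformly, loosely mirroring the word-class initialization idea.
    """
    # Initialize over co-occurring word pairs.
    t = defaultdict(float)
    for src, tgt in sentence_pairs:
        for e in src:
            for f in tgt:
                if word_class and class_table:
                    t[(f, e)] = class_table.get(
                        (word_class[f], word_class[e]), 1e-6)
                else:
                    t[(f, e)] = 1.0
    # Normalize per source word e so that sum_f p(f|e) = 1.
    norm = defaultdict(float)
    for (f, e), v in t.items():
        norm[e] += v
    for key in t:
        t[key] /= norm[key[1]]

    for _ in range(iterations):
        counts = defaultdict(float)   # expected counts c(f, e)
        totals = defaultdict(float)   # normalizers per e
        # E-step: collect expected counts under the current lexicon.
        for src, tgt in sentence_pairs:
            for f in tgt:
                z = sum(t[(f, e)] for e in src)
                if z == 0.0:
                    continue  # all candidate entries were pruned
                for e in src:
                    c = t[(f, e)] / z
                    counts[(f, e)] += c
                    totals[e] += c
        # M-step with thresholding: discard entries below the cutoff,
        # which keeps the table sparse and bounds memory.
        t = defaultdict(float)
        for (f, e), c in counts.items():
            p = c / totals[e]
            if p >= threshold:
                t[(f, e)] = p
    return t

if __name__ == "__main__":
    pairs = [(["das", "haus"], ["the", "house"]),
             (["das", "buch"], ["the", "book"])]
    lexicon = em_lexicon(pairs, iterations=10, threshold=0.01)
    print(sorted(lexicon.items(), key=lambda kv: -kv[1])[:5])
```

Note the design point the abstract hints at: pruning inside the M-step means an entry dropped below the cutoff never re-enters the table, which is what bounds its memory footprint, while a class-level initialization concentrates probability mass on plausible word pairs before the first E-step instead of spreading it uniformly over the full vocabulary product.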
