Triangular Architecture for Rare Language Translation

05/13/2018
by Shuo Ren, et al.

Neural Machine Translation (NMT) performs poorly on a low-resource language pair (X, Z), especially when Z is a rare language. By introducing a third, resource-rich language Y, we propose a novel triangular training architecture (TA-NMT) that leverages the bilingual data (Y, Z) (which may be small) and (X, Y) (which can be rich) to improve the translation performance of low-resource pairs. In this triangular architecture, Z is taken as the intermediate latent variable, and the translation models involving Z are jointly optimized with a unified bidirectional EM algorithm whose objective is to maximize the translation likelihood of (X, Y). Empirical results demonstrate that our method significantly improves the translation quality of rare languages on the MultiUN and IWSLT2012 datasets, and achieves even better performance when combined with back-translation.
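As a minimal sketch of the training objective described above (our notation; the paper's exact derivation and choice of variational distribution q may differ): treating the rare-language sentence z as a latent variable, the likelihood of a rich-pair sentence pair (x, y) can be lower-bounded and maximized with EM:

```latex
% Marginal likelihood of a rich-pair sentence pair (x, y), with the
% rare-language sentence z as the latent variable, and the standard
% evidence lower bound that EM maximizes:
\log p(y \mid x)
  = \log \sum_{z} p(z \mid x)\, p(y \mid z)
  \ge \mathbb{E}_{q(z)}\!\left[ \log \frac{p(z \mid x)\, p(y \mid z)}{q(z)} \right]
```

The E-step tightens the bound by moving q(z) toward the posterior p(z | x, y); the M-step updates the two translation models p(z | x) and p(y | z). Applying the same bound in the reverse direction, to log p(x | y) with models p(z | y) and p(x | z), is one natural way to realize the bidirectional EM the abstract mentions, with all four translation models of Z optimized jointly on the rich pair (X, Y).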


