Improving Neural Machine Translation Models with Monolingual Data
Neural Machine Translation (NMT) has obtained state-of-the-art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic back-translation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English->German.
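The back-translation step described above can be sketched as a small data-preparation routine: target-side monolingual sentences are translated back into the source language with a reverse-direction model, and the resulting synthetic pairs are mixed with the real parallel data. The Python sketch below is illustrative only; the `translate` helper is a hypothetical wrapper around a pre-trained target-to-source model, not the authors' actual pipeline.

    # Minimal sketch of back-translation data augmentation (illustrative only).
    # `translate` is a hypothetical helper wrapping a pre-trained
    # target->source NMT model; it is not part of the paper's code.

    def back_translate(monolingual_target, translate):
        """Create synthetic (source, target) pairs from target-side monolingual text."""
        synthetic_sources = translate(monolingual_target)  # target -> source direction
        return list(zip(synthetic_sources, monolingual_target))

    def build_training_data(parallel_pairs, monolingual_target, translate):
        """Mix real parallel data with back-translated synthetic pairs."""
        synthetic_pairs = back_translate(monolingual_target, translate)
        return parallel_pairs + synthetic_pairs

    if __name__ == "__main__":
        # Dummy reverse model standing in for a real target->source system.
        dummy_reverse_model = lambda sents: ["<synthetic source for: %s>" % s for s in sents]
        parallel = [("ein Haus", "a house")]           # real bitext (source, target)
        mono_en = ["the weather is nice today"]        # target-side monolingual data
        mixed = build_training_data(parallel, mono_en, dummy_reverse_model)
        print(mixed)

The key design point from the abstract is that the NMT architecture itself is unchanged: the synthetic pairs are simply appended to the training set, so the decoder learns target-side fluency from the monolingual data in the same way it learns from genuine parallel sentences.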