Coverage Embedding Models for Neural Machine Translation

05/10/2016
by Haitao Mi, et al.

In this paper, we enhance attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate the issues of repeated and dropped translations in NMT. For each source word, our model starts with a full coverage embedding vector to track the coverage status, and then keeps updating it with neural networks as the translation proceeds. Experiments on a large-scale Chinese-to-English task show that our enhanced model significantly improves translation quality on various test sets over a strong large-vocabulary NMT system.
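To make the mechanism described above concrete, the following is a minimal sketch of how per-source-word coverage embeddings could be initialized to a shared "full coverage" vector and updated at each decoding step. It is not the paper's implementation: the GRU-based update, the use of the attention weight and target-word embedding as inputs, and all class, method, and parameter names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CoverageEmbedding(nn.Module):
    """Hypothetical coverage-embedding tracker (sketch, not the authors' code).

    Each source word gets a coverage embedding, initialized to a shared
    "full coverage" vector and updated with a GRU cell after every
    target word is emitted.
    """

    def __init__(self, dim):
        super().__init__()
        # Shared "full coverage" vector used to initialize every source position.
        self.full = nn.Parameter(torch.randn(dim))
        # GRU cell that consumes the attention weight and the emitted
        # target-word embedding, and updates the coverage embedding.
        self.update = nn.GRUCell(input_size=dim + 1, hidden_size=dim)

    def init_coverage(self, batch, src_len):
        # (batch, src_len, dim): one full-coverage vector per source word.
        return self.full.expand(batch, src_len, -1).contiguous()

    def step(self, coverage, attn, y_emb):
        # coverage: (batch, src_len, dim)  current coverage embeddings
        # attn:     (batch, src_len)       attention weights at this step
        # y_emb:    (batch, dim)           embedding of the emitted target word
        b, s, d = coverage.shape
        inp = torch.cat(
            [attn.unsqueeze(-1),                   # (b, s, 1)
             y_emb.unsqueeze(1).expand(b, s, d)],  # (b, s, d)
            dim=-1,
        ).reshape(b * s, d + 1)
        new_cov = self.update(inp, coverage.reshape(b * s, d))
        return new_cov.reshape(b, s, d)
```

In use, a decoder would call init_coverage once per sentence and step once per emitted target word, feeding the resulting coverage embeddings back into the attention scorer so that already-covered source words receive less attention.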
