Iterative Refinement for Machine Translation

by Roman Novak et al.

Existing machine translation decoding algorithms generate translations in a strictly monotonic fashion and never revisit previous decisions. As a result, earlier mistakes cannot be corrected at a later stage. In this paper, we present a translation scheme that starts from an initial guess and then makes iterative improvements that may revisit previous decisions. We parameterize our model as a convolutional neural network that predicts discrete substitutions to an existing translation, based on an attention mechanism over both the source sentence and the current translation output. By making fewer than one modification per sentence on average, we improve the output of a phrase-based translation system by up to 0.4 BLEU on WMT15 German-English translation.
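The refinement loop described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: `predict_substitution` stands in for the convolutional model with attention over the source and current translation, and is replaced here by a toy lexicon-based scorer so the loop runs end to end.

```python
# Hypothetical sketch of iterative refinement: start from an initial
# translation and repeatedly apply single-token substitutions, revisiting
# earlier decisions, until the model proposes no confident edit.

def predict_substitution(source, translation):
    """Stand-in for the CNN with dual attention: return the single edit
    (position, new_token, confidence) judged most likely to improve the
    translation, or None when no edit is proposed."""
    # Toy heuristic: a tiny German-English lexicon induced from the source.
    lexicon = {"das": "the", "haus": "haus", "ist": "is", "gross": "big"}
    lexicon = {"das": "the", "haus": "house", "ist": "is", "gross": "big"}
    wanted = [lexicon[w] for w in source if w in lexicon]
    for i, tok in enumerate(translation):
        if i < len(wanted) and tok != wanted[i]:
            return i, wanted[i], 1.0
    return None

def iterative_refine(source, initial_translation, max_steps=10, threshold=0.5):
    """Apply at most one substitution per step; stop once no proposed edit
    clears the confidence threshold (convergence)."""
    translation = list(initial_translation)
    for _ in range(max_steps):
        proposal = predict_substitution(source, translation)
        if proposal is None:
            break
        pos, token, confidence = proposal
        if confidence < threshold:  # model not confident any edit helps
            break
        translation[pos] = token    # overwrite a past decoding decision
    return translation

refined = iterative_refine(
    ["das", "haus", "ist", "gross"],
    ["a", "house", "is", "big"],    # initial guess with one error
)
```

With the toy scorer, the loop fixes the single wrong token (`"a"` → `"the"`) in one step and then converges; in the paper's setting the same control flow would be driven by the learned substitution model.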


