Iterative Refinement for Machine Translation

10/20/2016
by Roman Novak, et al.

Existing machine translation decoding algorithms generate translations in a strictly monotonic fashion and never revisit previous decisions. As a result, earlier mistakes cannot be corrected at a later stage. In this paper, we present a translation scheme that starts from an initial guess and then makes iterative improvements that may revisit previous decisions. We parameterize our model as a convolutional neural network that predicts discrete substitutions to an existing translation, based on an attention mechanism over both the source sentence and the current translation output. By making fewer than one modification per sentence on average, we improve the output of a phrase-based translation system by up to 0.4 BLEU on WMT15 German-English translation.
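The core idea of the abstract, iteratively applying discrete substitutions to a draft translation, can be illustrated with a toy sketch. Note that this is a minimal stand-in, not the paper's method: the learned convolutional substitution model is replaced here by a hypothetical `score` function, and improvements are found by exhaustive greedy search over single-token substitutions rather than by a trained predictor.

```python
from typing import Callable, List


def iterative_refinement(source: List[str],
                         translation: List[str],
                         vocab: List[str],
                         score: Callable[[List[str], List[str]], float],
                         max_iters: int = 10) -> List[str]:
    """Greedily refine a draft translation one token substitution at a time.

    At each iteration, try every single-token substitution and keep the one
    that most improves `score`; stop when no substitution helps. This mirrors
    the revisit-previous-decisions idea, with a toy scorer in place of the
    paper's learned model.
    """
    current = list(translation)
    for _ in range(max_iters):
        base = score(source, current)
        best_gain, best_edit = 0.0, None
        for i in range(len(current)):
            for w in vocab:
                if w == current[i]:
                    continue
                candidate = current[:i] + [w] + current[i + 1:]
                gain = score(source, candidate) - base
                if gain > best_gain:
                    best_gain, best_edit = gain, (i, w)
        if best_edit is None:  # no improving substitution left: converged
            return current
        i, w = best_edit
        current[i] = w  # revisit and correct an earlier decision
    return current
```

For instance, with a toy scorer that counts position-wise matches against a target sentence, the loop repairs a flawed draft one substitution per iteration until no edit improves the score.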


Related research

- 09/30/2018 · Phrase-Based Attentions: Most state-of-the-art neural machine translation systems, despite being ...
- 06/17/2017 · Towards Neural Phrase-based Machine Translation: In this paper, we present Neural Phrase-based Machine Translation (NPMT)...
- 12/07/2016 · Improving the Performance of Neural Machine Translation Involving Morphologically Rich Languages: The advent of the attention mechanism in neural machine translation mode...
- 08/23/2023 · Sign Language Translation with Iterative Prototype: This paper presents IP-SLT, a simple yet effective framework for sign la...
- 11/07/2019 · Improving Grammatical Error Correction with Machine Translation Pairs: We propose a novel data synthesis method to generate diverse error-corre...
- 02/27/2015 · Local Translation Prediction with Global Sentence Representation: Statistical machine translation models have made great progress in impro...
- 10/10/2017 · Confidence through Attention: Attention distributions of the generated translations are a useful bi-pr...
