Iterative Refinement for Machine Translation

10/20/2016 ∙ by Roman Novak, et al.

Existing machine translation decoding algorithms generate translations in a strictly monotonic fashion and never revisit previous decisions. As a result, earlier mistakes cannot be corrected at a later stage. In this paper, we present a translation scheme that starts from an initial guess and then makes iterative improvements that may revisit previous decisions. We parameterize our model as a convolutional neural network that predicts discrete substitutions to an existing translation based on an attention mechanism over both the source sentence as well as the current translation output. By making less than one modification per sentence, we improve the output of a phrase-based translation system by up to 0.4 BLEU on WMT15 German-English translation.


1 Introduction

Existing decoding schemes for translation generate outputs either left-to-right, as in phrase-based or neural translation models, or bottom-up, as in syntactic models [Koehn et al.2003, Galley et al.2004, Bahdanau et al.2015]. All decoding algorithms for these models make decisions that cannot be revisited at a later stage, even when the model discovers that it made an error earlier on.

On the other hand, humans generate all but the simplest translations by conceiving a rough draft of the solution and then iteratively improving it until it is deemed complete. The translator may modify a clause she tackled earlier at any point and make arbitrary modifications to improve the translation.

It can be argued that beam search allows recovery from mistakes, simply by providing alternative translations. However, reasonable beam sizes encode only a small number of binary decisions: a beam of size $k$ contains at most $\log_2 k$ binary decisions, so even a beam of several dozen hypotheses encodes fewer than six, all of which frequently share the same prefix [Huang2008].

In this paper, we present models that tackle translation similarly to humans. The model iteratively edits the target sentence until it cannot improve it further. As a preliminary study, we address the problem of finding mistakes in an existing translation via a simple classifier that predicts whether a word in a translation is correct (§2). Next, we model word substitutions for an existing translation via a convolutional neural network that attends to the source when suggesting substitutions (§3). Finally, we devise a model that attends both to the source as well as to the existing translation (§4). We repeatedly apply the models to their own output by determining the best substitution for each word in the previous translation and then choosing either one or zero substitutions per sentence. For the latter we consider various heuristics as well as a classifier-based selection method (§5).

Our results demonstrate that we can improve the output of a phrase-based translation system on WMT15 German-English data by up to 0.4 BLEU [Papineni et al.2002] by making, on average, less than one substitution per sentence (§6).

Our approach differs from automatic post-editing since it does not require post-edited text, which is a scarce resource [Simard et al.2007, Bojar et al.2016]. Our first model (§3) requires only parallel text, and our second model (§4) additionally requires the output of a baseline translation system.

2 Detecting Errors

Before correcting errors we consider the task of detecting mistakes in the output of an existing translation system.

In the following, we use lowercase boldface for vectors (e.g. $\mathbf{x}$), uppercase boldface for matrices (e.g. $\mathbf{F}$) and calligraphy for sets (e.g. $\mathcal{V}$). We use superscripts for indexing or slicing, e.g., $x^i$, $\mathbf{F}^i$, $\mathbf{F}^{i,j}$. We further denote $\mathbf{x}$ as the source sentence, $\mathbf{y}_{\mathrm{guess}}$ as the guess translation from which we start and which was produced by a phrase-based translation system (§6.1), and $\mathbf{y}_{\mathrm{ref}}$ as the reference translation. Sentences are vectors of indices indicating entries in a source vocabulary $\mathcal{V}_{\mathrm{src}}$ or a target vocabulary $\mathcal{V}_{\mathrm{trg}}$. For example, $\mathbf{x} = (x^1, \dots, x^{|\mathbf{x}|})$ with $x^i \in \mathcal{V}_{\mathrm{src}}$. We omit biases of linear layers to simplify the notation.

Error detection focuses on word-level accuracy, i.e., we predict for each token in a given translation whether it is present in the reference or not. This metric ignores word order; however, we hope that performance on this simple task gives a sense of how difficult it will be to modify translations to positive effect. A token $y^i_{\mathrm{guess}}$ in the candidate translation is deemed correct iff it is present in the reference translation: $y^i_{\mathrm{guess}} \in \{y^1_{\mathrm{ref}}, \dots, y^{|\mathbf{y}_{\mathrm{ref}}|}_{\mathrm{ref}}\}$. We build a neural network $f$ to predict the correctness of each token in $\mathbf{y}_{\mathrm{guess}}$ given the source sentence $\mathbf{x}$:

$$f(\mathbf{x}, \mathbf{y}_{\mathrm{guess}}) \in [0, 1]^{|\mathbf{y}_{\mathrm{guess}}|},$$

where $f^i(\mathbf{x}, \mathbf{y}_{\mathrm{guess}})$ estimates $P\!\left(y^i_{\mathrm{guess}} \in \{y^1_{\mathrm{ref}}, \dots, y^{|\mathbf{y}_{\mathrm{ref}}|}_{\mathrm{ref}}\}\right)$.
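
As a concrete illustration of this labeling, here is a minimal Python sketch (function name is ours, for illustration) using an example from the appendix:

```python
def correctness_labels(guess, reference):
    """Label each guess token 1 if it occurs anywhere in the reference
    (word order is ignored), else 0."""
    ref_set = set(reference)
    return [1 if tok in ref_set else 0 for tok in guess]

# "a" and "such" are absent from the reference, all other tokens match.
guess = "new york city is also a such .".split()
ref = "new york city is also considering this .".split()
print(correctness_labels(guess, ref))  # [1, 1, 1, 1, 1, 0, 0, 1]
```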

Architecture. We use an architecture similar to the word alignment model of [Legrand et al.2016]. The source and the target sequences are embedded via a lookup table that replaces each word type with a learned vector. The resulting vector sequences are then processed by alternating convolutions and non-linearities. This results in a vector $\mathbf{s}^j$ representing each position $j$ in the source and a vector $\mathbf{t}^i$ representing each position $i$ in the target. These vectors are then compared via a dot product. Our prediction estimates the probability of a target word being correct as the largest dot product between any source word and the guess word. We apply the logistic function $\sigma$ to this score:

$$f^i(\mathbf{x}, \mathbf{y}_{\mathrm{guess}}) = \sigma\!\left(\max_j \ \mathbf{s}^j \cdot \mathbf{t}^i\right).$$
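
To make the architecture concrete, the following is a minimal PyTorch sketch of the detection scorer. The paper's implementation was in Torch/Lua; the embedding dimension, hidden size, kernel width and layer count here are illustrative stand-ins rather than the exact configuration (cf. §6.1):

```python
import torch
import torch.nn as nn

class ErrorDetector(nn.Module):
    """Sketch of the detection scorer: sigmoid of the largest dot product
    between any source position and a given target position."""
    def __init__(self, vocab_src, vocab_trg, dim=256, hidden=256, k=5):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_src, dim)
        self.trg_emb = nn.Embedding(vocab_trg, dim)
        # Two temporal convolutions with non-linearities (as in §6.1).
        def tower():
            return nn.Sequential(
                nn.Conv1d(dim, hidden, k, padding=k // 2), nn.Tanh(),
                nn.Conv1d(hidden, hidden, k, padding=k // 2), nn.Tanh())
        self.src_conv, self.trg_conv = tower(), tower()

    def forward(self, x, y):  # x: (B, |x|), y: (B, |y|) index tensors
        s = self.src_conv(self.src_emb(x).transpose(1, 2))  # (B, H, |x|)
        t = self.trg_conv(self.trg_emb(y).transpose(1, 2))  # (B, H, |y|)
        scores = torch.bmm(t.transpose(1, 2), s)            # (B, |y|, |x|)
        return torch.sigmoid(scores.max(dim=-1).values)     # (B, |y|)

# Training sketch: z[i] = 1 iff y[i] appears in the reference, so
# loss = nn.BCELoss()(detector(x, y), z.float())
```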

Training. At training time we minimize the cross-entropy loss, with the binary supervision $z^i = 1$ for $y^i_{\mathrm{guess}} \in \{y^1_{\mathrm{ref}}, \dots, y^{|\mathbf{y}_{\mathrm{ref}}|}_{\mathrm{ref}}\}$, $z^i = 0$ otherwise.

Testing. At test time we threshold the model prediction to detect mistakes. We compare the performance of our network to the following baselines:

  1. Predicting that all candidate words are always correct ($f \equiv 1$), or always incorrect ($f \equiv 0$).

  2. Predicting a word as correct based on its prior probability of being correct, estimated on the training data.

We report word-level accuracy metrics in Table 1. While the model significantly improves over the baselines, the precision of labeling a word as a mistake remains low (62.7%, Table 1). The task of predicting mistakes is not easy, as previously shown in confidence estimation [Blatz et al.2004, Ueffing and Ney2007]. Also, one should bear in mind that this task cannot be solved with 100% accuracy since a sentence can be translated correctly in multiple different ways and we only have a single reference translation. In our case, our final refinement objective might be easier than error detection, as we do not need to detect all errors. We need to identify some of the locations where a substitution could improve BLEU, and at the same time our strategy should suggest these substitutions. This is the objective of the model introduced in the next section.

Metric (%)    All correct   All incorrect   Prior   Our model
Accuracy      68.0          32.0            71.3    76.0
Recall        0.00          100.00          36.0    61.3
Precision     100.0         32.0            58.4    62.7
F1            0.00          48.4            44.5    62.0
Table 1: Accuracy of the error detection model compared to baselines on the concatenation of the WMT test sets from 2008 to 2015. For precision, recall and F1 we consider a positive prediction as labeling a word as a mistake. The "all correct" baseline labels every word as correct, "all incorrect" labels every word as incorrect, and "prior" labels a word based on the prior probability estimated on the training data.

3 Attention-based Model

We introduce a model to predict modifications to a translation which can be trained on bilingual text. In §5 we discuss strategies to iteratively apply this model to its own output in order to improve a translation.

Our model $\mathbf{F}$ takes as input a source sentence $\mathbf{x}$ and a target sentence $\mathbf{y}$, and outputs a distribution over the vocabulary for each target position,

$$\mathbf{F}(\mathbf{x}, \mathbf{y}) \in [0, 1]^{|\mathbf{y}| \times |\mathcal{V}_{\mathrm{trg}}|}.$$

For each position $i$ and any word $v \in \mathcal{V}_{\mathrm{trg}}$, $\mathbf{F}^{i,v}(\mathbf{x}, \mathbf{y})$ estimates $P(y^i = v \mid \mathbf{x}, \mathbf{y}^{-i})$, the probability of word $v$ being at position $i$ given the source and the target context $\mathbf{y}^{-i}$ surrounding $i$. In other words, we learn a non-causal language model [Bengio et al.2003] which is also conditioned on the source $\mathbf{x}$.

Architecture. We rely on a convolutional model with attention. The source sentence is embedded into distributional space via a lookup table, followed by convolutions and non-linearities. The target sentence is also embedded in distributional space via a lookup table, followed by a single convolution and a succession of linear layers and non-linearities. The target convolution weights are zeroed at the center so that the model does not have access to the center word. This means that the model observes a fixed-size context of length $k - 1$ for any target position $i$, where $k$ refers to the convolution kernel width. These operations result in a vector $\mathbf{s}^j$ representing each position $j$ in the source sentence and a vector $\mathbf{t}^i$ representing each target context $\mathbf{y}^{-i}$.

Given a target position $i$, an attention module then takes as input these representations and outputs a weight $\alpha^{i,j}$ for each source position $j$:

$$\alpha^{i,j} = \frac{\exp\!\left(\mathbf{s}^j \cdot \mathbf{t}^i\right)}{\sum_{j'} \exp\!\left(\mathbf{s}^{j'} \cdot \mathbf{t}^i\right)}.$$

These weights correspond to dot-product attention scores [Luong et al.2015, Rush et al.2015]. The attention weights allow us to compute a source summary specific to each target context through a weighted sum,

$$\mathbf{a}^i = \sum_j \alpha^{i,j}\, \mathbf{s}^j.$$

Finally, this summary $\mathbf{a}^i$ is concatenated with the embedding of the target context $\mathbf{y}^{-i}$ obtained from the target lookup table, and a multilayer perceptron followed by a softmax computes $\mathbf{F}^{i,v}(\mathbf{x}, \mathbf{y})$ from the concatenated vector. Note that we could alternatively use $\mathbf{t}^i$ instead of the lookup table embedding, but our preliminary validation experiments showed better results with the lookup table output.
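
The following PyTorch sketch illustrates one way to realize this architecture. It is a simplified stand-in, not the paper's exact Torch/Lua implementation: the in-forward center masking, the neighbor-averaged `context_lookup` used for the target context embedding, and all dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleAttentionModel(nn.Module):
    """Sketch of the single attention substitution model (§3)."""
    def __init__(self, vocab_src, vocab_trg, dim=256, k=9):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_src, dim)
        self.trg_emb = nn.Embedding(vocab_trg, dim)
        self.src_conv = nn.Sequential(
            nn.Conv1d(dim, dim, k, padding=k // 2), nn.Tanh())
        # Target convolution; its center tap is zeroed in forward() so that
        # position i never sees its own word.
        self.trg_conv = nn.Conv1d(dim, dim, k, padding=k // 2)
        self.out = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.Tanh(), nn.Linear(dim, vocab_trg))

    @staticmethod
    def context_lookup(e, r=4):
        # Stand-in for "the embedding of the target context from the lookup
        # table": mean of the 2r neighboring lookup vectors, center excluded.
        pad = F.pad(e, (0, 0, r, r))          # (B, L+2r, D)
        win = pad.unfold(1, 2 * r + 1, 1)     # (B, L, D, 2r+1)
        return (win.sum(-1) - win[..., r]) / (2 * r)

    def forward(self, x, y):
        with torch.no_grad():                 # zero the center kernel column
            self.trg_conv.weight[:, :, self.trg_conv.kernel_size[0] // 2] = 0
        s = self.src_conv(self.src_emb(x).transpose(1, 2)).transpose(1, 2)
        t = torch.tanh(self.trg_conv(
            self.trg_emb(y).transpose(1, 2))).transpose(1, 2)
        alpha = F.softmax(torch.bmm(t, s.transpose(1, 2)), -1)  # (B,|y|,|x|)
        summary = torch.bmm(alpha, s)                           # a^i per i
        h = torch.cat([summary, self.context_lookup(self.trg_emb(y))], -1)
        return F.log_softmax(self.out(h), -1)  # log F^{i,v}(x, y)
```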

Training. The model is trained to maximize the (log) likelihood of the pairs $(\mathbf{x}, \mathbf{y}_{\mathrm{ref}})$ from the training set.

Testing. At test time the model is given $(\mathbf{x}, \mathbf{y}_{\mathrm{guess}})$, i.e., the source and the guess sentence. Similar to maximum likelihood training for left-to-right translation systems [Bahdanau et al.2015], the model is therefore not exposed to the same type of context in training (reference contexts from $\mathbf{y}_{\mathrm{ref}}$) and testing (guess contexts from $\mathbf{y}_{\mathrm{guess}}$).

Discussion. Our model is similar to the attention-based translation approach of [Bahdanau et al.2015]. In addition to using convolutions, the main difference is that we have access to both left and right target context since we start from an initial guess translation. Right target words are of course good predictors of the previous word. For instance, an early validation experiment with the setup from §6.1 showed a lower perplexity for this model compared to the same model trained with the left context only.

4 Dual Attention Model

We introduce a dual attention architecture to also make use of the guess at training time. This contrasts with the model introduced in the previous section, where the guess is not used during training. We are also free to use the entire guess, including the center word, in contrast to the reference, from which the center word must be removed.

At training time, the dual attention model takes three inputs: the source, the guess and the reference. At test time, the reference input is replaced by the guess. Specifically, the model $\mathbf{F}_{\mathrm{dual}}(\mathbf{x}, \mathbf{y}_{\mathrm{guess}}, \mathbf{y}_{\mathrm{ref}})$ estimates $P(y^i_{\mathrm{ref}} = v \mid \mathbf{x}, \mathbf{y}_{\mathrm{guess}}, \mathbf{y}^{-i}_{\mathrm{ref}})$ for each position $i$ in the reference sentence.

Architecture. The model builds upon the single attention model from the previous section by having two attention functions with distinct parameters. The first function takes the source sentence and the reference context and produces the source summary for this context, as in the single attention model. The second function takes the guess sentence and the reference context and produces a guess summary for this context. These two summaries are then concatenated with the lookup representation of the reference context and input to a final multilayer perceptron followed by a softmax. The reference lookup table contains the only parameters shared by the two attention functions.
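
A sketch of this two-attention structure, in the same conventions as the single attention sketch of §3, might look as follows. Note that the paper's dual model used MLP attention (§6.1), while this sketch keeps dot-product attention for brevity; the `AttentionSummary` helper and `context_lookup` stand-in are our illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def context_lookup(e, r=4):
    # Mean of the 2r neighboring lookup vectors, center excluded
    # (same stand-in as in the §3 sketch).
    pad = F.pad(e, (0, 0, r, r))
    win = pad.unfold(1, 2 * r + 1, 1)
    return (win.sum(-1) - win[..., r]) / (2 * r)

class AttentionSummary(nn.Module):
    """One attention function: embeds a memory sequence (source or guess),
    builds a center-masked reference context, and returns a per-position
    weighted summary of the memory."""
    def __init__(self, vocab_mem, ref_emb, dim=256, k=9):
        super().__init__()
        self.mem_emb = nn.Embedding(vocab_mem, dim)
        self.ref_emb = ref_emb                       # shared lookup table
        self.mem_conv = nn.Sequential(
            nn.Conv1d(dim, dim, k, padding=k // 2), nn.Tanh())
        self.ref_conv = nn.Conv1d(dim, dim, k, padding=k // 2)

    def forward(self, mem, ref):
        with torch.no_grad():                        # hide the center word
            self.ref_conv.weight[:, :, self.ref_conv.kernel_size[0] // 2] = 0
        m = self.mem_conv(self.mem_emb(mem).transpose(1, 2)).transpose(1, 2)
        r = torch.tanh(self.ref_conv(
            self.ref_emb(ref).transpose(1, 2))).transpose(1, 2)
        alpha = F.softmax(torch.bmm(r, m.transpose(1, 2)), dim=-1)
        return torch.bmm(alpha, m)                   # summary per position

class DualAttentionModel(nn.Module):
    def __init__(self, vocab_src, vocab_trg, dim=256):
        super().__init__()
        self.ref_emb = nn.Embedding(vocab_trg, dim)  # the only shared weights
        self.src_att = AttentionSummary(vocab_src, self.ref_emb, dim)
        self.guess_att = AttentionSummary(vocab_trg, self.ref_emb, dim)
        self.out = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.Tanh(), nn.Linear(dim, vocab_trg))

    def forward(self, x, y_guess, y_ref):            # test time: y_ref = y_guess
        h = torch.cat([self.src_att(x, y_ref),       # source summary
                       self.guess_att(y_guess, y_ref),  # guess summary
                       context_lookup(self.ref_emb(y_ref))], dim=-1)
        return F.log_softmax(self.out(h), dim=-1)
```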

Training. This model is trained similarly to the single attention model, the only difference being the conditioning on the guess $\mathbf{y}_{\mathrm{guess}}$.

Testing. At test time, the reference is unavailable and we replace $\mathbf{y}_{\mathrm{ref}}$ with $\mathbf{y}_{\mathrm{guess}}$, i.e., the model is given $(\mathbf{x}, \mathbf{y}_{\mathrm{guess}}, \mathbf{y}^{-i}_{\mathrm{guess}})$ to make a prediction at position $i$. In this case, the distribution shift when going from training to testing is less drastic than in §3, and the model retains access to the whole $\mathbf{y}_{\mathrm{guess}}$ via attention.

Discussion. Compared to the single attention model (§3), this model further reduces perplexity on our validation set. Since the dual attention model can attend to all guess words, it can copy any guess word if necessary. In our dataset, 68% of guess words are in the reference and can therefore be copied. This also means that for the remaining 32% of reference tokens the model should not copy; instead, the model should propose a substitution by itself (§6.1). During testing, the fact that the guess is input twice means that the guess and the prediction context always match. This makes the model more conservative in its predictions, suggesting tokens from the guess more often than the single attention model does. However, as we show in §6, this turns out to be beneficial in our setting.

5 Iterative Refinement

The models in §3 and §4 suggest word substitutions for each position in the candidate translation given the current surrounding context.

Applying a single substitution changes the context of the surrounding words and requires updating the model predictions. We therefore perform multiple rounds of substitution. At each round, the model computes its predictions, then our refinement strategy selects a substitution and performs it unless the strategy decides that it can no longer improve the target sentence. This means that the refinement procedure should be able to (i) prioritize the suggested substitutions, and (ii) decide to stop the iterative process.

We determine the best edit for each position $i$ in $\mathbf{y}_{\mathrm{guess}}$ by selecting the word with the highest probability estimate:

$$\hat{y}^i = \operatorname*{argmax}_{v \in \mathcal{V}_{\mathrm{trg}}} \mathbf{F}^{i,v}(\mathbf{x}, \mathbf{y}_{\mathrm{guess}}).$$

Then we compute a confidence score $s(i)$ in this prediction, possibly considering the probability assigned to the current guess word at the same position. These scores are used to select the next position to edit,

$$i^{\star} = \operatorname*{argmax}_{i} s(i),$$

and to stop the iterative process, i.e., when the confidence falls below a validated threshold $t$. We also limit the number of substitutions per sentence to a maximum of $N$. We consider different heuristics for $s$:

  • Score positions based on the model confidence in $\hat{y}^i$, i.e., $s_{\mathrm{conf}}(i) = \mathbf{F}^{i,\hat{y}^i}(\mathbf{x}, \mathbf{y}_{\mathrm{guess}})$.

  • Look for high confidence in the suggested substitution $\hat{y}^i$ and low confidence in the current word $y^i_{\mathrm{guess}}$: $s_{\mathrm{pr}}(i) = \mathbf{F}^{i,\hat{y}^i}(\mathbf{x}, \mathbf{y}_{\mathrm{guess}}) \times \left(1 - \mathbf{F}^{i,y^i_{\mathrm{guess}}}(\mathbf{x}, \mathbf{y}_{\mathrm{guess}})\right)$.

  • Train a simple binary classifier taking as input the scores of the best predicted word and of the current guess word: $s_{\mathrm{cl}}(i) = \mathrm{nn}\!\left(\mathbf{F}^{i,\hat{y}^i}(\mathbf{x}, \mathbf{y}_{\mathrm{guess}}),\ \mathbf{F}^{i,y^i_{\mathrm{guess}}}(\mathbf{x}, \mathbf{y}_{\mathrm{guess}})\right)$,

    where nn is a 2-layer neural network trained to predict whether a substitution leads to an increase in BLEU or not.

We compare the above strategies, different score thresholds $t$, and maximum numbers of allowed modifications per sentence $N$ in §6.2.
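
The overall procedure can be summarized in a short sketch. Shown below is the refinement loop with the product heuristic $s_{\mathrm{pr}}$; the function name, threshold value and the single-attention model interface are illustrative (for the dual model, the guess would also be passed as the reference input):

```python
import torch

@torch.no_grad()
def refine(model, x, y, threshold=0.5, max_subs=5):
    """Sketch of the iterative refinement loop (§5) with the product
    heuristic. `model` returns log-probabilities (positions x vocab)."""
    y = y.clone()
    for _ in range(max_subs):
        logp = model(x.unsqueeze(0), y.unsqueeze(0)).squeeze(0)
        probs = logp.exp()
        best_prob, best_word = probs.max(dim=-1)            # \hat{y}^i per i
        cur_prob = probs.gather(-1, y.unsqueeze(-1)).squeeze(-1)
        score = best_prob * (1 - cur_prob)                  # s_pr(i)
        score[best_word == y] = 0                           # skip no-op edits
        i = score.argmax()
        if score[i] < threshold:                            # confidence too
            break                                           # low -> stop
        y[i] = best_word[i]                                 # apply one edit
    return y
```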

6 Experiments & Results

We first describe our experimental setup and then discuss our results.

6.1 Experimental Setup

Data. We perform our experiments on the German-to-English WMT15 task [Bojar et al.2015] and benchmark our improvements against the output of a phrase-based translation system (PBMT; Koehn et al. 2007) on this language pair. In principle, our approach may start from any initial guess translation. We chose the output of a phrase-based system because it provides a good starting point that can be computed at high speed. This allows us to quickly generate guess translations for the millions of sentences in our training set.

All data was lowercased and numbers were mapped to a single special “number” token. Infrequent tokens were mapped to an “unknown” token, yielding fixed-size vocabularies for English and German.

For training we used millions of sentence triples (source, reference, and the guess translation output by the PBMT system). A validation set of 180K triples was used for neural network hyper-parameter selection and learning rate scheduling. Finally, two small subsets of the validation set were used to train the classifier discussed in §5 and to select the best model architecture (single vs. dual attention) and refinement heuristic.

The initial guess translations were generated with phrase-based systems trained on the same training data as our refinement models. We decoded the training data with ten systems, each trained on 90% of the training data, in order to decode the remaining 10%. This procedure avoids the bias of generating guess translations with a system that was trained on the same data.

Implementation. All models were implemented in Torch [Collobert et al.2011] and trained with stochastic gradient descent to minimize the cross-entropy loss.

For the error detection model in §2 we used two temporal convolutions on top of the lookup table, each followed by a non-linearity, to compute the source and target representations $\mathbf{s}$ and $\mathbf{t}$. The receptive fields of the convolutions spanned 5 words, resulting in final outputs summarizing a context of 9 words.

For the single attention model we share the context embedding dimension between source and target and use a context of 4 words to the left and to the right, resulting in a window of size 9 for the source and 8 for the target. The final multilayer perceptron has 2 layers (see §3).

For the dual attention model we used 2-layer context embeddings (a convolution followed by a linear layer, with a tanh in between). The final multilayer perceptron has 2 layers (see §4). In this setup, we replaced dot-product attention with MLP attention [Bahdanau et al.2015] as it performed better on the validation set.

All weights were initialized randomly apart from the word embedding layers, which were pre-computed with Hellinger Principal Component Analysis [Lebret and Collobert2014] applied to the bilingual co-occurrence matrix constructed on the training set. The word embedding dimension was set to 256 for both languages and all models.
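
As a rough illustration of this pre-computation: Hellinger PCA embeds each word by applying PCA to the square-rooted rows of a co-occurrence probability matrix, so that Euclidean distance in the projected space approximates the Hellinger distance between co-occurrence distributions. The sketch below is a generic NumPy version; the construction of the paper's bilingual co-occurrence matrix is not spelled out here and the details are assumptions.

```python
import numpy as np

def hellinger_pca_embeddings(cooc, dim=256):
    """Sketch of Hellinger PCA [Lebret and Collobert2014]: `cooc` is a
    (vocab x contexts) count matrix; construction details are assumptions."""
    # Row-normalize counts into P(context | word), then take the square
    # root so that Euclidean distance matches the Hellinger distance.
    probs = cooc / np.maximum(cooc.sum(axis=1, keepdims=True), 1)
    hell = np.sqrt(probs)
    # PCA via SVD of the centered matrix; keep the top `dim` components.
    hell -= hell.mean(axis=0, keepdims=True)
    u, s, _ = np.linalg.svd(hell, full_matrices=False)
    return u[:, :dim] * s[:dim]          # (vocab x dim) embedding matrix
```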

6.2 Results

Table 2 compares BLEU of the single and dual attention models over the validation set. It reports the performance for the best threshold $t$ and the best maximum number of modifications per sentence $N$ for the different refinement heuristics.

The best performing configuration is the dual attention model with the product-based heuristic thresholded at $t = 0.5$ for up to $N = 5$ substitutions. We report the test performance of this configuration in Table 3. The appendix shows examples of system outputs. Overall, the system obtains a small but consistent improvement over all the test sets.

Model    Heuristic    Best t   Best N   BLEU
PBMT     (baseline)   –        –        30.02
Single   confidence   0.8      3        30.21
Single   product      0.7      3        30.20
Single   classifier   0.5      1        30.19
Dual     confidence   0.6      7        30.32
Dual     product      0.5      5        30.35
Dual     classifier   0.4      2        30.33
Table 2: Validation results of different model architectures, substitution heuristics, decision thresholds $t$, and maximum number of allowed modifications $N$. BLEU is reported on a 3041-sentence subset of the validation set.
newstest   PBMT BLEU   Our BLEU   Δ
2008       21.29       21.60      +0.31
2009       20.42       20.74      +0.32
2010       22.82       23.13      +0.31
2011       21.43       21.65      +0.22
2012       21.78       22.10      +0.32
2013       24.99       25.37      +0.38
2014       22.76       23.07      +0.31
2015       24.40       24.80      +0.40
Mean       22.49       22.81      +0.32
Table 3: Test BLEU on the WMT test sets after applying our refinement procedure.

Figure 1 plots BLEU versus the number of allowed substitutions and Figure 2 shows the percentage of actually modified tokens. The dual attention model (§4) outperforms the single attention model (§3). Both models achieve most of their improvement by making only 1-2 substitutions per sentence. Thereafter, only very few substitutions are made, with little impact on BLEU. Figure 2 shows that the models saturate quickly, indicating convergence of the refinement output to a state in which the models have no further suggestions.

Figure 1: BLEU as a function of the total number of substitutions allowed per sentence. Values are reported on a small validation subset for the single and dual attention models using the best scoring heuristic and threshold (cf. Table 2).
Figure 2: Percentage of modified tokens on the validation set as a function of the total number of substitutions allowed per sentence. All models modify only a small fraction of tokens.

To isolate the model contribution from the scoring heuristic, we replace the scoring heuristic with an oracle while keeping the rest of the refinement strategy the same. We consider two types of oracle: The full oracle takes the suggested substitution for each position and then selects which single position should be edited or whether to stop editing altogether. This oracle has the potential to find the largest BLEU improvement. The partial oracle does not select the position, it just takes the heuristic suggestion for the current step and decides whether to edit or stop the process. Notice that both oracles have very limited choice, as they are only able to perform substitutions suggested by our model.

Figure 3 reports the performance of our best single and dual attention models compared to both oracles on the validation set; Figure 4 shows the corresponding number of substitutions. In the dual attention setting, both the full and the partial oracle improve over the baseline by a larger margin than the heuristic-based strategy.

Figure 3: BLEU as a function of the total number of substitutions allowed per sentence. Left: best dual-attention refinement strategy (Dual_product) versus two oracles. The full oracle (Dual_full_oracle) takes the suggested substitutions $\hat{y}^i$ as input and selects the single position to edit. The partial oracle (Dual_partial_oracle) lets the model choose the position as well, but has the ability to prevent a substitution if it does not improve BLEU. Right: same for the best single attention setup.
Figure 4: Percentage of modified tokens as a function of the total number of substitutions allowed per sentence for the dual attention model (left) and the single attention model (right) compared to the full and partial oracles (cf. Figure 3).

In the single-attention setup the oracles yield a higher improvement, and they also perform more substitutions. This supports our earlier conjecture (§4) that the dual attention model is more conservative and prone to copying words from the guess compared to the single attention model. While helpful in validation, the cautious nature of the dual model restricts the options of the oracle.

We make several observations. First, the word-prediction models provide high-quality substitutions that can lead to significant improvements in BLEU, despite both oracles being limited to the substitutions $\hat{y}^i$ suggested by the model. This is supported by the simple confidence heuristic performing very close to the more sophisticated strategies (Table 2).

Second, it is important to have a good estimate of whether a substitution will improve BLEU or not. The full oracle acts as an estimate of having a real-valued confidence measure and replaces the scoring heuristic $s$. The partial oracle assesses the benefit of having a binary-valued confidence measure: it can only prevent our model from making a BLEU-damaging substitution. However, confidence estimation is a difficult task, as we found in §2.

Finally, we demonstrate that a significant improvement in BLEU can be achieved through very few substitutions. The full and partial oracle modify only 1.69% and 0.99% of tokens, or 0.4 and 0.24 modifications per sentence, respectively. Of course, oracle substitution assumes access to the reference which is not available at test time. At the same time, our oracle is more likely to generate fluent sentences since it only has access to substitutions deemed likely by the model as opposed to an unrestricted oracle that is more likely to suggest improvements leading to unreasonable sentences. Note that our oracles only allow substitutions (no deletions or insertions), and only those that raise BLEU in a monotonic fashion, with each single refinement improving the score of the previous translation.

7 Conclusion and Future Work

We present a simple iterative decoding scheme for machine translation which is motivated by the inability of existing models to revisit incorrect decoding decisions made in the past. Our models improve an initial guess translation via simple word substitutions over several rounds. At each round, the model has access to the source as well as the output of the previous round, which is an entire translation of the source. This is different from existing decoding algorithms, which make predictions based on a limited partial translation and are unable to revisit previous erroneous decoding decisions.

Our results increase translation accuracy by up to 0.4 BLEU on WMT15 German-English translation while modifying, on average, less than one word per sentence. In our experimental setup we start with the output of a phrase-based translation system, but our model is general enough to deal with arbitrary guess translations.

We see several avenues for future work. One could experiment with different initial guess translations, such as the output of a neural translation system, or even the result of a simple dictionary-based word-by-word translation scheme. One can also envision editing a number of guess translations simultaneously by expanding the dual attention mechanism to attend over multiple guesses.

So far we have only experimented with word substitution; one may also add deletions, insertions, or even swaps of single or multi-word units. Finally, the dual-attention model in §4 may present a good starting point for neural multi-source translation [Schroeder et al.2009].

Acknowledgments

We would like to thank Marc’Aurelio Ranzato and Sumit Chopra for helpful discussions related to this work.

References

  • [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Association for Computational Linguistics, May.
  • [Bengio et al.2003] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155, March.
  • [Blatz et al.2004] John Blatz, Erin Fitzgerald, George F. Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchís, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In Proc. of COLING.
  • [Bojar et al.2015] Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Chris Hokamp, Matthias Huck, Varvara Logacheva, and Pavel Pecina, editors. 2015. Proceedings of the Tenth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Lisbon, Portugal, September.
  • [Bojar et al.2016] Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno-Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana L. Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin M. Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In WMT.
  • [Collobert et al.2011] R. Collobert, K. Kavukcuoglu, and C. Farabet. 2011. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop.
  • [Galley et al.2004] Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. of HLT-NAACL, pages 273–280, Boston, MA, USA, May.
  • [Huang2008] Liang Huang. 2008. Forest-based algorithms in natural language processing. Ph.D. thesis, University of Pennsylvania.
  • [Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of HLT-NAACL, pages 127–133, Edmonton, Canada, May.
  • [Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL.
  • [Lebret and Collobert2014] Rémi Lebret and Ronan Collobert. 2014. Word embeddings through Hellinger PCA. In Proc. of EACL.
  • [Legrand et al.2016] Joel Legrand, Michael Auli, and Ronan Collobert. 2016. Neural network-based word alignment through score aggregation. In Proceedings of WMT.
  • [Luong et al.2015] Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Lluís Màrquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, EMNLP, pages 1412–1421. The Association for Computational Linguistics.
  • [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • [Rush et al.2015] Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for sentence summarization. In Proc. of EMNLP.
  • [Schroeder et al.2009] Josh Schroeder, Trevor Cohn, and Philipp Koehn. 2009. Word lattices for multi-source translation. In Proc. of EACL.
  • [Simard et al.2007] Michel Simard, Cyril Goutte, and Pierre Isabelle. 2007. Statistical phrase-based post-editing. In Proc. of NAACL.
  • [Ueffing and Ney2007] Nicola Ueffing and Hermann Ney. 2007. Word-level confidence estimation for machine translation. Computational Linguistics, 33:9–40.

Appendix A Examples

Source: new york city erwägt ebenfalls ein solches .
Reference: new york city is also considering this .
PBMT guess: new york city is also a such .
Ours: new york city is also considering this .

Source: papa , ich bin 22 !
Reference: dad , i 'm 22 !
PBMT guess: papa , i am 22 .
Ours: papa , i am 22 !

Source: esme nussbaum senkte ihren kopf .
Reference: esme nussbaum lowered her head .
PBMT guess: esme nussbaum slashed its head .
Ours: esme nussbaum lowered her head .

Source: großbritannien importiert 139.000 tonnen .
Reference: uk imports 139,000 tons .
PBMT guess: britain imported 139,000 tonnes .
Ours: britain imports 139,000 tonnes .

Source: alles in deutschland wird subventioniert , von der kohle über autos bis zur landwirtschaft .
Reference: everything is subsidised in germany , from coal , to cars and farmers .
PBMT guess: all in germany , subsidised by the coal on cars to agriculture .
Ours: everything in germany is subsidised by the coal on cars to agriculture .

Source: drei männer , die laut aussage der behörden als fahrer arbeiteten , wurden wegen des besitzes und des beabsichtigten verkaufs von marihuana und kokain angeklagt .
Reference: three men who authorities say worked as drivers were charged with possession of marijuana and cocaine with intent to distribute .
PBMT guess: three men who , according to the authorities have been worked as a driver , because of the possession and the planned sale of marijuana and cocaine .
Ours: three men who , according to the authorities , were working as a driver , because of the possession and the intended sale of marijuana and cocaine .