In neural machine translation (NMT) and other natural language generation tasks, it is common practice to improve likelihood-trained models by further tuning their parameters to explicitly maximize an automatic metric of system accuracy – for example, BLEU Papineni et al. (2002) or METEOR Denkowski and Lavie (2014). Directly optimizing accuracy metrics involves backpropagating through discrete decoding decisions, and thus is typically accomplished with structured prediction techniques like reinforcement learning Ranzato et al. (2016), minimum risk training Shen et al. (2015), and other specialized methods Wiseman and Rush (2016). Generally, these methods work by repeatedly generating a translation under the current parameters (via decoding, sampling, or loss-augmented decoding), comparing the generated translation to the reference, receiving some reward based on their similarity, and finally updating model parameters to increase future rewards.
In the vast majority of work, discriminative training has focused on optimizing BLEU (or its sentence-factored approximation). This is not surprising given that BLEU is the standard metric for system comparison at test time. However, BLEU is not without problems when used as a training criterion. Specifically, since BLEU is based on n-gram precision, it aggressively penalizes lexical differences even when candidates might be synonymous with or similar to the reference: if an n-gram does not exactly match a sub-sequence of the reference, it receives no credit. While the pessimistic nature of BLEU differs from human judgments and is therefore problematic, it may, in practice, pose a more substantial problem for a different reason: BLEU is difficult to optimize because it does not assign partial credit. As a result, learning cannot hill-climb through intermediate hypotheses with high synonymy or semantic similarity, but low n-gram overlap. Furthermore, where BLEU does assign credit, the objective is often flat: a wide variety of candidate translations can have the same degree of overlap with the reference and therefore receive the same score. This, again, makes optimization difficult because gradients in this region give poor guidance.
In this paper we propose SimiLe, a simple alternative to matching-based metrics like BLEU for use in discriminative NMT training. As a new reward, we introduce a measure of semantic similarity between the generated hypotheses and the reference translations evaluated by an embedding model trained on a large external corpus of paraphrase data. Using an embedding model to evaluate similarity allows the range of possible scores to be continuous and, as a result, introduces fine-grained distinctions between similar translations. This allows for partial credit and reduces the penalties on semantically correct but lexically different translations. Moreover, since the output of SimiLe is continuous, it provides more informative gradients during the optimization process by distinguishing between candidates that would be similarly scored under matching-based metrics like BLEU. Lastly, we show in our analysis that SimiLe has an additional benefit over BLEU by translating words with heavier semantic content more accurately.
To define an exact metric, we draw on the burgeoning field of research aimed at measuring semantic textual similarity (STS) between two sentences Le and Mikolov (2014); Pham et al. (2015); Wieting et al. (2016); Hill et al. (2016); Conneau et al. (2017); Pagliardini et al. (2017). Specifically, we start with the method of Wieting and Gimpel (2018), which learns paraphrastic sentence representations using a contrastive loss and a parallel corpus induced by backtranslating bitext. Wieting and Gimpel showed that simple models that average word or character-trigram embeddings can be highly effective for semantic similarity. The strong performance, domain robustness, and computational efficiency of these models make them good candidates for experimenting with incorporating semantic similarity into neural machine translation. For the purpose of discriminative NMT training, we augment these basic models with two modifications: we add a length penalty to avoid short translations, and we calculate similarity by composing the embeddings of subword units, rather than words or character trigrams. We find that using subword units also yields better performance on the STS evaluations and is more efficient than character trigrams.
We conduct experiments with our new metric on the 2018 WMT Bojar et al. (2018) test sets, translating four languages, Czech, German, Russian, and Turkish, into English. Results demonstrate that optimizing SimiLe during training results in not only improvements in the same metric during test, but also in consistent improvements in BLEU. Further, we conduct a human study to evaluate system outputs and find significant improvements in human-judged translation quality for all but one language. Finally, we provide an analysis of our results in order to give insight into the observed gains in performance. Tuning for metrics other than BLEU has not (to our knowledge) been extensively examined for NMT, and we hope this paper provides a first step towards broader consideration of training metrics for NMT.
2 SimiLe Reward Function
Since our goal is to develop a continuous metric of sentence similarity, we borrow from a line of work focused on domain agnostic semantic similarity metrics. We motivate our choice for applying this line of work to training translation models in Section 2.1. Then in Section 2.2, we describe how we train our similarity metric (SIM), how we compute our length penalty, and how we tie these two terms together to form SimiLe.
Our SimiLe metric is based on the sentence similarity metric of Wieting and Gimpel (2018), which we choose as a starting point because it has state-of-the-art unsupervised performance on a host of domains for semantic textual similarity. (In semantic textual similarity, the goal is to produce scores that correlate with human judgments on the degree to which two sentences have the same semantics. In embedding-based models, including the models used in this paper, the score is produced by the cosine of the two sentence embeddings.) Being both unsupervised and domain agnostic provides evidence that the model generalizes well to unseen examples. This is in contrast to supervised methods, which are often imbued with the bias of their training data.
Our sentence encoder averages 300-dimensional subword-unit embeddings to create a sentence representation. (We use SentencePiece, which is available at https://github.com/google/sentencepiece; we limited the vocabulary to 30,000 tokens.) The similarity of two sentences, SIM, is obtained by encoding both with this encoder and then calculating the cosine similarity of the two resulting embeddings.
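As an illustrative sketch of this computation (with a toy two-dimensional embedding table standing in for the learned 300-dimensional subword embeddings):

```python
import math

def encode(pieces, emb):
    """Average the subword-unit embeddings to form a sentence embedding."""
    dim = len(next(iter(emb.values())))
    return [sum(emb[p][i] for p in pieces) / len(pieces) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def sim(pieces_a, pieces_b, emb):
    """SIM: cosine similarity between the two averaged sentence embeddings."""
    return cosine(encode(pieces_a, emb), encode(pieces_b, emb))
```

In the real model, `emb` is the learned subword embedding table; here it is a hypothetical lookup supplied by the caller.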
We follow Wieting and Gimpel (2018) in learning the parameters of the encoder. The training data is a set of paraphrase pairs, and we use a margin-based loss:

ℓ(s, s′) = max(0, δ − SIM(s, s′) + SIM(s, t))

where (s, s′) is a paraphrase pair, δ is the margin, and t is a negative example. The intuition is that we want the two texts to be more similar to each other than to their negative examples. To select the negative example t, we choose the most similar sentence in a collection of mini-batches called a mega-batch. (We use 16.77 million paraphrase pairs filtered from the ParaNMT corpus Wieting and Gimpel (2018). The corpus is filtered by a sentence similarity score based on the paragram-phrase model from Wieting et al. (2016) and by word trigram overlap, which is calculated by counting word trigrams in the reference and translation, then dividing the number of shared trigrams by the total number in the reference or translation, whichever has fewer. These two measures balance semantic similarity (similarity score) against diversity (trigram overlap); we kept all sentences in ParaNMT with a similarity score of at least 0.5 and a trigram overlap score of at most 0.2. Recently, Wieting et al. (2019) showed that strong performance on semantic similarity tasks can also be achieved using bitext directly, without the need for backtranslation.)
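A minimal sketch of this training criterion over precomputed similarity scores (the margin value shown is illustrative, not taken from the text):

```python
def margin_loss(sim_pos, sim_neg, delta=0.4):
    """Hinge loss on similarity scores: the paraphrase pair should score
    higher than the negative pair by at least the margin delta.
    delta=0.4 is an assumed illustrative value."""
    return max(0.0, delta - sim_pos + sim_neg)

def pick_negative(sims_to_candidates):
    """Mega-batch negative selection: among the candidate sentences,
    return the index of the one most similar to the source sentence."""
    return max(range(len(sims_to_candidates)),
               key=sims_to_candidates.__getitem__)
```

The loss is zero once the positive pair beats the hardest negative by the margin, so training focuses on difficult negatives found within the mega-batch.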
| Model | STS 2012 | STS 2013 | STS 2014 | STS 2015 | STS 2016 |
| --- | --- | --- | --- | --- | --- |
| SIM (300 dim.) | 69.2 | 60.7 | 77.0 | 80.1 | 78.4 |
| Wieting and Gimpel (2018) | 67.8 | 62.7 | 77.4 | 80.3 | 78.1 |
Finally, we note that SIM is robust to domain, as shown by its strong performance on the STS tasks, which cover a broad range of domains. SIM was trained primarily on subtitles, while we use news data to train and evaluate our NMT models. Despite this domain shift, we show improved performance over a baseline tuned with BLEU, providing further evidence of the method's robustness.
Our initial experiments showed that when using just the similarity metric, SIM, there was nothing preventing the model from learning to generate long sentences, often at the expense of repeating words. This is the opposite case from BLEU, where n-gram precision is not penalized for generating too few words; for this reason, BLEU includes a brevity penalty (BP) that penalizes sentences shorter than the reference. The penalty is:

BP(r, h) = min(1, e^(1 − |r| / |h|))
where r is the reference and h is the generated hypothesis, with |r| and |h| their respective lengths. We experimented with modifying this penalty to only penalize generated sentences that are longer than the target (i.e., swapping r and h in the equation). However, we found that this favored short sentences. We instead penalize a generated sentence if its length differs at all from that of the target. Our length penalty is therefore:

LP(r, h) = e^(1 − max(|r|, |h|) / min(|r|, |h|))
Our final metric, which we refer to as SimiLe, is defined as follows:

SimiLe(r, h) = LP(r, h)^α · SIM(r, h)

In initial experiments we found that performance could be improved slightly by lessening the influence of LP, so we fix α to 0.25.
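Following the definitions above, a minimal sketch (assuming the length penalty takes the max/min exponential form and is raised to α before scaling a precomputed SIM score):

```python
import math

def length_penalty(ref_len, hyp_len):
    """LP(r, h): equals 1 when the lengths match and decays exponentially
    as the lengths diverge, in either direction."""
    return math.exp(1.0 - max(ref_len, hyp_len) / min(ref_len, hyp_len))

def simile_score(sim_score, ref_len, hyp_len, alpha=0.25):
    """SimiLe: the similarity score scaled by LP raised to alpha = 0.25,
    which lessens the influence of the length penalty."""
    return length_penalty(ref_len, hyp_len) ** alpha * sim_score
```

Note that, unlike BLEU's brevity penalty, the penalty here is symmetric: a hypothesis twice as long as the reference is penalized exactly as much as one half as long.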
There is a vast literature on metrics for automatically evaluating machine translation outputs (for instance, the WMT metrics task papers, e.g., Bojar et al. (2017)). In this paper we demonstrate that training towards metrics other than BLEU has significant practical advantages in the context of NMT. While this could be done with any number of metrics, we experiment with a single semantic similarity metric and, due to resource constraints, leave a more extensive empirical comparison of other evaluation metrics to future work. That said, we designed SimiLe as a semantic similarity model with high accuracy, domain robustness, and computational efficiency, to be used in minimum risk training for machine translation. (SimiLe, including time to segment the sentence, is about 20 times faster than METEOR when executed on an NVIDIA GeForce GTX 1080 GPU.)
While semantic similarity is not an exact replacement for measuring machine translation quality, we argue that it serves as a decent proxy, at least as far as minimum risk training is concerned. To test this, we compare the similarity metric term in SimiLe (SIM) to BLEU and METEOR on two machine translation quality datasets (the segment-level data from newstest2015 and newstest2016, available at http://statmt.org/wmt18/metrics-task.html; the former contains 7 language pairs and the latter 5) and report their correlation with human judgments in Table 2. Machine translation quality measures account for more than semantics, as they also capture other factors like fluency. A manual error analysis, together with the fact that the machine translation correlations in Table 2 are close while the semantic similarity correlations in Table 1 are not, suggests that the difference between METEOR and SIM largely lies in fluency. (Semantic similarity evaluation is on the SemEval Semantic Textual Similarity (STS) datasets from 2012-2016 Agirre et al. (2012, 2013, 2014, 2015, 2016). In the SemEval STS competitions, teams create models that need to work well both on domains represented in the training data and on hidden domains revealed at test time. Our model and those of Wieting and Gimpel (2018), in contrast to the best-performing STS systems, do not use any manually labeled training examples nor any other linguistic resources beyond the ParaNMT corpus Wieting and Gimpel (2018).) However, not capturing fluency is something that can be ameliorated by adding a down-weighted maximum-likelihood (MLE) loss to the minimum risk loss. This was done by Edunov et al. (2018), and we use it in our experiments as well.
3 Machine Translation Preliminaries
Our model and optimization procedure are based on prior work on structured prediction training for neural machine translation Edunov et al. (2018) and are implemented in Fairseq (https://github.com/pytorch/fairseq). Our architecture follows the paradigm of an encoder-decoder with soft attention Bahdanau et al. (2015), and we use the same architecture for each language pair in our experiments. We use gated convolutional encoders and decoders Gehring et al. (2017), with 4 layers for the encoder and 3 for the decoder, setting the hidden state size for all layers to 256 and the filter width of the kernels to 3. We use byte-pair encoding Sennrich et al. (2015), with a vocabulary size of 40,000 for the combined source and target vocabulary. The dimension of the BPE embeddings is set to 256.
Following Edunov et al. (2018), we first train models with maximum likelihood and label smoothing Szegedy et al. (2016); Pereyra et al. (2017), setting the confidence penalty of label smoothing to 0.1. Next, we fine-tune the model with a weighted average of the minimum risk training objective (Risk) Shen et al. (2015) and the maximum-likelihood objective (MLE), where the expected risk is defined as:

L_Risk = Σ_{u ∈ U(x)} cost(t, u) · p(u|x) / Σ_{u′ ∈ U(x)} p(u′|x)

where u is a candidate hypothesis, U(x) is the set of candidate hypotheses, and t is the reference. Therefore, our fine-tuning objective becomes:

L = γ · L_MLE + (1 − γ) · L_Risk
We tune the interpolation weight γ in our experiments. In minimum risk training, we aim to minimize the expected cost. In our case that cost is 1 − BLEU(t, h) or 1 − SimiLe(t, h), where t is the target and h is the generated hypothesis. As is commonly done, we use a smoothed version of BLEU, adding 1 to all n-gram counts except unigram counts; this prevents BLEU scores from being overly sparse Lin and Och (2004). We generate candidates for minimum risk training from n-best lists with 8 hypotheses and do not include the reference in the set of candidates.
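The two pieces just described can be sketched as follows: add-one smoothed sentence-level BLEU (Lin and Och, 2004), and the expected risk over an n-best list with renormalized model probabilities:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu(ref, hyp, max_n=4):
    """Sentence-level BLEU with 1 added to all n-gram counts except
    unigram counts, times the standard brevity penalty."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        match = sum(min(c, r[g]) for g, c in h.items())  # clipped matches
        total = sum(h.values())
        if n == 1:
            if match == 0:
                return 0.0
            log_prec += math.log(match / total)
        else:
            log_prec += math.log((match + 1) / (total + 1))  # add-one smoothing
    bp = min(1.0, math.exp(1.0 - len(ref) / len(hyp)))  # brevity penalty
    return bp * math.exp(log_prec / max_n)

def expected_risk(costs, log_probs):
    """Expected risk over an n-best list U(x): each hypothesis cost
    (e.g. 1 - BLEU or 1 - SimiLe) is weighted by the model probability
    renormalized over the candidates."""
    weights = [math.exp(lp) for lp in log_probs]
    z = sum(weights)
    return sum(c * w / z for c, w in zip(costs, weights))
```

This is a sketch of the standard formulations, not the Fairseq implementation; it assumes non-empty token lists.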
We optimize our models using Nesterov’s accelerated gradient method Sutskever et al. (2013) with a learning rate of 0.25 and momentum of 0.99. Gradients are renormalized to norm 0.1 Pascanu et al. (2012). We train the maximum-likelihood objective for 200 epochs and the combined objective for 10. Then, for both objectives, we anneal the learning rate by reducing it by a factor of 10 after each epoch until it falls below a minimum threshold. Model selection is done by selecting the model with the lowest loss on the validation set. To select models across the different hyperparameter settings, we chose the model with the highest performance on the validation set for the evaluation being considered.
Training models with minimum risk is expensive, but we wanted to evaluate in a difficult, realistic setting using a diverse set of languages. Therefore, we experiment on four language pairs, translating Czech (cs-en), German (de-en), Russian (ru-en), and Turkish (tr-en) into English (en). For training data, we use News Commentary v13 (http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz) provided by WMT Bojar et al. (2018) for cs-en, de-en, and ru-en. For training the Turkish system, we used the WMT 2018 parallel data, which consisted of the SETIMES2 corpus (http://opus.lingfil.uu.se/SETIMES2.php). The validation and development sets for de-en, cs-en, and ru-en were the WMT 2016 and WMT 2017 validation sets. For tr-en, the validation set was the WMT 2016 validation set and the development sets were the WMT 2017 validation and test sets. Test sets for each language were the official WMT 2018 test sets.
4.2 Automatic Evaluation
We first use corpus-level BLEU and the corpus-average SIM score to evaluate the outputs of the different experiments. It is important to note that in this case SIM is not the same as SimiLe: SIM is only the semantic similarity component of SimiLe and therefore lacks the length penalization term. We use this metric to estimate the degree to which the semantic content of a translation and its reference overlap. When evaluating semantic similarity, we find that SIM marginally outperforms SimiLe, as shown in Table 1.
We compare systems trained with 4 objectives:
MLE: Maximum likelihood with label smoothing
BLEU: Minimum risk training with 1-BLEU as the cost
SimiLe: Minimum risk training with 1-SimiLe as the cost
Half: Minimum risk training with a new cost that mixes BLEU and SimiLe equally: 1 − (BLEU + SimiLe) / 2
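Under this reading, the Half cost averages the two individual costs; a sketch, taking precomputed sentence-level scores in [0, 1]:

```python
def half_cost(bleu_score, simile_score):
    """'Half' cost for minimum risk training: equal mix of the BLEU cost
    and the SimiLe cost, i.e. 1 - (BLEU + SimiLe) / 2."""
    return 0.5 * (1.0 - bleu_score) + 0.5 * (1.0 - simile_score)
```

Equivalently, it is the arithmetic mean of the two costs 1 − BLEU and 1 − SimiLe used by the other two risk-trained systems.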
The results are shown in Table 4. From the table, we see that training with SimiLe performs best under both the BLEU and SIM evaluation metrics for all four languages. It is interesting that using SimiLe in the cost leads to larger BLEU improvements than using BLEU alone; we examine the reasons for this in the following sections. It is important to emphasize that increasing BLEU was not the goal of our proposed method (human evaluations were our target), but it is a welcome surprise. Similarly, using BLEU as the cost function leads to large gains in SIM, though these gains are not as large as when using SimiLe in training.
4.3 Human Evaluation
We also perform human evaluation, comparing MLE training with minimum risk training using SimiLe and BLEU as costs. We selected 200 sentences along with their translations from the respective test sets of each language. The sentences were selected nearly at random, with the only constraints being that they be between 3 and 25 tokens long and that the outputs of the SimiLe and BLEU systems were not identical. The translators then assigned a score from 0-5 based on how well the translation conveyed the information contained in the reference. (The wording of the evaluation is given in Section A.1.)
From the table, we see that minimum risk training with SimiLe as the cost scores highest across all language pairs except Turkish. Turkish is also the language with the lowest test BLEU (see Table 4). An examination of the human-annotated outputs shows that in Turkish (unlike the other languages) repetition was a significant problem for the SimiLe system, in contrast to MLE or BLEU. We hypothesize that one weakness of SimiLe may be that it needs to start from some minimum level of translation quality in order to be most effective. The biggest improvement over BLEU is on de-en and ru-en, which have the highest MLE BLEU scores in Table 4, further lending credence to this hypothesis.
5 Quantitative Analysis
We next analyze our model using the validation set of the de-en data unless stated otherwise. We chose this dataset for the analysis since it had the highest MLE BLEU scores of the languages studied.
5.1 Partial Credit
We analyzed the distribution of the cost function for both SimiLe and BLEU on the de-en validation set before any fine-tuning. Again using an n-best list size of 8, we computed the cost for all generated translations and plotted their histogram in Figure 1. The plots show that the distributions of scores for SimiLe and BLEU are quite different. Neither distribution is a symmetric Gaussian; however, the distribution of BLEU scores is significantly more skewed, with much higher costs. This tight clustering of costs provides less information during training.
Next, for all n-best lists, we computed all pairwise differences between the scores of the hypotheses in the beam; for a beam size of 8, this yields 28 differences per list. We found that, of the 86,268 differences, the difference between scores in an n-best list is nonzero 99.0% of the time for SimiLe, but only 85.1% of the time for BLEU. The average difference is 4.3 for BLEU and 4.8 for SimiLe, showing that SimiLe makes finer-grained distinctions among candidates.
5.2 Validation Loss
We next analyze the validation loss during training of the de-en model for both using SimiLe and BLEU as costs. We use the hyperparameters of the model with the highest BLEU on the validation set for model selection. Since the distributions of costs vary significantly between SimiLe and BLEU, with BLEU having much higher costs on average, we compute the validation loss with respect to both cost functions for each of the two models.
In Figure 2, we plot the risk objective for the first 10 epochs of training. In the top plot, we see that the risk objective for both BLEU and SimiLe decreases much faster when using SimiLe to train than BLEU. The expected BLEU also reaches a significantly lower value on the validation set when training with SimiLe. The same trend occurs in the lower plot, this time measuring the expected SimiLe cost on the validation set.
From these plots, we see that optimizing with SimiLe results in much faster training. It also reaches a lower validation loss, and from Table 4 we have already shown that the SimiLe and BLEU scores on the test set are higher for models trained with SimiLe. To underscore how much faster the models trained with SimiLe reach better performance, we evaluated after just 1 epoch of training and found that the model trained with BLEU had SIM/BLEU scores of 86.71/27.63 while the model trained with SimiLe had scores of 87.14/28.10. A similar trend was observed in the other language pairs as well, where the validation curves show a much larger drop-off after a single epoch when training with SimiLe than with BLEU.
5.3 Effect of n-best List Size
As mentioned in Section 3, we used an n-best list size of 8 in our minimum risk training experiments. In this section, we train de-en translation models with various n-best list sizes and investigate the relationship between list size and test set performance when using SimiLe or BLEU as a cost. We hypothesized that since BLEU is not as fine-grained a metric as SimiLe, expanding the number of candidates would close the gap between BLEU and SimiLe, as BLEU would have access to more candidates with more diverse scores. The results of our experiment, shown in Figure 3, instead show that models trained with SimiLe improve in BLEU and SIM more significantly as n-best list size increases. This is possibly because small n-best sizes inherently upper-bound performance regardless of training metric, while SimiLe is a better measure overall once the n-best list is sufficiently large to learn from.
| Reference | System | Score | Output |
| --- | --- | --- | --- |
| I will tell you my personal opinion of him. | BLEU | 2 | I will have a personal opinion on it. |
| | SimiLe | 4 | I will tell my personal opinion about it. |
| | MLE | 2 | I will have a personal view of it. |
| In my case, it was very varied. | BLEU | 0 | I was very different from me. |
| | SimiLe | 4 | For me, it was very different. |
| | MLE | 1 | In me, it was very different. |
| We’re making the city liveable. | BLEU | 0 | We make the City of Life Life. |
| | SimiLe | 3 | We make the city viable. |
| | MLE | 0 | We make the City of Life. |
| The head of the White House said that the conversation was ridiculous. | BLEU | 0 | The White House chairman, the White House chip called a ridiculous. |
| | SimiLe | 4 | The White House’s head, he described the conversation as ridiculous. |
| | MLE | 1 | The White House chief, he called the White House, he called a ridiculous. |
| According to the former party leaders, so far the discussion “has been predominated by expressions of opinion based on emotions, without concrete arguments”. | BLEU | 3 | According to former party leaders, the debate has so far had to be ”elevated to an expression of opinion without concrete arguments.” |
| | SimiLe | 5 | In the view of former party leaders, the debate has been based on emotions without specific arguments.” |
| | MLE | 4 | In the view of former party leaders, in the debate, has been based on emotions without specific arguments.” |
| We are talking about the 21st century: servants. | BLEU | 4 | We are talking about the 21st century: servants. |
| | SimiLe | 1 | In the 21st century, the 21st century is servants. |
| | MLE | 0 | In the 21st century, the 21st century is servants. |
| Prof. Dr. Caglar continued: | BLEU | 3 | They also reminded them. |
| | SimiLe | 0 | There are no Dr. Caglar. |
| | MLE | 3 | They also reminded them. |
5.4 Lexical F1
We next attempt to elucidate exactly which parts of the translations improve when using the SimiLe cost compared to BLEU. We compute the F1 scores for target word types based on their frequency and their coarse part-of-speech tag (as labeled by SpaCy, https://github.com/explosion/spaCy) on the test sets for each language and show the results in Table 6. (We use compare-mt Neubig et al. (2019), available at https://github.com/neulab/compare-mt.)
From the table, we see that training with SimiLe helps produce low-frequency words more accurately, a fact that is consistent with the part-of-speech tag analysis in the second part of the table. Wieting and Gimpel (2017) noted that highly discriminative parts of speech, such as nouns, proper nouns, and numbers, contribute the most to the sentence embeddings. Other works Pham et al. (2015); Wieting et al. (2016) have also found that when training semantic embeddings using an averaging function, the embeddings that bear the most information about the meaning have larger norms. We also see that these same parts of speech (nouns, proper nouns, numbers) have the largest difference in F1 scores between SimiLe and BLEU. Other parts of speech like symbols and interjections have high F1 scores as well, and words belonging to these classes are both relatively rare and highly discriminative with respect to the semantics of the sentence. (Note that in the data, interjections (INTJ) often correspond to words like Yes and No, which tend to be very important to the semantics of the translation in these cases.) In contrast, parts of speech that in general convey little semantic information and are more common, like determiners, show very little difference in F1 between the two approaches.
6 Qualitative Analysis
| System | Output | BLEU | SIM | ΔBLEU | ΔSIM |
| --- | --- | --- | --- | --- | --- |
| Reference | Workers have begun to clean up in Röszke. | - | - | - | - |
| BLEU | Workers are beginning to clean up workers. | 29.15 | 69.12 | - | - |
| SimiLe | In Röszke, workers are beginning to clean up. | 25.97 | 95.39 | -3.18 | 26.27 |
| Reference | All that stuff sure does take a toll. | - | - | - | - |
| BLEU | None of this takes a toll. | 25.98 | 54.52 | - | - |
| SimiLe | All of this is certain to take its toll. | 18.85 | 77.20 | -7.13 | 32.46 |
| Reference | Another advantage is that they have fewer enemies. | - | - | - | - |
| BLEU | Another benefit : they have less enemies. | 24.51 | 81.20 | - | - |
| SimiLe | Another advantage: they have fewer enemies. | 58.30 | 90.76 | 56.69 | 9.56 |
| Reference | I don’t know how to explain - it’s really unique. | - | - | - | - |
| BLEU | I do not know how to explain it - it is really unique. | 39.13 | 97.42 | - | - |
| SimiLe | I don’t know how to explain - it is really unique. | 78.25 | 99.57 | 39.12 | 2.15 |
We show examples of the outputs of all three systems on the test sets in Table 7, along with their human scores on a 0-5 scale. The first 5 examples show cases where SimiLe better captures the semantics than BLEU or MLE. In the first three, the SimiLe model adds a crucial word that the other two systems omit, which makes a significant difference in preserving the semantics of the translation. These words include verbs (tells), prepositions (For), adverbs (viable) and nouns (conversation). The fourth and fifth examples also show how SimiLe can lead to more fluent outputs and is effective on longer sentences.
The last two examples are failure cases of using SimiLe. In the first, it repeats a phrase, just as the MLE model does, and is unable to smooth it out as the BLEU model does. In the last example, SimiLe again tries to include words (Dr. Caglar) significant to the semantics of the sentence; however, it misses on the rest of the translation, despite being the only system to include this noun phrase.
7 Metric Comparison
We took all outputs on the validation set of the de-en data from our best SimiLe and BLEU models, as measured by validation BLEU, and sorted the paired outputs by the following statistic:

Δ = |SIM(h_B, r) − SIM(h_S, r)| − |BLEU(h_B, r) − BLEU(h_S, r)|

where r is the reference, h_B and h_S are the outputs of the BLEU-trained and SimiLe-trained systems, and BLEU in this case refers to sentence-level BLEU. Examples of some of the highest and lowest scoring sentence pairs are shown in Table 8, along with the system they came from (either trained with a BLEU cost or a SimiLe cost).
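One way to read this statistic, given the signed ΔBLEU and ΔSIM columns in Table 8, is as the absolute SIM gap minus the absolute sentence-level-BLEU gap between the two systems' outputs; a hedged sketch of that reading, over precomputed per-sentence scores:

```python
def sort_statistic(sim_bleu_sys, sim_simile_sys, bleu_bleu_sys, bleu_simile_sys):
    """How much more the two systems' outputs for one reference differ
    under SIM than under sentence-level BLEU. High values: big semantic
    gap, small BLEU gap; low values: the reverse."""
    return abs(sim_bleu_sys - sim_simile_sys) - abs(bleu_bleu_sys - bleu_simile_sys)
```

Sorting descending surfaces pairs like the top half of Table 8; sorting ascending surfaces pairs like the bottom half.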
The top half of the table shows examples where the difference in SIM scores is large, but the difference in BLEU scores is small. From these examples, we see that when SIM scores are very different, there is a difference in the meanings of the generated sentences. However, when the BLEU scores are very close, this is not the case. In fact, in these examples, less accurate translations have higher BLEU scores than more accurate ones. In the first sentence, an important clause is left out (in Röszke) and in the second, the generated sentence from the BLEU system actually negates the reference, despite having a higher BLEU score than the sentence from the SimiLe system.
Conversely, the bottom half of the table shows examples where the difference in BLEU scores is large, but the difference in SIM scores is small. From these examples, we can see that when BLEU scores are very different, the semantics of the sentence can still be preserved. However, the SIM score of these generated sentences with the references are close to each other, as we would hope to see. These examples illustrate a well-known problem with BLEU where synonyms, punctuation changes, and other small deviations from the reference can have a large impact on the score. As can be seen from the examples, these are less of a problem for the SIM metric.
8 Related Work
The seminal work on training machine translation systems to optimize particular evaluation measures was performed by Och (2003), who introduced minimum error rate training (MERT) and used it to optimize several different metrics in statistical MT (SMT). This was followed by a large number of alternative methods for optimizing machine translation systems based on minimum risk Smith and Eisner (2006), maximum margin Watanabe et al. (2007), or ranking Hopkins and May (2011), among many others.
Within the context of SMT, there have also been studies on the stability of particular metrics for optimization. Cer et al. (2010) compared several metrics to optimize for SMT, finding BLEU to be robust as a training metric and finding that the most effective and most stable metrics for training are not necessarily the same as the best metrics for automatic evaluation. The WMT shared tasks included tunable metric tasks in 2011 (Callison-Burch et al., 2011) and again in 2015 (Stanojević et al., 2015) and 2016 (Jawaid et al., 2016). In these tasks, participants submitted metrics to optimize during training or combinations of metrics and optimizers, given a fixed SMT system. The 2011 results showed that nearly all metrics performed similarly to one another. The 2015 and 2016 results showed more variation among metrics, but also found that BLEU was a strong choice overall, echoing the results of Cer et al. (2010). We have shown that our metric stabilizes training for NMT more than BLEU, which is a promising result given the limited success of the broad spectrum of previous attempts to discover easily tunable metrics in the context of SMT.
Some researchers have found success in terms of improved human judgments when training to maximize metrics other than BLEU for SMT. Lo et al. (2013) and Beloucif et al. (2014) trained SMT systems to maximize variants of MEANT, a metric based on semantic roles. Liu et al. (2011) trained systems using TESLA, a family of metrics based on softly matching -grams using lemmas, WordNet synsets, and part-of-speech tags. We have demonstrated that our metric similarly leads to gains in performance as assessed by human annotators, and our method has an auxiliary advantage of being much simpler than these previous hand-engineered measures.
Shen et al. (2016) explored minimum risk training for NMT, finding that a sentence-level BLEU score led to the best performance even when evaluated under other metrics. These results differ from the usual results obtained for SMT systems, in which tuning to optimize a metric leads to the best performance on that metric (Och, 2003). Edunov et al. (2018) compared structured losses for NMT, also using sentence-level BLEU. They found risk to be an effective and robust choice, so we use risk as well in this paper.
We have proposed SimiLe, an alternative to BLEU for use as a reward in minimum risk training. We have found that SimiLe not only outperforms BLEU in automatic evaluations but also correlates better with human judgments. Our analysis also shows that using this metric eases optimization and that the resulting translations tend to be richer in correct, semantically important words.
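To illustrate why a continuous similarity reward eases optimization, consider a score based on the cosine similarity of mean-pooled token embeddings: a near-synonym receives partial credit where n-gram precision would give none. This is a hedged stand-in sketch, not SimiLe itself, whose embedding model and length penalty are defined elsewhere in the paper:

```python
import math

def embedding_similarity(ref_vecs, hyp_vecs):
    """Cosine similarity of mean-pooled token embeddings.

    A minimal stand-in for a continuous semantic-similarity reward:
    unlike n-gram overlap, it gives partial credit to hypotheses whose
    words are similar (but not identical) to the reference.
    """
    def mean_pool(vecs):
        dim = len(vecs[0])
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

    r, h = mean_pool(ref_vecs), mean_pool(hyp_vecs)
    dot = sum(a * b for a, b in zip(r, h))
    norm = math.sqrt(sum(a * a for a in r)) * math.sqrt(sum(b * b for b in h))
    return dot / norm

# An exact match scores ~1.0; a hypothesis whose first token embedding is
# merely close to the reference's still scores high, so gradients toward
# it are informative rather than flat.
ref = [[1.0, 0.0], [0.0, 1.0]]
hyp_close = [[0.9, 0.1], [0.0, 1.0]]
print(embedding_similarity(ref, hyp_close))  # high despite no exact match
```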
This is the first time to our knowledge that a continuous metric of semantic similarity has been proposed for NMT optimization and shown to outperform sentence-level BLEU, and we hope that this can be the starting point for more research in this direction.
Appendix A
A.1 Annotation Instructions
Below are the annotation instructions used by translators for evaluation.
0. The meaning is completely different or the output is meaningless
1. The topic is the same but the meaning is different
2. Some key information is different
3. The key information is the same but the details differ
4. Meaning is essentially equal but some expressions are unnatural
5. Meaning is essentially equal and the two sentences are well-formed English
- Agirre et al. (2015) Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015).
- Agirre et al. (2014) Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014).
- Agirre et al. (2016) Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. Proceedings of SemEval, pages 497–511.
- Agirre et al. (2013) Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, volume 1, pages 32–43.
- Agirre et al. (2012) Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation. Association for Computational Linguistics.
- Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations.
- Beloucif et al. (2014) Meriem Beloucif, Chi-kiu Lo, and Dekai Wu. 2014. Improving MEANT based semantically tuned SMT. In Proceedings of 11th International Workshop on Spoken Language Translation (IWSLT 2014).
- Bojar et al. (2017) Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, et al. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169–214.
- Bojar et al. (2018) Ondřej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303. Association for Computational Linguistics.
- Callison-Burch et al. (2011) Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 22–64. Association for Computational Linguistics.
- Cer et al. (2010) Daniel Cer, Christopher D. Manning, and Daniel Jurafsky. 2010. The best lexical metric for phrase-based statistical MT system optimization. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 555–563. Association for Computational Linguistics.
- Conneau et al. (2017) Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark.
- Denkowski and Lavie (2014) Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation.
- Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, David Grangier, et al. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of NAACL, volume 1, pages 355–364.
- Gehring et al. (2017) Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122.
- Hill et al. (2016) Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
- Hopkins and May (2011) Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1352–1362. Association for Computational Linguistics.
- Jawaid et al. (2016) Bushra Jawaid, Amir Kamran, Miloš Stanojević, and Ondřej Bojar. 2016. Results of the WMT16 tuning shared task. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 232–238. Association for Computational Linguistics.
- Koehn (2004) Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 388–395.
- Le and Mikolov (2014) Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053.
- Lin and Och (2004) Chin-Yew Lin and Franz Josef Och. 2004. Orange: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of the Conference on Computational Linguistics, pages 501–507.
- Liu et al. (2011) Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2011. Better evaluation metrics lead to better machine translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 375–384, Edinburgh, Scotland, UK. Association for Computational Linguistics.
- Lo et al. (2013) Chi-kiu Lo, Karteek Addanki, Markus Saers, and Dekai Wu. 2013. Improving machine translation by training against an automatic semantic frame based evaluation metric. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 375–381. Association for Computational Linguistics.
- Neubig et al. (2019) Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, and John Wieting. 2019. compare-mt: A tool for holistic comparison of language generation systems. In Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL) Demo Track, Minneapolis, USA.
- Och (2003) Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics.
- Pagliardini et al. (2017) Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2017. Unsupervised learning of sentence embeddings using compositional n-gram features. arXiv preprint arXiv:1703.02507.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA.
- Pascanu et al. (2012) Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063.
- Pereyra et al. (2017) Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548.
- Pham et al. (2015) Nghia The Pham, Germán Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. Jointly optimizing word representations for lexical and sentential tasks with the c-phrase model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
- Ranzato et al. (2016) Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In Proceedings of the 4th International Conference on Learning Representations.
- Sennrich et al. (2015) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
- Shen et al. (2015) Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433.
- Shen et al. (2016) Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683–1692. Association for Computational Linguistics.
- Smith and Eisner (2006) David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 787–794. Association for Computational Linguistics.
- Stanojević et al. (2015) Miloš Stanojević, Amir Kamran, and Ondřej Bojar. 2015. Results of the WMT15 tuning shared task. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 274–281. Association for Computational Linguistics.
- Sutskever et al. (2013) Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. 2013. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pages 1139–1147.
- Szegedy et al. (2016) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826.
- Watanabe et al. (2007) Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
- Wieting et al. (2016) John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In Proceedings of the International Conference on Learning Representations.
- Wieting and Gimpel (2017) John Wieting and Kevin Gimpel. 2017. Revisiting recurrent networks for paraphrastic sentence embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2078–2088, Vancouver, Canada.
- Wieting and Gimpel (2018) John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Melbourne, Australia. Association for Computational Linguistics.
- Wieting et al. (2019) John Wieting, Kevin Gimpel, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Simple and effective paraphrastic similarity from parallel translations. In Proceedings of ACL.
- Wiseman and Rush (2016) Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296–1306, Austin, Texas. Association for Computational Linguistics.