
Reward Optimization for Neural Machine Translation with Learned Metrics

04/15/2021
by   Raphael Shu, et al.

Neural machine translation (NMT) models are conventionally trained with token-level negative log-likelihood (NLL), which does not guarantee that the generated translations are optimized for a chosen sequence-level evaluation metric. Multiple approaches have been proposed to train NMT with BLEU as the reward in order to improve the metric directly. However, the gains in BLEU were reported not to translate into real quality improvements, limiting industrial adoption. Recently, it has become clear to the community that BLEU correlates poorly with human judgment when evaluating state-of-the-art models. This has led to the emergence of model-based evaluation metrics, which have been shown to correlate much more strongly with human judgment. In this paper, we investigate whether it is beneficial to optimize NMT models with the state-of-the-art model-based metric BLEURT. We propose a contrastive-margin loss for fast and stable reward optimization that is suitable for large NMT models. In experiments, we perform automatic and human evaluations comparing models trained with smoothed BLEU and BLEURT against baseline models. Results show that reward optimization with BLEURT increases the metric scores by a large margin, in contrast to the limited gains from training with smoothed BLEU. Human evaluation shows that models trained with BLEURT improve the adequacy and coverage of translations. Code is available via https://github.com/naver-ai/MetricMT.
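The abstract names a contrastive-margin loss but does not spell it out. A minimal sketch, assuming (as is common for margin-based reward objectives) that the loss contrasts the highest- and lowest-reward candidates from a pool of model outputs and applies a hinge margin to their sequence log-probabilities; the function name and the fixed `margin` value are illustrative, not the paper's actual formulation:

```python
def contrastive_margin_loss(seq_log_probs, rewards, margin=1.0):
    """Hinge loss that pushes the model to score the highest-reward
    candidate above the lowest-reward one by at least `margin`
    in sequence log-probability.

    seq_log_probs: per-candidate log-probabilities under the NMT model.
    rewards:       per-candidate metric scores (e.g. BLEURT).
    """
    best = max(range(len(rewards)), key=lambda i: rewards[i])
    worst = min(range(len(rewards)), key=lambda i: rewards[i])
    # Zero once the margin is satisfied; grows linearly otherwise.
    return max(0.0, margin - (seq_log_probs[best] - seq_log_probs[worst]))
```

Because only two candidates per source sentence enter the loss, each update needs just two forward passes over the pool's scores, which is one plausible reason such an objective stays fast and stable for large models.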


Code Repositories

MetricMT

The official code repository for MetricMT, a reward optimization method for NMT with learned metrics.
