Improve the Evaluation of Fluency Using Entropy for Machine Translation Evaluation Metrics

08/10/2015
by Hui Yu, et al.

The widely-used automatic evaluation metrics cannot adequately reflect the fluency of translations. N-gram-based metrics such as BLEU limit the maximum length of matched fragments to n and cannot capture matched fragments longer than n, so they reflect fluency only indirectly. METEOR, which is not limited to n-grams, uses the number of matched chunks but does not consider the length of each chunk. In this paper, we propose an entropy-based method that reflects the fluency of a translation through the distribution of its matched words. The method can easily be combined with widely-used automatic evaluation metrics to improve the evaluation of fluency. Experiments show that the sentence-level correlations of BLEU and METEOR are improved after combining them with the entropy-based method on WMT 2010 and WMT 2012.
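The abstract does not give the exact formulation, so the following is a minimal sketch of one plausible realization of an entropy-based fluency term: matched words in the hypothesis are grouped into maximal contiguous chunks, the entropy of the distribution of matched words over those chunks is computed, and the resulting fluency score is combined with a base metric such as BLEU or METEOR. All function names (matched_chunks, chunk_entropy, fluency_score, combine), the normalization by the maximum entropy, and the weighted geometric combination are illustrative assumptions, not the authors' method.

```python
import math
from typing import List


def matched_chunks(matched: List[bool]) -> List[int]:
    """Lengths of maximal runs of matched hypothesis words.

    matched[i] is True if hypothesis word i is aligned to some reference
    word (e.g. by exact/stem/synonym matching, as in a METEOR-style
    alignment).  This alignment step itself is assumed, not shown.
    """
    chunks, run = [], 0
    for m in matched:
        if m:
            run += 1
        elif run:
            chunks.append(run)
            run = 0
    if run:
        chunks.append(run)
    return chunks


def chunk_entropy(chunks: List[int]) -> float:
    """Entropy of the distribution of matched words over chunks.

    p_i = chunk_i / total matched words.  One long chunk gives entropy 0;
    all matched words isolated gives the maximum, log(#matched).
    """
    total = sum(chunks)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in chunks)


def fluency_score(matched: List[bool]) -> float:
    """Map the entropy to (0, 1]; 1.0 means all matches are contiguous."""
    chunks = matched_chunks(matched)
    total = sum(chunks)
    if total <= 1:
        return 1.0
    return 1.0 - chunk_entropy(chunks) / math.log(total)


def combine(base_score: float, matched: List[bool], alpha: float = 0.5) -> float:
    """Weighted geometric combination of a base metric score (e.g. BLEU or
    METEOR) with the entropy-based fluency term; alpha is a tunable weight."""
    return (base_score ** (1.0 - alpha)) * (fluency_score(matched) ** alpha)


if __name__ == "__main__":
    contiguous = [True, True, True, True, False, False]  # one chunk of 4
    scattered = [True, False, True, False, True, True]   # chunks of 1, 1, 2
    print(fluency_score(contiguous))   # 1.0
    print(fluency_score(scattered))    # 0.25
    print(combine(0.40, scattered))    # ~0.32
```

Under this sketch, a hypothesis whose matched words form one long run receives a fluency term of 1.0, while the same number of matches scattered as isolated words scores close to 0, which is the behavior the abstract attributes to the distribution of matched words.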
