To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation

07/22/2021
by Tom Kocmi, et al.

Automatic metrics are commonly used as the exclusive tool for declaring that one machine translation system produces better quality than another. The community's choice of automatic metric guides research directions and industrial developments by deciding which models are deemed better. Evaluations of metric correlations have so far been limited to small collections of human judgements. In this paper, we assess how reliable metrics are relative to human judgements on, to the best of our knowledge, the largest collection of human judgements available. Taking human judgement as the gold standard, we investigate which metrics achieve the highest accuracy when producing system-level quality rankings for pairs of systems, which is the scenario closest to how metrics are actually used. Furthermore, we evaluate the performance of various metrics across different language pairs and domains. Lastly, we show that the sole use of BLEU negatively affected the past development of improved models. We release the collection of human judgements for 4380 systems, comprising 2.3M annotated sentences, for further analysis and replication of our work.
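The pairwise, system-level ranking evaluation described above can be illustrated with a minimal sketch: for every pair of systems, check whether the metric orders the two systems the same way as the human judgement, and report the fraction of agreeing pairs. The function name, data layout, and tie handling below are illustrative assumptions, not the paper's released code.

```python
import itertools

def pairwise_accuracy(metric_scores, human_scores):
    """Fraction of system pairs ranked in the same order by the metric
    and by human judgement (ties skipped for simplicity; the paper's
    actual protocol may treat ties differently).

    metric_scores, human_scores: dicts mapping system name -> system-level score.
    """
    agree, total = 0, 0
    for sys_a, sys_b in itertools.combinations(metric_scores, 2):
        metric_delta = metric_scores[sys_a] - metric_scores[sys_b]
        human_delta = human_scores[sys_a] - human_scores[sys_b]
        if metric_delta == 0 or human_delta == 0:
            continue  # skip tied pairs in this toy version
        total += 1
        if (metric_delta > 0) == (human_delta > 0):
            agree += 1
    return agree / total if total else float("nan")

# Toy usage with made-up scores for three hypothetical systems.
metric = {"sysA": 34.1, "sysB": 33.7, "sysC": 35.0}  # e.g. a BLEU-like score
human = {"sysA": 0.12, "sysB": 0.05, "sysC": 0.20}   # e.g. averaged human ratings
print(f"pairwise accuracy: {pairwise_accuracy(metric, human):.2f}")
```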


Related research:

06/11/2020 - Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics
  Automatic metrics are fundamental for the development and evaluation of ...

05/27/2021 - Online Learning Meets Machine Translation Evaluation: Finding the Best Systems with the Least Human Effort
  In Machine Translation, assessing the quality of a large amount of autom...

09/28/2022 - An Automatic Evaluation of the WMT22 General Machine Translation Task
  This report presents an automatic evaluation of the general machine tran...

04/15/2021 - Rethinking Automatic Evaluation in Sentence Simplification
  Automatic evaluation remains an open research question in Natural Langua...

10/09/2020 - Evaluating and Characterizing Human Rationales
  Two main approaches for evaluating the quality of machine-generated rati...

04/29/2021 - Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation
  Human evaluation of modern high-quality machine translation systems is a...

01/12/2016 - Comparison and Adaptation of Automatic Evaluation Metrics for Quality Assessment of Re-Speaking
  Re-speaking is a mechanism for obtaining high quality subtitles for use ...
