Extrinsic Evaluation of Machine Translation Metrics

12/20/2022
by Nikita Moghe, et al.

Automatic machine translation (MT) metrics are widely used to distinguish the translation quality of machine translation systems across relatively large test sets (system-level evaluation). However, it is unclear whether automatic metrics are reliable at distinguishing good translations from bad translations at the sentence level (segment-level evaluation). In this paper, we investigate how useful MT metrics are at detecting the success of a machine translation component when it is placed in a larger platform with a downstream task. We evaluate the segment-level performance of the most widely used MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks (dialogue state tracking, question answering, and semantic parsing). For each task, we only have access to a monolingual task-specific model. We calculate the correlation between each metric's ability to predict a good/bad translation and success/failure on the final task in the Translate-Test setup. Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of the downstream outcomes. We also find that the scores produced by neural metrics are hard to interpret, largely because their ranges are undefined. Our analysis suggests that future MT metrics should be designed to produce error labels rather than scores, to facilitate extrinsic evaluation.
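
To make the evaluation protocol concrete, the following is a minimal sketch of the segment-level correlation analysis described above. The metric scores and downstream outcomes are hypothetical placeholders, and the statistics used here (Kendall's tau and point-biserial correlation) are illustrative choices for relating a continuous metric score to a binary task outcome, not necessarily the paper's exact setup.

```python
# Sketch: correlate segment-level MT metric scores with downstream task
# success in a Translate-Test setup. All values below are hypothetical.
from scipy.stats import kendalltau, pointbiserialr

# Segment-level metric scores for translated inputs (e.g., COMET or chrF).
metric_scores = [0.82, 0.47, 0.91, 0.30, 0.76, 0.55]

# Binary outcome of the downstream task on each translated input
# (1 = the monolingual task-specific model succeeded, 0 = it failed).
task_success = [1, 0, 1, 0, 1, 1]

# Rank correlation between metric scores and downstream success.
tau, tau_p = kendalltau(metric_scores, task_success)

# Point-biserial correlation: continuous scores vs. binary outcomes.
r_pb, r_p = pointbiserialr(task_success, metric_scores)

print(f"Kendall tau: {tau:.3f} (p={tau_p:.3f})")
print(f"Point-biserial r: {r_pb:.3f} (p={r_p:.3f})")
```

A correlation near zero in this setup would indicate that the metric's segment-level scores carry little information about whether the downstream component actually succeeds on the translated input.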

