Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality

10/14/2019
by Carolina Scarton et al.

Devising metrics to assess translation quality has always been at the core of machine translation (MT) research. Traditional automatic reference-based metrics, such as BLEU, have shown correlations with human judgements of adequacy and fluency and have been paramount for the advancement of MT system development. Crowd-sourcing has popularised and enabled the scalability of metrics based on human judgements, such as subjective direct assessments (DA) of adequacy, which are believed to be more reliable than reference-based automatic metrics. Finally, task-based measurements, such as post-editing time, are expected to provide a more detailed evaluation of the usefulness of translations for a specific task. Therefore, while DA averages adequacy judgements to obtain an appraisal of (perceived) quality independently of the task, and reference-based automatic metrics aim to estimate quality objectively, also in a task-independent way, task-based metrics are measurements obtained either during or after performing a specific task. In this paper we argue that, although expensive, task-based measurements are the most reliable when estimating MT quality for a specific task; in our case, this task is post-editing. To that end, we report experiments on a dataset with newly collected post-editing indicators and show their usefulness for estimating post-editing effort. Our results show that, as expected, task-based metrics comparing machine-translated and post-edited versions are the best at tracking post-editing effort. These are followed by DA, and then by metrics comparing the machine-translated version against independent references. We suggest that MT practitioners should be aware of these differences and acknowledge their implications when deciding how to evaluate MT for post-editing purposes.
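
To make the contrast between these metric families concrete, below is a minimal, hypothetical Python sketch (not taken from the paper or its dataset) that computes a reference-based score (sentence-level BLEU against an independent reference) and an HTER-style task-based score (TER between the MT output and its own post-edited version), then correlates both with observed post-editing time. It assumes the sacrebleu and scipy packages; the segments and timings are invented purely for illustration.

    # Illustrative only: toy segments and hypothetical post-editing times.
    from sacrebleu.metrics import BLEU, TER
    from scipy.stats import spearmanr

    mt_outputs = ["the cat sat in the mat",
                  "he go to school yesterday",
                  "she reads the book quickly"]
    references = ["the cat sat on the mat",          # independent references
                  "he went to school yesterday",
                  "she reads the book quickly"]
    post_edited = ["the cat sat on the mat",         # post-edited MT output
                   "he went to school yesterday",
                   "she reads the book quickly"]
    pe_time_seconds = [6.1, 14.8, 2.3]               # observed post-editing times

    bleu = BLEU(effective_order=True)                # effective_order for sentence-level BLEU
    ter = TER()

    # Reference-based view: MT output scored against an independent reference.
    bleu_vs_ref = [bleu.sentence_score(mt, [ref]).score
                   for mt, ref in zip(mt_outputs, references)]

    # Task-based (HTER-style) view: MT output scored against its own post-edited
    # version, i.e. the edits the post-editor actually had to make.
    hter = [ter.sentence_score(mt, [pe]).score
            for mt, pe in zip(mt_outputs, post_edited)]

    # How well does each metric track the observed post-editing effort?
    print("BLEU vs PE time:", spearmanr(bleu_vs_ref, pe_time_seconds))
    print("HTER vs PE time:", spearmanr(hter, pe_time_seconds))

In the paper's experiments, the task-based comparison against post-edited output tracks effort best, followed by DA and then by reference-based scores; the sketch only shows how such correlations can be computed on segment-level data.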

Related research

Automatic Post-Editing for Translating Chinese Novels to Vietnamese (04/25/2021)
Automatic post-editing (APE) is an important remedy for reducing errors ...

SubER: A Metric for Automatic Evaluation of Subtitle Quality (05/11/2022)
This paper addresses the problem of evaluating the quality of automatica...

Assessing Post-editing Effort in the English-Hindi Direction (12/18/2021)
We present findings from a first in-depth post-editing effort estimation...

Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature (10/25/2022)
Literary translation is a culturally significant task, but it is bottlen...

The Transference Architecture for Automatic Post-Editing (08/16/2019)
In automatic post-editing (APE) it makes sense to condition post-editing...

Searching for a higher power in the human evaluation of MT (10/20/2022)
In MT evaluation, pairwise comparisons are conducted to identify the bet...

Transformer-based Automatic Post-Editing with a Context-Aware Encoding Approach for Multi-Source Inputs (08/15/2019)
Recent approaches to the Automatic Post-Editing (APE) research have show...
