HOPE: A Task-Oriented and Human-Centric Evaluation Framework Using Professional Post-Editing Towards More Effective MT Evaluation

12/27/2021
by Serge Gladkoff, et al.

Traditional automatic evaluation metrics for machine translation have been widely criticized by linguists for their low accuracy, lack of transparency, focus on language mechanics rather than semantics, and low agreement with human quality evaluation. Human evaluations in the form of MQM-like scorecards have long been carried out in real industry settings by both clients and translation service providers (TSPs). However, traditional human translation quality evaluations are costly to perform, go into great linguistic detail, raise issues of inter-rater reliability (IRR), and are not designed to measure the quality of translations that fall short of premium quality. In this work, we introduce HOPE, a task-oriented and human-centric evaluation framework for machine translation output based on professional post-editing annotations. It contains only a limited number of commonly occurring error types and uses a scoring model in which error penalty points (EPPs) grow in a geometric progression with the severity level of each error in a translation unit. Initial experiments on English-Russian MT output of marketing content from a highly technical domain show that the framework is effective in reflecting MT output quality, both in terms of overall system-level performance and segment-level transparency, and that it increases IRR for error type interpretation. The approach has several key advantages: the ability to measure and compare less-than-perfect MT output from different systems, the ability to indicate human perception of quality, an immediate estimate of the labor effort required to bring MT output to premium quality, lower cost and faster application, and higher IRR. Our experimental data is available at <https://github.com/lHan87/HOPE>.
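
The geometric-progression EPP scheme lends itself to a compact implementation. The sketch below is illustrative only: the base penalty of 1, the ratio of 2, the four severity levels, and the per-segment averaging are assumptions for demonstration, not the error taxonomy or penalty values defined by HOPE.

```python
# Illustrative sketch of a geometric-progression EPP scoring model.
# Assumed (not taken from the paper): base penalty 1, ratio 2,
# severity levels 1..4, and system score = average EPPs per segment.

from typing import List

BASE_PENALTY = 1  # hypothetical penalty for a severity-1 (minor) error
RATIO = 2         # hypothetical ratio of the geometric progression

def epp(severity: int) -> int:
    """Error penalty points for a single error of the given severity (1 = minor)."""
    return BASE_PENALTY * RATIO ** (severity - 1)

def segment_score(error_severities: List[int]) -> int:
    """Total EPPs for one translation unit (segment)."""
    return sum(epp(s) for s in error_severities)

def system_score(segments: List[List[int]]) -> float:
    """Average EPPs per segment across an MT output; lower is better."""
    if not segments:
        return 0.0
    return sum(segment_score(s) for s in segments) / len(segments)

if __name__ == "__main__":
    # Two segments: one with a minor (1) and a major (3) error, one error-free.
    annotated = [[1, 3], []]
    print(segment_score(annotated[0]))  # 1 + 4 = 5
    print(system_score(annotated))      # 2.5
```

Under this kind of scheme, a single severe error outweighs several minor ones, which is consistent with the paper's goal of ranking less-than-perfect MT output rather than only certifying premium quality.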
