The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation

08/14/2023
by Patrick Fernandes, et al.

Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.
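
To make the AutoMQM recipe concrete, below is a minimal sketch of the prompt-then-parse loop the abstract describes: the LLM is asked to list error spans, each with a category and a severity, and the annotations are aggregated into an MQM-style score. The prompt wording, the `call_llm` interface, and the exact severity weights are illustrative assumptions rather than the paper's published setup; only the overall shape (prompt for annotated error spans, then score them) follows the abstract.

```python
# Hypothetical sketch of AutoMQM-style prompting. The prompt text, the
# call_llm interface, and the weights below are illustrative assumptions;
# they encode the general MQM convention of penalizing major errors more
# heavily than minor ones, not the paper's exact configuration.
from typing import Callable, List, Tuple

PROMPT = """You are an expert translator. Identify all errors in the
translation below. For each error, report the error span, its category
(e.g., accuracy/mistranslation, fluency/grammar), and its severity
(major or minor). Report one error per line as:
span ||| category ||| severity

Source: {source}
Translation: {translation}
Errors:"""

# Assumed MQM-style weights: major errors cost 5 points, minor errors 1.
SEVERITY_WEIGHTS = {"major": 5, "minor": 1}

def automqm_score(
    source: str,
    translation: str,
    call_llm: Callable[[str], str],
) -> Tuple[float, List[Tuple[str, str, str]]]:
    """Prompt an LLM for MQM-style error annotations and derive a score."""
    raw = call_llm(PROMPT.format(source=source, translation=translation))
    errors = []
    for line in raw.splitlines():
        parts = [p.strip() for p in line.split("|||")]
        if len(parts) == 3 and parts[2].lower() in SEVERITY_WEIGHTS:
            errors.append((parts[0], parts[1], parts[2].lower()))
    # Higher (less negative) scores mean better translations.
    score = -sum(SEVERITY_WEIGHTS[sev] for _, _, sev in errors)
    return score, errors

if __name__ == "__main__":
    # Stub LLM returning a canned annotation, just to show the data flow.
    def fake_llm(prompt: str) -> str:
        return "Das Auto ||| accuracy/mistranslation ||| major"

    score, errors = automqm_score(
        "The house is red.", "Das Auto ist rot.", fake_llm
    )
    print(score, errors)  # -5 [('Das Auto', 'accuracy/mistranslation', 'major')]
```

Returning the parsed spans alongside the scalar score is what gives this style of evaluation its interpretability: the spans can be compared directly against human MQM annotations rather than collapsing everything into a single number.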

Related research

12/20/2022
IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation Metrics for Indian Languages
The rapid growth of machine translation (MT) systems has necessitated co...

03/24/2023
Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT
Generative large language models (LLMs), e.g., ChatGPT, have demonstrate...

06/18/2023
MISMATCH: Fine-grained Evaluation of Machine-generated Text with Mismatch Error Types
With the growing interest in large language models, the need for evaluat...

05/12/2023
Perturbation-based QE: An Explainable, Unsupervised Word-level Quality Estimation Method for Blackbox Machine Translation
Quality Estimation (QE) is the task of predicting the quality of Machine...

09/29/2021
BLEU, METEOR, BERTScore: Evaluation of Metrics Performance in Assessing Critical Translation Errors in Sentiment-oriented Text
Social media companies as well as authorities make extensive use of arti...

05/20/2022
SALTED: A Framework for SAlient Long-Tail Translation Error Detection
Traditional machine translation (MT) metrics provide an average measure ...
