
Macro-Average: Rare Types Are Important Too

04/12/2021
by Thamme Gowda et al.

While traditional corpus-level evaluation metrics for machine translation (MT) correlate well with fluency, they struggle to reflect adequacy. Model-based MT metrics trained on segment-level human judgments have emerged as an attractive replacement due to strong correlation results. These models, however, require potentially expensive re-training for new domains and languages. Furthermore, their decisions are inherently non-transparent and appear to reflect unwelcome biases. We explore the simple type-based classifier metric, MacroF1, and study its applicability to MT evaluation. We find that MacroF1 is competitive on direct assessment, and outperforms others in indicating downstream cross-lingual information retrieval task performance. Further, we show that MacroF1 can be used to effectively compare supervised and unsupervised neural machine translation, and reveal significant qualitative differences in the methods' outputs.
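
To make the type-based view concrete, the sketch below computes a corpus-level macro-averaged F1 over word types: for every type, matches are the clipped counts between hypothesis and reference, per-type precision, recall, and F1 follow from those counts, and the per-type F1 scores are averaged with equal weight, so rare types count as much as frequent ones. This is a minimal illustration assuming whitespace-tokenized word types and clipped-count matching, not the authors' exact implementation; the macro_f1 helper and the toy sentences are introduced only for this example.

from collections import Counter

def macro_f1(hypotheses, references):
    """Corpus-level macro-averaged F1 over word types (illustrative sketch)."""
    hyp_counts, ref_counts = Counter(), Counter()
    for hyp, ref in zip(hypotheses, references):
        hyp_counts.update(hyp.split())
        ref_counts.update(ref.split())

    f1_scores = []
    for t in set(hyp_counts) | set(ref_counts):
        match = min(hyp_counts[t], ref_counts[t])  # clipped count of type t
        precision = match / hyp_counts[t] if hyp_counts[t] else 0.0
        recall = match / ref_counts[t] if ref_counts[t] else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)

    # Unweighted mean over types: rare types weigh as much as frequent ones.
    return sum(f1_scores) / len(f1_scores) if f1_scores else 0.0

if __name__ == "__main__":
    hyps = ["the cat sat on the mat"]
    refs = ["the cat sat on a mat"]
    print(f"MacroF1 = {macro_f1(hyps, refs):.3f}")

In the toy pair, the single unmatched rare type ("a") lowers the score as much as a mismatch on a frequent type would, which is the behavior the macro average is designed to give.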


Related research

09/18/2020 · COMET: A Neural Framework for MT Evaluation
We present COMET, a neural framework for training multilingual machine t...

02/21/2022 · USCORE: An Effective Approach to Fully Unsupervised Evaluation Metrics for Machine Translation
The vast majority of evaluation metrics for machine translation are supe...

06/27/2019 · The Impact of Preprocessing on Arabic-English Statistical and Neural Machine Translation
Neural networks have become the state-of-the-art approach for machine tr...

10/27/2022 · ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics
As machine translation (MT) metrics improve their correlation with human...

05/09/2022 · Building Machine Translation Systems for the Next Thousand Languages
In this paper we share findings from our effort to build practical machi...

09/13/2021 · Evaluating Multiway Multilingual NMT in the Turkic Languages
Despite the increasing number of large and comprehensive machine transla...

09/15/2021 · Regressive Ensemble for Machine Translation Quality Evaluation
This work introduces a simple regressive ensemble for evaluating machine...