
HilMeMe: A Human-in-the-Loop Machine Translation Evaluation Metric Looking into Multi-Word Expressions

11/09/2022
by Lifeng Han, et al.

With the fast development of Machine Translation (MT) systems, especially the recent boost from Neural MT (NMT) models, MT output quality has reached a new level of accuracy. However, many researchers have criticised popular evaluation metrics such as BLEU for failing to distinguish state-of-the-art NMT systems by their quality differences. In this short paper, we describe the design and implementation of a linguistically motivated, human-in-the-loop evaluation metric that focuses on idiomatic and terminological Multi-word Expressions (MWEs). MWEs have been a bottleneck in many Natural Language Processing (NLP) tasks, including MT. MWEs can serve as one of the main factors for distinguishing MT systems, by examining their ability to recognise and translate MWEs accurately and in a meaning-equivalent manner.
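To make the idea concrete, the following is a minimal sketch (not the paper's actual HilMeMe procedure) of scoring an MT output by the fraction of source-side MWEs that appear in an accepted rendering. In HilMeMe the adequacy judgement comes from a human in the loop; here, exact string matching against a hypothetical lexicon of accepted translations stands in for that judgement, and the function and data names are illustrative assumptions.

```python
# Hypothetical sketch of MWE-based MT comparison (not the official HilMeMe
# implementation): for each source MWE, check whether any accepted target
# rendering occurs in the MT output, then report the hit rate.

def mwe_accuracy(mt_output: str, mwe_translations: dict) -> float:
    """Fraction of source MWEs with an accepted rendering in mt_output.

    mwe_translations maps each source MWE to a list of acceptable
    target-language translations (an assumed, hand-built lexicon).
    """
    if not mwe_translations:
        return 0.0
    text = mt_output.lower()
    hits = sum(
        any(option.lower() in text for option in options)
        for options in mwe_translations.values()
    )
    return hits / len(mwe_translations)

# Example: one idiomatic MWE with two acceptable renderings.
mwes = {"kick the bucket": ["die", "pass away"]}
print(mwe_accuracy("He will die soon.", mwes))   # 1.0
print(mwe_accuracy("He hit the pail.", mwes))    # 0.0 (literal mistranslation)
```

The second call shows why MWEs discriminate between systems: a literal, word-by-word translation of an idiom scores zero, whereas a meaning-equivalent rendering scores. A human judge, unlike this substring check, would also credit inflected or paraphrased renderings.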

Related research

- What Level of Quality can Neural Machine Translation Attain on Literary Text? (01/15/2018)
- Reference-less Quality Estimation of Text Simplification Systems (01/30/2019)
- Evaluating Commit Message Generation: To BLEU Or Not To BLEU? (04/20/2022)
- Difficulty-Aware Machine Translation Evaluation (07/30/2021)
- LEPOR: An Augmented Machine Translation Evaluation Metric (03/26/2017)
- Chinese Character Decomposition for Neural MT with Multi-Word Expressions (04/09/2021)
- Macro-Average: Rare Types Are Important Too (04/12/2021)