
Ensembling Factored Neural Machine Translation Models for Automatic Post-Editing and Quality Estimation

by   Chris Hokamp, et al.
Dublin City University

This work presents a novel approach to Automatic Post-Editing (APE) and Word-Level Quality Estimation (QE) using ensembles of specialized Neural Machine Translation (NMT) systems. Word-level features that have proven effective for QE are included as input factors, expanding the representation of the original source and the machine translation hypothesis, which are used to generate an automatically post-edited hypothesis. We train a suite of NMT models that use different input representations but share the same output space. These models are then ensembled and tuned for both the APE and the QE task, connecting the state-of-the-art approaches to APE and QE within a single framework. Our models achieve state-of-the-art results in both tasks; the only difference between the two tasks is the tuning step, which learns a weight for each component of the ensemble.
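The core mechanism the abstract describes — combining several models that share one output space via per-component weights learned in a tuning step — can be sketched as a weighted interpolation of the models' next-token distributions. This is a minimal illustration, not the paper's actual implementation; the function name, toy vocabulary, and weight values are hypothetical.

```python
import numpy as np

def ensemble_next_token_probs(model_probs, weights):
    """Combine per-model next-token distributions with tuned weights.

    model_probs: array of shape (n_models, vocab_size); each row is a
        probability distribution from one factored NMT model.
    weights: array of shape (n_models,); in the paper's setup these
        would be learned separately for the APE and QE tasks.
    Returns one normalized distribution over the shared output space.
    """
    model_probs = np.asarray(model_probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    combined = weights @ model_probs   # weighted linear interpolation
    return combined / combined.sum()   # renormalize to a distribution

# Three hypothetical factored models over a toy 4-word output vocabulary.
probs = [
    [0.70, 0.10, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.60, 0.20, 0.10],
]
weights = [0.5, 0.2, 0.3]  # e.g. tuned on a dev set for one task
dist = ensemble_next_token_probs(probs, weights)
best = int(np.argmax(dist))  # index of the ensemble's preferred token
```

Because all component models share the same output vocabulary, only the interpolation weights need to change when switching between the APE and QE objectives.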
