
Ensembling Factored Neural Machine Translation Models for Automatic Post-Editing and Quality Estimation

06/15/2017
by Chris Hokamp, et al.
Dublin City University

This work presents a novel approach to Automatic Post-Editing (APE) and Word-Level Quality Estimation (QE) using ensembles of specialized Neural Machine Translation (NMT) systems. Word-level features that have proven effective for QE are included as input factors, expanding the representation of the original source and the machine translation hypothesis, which are used to generate an automatically post-edited hypothesis. We train a suite of NMT models that use different input representations but share the same output space. These models are then ensembled and tuned separately for the APE and QE tasks. We thus attempt to connect the state-of-the-art approaches to APE and QE within a single framework. Our models achieve state-of-the-art results on both tasks, with the only difference between the two being the tuning step, which learns task-specific weights for each component of the ensemble.
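To make the ensembling step concrete, below is a minimal sketch, not the authors' implementation, of how per-token output distributions from several factored NMT models might be combined under tuned weights. The log-linear combination, the `ensemble_step` helper, the toy model probabilities, and the simplified OK/BAD tagging rule for word-level QE are all illustrative assumptions.

import numpy as np

def ensemble_step(model_probs, weights):
    """Weighted log-linear combination (geometric mean) of per-model
    next-token distributions; one common way to ensemble NMT decoders."""
    log_probs = np.log(np.stack(model_probs))            # (n_models, vocab)
    combined = np.average(log_probs, axis=0, weights=weights)
    combined -= np.logaddexp.reduce(combined)            # renormalize in log space
    return np.exp(combined)

# Toy example: three "specialized" models over a 5-word vocabulary.
vocab = ["the", "cat", "sat", "down", "<eos>"]
model_probs = [
    np.array([0.70, 0.10, 0.10, 0.05, 0.05]),  # e.g. source-only model
    np.array([0.60, 0.20, 0.10, 0.05, 0.05]),  # e.g. MT-hypothesis-only model
    np.array([0.20, 0.60, 0.10, 0.05, 0.05]),  # e.g. factored source+MT model
]
weights = [0.2, 0.3, 0.5]  # tuned separately for the APE and QE tasks

p = ensemble_step(model_probs, weights)
print("APE next token:", vocab[int(np.argmax(p))])

# Simplified word-level QE: a token from the original MT hypothesis is
# tagged OK if the ensemble keeps it in the post-edited output, BAD otherwise.
mt_token = "the"
tag = "OK" if vocab[int(np.argmax(p))] == mt_token else "BAD"
print("QE tag for", repr(mt_token), "->", tag)

In this sketch the same ensemble serves both tasks: decoding the combined distribution yields the post-edited hypothesis, while comparing it against the original MT tokens yields word-level OK/BAD tags; only the weight vector would differ between the two uses.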

Related research:

05/24/2023  Leveraging GPT-4 for Automatic Translation Post-Editing
09/30/2020  Can Automatic Post-Editing Improve NMT?
12/13/2017  A User-Study on Online Adaptation of Neural Machine Translation to Human Post-Edits
05/24/2022  DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages
06/21/2019  Incremental Adaptation of NMT for Professional Post-editors: A User Study
04/14/2021  The Curious Case of Hallucinations in Neural Machine Translation