An Exploration of Neural Sequence-to-Sequence Architectures for Automatic Post-Editing

06/13/2017
by Marcin Junczys-Dowmunt et al.

In this work, we explore multiple neural architectures adapted for the task of automatic post-editing of machine translation output. We focus on neural end-to-end models that combine both inputs mt (raw MT output) and src (source-language input) in a single neural architecture, modeling {mt, src} → pe directly. In addition, we investigate hard-attention models, which seem well-suited for monolingual tasks, as well as combinations of both ideas. We report results on the data sets provided for the WMT-2016 shared task on automatic post-editing and demonstrate that dual-attention models that incorporate all available data in the APE scenario in a single model improve on the best shared-task system and on all results published after the shared task. Dual-attention models combined with hard attention remain competitive despite applying fewer changes to the input.
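The core architectural idea, a decoder that attends jointly to the encoded source sentence and the encoded raw MT output, can be illustrated in a few lines. Below is a minimal PyTorch sketch of one decoder step with two soft (dot-product) attention mechanisms whose contexts are concatenated; all names (e.g. DualAttentionDecoderStep) are illustrative, and this is not the authors' actual implementation, which also covers the hard-attention variants discussed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionDecoderStep(nn.Module):
    """One decoder step attending to two encoders (src and mt)."""
    def __init__(self, hidden_size, vocab_size, emb_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        # The decoder RNN consumes the previous token embedding plus both contexts.
        self.rnn = nn.GRUCell(emb_size + 2 * hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size + 2 * hidden_size, vocab_size)

    def attend(self, query, keys):
        # Dot-product soft attention.
        # query: (batch, hidden), keys: (batch, seq, hidden)
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)      # (batch, seq)
        weights = F.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), keys).squeeze(1)   # (batch, hidden)
        return context

    def forward(self, prev_token, state, src_enc, mt_enc):
        emb = self.embedding(prev_token)        # (batch, emb)
        ctx_src = self.attend(state, src_enc)   # attention over the source sentence
        ctx_mt = self.attend(state, mt_enc)     # attention over the raw MT output
        state = self.rnn(torch.cat([emb, ctx_src, ctx_mt], dim=1), state)
        logits = self.out(torch.cat([state, ctx_src, ctx_mt], dim=1))
        return logits, state

if __name__ == "__main__":
    # Toy shapes: batch=2, source length 5, MT length 7, hidden size 8.
    torch.manual_seed(0)
    step = DualAttentionDecoderStep(hidden_size=8, vocab_size=100, emb_size=16)
    prev = torch.zeros(2, dtype=torch.long)   # previous target tokens
    state = torch.zeros(2, 8)                 # initial decoder state
    src_enc = torch.randn(2, 5, 8)            # encoded source sentence
    mt_enc = torch.randn(2, 7, 8)             # encoded raw MT output
    logits, state = step(prev, state, src_enc, mt_enc)
    print(logits.shape)                       # torch.Size([2, 100])
```

This sketch assumes the two encoder outputs share the decoder's hidden size so that dot-product scoring applies directly; other attention variants (e.g. additive scoring, or shared attention weights across inputs) fit the same two-context structure.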


Related research

05/16/2016 · Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing
This paper describes the submission of the AMU (Adam Mickiewicz Universi...

04/25/2021 · Automatic Post-Editing for Translating Chinese Novels to Vietnamese
Automatic post-editing (APE) is an important remedy for reducing errors ...

07/01/2018 · A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems
Automatic post-editing (APE) systems aim to correct the systematic error...

07/17/2017 · LIG-CRIStAL System for the WMT17 Automatic Post-Editing Task
This paper presents the LIG-CRIStAL submission to the shared Automatic P...

08/15/2019 · Transformer-based Automatic Post-Editing with a Context-Aware Encoding Approach for Multi-Source Inputs
Recent approaches to the Automatic Post-Editing (APE) research have show...

04/21/2017 · Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning ...

08/16/2019 · The Transference Architecture for Automatic Post-Editing
In automatic post-editing (APE) it makes sense to condition post-editing...
