An Exploration of Neural Sequence-to-Sequence Architectures for Automatic Post-Editing

06/13/2017
by Marcin Junczys-Dowmunt, et al.

In this work, we explore multiple neural architectures adapted to the task of automatic post-editing of machine-translation output. We focus on neural end-to-end models that combine both inputs, mt (raw MT output) and src (source-language input), in a single neural architecture, modeling {mt, src} → pe directly. In addition, we investigate hard-attention models, which appear well-suited for monolingual tasks, as well as combinations of the two ideas. We report results on the data sets provided for the WMT-2016 shared task on automatic post-editing and demonstrate that dual-attention models, which incorporate all data available in the APE scenario in a single model, improve on the best shared-task system and on all other results published since the shared task. Dual-attention models combined with hard attention remain competitive despite applying fewer changes to the input.
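The dual-attention idea amounts to a decoder that attends separately to two encoded inputs, mt and src, and conditions each output word on both contexts. As a rough illustration only, not the authors' implementation, a single decoder step with two soft dot-product attention mechanisms might look like the following PyTorch sketch; the class name, dimensions, and the simple dot-product scoring are all assumptions made for brevity:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualAttentionDecoderStep(nn.Module):
        """Hypothetical sketch of one decoder step attending to mt and src."""
        def __init__(self, emb_dim=64, hid_dim=128, vocab_size=1000):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRUCell(emb_dim, hid_dim)
            # One scoring projection per input stream (mt and src).
            self.attn_mt = nn.Linear(hid_dim, hid_dim, bias=False)
            self.attn_src = nn.Linear(hid_dim, hid_dim, bias=False)
            # Combine the decoder state with both contexts before the softmax.
            self.out = nn.Linear(hid_dim * 3, vocab_size)

        def attend(self, query, annotations, proj):
            # Soft dot-product attention over one encoder's annotations.
            scores = torch.bmm(annotations, proj(query).unsqueeze(2)).squeeze(2)
            weights = F.softmax(scores, dim=1)          # (batch, length)
            return torch.bmm(weights.unsqueeze(1), annotations).squeeze(1)

        def forward(self, prev_token, state, mt_annot, src_annot):
            state = self.rnn(self.embed(prev_token), state)
            c_mt = self.attend(state, mt_annot, self.attn_mt)    # context over mt
            c_src = self.attend(state, src_annot, self.attn_src)  # context over src
            logits = self.out(torch.cat([state, c_mt, c_src], dim=1))
            return logits, state

    # Illustrative usage with random "encoder annotations" for both inputs:
    step = DualAttentionDecoderStep()
    state = torch.zeros(2, 128)            # batch of 2, hidden size 128
    mt_annot = torch.randn(2, 7, 128)      # encoder states over mt (length 7)
    src_annot = torch.randn(2, 9, 128)     # encoder states over src (length 9)
    logits, state = step(torch.tensor([1, 2]), state, mt_annot, src_annot)

For the hard-attention variants mentioned above, the soft softmax over mt positions would be replaced by discrete, largely monotonic attention moves, a plausible fit for APE since the post-edited output pe usually stays close to mt.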


Related research:

04/25/2021
Automatic Post-Editing for Translating Chinese Novels to Vietnamese
Automatic post-editing (APE) is an important remedy for reducing errors ...

07/01/2018
A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems
Automatic post-editing (APE) systems aim to correct the systematic error...

10/18/2019
Automatic Post-Editing for Machine Translation
Automatic Post-Editing (APE) aims to correct systematic errors in a mach...

07/17/2017
LIG-CRIStAL System for the WMT17 Automatic Post-Editing Task
This paper presents the LIG-CRIStAL submission to the shared Automatic P...

08/15/2019
Transformer-based Automatic Post-Editing with a Context-Aware Encoding Approach for Multi-Source Inputs
Recent approaches to the Automatic Post-Editing (APE) research have show...

04/21/2017
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning ...