Secoco: Self-Correcting Encoding for Neural Machine Translation

08/27/2021
by   PetsTime, et al.

This paper presents Self-correcting Encoding (Secoco), a framework that handles input noise for robust neural machine translation (NMT) by introducing self-correcting predictors. Unlike previous approaches to robustness, Secoco enables the model to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco achieves significant improvements over strong baselines on two real-world test sets and a benchmark WMT dataset, with good interpretability. We will make our code and dataset publicly available soon.
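The abstract describes dedicated predictors that correct and delete noisy source tokens alongside decoding, but does not spell out their design. The following is a minimal, illustrative PyTorch sketch of one plausible reading: two token-level heads over a shared encoder, trained jointly with the translation loss. The names (SelfCorrectingPredictors, delete_head, insert_head) and the multi-task weight alpha are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SelfCorrectingPredictors(nn.Module):
    """Hypothetical sketch: token-level correction heads on top of
    an NMT encoder, trained jointly with the translation objective."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        # Binary decision per source token: keep vs. delete.
        self.delete_head = nn.Linear(d_model, 2)
        # Token (or no-op) to insert after each source position.
        self.insert_head = nn.Linear(d_model, vocab_size)

    def forward(self, enc_states: torch.Tensor):
        # enc_states: (batch, src_len, d_model) from the shared encoder.
        del_logits = self.delete_head(enc_states)   # (batch, src_len, 2)
        ins_logits = self.insert_head(enc_states)   # (batch, src_len, vocab)
        return del_logits, ins_logits

def joint_loss(nmt_loss, del_logits, del_labels, ins_logits, ins_labels,
               alpha: float = 0.5):
    """Assumed multi-task objective: translation loss plus cross-entropy
    on the two correction predictors (labels padded with -100)."""
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    del_loss = ce(del_logits.transpose(1, 2), del_labels)
    ins_loss = ce(ins_logits.transpose(1, 2), ins_labels)
    return nmt_loss + alpha * (del_loss + ins_loss)
```

Under this reading, sharing the encoder between correction and translation is what makes the corrections inspectable at inference time: the predicted edit operations can be read off directly, which would account for the interpretability the abstract claims.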


Related research

01/13/2023  Prompting Neural Machine Translation with Translation Memories
Improving machine translation (MT) systems with translation memories (TM...

06/14/2020  FFR v1.1: Fon-French Neural Machine Translation
All over the world and especially in Africa, researchers are putting eff...

11/11/2019  Diversity by Phonetics and its Application in Neural Machine Translation
We introduce a powerful approach for Neural Machine Translation (NMT), w...

04/24/2019  Assessing the Tolerance of Neural Machine Translation Systems Against Speech Recognition Errors
Machine translation systems are conventionally trained on textual resour...

04/16/2018  Can Neural Machine Translation be Improved with User Feedback?
We present the first real-world application of methods for improving neu...

10/24/2018  Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation
Previous studies show that incorporating external information could impr...

07/21/2017  SGNMT -- A Flexible NMT Decoding Platform for Quick Prototyping of New Models and Search Strategies
This paper introduces SGNMT, our experimental platform for machine trans...
