Learning to Parse and Translate Improves Neural Machine Translation

02/12/2017
by Akiko Eriguchi, et al.

There has been relatively little attention to incorporating linguistic priors into neural machine translation. Most previous work has further been constrained to priors on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining a recurrent neural network grammar (RNNG) with an attention-based neural machine translation model. Our approach encourages the neural machine translation model to incorporate a linguistic prior during training, and lets it translate on its own afterward. Extensive experiments on four language pairs demonstrate the effectiveness of the proposed NMT+RNNG.
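The abstract describes NMT+RNNG only at a high level: a decoder trained jointly on translation and on parsing actions. As a rough illustration only, the PyTorch sketch below shows one plausible way such a joint objective can be wired up, with a shared recurrent state feeding both a word-prediction head and a parser-action head. The class names, dimensions, and the use of a plain GRU cell are hypothetical simplifications, not the authors' architecture (the actual model stacks an RNNG on the target side of an attention-based NMT system).

```python
# Hypothetical sketch of a jointly trained translate-and-parse decoder step.
# This is NOT the NMT+RNNG implementation; attention, the RNNG stack, and
# buffer operations are omitted to keep the joint-loss idea visible.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDecoderStep(nn.Module):
    def __init__(self, vocab_size, num_actions, hidden_size=256):
        super().__init__()
        self.cell = nn.GRUCell(hidden_size, hidden_size)        # shared decoder state
        self.word_head = nn.Linear(hidden_size, vocab_size)     # translation softmax
        self.action_head = nn.Linear(hidden_size, num_actions)  # parser-action softmax

    def forward(self, emb, state):
        state = self.cell(emb, state)
        return self.word_head(state), self.action_head(state), state

def joint_loss(word_logits, action_logits, word_tgt, action_tgt):
    # The linguistic prior enters only through the auxiliary parsing term;
    # at test time the action head can simply be ignored.
    return (F.cross_entropy(word_logits, word_tgt)
            + F.cross_entropy(action_logits, action_tgt))

step = JointDecoderStep(vocab_size=8000, num_actions=3)
emb = torch.randn(4, 256)     # batch of 4 embedded target tokens
state = torch.zeros(4, 256)   # initial decoder state
w_logits, a_logits, state = step(emb, state)
loss = joint_loss(w_logits, a_logits,
                  torch.randint(8000, (4,)), torch.randint(3, (4,)))
loss.backward()               # gradients flow through the shared state
```

Because both loss terms backpropagate through the same recurrent state, syntax shapes the translation representation during training while leaving the model free to translate on its own at test time, matching the behavior the abstract describes.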
