Lexicons and Minimum Risk Training for Neural Machine Translation: NAIST-CMU at WAT2016

10/20/2016
by Graham Neubig, et al.

This year, the Nara Institute of Science and Technology (NAIST)/Carnegie Mellon University (CMU) submission to the Japanese-English translation track of the 2016 Workshop on Asian Translation was based on attentional neural machine translation (NMT) models. In addition to the standard NMT model, we made a number of improvements, most notably the use of discrete translation lexicons to improve probability estimates, and the use of minimum risk training to optimize the MT system for BLEU score. As a result, our system achieved the highest translation evaluation scores for the task.
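The two improvements named in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' exact formulation: the interpolation weight `lam`, the score-scaling constant `alpha`, and both function names are assumptions made for the example. Lexicon integration is shown as linear interpolation of softmax probabilities with discrete lexicon probabilities, and minimum risk training as the expected risk (e.g. one minus sentence-level BLEU) under a distribution renormalized over a sampled set of hypotheses.

```python
import math

def interpolate_with_lexicon(p_nmt, p_lex, lam=0.5):
    """Bias NMT word probabilities toward a discrete translation lexicon
    by linear interpolation (one common integration strategy; `lam` is
    an illustrative hyperparameter)."""
    return {w: lam * p_lex.get(w, 0.0) + (1 - lam) * p
            for w, p in p_nmt.items()}

def expected_risk(scores, risks, alpha=0.005):
    """Minimum-risk objective over sampled hypotheses: renormalize the
    scaled model scores into a distribution, then take the expectation
    of the per-hypothesis risk (e.g. 1 - sentence BLEU)."""
    exps = [math.exp(alpha * s) for s in scores]
    z = sum(exps)
    return sum(e / z * r for e, r in zip(exps, risks))
```

Minimizing `expected_risk` with respect to the model parameters (which produce `scores`) pushes probability mass toward low-risk, i.e. high-BLEU, translations.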


Related research

01/13/2023 - Prompting Neural Machine Translation with Translation Memories
Improving machine translation (MT) systems with translation memories (TM...

01/07/2017 - Neural Machine Translation on Scarce-Resource Condition: A case-study on Persian-English
Neural Machine Translation (NMT) is a new approach for Machine Translati...

12/08/2015 - Minimum Risk Training for Neural Machine Translation
We propose minimum risk training for end-to-end neural machine translati...

10/18/2015 - Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015
This year, the Nara Institute of Science and Technology (NAIST)'s submis...

11/05/2019 - Training Neural Machine Translation (NMT) Models using Tensor Train Decomposition on TensorFlow (T3F)
We implement a Tensor Train layer in the TensorFlow Neural Machine Trans...

12/19/2016 - Boosting Neural Machine Translation
Training efficiency is one of the main problems for Neural Machine Trans...

06/08/2018 - Findings of the Second Workshop on Neural Machine Translation and Generation
This document describes the findings of the Second Workshop on Neural Ma...
