Multi-Source Neural Translation

01/05/2016
by Barret Zoph, et al.

We build a multi-source machine translation model and train it to maximize the probability of a target English string given French and German sources. Using the neural encoder-decoder framework, we explore several combination methods and report gains of up to +4.8 BLEU on top of a very strong attention-based neural translation model.
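
As a concrete illustration, the sketch below shows one simple way to combine two encoders in the style the paper describes: the final hidden states of the French and German encoders are concatenated and passed through a learned projection with a tanh nonlinearity, while the cell states are summed. This is a minimal sketch assuming PyTorch; the module and variable names (MultiSourceCombiner, h_fr, h_de, and so on) are illustrative, not the authors' own code.

```python
import torch
import torch.nn as nn


class MultiSourceCombiner(nn.Module):
    """Combine two encoder final states into one decoder initial state."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Projects the concatenated hidden states back down to hidden_size.
        self.w_c = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, h_fr, c_fr, h_de, c_de):
        # Hidden states: concatenate, project, squash with tanh.
        h = torch.tanh(self.w_c(torch.cat([h_fr, h_de], dim=-1)))
        # Cell states: a simple elementwise sum.
        c = c_fr + c_de
        return h, c


# Usage: combine the final states of two single-layer LSTM encoders.
hidden = 256
enc_fr = nn.LSTM(input_size=128, hidden_size=hidden, batch_first=True)
enc_de = nn.LSTM(input_size=128, hidden_size=hidden, batch_first=True)
combiner = MultiSourceCombiner(hidden)

x_fr = torch.randn(4, 20, 128)  # (batch, src_len, emb) French embeddings
x_de = torch.randn(4, 22, 128)  # German source; lengths may differ
_, (h_fr, c_fr) = enc_fr(x_fr)
_, (h_de, c_de) = enc_de(x_de)
h0, c0 = combiner(h_fr, c_fr, h_de, c_de)  # decoder initial state
```

The concatenate-project-tanh scheme lets the decoder see both sources from its first step; other combination methods explored in the paper operate on attention rather than on the final encoder states.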

Related research

04/24/2018 · Scheduled Multi-Task Learning: From Syntax to Translation
Neural encoder-decoder models of machine translation have achieved impre...

08/17/2015 · Effective Approaches to Attention-based Neural Machine Translation
An attentional mechanism has lately been used to improve neural machine ...

04/24/2019 · Low-Memory Neural Network Training: A Technical Report
Memory is increasingly often the bottleneck when training neural network...

08/28/2018 · The University of Cambridge's Machine Translation Systems for WMT18
The University of Cambridge submission to the WMT18 news translation tas...

01/04/2016 · Mutual Information and Diverse Decoding Improve Neural Machine Translation
Sequence-to-sequence neural translation models learn semantic and syntac...

06/25/2020 · Modeling Baroque Two-Part Counterpoint with Neural Machine Translation
We propose a system for contrapuntal music generation based on a Neural ...

10/29/2018 · Parallel Attention Mechanisms in Neural Machine Translation
Recent papers in neural machine translation have proposed the strict use...
