UMONS Submission for WMT18 Multimodal Translation Task

10/15/2018
by Jean-Benoit Delbrouck, et al.

This paper describes the UMONS solution for the Multimodal Machine Translation Task presented at the Third Conference on Machine Translation (WMT18). We explore a novel architecture, called DeepGRU, based on recent findings in the related task of Neural Image Captioning (NIC). The models presented in the following sections lead to the best METEOR translation score for both constrained (English, image) -> German and (English, image) -> French sub-tasks.
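To make the setting concrete, the sketch below shows one common way a GRU-based multimodal decoder can condition on both the source text and pooled image features, in the spirit of the "deep output" layers used in neural image captioning. This is an illustrative assumption, not the authors' exact DeepGRU architecture; all class names, layer sizes, and the 2048-dimensional image vector are hypothetical.

```python
# Minimal sketch (assumption, not the paper's exact model): a GRU decoder whose
# initial hidden state comes from pooled image features and whose output layer
# mixes the GRU state, the previous word embedding, and the visual features.
import torch
import torch.nn as nn


class MultimodalGRUDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.init_h = nn.Linear(img_dim, hid_dim)   # image features -> initial GRU state
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # "Deep output": combine hidden state, word embedding and image features
        self.out = nn.Linear(hid_dim + emb_dim + img_dim, vocab_size)

    def forward(self, prev_tokens, img_feats):
        # prev_tokens: (batch, tgt_len) target token ids (teacher forcing)
        # img_feats:   (batch, img_dim) pooled CNN features of the source image
        emb = self.embed(prev_tokens)                          # (B, T, emb_dim)
        h0 = torch.tanh(self.init_h(img_feats)).unsqueeze(0)   # (1, B, hid_dim)
        states, _ = self.gru(emb, h0)                          # (B, T, hid_dim)
        img = img_feats.unsqueeze(1).expand(-1, emb.size(1), -1)
        return self.out(torch.cat([states, emb, img], dim=-1))  # vocabulary logits


# Toy usage: batch of 2 target prefixes of 5 tokens, with 2048-d image vectors.
decoder = MultimodalGRUDecoder(vocab_size=1000)
logits = decoder(torch.randint(0, 1000, (2, 5)), torch.randn(2, 2048))
print(logits.shape)  # torch.Size([2, 5, 1000])
```

In a full multimodal NMT system this decoder would additionally attend over the encoded source sentence; the sketch only illustrates how image features can enter the recurrent decoder and the output layer.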
