Visual Agreement Regularized Training for Multi-Modal Machine Translation

12/27/2019
by Pengcheng Yang, et al.

Multi-modal machine translation aims to translate a source sentence into another language in the presence of a paired image. Previous work suggests that the additional visual information provides only dispensable help, needed in a few special cases such as translating ambiguous words. To make better use of visual information, this work presents visual agreement regularized training. The proposed approach jointly trains the source-to-target and target-to-source translation models and encourages them to share the same focus on the visual information when generating semantically equivalent visual words (e.g., "ball" in English and "ballon" in French). In addition, a simple yet effective multi-head co-attention model is introduced to capture interactions between visual and textual features. Results show that the proposed approaches outperform competitive baselines by a large margin on the Multi30k dataset. Further analysis demonstrates that the regularized training effectively improves the agreement of attention on the image, leading to better use of visual information.
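The agreement regularizer described above can be sketched as a penalty on the distance between the two directional models' attention distributions over image regions when they generate an aligned word pair. Below is a minimal NumPy sketch assuming a mean-squared-difference penalty; the function names, the choice of distance, and the attention scores are illustrative, and the paper's exact formulation may differ:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def visual_agreement_penalty(attn_fwd, attn_bwd):
    """Mean squared difference between two attention distributions over
    the same image regions; zero when the source-to-target and
    target-to-source models attend to the image identically."""
    return float(np.mean((attn_fwd - attn_bwd) ** 2))

# Illustrative attention over 4 image regions when the en->fr model
# generates "ball" and the fr->en model generates "ballon".
a_fwd = softmax(np.array([2.0, 0.1, 0.1, 0.2]))
a_bwd = softmax(np.array([1.8, 0.2, 0.1, 0.3]))

# Small when the two models agree on where to look; added to the
# translation losses during joint training.
penalty = visual_agreement_penalty(a_fwd, a_bwd)
```

In training, this penalty would be weighted and added to the sum of the two directions' translation losses, so that gradient descent pulls the paired attention maps toward each other.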

Related research:
- 06/18/2019: Distilling Translations with Visual Awareness
- 03/16/2021: Gumbel-Attention for Multi-modal Machine Translation
- 11/28/2018: Unsupervised Multi-modal Neural Machine Translation
- 02/04/2017: Doubly-Attentive Decoder for Multi-modal Neural Machine Translation
- 05/02/2022: Hausa Visual Genome: A Dataset for Multi-Modal English to Hausa Machine Translation
- 07/21/2019: Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation
- 04/05/2019: Information Aggregation for Multi-Head Attention with Routing-by-Agreement
