
Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation

by Jean-Benoit Delbrouck et al.
University of Mons

In Multimodal Neural Machine Translation (MNMT), a neural model generates a translated sentence that describes an image, given the image itself and one source description in English. This is considered the multimodal image caption translation task. The images are processed with a Convolutional Neural Network (CNN) to extract visual features exploitable by the translation model. So far, the CNNs used have been pre-trained on object detection and localization tasks. We hypothesize that richer architectures, such as dense captioning models, may be more suitable for MNMT and could lead to improved translations. We extend this intuition to the word embeddings, where we compute both linguistic and visual representations for our corpus vocabulary. We combine and compare different configurations […]
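One common way to build the visually grounded word embeddings the abstract mentions is to concatenate a word's linguistic vector with visual features pooled from image regions. The sketch below is purely illustrative and is not the paper's implementation; the function names, pooling choice (mean pooling), and dimensions are all assumptions.

```python
# Illustrative sketch (NOT the paper's method): ground a word embedding
# by concatenating its linguistic vector with mean-pooled visual features.
# All names and dimensions here are hypothetical.

def mean_pool(feature_maps):
    """Average a list of region-level visual feature vectors into one vector."""
    dim = len(feature_maps[0])
    n = len(feature_maps)
    return [sum(f[i] for f in feature_maps) / n for i in range(dim)]

def ground_embedding(linguistic_vec, visual_feature_maps):
    """Concatenate the linguistic embedding with the pooled visual features."""
    return linguistic_vec + mean_pool(visual_feature_maps)

# Toy example: a 4-d linguistic vector and two 3-d regional visual features
# (in practice these would come from a CNN or dense captioning model).
word_vec = [0.1, -0.2, 0.3, 0.0]
regions = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]
grounded = ground_embedding(word_vec, regions)
print(grounded)  # a 7-d grounded embedding
```

A dense captioning backbone, as hypothesized in the paper, would supply region features tied to textual phrases rather than plain object-detection features, but the fusion step can take the same concatenation form.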




UMONS Submission for WMT18 Multimodal Translation Task

This paper describes the UMONS solution for the Multimodal Machine Trans...

LIUM-CVC Submissions for WMT17 Multimodal Translation Task

This paper describes the monomodal and multimodal Neural Machine Transla...

Deeply Supervised Multimodal Attentional Translation Embeddings for Visual Relationship Detection

Detecting visual relationships, i.e. <Subject, Predicate, Object> triple...

Equalizing Gender Biases in Neural Machine Translation with Word Embeddings Techniques

Neural machine translation has significantly pushed forward the quality ...

The MeMAD Submission to the WMT18 Multimodal Translation Task

This paper describes the MeMAD project entry to the WMT Multimodal Machi...

Multimodal Pivots for Image Caption Translation

We present an approach to improve statistical machine translation of ima...

Simultaneous Machine Translation with Visual Context

Simultaneous machine translation (SiMT) aims to translate a continuous i...