Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data

06/02/2019
by   Shizhe Chen, et al.

Bilingual lexicon induction, the task of translating words from a source language to a target language, is a long-standing problem in natural language processing. Recent work shows that images can serve as a pivot for learning lexicon induction without relying on parallel corpora. However, these vision-based approaches simply associate words with entire images, so they are limited to translating concrete words and require object-centered images. Humans understand words better when the words appear within a sentence, in context. In this paper, we therefore propose to exploit images together with their associated captions to address the limitations of previous approaches. We train a multi-lingual caption model on separate mono-lingual multimodal datasets to map words from different languages into a joint space. Two types of word representation are induced from the multi-lingual caption model: linguistic features and localized visual features. The linguistic feature is learned from sentence contexts under visual-semantic constraints, which benefits the translation of words that are less visually relevant. The localized visual feature attends to the image region that correlates with the word, relaxing the requirement that images contain a single salient object. The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of the proposed method, which substantially outperforms previous vision-based approaches without using any parallel sentences or supervision from seed word pairs.
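The localized visual feature described above can be illustrated with a minimal sketch: attention weights over image regions are computed from a word representation, and the regions are pooled into a single word-specific visual vector. This is only an illustration assuming simple dot-product attention; the function name and dimensions are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def localized_visual_feature(word_emb, region_feats):
    """Attend a word embedding (D,) over image region features (R, D)
    and return the attention-pooled visual feature plus the weights."""
    scores = region_feats @ word_emb      # (R,) relevance of each region
    weights = softmax(scores)             # attention distribution over regions
    pooled = weights @ region_feats       # (D,) word-specific visual feature
    return pooled, weights

# Toy example: 5 image regions with 8-dimensional features.
rng = np.random.default_rng(0)
word = rng.normal(size=8)
regions = rng.normal(size=(5, 8))
feat, w = localized_visual_feature(word, regions)
```

Because the weights form a distribution over regions, a word such as "dog" can draw its visual representation from the matching region even when the image contains many objects, which is why this relaxes the object-centered image requirement.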


Related research

06/03/2019 · From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots
The neural machine translation model has suffered from the lack of large...

11/09/2017 · Learning Multi-Modal Word Representation Grounded in Visual Context
Representing the semantics of words is a long-standing problem for the n...

09/18/2017 · Limitations of Cross-Lingual Learning from Image Search
Cross-lingual representation learning is an important step in making NLP...

10/12/2017 · Emergent Translation in Multi-Agent Communication
While most machine translation systems to date are trained on large para...

11/24/2021 · Cultural and Geographical Influences on Image Translatability of Words across Languages
Neural Machine Translation (NMT) models have been observed to produce po...

11/17/2015 · Learning Articulated Motion Models from Visual and Lingual Signals
In order for robots to operate effectively in homes and workplaces, they...

06/29/2016 · Learning Concept Taxonomies from Multi-modal Data
We study the problem of automatically building hypernym taxonomies from ...
