Exploration on Grounded Word Embedding: Matching Words and Images with Image-Enhanced Skip-Gram Model

09/08/2018
by Ruixuan Luo, et al.

Word embeddings are designed to represent the semantic meaning of a word with a low-dimensional vector. The state-of-the-art methods for learning word embeddings (word2vec and GloVe) use only word co-occurrence information, and the resulting embeddings are real-valued vectors that are opaque to humans. In this paper, we propose an Image-Enhanced Skip-Gram Model that learns grounded word embeddings by placing the word vectors in the same hyperplane as the image vectors. Experiments show that the image vectors and the word embeddings learned by our model are highly correlated, which indicates that our model can provide a vivid, image-based explanation of the word embeddings.
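The abstract does not spell out the training objective, so the following is only a minimal PyTorch sketch of one way such a model could be set up: standard skip-gram with negative sampling, plus an alignment term that pulls a word's embedding toward the projected feature vector of an image depicting that word, so that word and image vectors end up in a shared space. The class name, the cosine-alignment loss, and the image_weight parameter are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an image-enhanced skip-gram objective; not the
# paper's actual model. Skip-gram with negative sampling + an alignment
# term between word embeddings and projected image features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEnhancedSkipGram(nn.Module):
    def __init__(self, vocab_size, embed_dim, image_feat_dim):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, embed_dim)   # center words
        self.out_embed = nn.Embedding(vocab_size, embed_dim)  # context words
        # Projects raw image features (e.g., CNN activations) into the
        # word-embedding space so both vector sets live in the same space.
        self.image_proj = nn.Linear(image_feat_dim, embed_dim)

    def skipgram_loss(self, center, context, negatives):
        # center: (B,), context: (B,), negatives: (B, K) word indices.
        v = self.in_embed(center)                       # (B, D)
        u_pos = self.out_embed(context)                 # (B, D)
        u_neg = self.out_embed(negatives)               # (B, K, D)
        pos = F.logsigmoid((v * u_pos).sum(-1))         # (B,)
        neg = F.logsigmoid(-(u_neg @ v.unsqueeze(-1)).squeeze(-1)).sum(-1)
        return -(pos + neg).mean()

    def image_loss(self, words, image_feats):
        # Cosine alignment between word vectors and projected image vectors
        # for the subset of words that have an associated image.
        w = F.normalize(self.in_embed(words), dim=-1)
        i = F.normalize(self.image_proj(image_feats), dim=-1)
        return (1.0 - (w * i).sum(-1)).mean()

    def forward(self, center, context, negatives, words, image_feats,
                image_weight=0.5):  # weighting is an arbitrary assumption
        return (self.skipgram_loss(center, context, negatives)
                + image_weight * self.image_loss(words, image_feats))
```

Instantiating the model, e.g. model = ImageEnhancedSkipGram(vocab_size=10000, embed_dim=300, image_feat_dim=2048), and minimizing the combined loss over (center, context, negatives) batches together with (word, image-feature) pairs would train the word embeddings and the image projection jointly, which is one plausible way to obtain the correlated word and image vectors the abstract reports.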

Related research

10/26/2020
Robust and Consistent Estimation of Word Embedding for Bangla Language by fine-tuning Word2Vec Model
Word embedding or vector representation of word holds syntactical and se...

07/27/2017
Analysis of Italian Word Embeddings
In this work we analyze the performances of two of the most used word em...

10/27/2022
MorphTE: Injecting Morphology in Tensorized Embeddings
In the era of deep learning, word embeddings are essential when dealing ...

02/27/2017
Dynamic Word Embeddings
We present a probabilistic language model for time-stamped text data whi...

12/19/2022
Norm of word embedding encodes information gain
Distributed representations of words encode lexical semantic information...

01/01/2018
Beyond Word Embeddings: Learning Entity and Concept Representations from Large Scale Knowledge Bases
Text representation using neural word embeddings has proven efficacy in ...

09/29/2017
Synonym Discovery with Etymology-based Word Embeddings
We propose a novel approach to learn word embeddings based on an extende...
