Toward Word Embedding for Personalized Information Retrieval

06/22/2016
by Nawal Ould-Amer, et al.

This paper presents preliminary work on using Word Embedding (word2vec) for query expansion in the context of Personalized Information Retrieval. Traditionally, word embeddings are learned on a general corpus such as Wikipedia. In this work we personalize the word embedding learning by training on the user's profile, so that the resulting embeddings share the same context as the user's interests. Our proposal is evaluated on the CLEF Social Book Search 2016 collection. The results show that further effort is needed in how Word Embedding is applied in the context of Personalized Information Retrieval.
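The core idea, training word2vec on the documents in a user's profile and then expanding each query term with its nearest neighbors in that personalized embedding space, can be sketched as follows. This is a minimal illustration assuming a gensim-style API; the profile data and the expand_query helper are hypothetical and not the authors' exact pipeline.

```python
# Minimal sketch of profile-based query expansion, assuming gensim 4.x.
# `profile_sentences` (tokenized documents from the user's profile) is
# hypothetical illustrative data, not the paper's actual corpus.
from gensim.models import Word2Vec

profile_sentences = [
    ["fantasy", "novel", "dragon", "quest"],
    ["epic", "fantasy", "series", "magic"],
]

# Learn embeddings on the user's profile instead of a general corpus,
# so neighbors reflect the user's interests.
model = Word2Vec(profile_sentences, vector_size=100, window=5,
                 min_count=1, workers=1, seed=42)

def expand_query(query_terms, topn=3):
    """Expand each query term with its nearest neighbors in the
    profile-trained embedding space (hypothetical helper)."""
    expanded = list(query_terms)
    for term in query_terms:
        if term in model.wv:
            expanded += [w for w, _ in model.wv.most_similar(term, topn=topn)]
    return expanded

print(expand_query(["fantasy"]))
```

The expanded query would then be submitted to the retrieval system in place of the original one; the paper's evaluation on CLEF Social Book Search 2016 tests exactly this kind of personalized expansion.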

Related research

05/25/2016 · Query Expansion with Locally-Trained Word Embeddings
Continuous space word embeddings have received a great deal of attention...

09/04/2019 · Affect Enriched Word Embeddings for News Information Retrieval
Distributed representations of words have shown to be useful to improve ...

01/14/2016 · Linear Algebraic Structure of Word Senses, with Applications to Polysemy
Word embeddings are ubiquitous in NLP and information retrieval, but it'...

04/14/2020 · Extending Text Informativeness Measures to Passage Interestingness Evaluation (Language Model vs. Word Embedding)
Standard informativeness measures used to evaluate Automatic Text Summar...

01/08/2019 · Deconstructing Word Embeddings
A review of Word Embedding Models through a deconstructive approach reve...

06/10/2015 · Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation
We introduce language-driven image generation, the task of generating an...

01/13/2020 · On the Replicability of Combining Word Embeddings and Retrieval Models
We replicate recent experiments attempting to demonstrate an attractive ...
