A Mixture Model for Learning Multi-Sense Word Embeddings

06/15/2017
by Dai Quoc Nguyen et al.

Word embeddings are now a standard technique for inducing meaning representations for words. To obtain good representations, it is important to take into account the different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work in that it allows different senses of a word to be assigned different weights. Experimental results show that our model outperforms previous models on standard evaluation tasks.
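The abstract does not give the model's equations, but the core idea of a sense mixture can be sketched as follows: each word owns several sense vectors, a context-dependent weight is induced per sense, and the word's representation for a given occurrence is the weighted mixture. The sketch below is illustrative only; the vocabulary size, number of senses, dimensionality, and the softmax weighting over context similarity are assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a sense-mixture embedding (assumed formulation, not the
# authors' exact model): K sense vectors per word, weights induced from context.
import numpy as np

rng = np.random.default_rng(0)
V, K, D = 1000, 3, 100          # vocabulary size, senses per word, embedding dim (assumed)

sense_vecs = rng.normal(scale=0.1, size=(V, K, D))   # per-sense "input" embeddings
context_vecs = rng.normal(scale=0.1, size=(V, D))    # "output"/context embeddings

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def mixture_embedding(word_id, context_ids):
    """Return the context-dependent mixture of a word's sense vectors."""
    ctx = context_vecs[context_ids].mean(axis=0)     # average context vector, shape (D,)
    scores = sense_vecs[word_id] @ ctx               # similarity of each sense to context, (K,)
    weights = softmax(scores)                        # induced sense weights, sum to 1
    return weights @ sense_vecs[word_id], weights    # weighted mixture, shape (D,)

emb, w = mixture_embedding(word_id=42, context_ids=[7, 99, 512])
print(w)          # sense weights induced for this occurrence
print(emb.shape)  # (100,)
```

In training, such sense vectors and weights would typically be fit with a skip-gram-style objective over the contexts in a corpus; the snippet only shows how a single occurrence's representation is composed.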
