Making Sense of Word Embeddings

08/10/2017
by Maria Pelevina et al.

We present a simple yet effective approach for learning word sense embeddings. In contrast to existing techniques, which either learn sense representations directly from corpora or rely on sense inventories from lexical resources, our approach induces a sense inventory from existing word embeddings by clustering ego-networks of related words. An integrated word sense disambiguation (WSD) mechanism labels words in context with the learned sense vectors, making the embeddings usable in downstream applications. Experiments show that our method performs comparably to state-of-the-art unsupervised WSD systems.
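
To make the approach concrete, the sketch below walks through both steps on top of ordinary word embeddings: build the ego-network of a target word's nearest neighbours, cluster it into senses with a Chinese-Whispers-style label propagation, pool each cluster into a sense vector, and disambiguate by comparing a context vector against the sense vectors. This is a minimal illustration under stated assumptions, not the authors' implementation: the embedding table `emb` (word to unit-normalised numpy vector), the neighbourhood size, the edge threshold, and all function names are assumptions made for the example.

```python
import numpy as np
from collections import defaultdict

def nearest_neighbours(word, emb, top_n=50):
    # Top-n most similar words by cosine similarity (vectors assumed unit-norm).
    v = emb[word]
    scored = [(w, float(v @ u)) for w, u in emb.items() if w != word]
    return sorted(scored, key=lambda x: -x[1])[:top_n]

def ego_network(word, emb, top_n=50, edge_threshold=0.5):
    # Nodes are the target's neighbours (the target itself is excluded so the
    # graph can fall apart into one component per sense); edges connect
    # neighbours that are themselves similar.
    nodes = [w for w, _ in nearest_neighbours(word, emb, top_n)]
    adj = defaultdict(set)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if float(emb[a] @ emb[b]) >= edge_threshold:
                adj[a].add(b)
                adj[b].add(a)
    return nodes, adj

def induce_senses(word, emb, iters=20, seed=0, **graph_kw):
    # Chinese-Whispers-style label propagation: each node repeatedly adopts
    # the most frequent label among its neighbours; the surviving label
    # groups are the induced senses.
    nodes, adj = ego_network(word, emb, **graph_kw)
    rng = np.random.default_rng(seed)
    label = {n: i for i, n in enumerate(nodes)}
    for _ in range(iters):
        order = list(nodes)
        rng.shuffle(order)
        for n in order:
            if adj[n]:
                counts = defaultdict(int)
                for m in adj[n]:
                    counts[label[m]] += 1
                label[n] = max(counts, key=counts.get)
    clusters = defaultdict(list)
    for n, lab in label.items():
        clusters[lab].append(n)
    # One sense vector per cluster: the mean of its members' embeddings.
    return [(words, np.mean([emb[w] for w in words], axis=0))
            for words in clusters.values()]

def disambiguate(word, context, emb, senses):
    # WSD step: choose the sense whose vector is closest (cosine) to the
    # averaged context vector.
    ctx_vecs = [emb[w] for w in context if w in emb and w != word]
    if not ctx_vecs:
        return senses[0]  # no usable context: fall back arbitrarily
    ctx = np.mean(ctx_vecs, axis=0)
    cos = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(senses, key=lambda s: cos(ctx, s[1]))

# Hypothetical usage, given some pre-trained `emb`:
#   senses = induce_senses("table", emb)  # [(cluster_words, sense_vector), ...]
#   words, vec = disambiguate("table", ["insert", "a", "row"], emb, senses)
```

A natural refinement, not shown here, is to weight each neighbour by its similarity to the target when pooling sense vectors, so that closer neighbours contribute more to the sense representation.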

Related research

10/05/2021 · Learning Sense-Specific Static Embeddings using Contextualised Word Embeddings as a Proxy
Contextualised word embeddings generated from Neural Language Models (NL...

06/15/2016 · Learning Word Sense Embeddings from Word Sense Definitions
Word embeddings play a significant role in many modern NLP systems. Sinc...

07/30/2019 · SenseFitting: Sense Level Semantic Specialization of Word Embeddings for Word Sense Disambiguation
We introduce a neural network-based system of Word Sense Disambiguation ...

03/22/2018 · Word sense induction using word embeddings and community detection in complex networks
Word Sense Induction (WSI) is the ability to automatically induce word s...

08/20/2022 · Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings
Contextualized word embeddings in language models have given much advanc...

04/22/2018 · Inducing and Embedding Senses with Scaled Gumbel Softmax
Methods for learning word sense embeddings represent a single word with ...

03/30/2016 · Bilingual Learning of Multi-sense Embeddings with Discrete Autoencoders
We present an approach to learning multi-sense word embeddings relying b...
