Learning Word Sense Embeddings from Word Sense Definitions

06/15/2016
by Qi Li, et al.

Word embeddings play a significant role in many modern NLP systems. Because learning a single representation per word is problematic for polysemous and homonymous words, researchers have proposed learning one embedding per word sense. These approaches mainly train word sense embeddings on a corpus. In this paper, we propose instead to learn one embedding per word sense from word sense definitions. Experimental results on word similarity tasks and a word sense disambiguation task show that the word sense embeddings produced by our approach are of high quality.
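
The abstract describes the approach only at a high level. As one hedged illustration of the general idea, rather than the paper's actual method, a sense embedding can be composed from the words of that sense's dictionary definition, for example by averaging pretrained word vectors over the WordNet gloss. In the sketch below, load_pretrained_vectors is a hypothetical placeholder for any pretrained embedding table:

# A minimal sketch, not the paper's actual model: approximate a word sense
# embedding by averaging pretrained word vectors over the sense's dictionary
# definition (gloss), with glosses taken from WordNet via NLTK.
# load_pretrained_vectors is a hypothetical stand-in for a real embedding
# table (e.g. one loaded from a GloVe or word2vec file).

import numpy as np
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')


def load_pretrained_vectors(dim=300):
    """Placeholder: return a {word: vector} table and its dimensionality."""
    return {}, dim  # swap in a real pretrained embedding table here


def sense_embedding(synset, vectors, dim):
    """Average the vectors of the gloss tokens that have pretrained vectors."""
    tokens = [t.lower().strip(".,;()") for t in synset.definition().split()]
    vecs = [vectors[t] for t in tokens if t in vectors]
    if not vecs:
        return np.zeros(dim)  # no known tokens in this gloss
    return np.mean(vecs, axis=0)


vectors, dim = load_pretrained_vectors()
for synset in wn.synsets("bank"):
    emb = sense_embedding(synset, vectors, dim)
    print(synset.name(), "->", emb.shape, "|", synset.definition()[:50])

Averaging the gloss vectors is only the simplest possible composition; a trained encoder over the definition text would fit the same setting.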

Related research

Making Sense of Word Embeddings (08/10/2017)
Improve Lexicon-based Word Embeddings By Word Sense Disambiguation (07/24/2017)
Using BERT for Word Sense Disambiguation (09/18/2019)
A Simple Approach to Learn Polysemous Word Embeddings (07/06/2017)
SemEval-2022 Task 1: CODWOE – Comparing Dictionaries and Word Embeddings (05/27/2022)
Polysemy Detection in Distributed Representation of Word Sense (09/26/2017)
Linear Algebraic Structure of Word Senses, with Applications to Polysemy (01/14/2016)