
Unsupervised Word Polysemy Quantification with Multiresolution Grids of Contextual Embeddings

by Christos Xypolopoulos et al.

The number of senses of a given word, or polysemy, is a highly subjective notion that varies widely across annotators and resources. We propose a novel method to estimate polysemy based on simple geometry in the contextual embedding space. Our approach is fully unsupervised and purely data-driven. We show through rigorous experiments that our rankings are well correlated, with strong statistical significance, with 6 different rankings derived from well-known human-constructed resources such as WordNet, OntoNotes, Oxford, and Wikipedia, for 6 different standard metrics. We also visualize and analyze the correlation between the human rankings. A valuable by-product of our method is the ability to sample, at no extra cost, sentences containing different senses of a given word. Finally, the fully unsupervised nature of our method makes it applicable to any language. Code and data are publicly available at
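The core geometric intuition can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' exact algorithm: it assumes the contextual embeddings of one word have already been reduced to a few dimensions (e.g., via PCA), overlays grids of increasing resolution on a fixed embedding range, and averages the number of occupied cells. The function name `polysemy_score`, the resolutions, and the clipping range are all assumptions made for the sketch; the idea is simply that a word with more senses has contexts spread over more grid cells.

```python
import numpy as np

def polysemy_score(embeddings, resolutions=(4, 8, 16), lo=-3.0, hi=3.0):
    """Toy multiresolution-grid polysemy proxy (a sketch, not the paper's
    exact method): discretize one word's low-dimensional contextual
    embeddings against grids of increasing resolution and average the
    number of occupied cells across resolutions."""
    X = np.asarray(embeddings, dtype=float)
    # Map the fixed embedding range [lo, hi) onto the unit hypercube so
    # every word is discretized against the same shared grid.
    X = (np.clip(X, lo, hi - 1e-9) - lo) / (hi - lo)
    counts = []
    for r in resolutions:
        # Each point falls into one integer-indexed cell of an r^d grid.
        cell_ids = {tuple(idx) for idx in (X * r).astype(int)}
        counts.append(len(cell_ids))
    return sum(counts) / len(counts)

# Embeddings clustered in one region score lower than embeddings split
# across two regions (two putative senses).
one_sense = np.full((10, 2), 0.1)
two_senses = np.vstack([np.full((5, 2), -1.0), np.full((5, 2), 1.0)])
```

Under this scheme, ranking a vocabulary by `polysemy_score` orders words by how widely their contexts spread in embedding space, which is the quantity the paper correlates against human sense inventories.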
