Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation

06/24/2019
by Daniel Loureiro, et al.

Contextual embeddings represent a new generation of semantic representations learned from Neural Language Modelling (NLM) that addresses the issue of meaning conflation hampering traditional word embeddings. In this work, we show that contextual embeddings can be used to achieve unprecedented gains in Word Sense Disambiguation (WSD) tasks. Our approach focuses on creating sense-level embeddings with full coverage of WordNet, without recourse to explicit knowledge of sense distributions or task-specific modelling. As a result, a simple Nearest Neighbors (k-NN) method using our representations consistently surpasses the performance of previous systems based on powerful neural sequencing models. We also analyse the robustness of our approach when part-of-speech and lemma features are ignored and when disambiguation is required against the full sense inventory, revealing shortcomings to be addressed. Finally, we explore applications of our sense embeddings for concept-level analyses of contextual embeddings and their respective NLMs.
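The pipeline the abstract describes reduces to two steps that fit in a short sketch: propagate sense embeddings through WordNet so that every sense has a vector, then disambiguate a word in context by nearest neighbour over those vectors. The Python sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the sense_vectors dictionary stands in for embeddings averaged from a sense-annotated corpus, context_vec stands in for the NLM's contextual embedding of the target word, and propagation here backs off only through hypernyms, whereas the paper propagates through additional WordNet levels.

    # Minimal sketch of LMMS-style full-coverage WSD (assumptions noted
    # above). Requires: pip install nltk numpy; nltk.download('wordnet').
    import numpy as np
    from nltk.corpus import wordnet as wn

    def cosine(a, b):
        # Cosine similarity between two 1-D vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def propagate(sense_vectors):
        # Give uncovered synsets a vector by repeatedly backing off to the
        # mean of their hypernyms' vectors. Synsets with no covered
        # hypernym chain (e.g. many adjectives) stay uncovered in this
        # simplified sketch.
        changed = True
        while changed:
            changed = False
            for synset in wn.all_synsets():
                key = synset.name()  # e.g. 'bank.n.01'
                if key in sense_vectors:
                    continue
                known = [sense_vectors[h.name()] for h in synset.hypernyms()
                         if h.name() in sense_vectors]
                if known:
                    sense_vectors[key] = np.mean(known, axis=0)
                    changed = True
        return sense_vectors

    def disambiguate(context_vec, lemma, sense_vectors):
        # 1-NN disambiguation: return the candidate synset of `lemma`
        # whose embedding is most similar to the contextual embedding.
        scored = [(s, cosine(context_vec, sense_vectors[s.name()]))
                  for s in wn.synsets(lemma) if s.name() in sense_vectors]
        return max(scored, key=lambda t: t[1])[0] if scored else None

The point of the propagation step is that, with every sense covered, the nearest-neighbour lookup never needs to fall back on a most-frequent-sense heuristic, which is what lets a plain k-NN over these representations compete with task-specific neural models.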


Related research

10/05/2021 · A Survey On Neural Word Embeddings
Understanding human language has been a sub-challenge on the way of inte...

05/26/2021 · LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond
Distributional semantics based on neural approaches is a cornerstone of ...

04/15/2017 · MUSE: Modularizing Unsupervised Sense Embeddings
This paper proposes to address the word sense ambiguity issue in an unsu...

06/24/2019 · LIAAD at SemDeep-5 Challenge: Word-in-Context (WiC)
This paper describes the LIAAD system that was ranked second place in th...

05/10/2018 · From Word to Sense Embeddings: A Survey on Vector Representations of Meaning
Over the past years, distributed representations have proven effective a...

12/10/2020 · Multi-Sense Language Modelling
The effectiveness of a language model is influenced by its token represe...

01/25/2021 · PolyLM: Learning about Polysemy through Language Modeling
To avoid the "meaning conflation deficiency" of word embeddings, a numbe...
