LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond

05/26/2021
by Daniel Loureiro, et al.

Distributional semantics based on neural approaches is a cornerstone of Natural Language Processing, with surprising connections to human meaning representation as well. Recent Transformer-based Language Models have proven capable of producing contextual word representations that reliably convey sense-specific information, simply as a product of self-supervision. Prior work has shown that these contextual representations can be used to accurately represent large sense inventories as sense embeddings, to the extent that a distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms models trained specifically for the task. Still, much remains to be understood about how to use these Neural Language Models (NLMs) to produce sense embeddings that better harness each NLM's meaning representation abilities. In this work we introduce a more principled approach to leveraging information from all layers of NLMs, informed by a probing analysis of 14 NLM variants. We also emphasize the versatility of these sense embeddings, in contrast to task-specific models, by applying them to several sense-related tasks besides WSD, and demonstrate that our proposed approach improves on prior work focused on sense embeddings. Finally, we discuss unexpected findings regarding layer and model performance variations, and potential applications for downstream tasks.
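The distance-based approach mentioned in the abstract can be illustrated with a short sketch: build a sense embedding by averaging the contextual embeddings of a target word across sense-annotated example sentences, pool information across all Transformer layers, and disambiguate a new occurrence with a nearest-neighbor lookup. The sketch below assumes the Hugging Face transformers library and a toy, hypothetical two-sense inventory for "bank"; it is not the authors' exact implementation.

```python
# Minimal sketch (not the authors' implementation): LMMS-style sense embeddings
# obtained by averaging contextual embeddings of sense-annotated examples,
# pooling over all Transformer layers, followed by 1-NN disambiguation.
# Assumes the Hugging Face `transformers` library and PyTorch.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Contextual embedding of `word` in `sentence`, averaged over all layers.
    For simplicity, assumes `word` surfaces as a single word-piece token."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states        # (num_layers + 1) x (1, seq, dim)
    pooled = torch.stack(hidden).mean(dim=0)[0]       # (seq, dim): mean over layers
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return pooled[tokens.index(word)]

# Hypothetical toy "sense-annotated" examples for the ambiguous word "bank".
sense_examples = {
    "bank%financial": ["She deposited the check at the bank .",
                       "The bank approved the loan ."],
    "bank%river":     ["They sat on the bank of the river .",
                       "The boat drifted toward the muddy bank ."],
}

# Sense embedding = mean of the target word's contextual embeddings
# across that sense's annotated examples.
sense_vectors = {
    sense: torch.stack([word_embedding(s, "bank") for s in sents]).mean(dim=0)
    for sense, sents in sense_examples.items()
}

# Distance-based WSD: pick the sense whose embedding is closest (cosine 1-NN).
query = word_embedding("He withdrew cash from the bank on Friday .", "bank")
scores = {s: torch.cosine_similarity(query, v, dim=0).item()
          for s, v in sense_vectors.items()}
print(max(scores, key=scores.get))   # expected: "bank%financial"
```

In the full approach described by the abstract, the uniform mean over layers would be replaced by a more principled, probing-informed weighting of layers, and the toy inventory by a full sense inventory such as WordNet.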

Related research

06/24/2019
Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation
Contextual embeddings represent a new generation of semantic representat...

10/05/2021
A Survey On Neural Word Embeddings
Understanding human language has been a sub-challenge on the way of inte...

04/22/2021
Low Anisotropy Sense Retrofitting (LASeR): Towards Isotropic and Sense Enriched Representations
Contextual word representation models have shown massive improvements on...

10/01/2019
Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations
Contextualized word representations are able to give different represent...

05/10/2018
From Word to Sense Embeddings: A Survey on Vector Representations of Meaning
Over the past years, distributed representations have proven effective a...

11/02/2020
The Devil is in the Details: Evaluating Limitations of Transformer-based Methods for Granular Tasks
Contextual embeddings derived from transformer-based neural language mod...

07/08/2022
Getting BART to Ride the Idiomatic Train: Learning to Represent Idiomatic Expressions
Idiomatic expressions (IEs), characterized by their non-compositionality...
