Using BERT for Word Sense Disambiguation

09/18/2019
by   Jiaju Du, et al.

Word Sense Disambiguation (WSD), which aims to identify the correct sense of a given polyseme, is a long-standing problem in NLP. In this paper, we propose to use BERT to extract better polyseme representations for WSD and explore several ways of combining BERT with the classifier. We also utilize sense definitions to train a unified classifier for all words, which enables the model to disambiguate unseen polysemes. Experiments show that our model achieves state-of-the-art results on the standard English all-words WSD evaluation.
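
The sketch below is not the paper's architecture; it is a minimal, hypothetical illustration of the underlying idea of gloss-based WSD with BERT: encode the target word in context, encode each candidate sense definition, and score them against each other. The model name, the mean-pooling of gloss tokens, the cosine-similarity scoring, and the example glosses are all assumptions made for illustration.

import torch
from transformers import BertTokenizer, BertModel

# Assumed off-the-shelf checkpoint; the paper's exact setup may differ.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last-layer token embeddings of `text` (e.g. a sense gloss)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)            # (hidden,)

def embed_target(sentence: str, target: str) -> torch.Tensor:
    """Average the sub-token embeddings that belong to `target` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state.squeeze(0)  # (seq_len, hidden)
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = inputs["input_ids"].squeeze(0).tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    return hidden.mean(dim=0)  # fallback: sentence mean if the span is not found

# Toy example: disambiguate "bank" by scoring its contextual vector
# against two hand-written sense definitions (glosses).
sentence = "He sat on the bank of the river."
glosses = {
    "financial institution": "an institution that accepts deposits and lends money",
    "river side": "sloping land beside a body of water",
}
target_vec = embed_target(sentence, "bank")
scores = {
    sense: torch.cosine_similarity(target_vec, embed(gloss), dim=0).item()
    for sense, gloss in glosses.items()
}
print(max(scores, key=scores.get))  # highest-scoring sense label

A trained model would typically replace the raw cosine scoring with a learned classifier over the concatenated or jointly encoded (context, gloss) pair; this snippet only shows how the contextual and definition representations are obtained and compared.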


Related research

Learning Word Sense Embeddings from Word Sense Definitions (06/15/2016)
Tapping BERT for Preposition Sense Disambiguation (11/27/2021)
BERT Has Uncommon Sense: Similarity Ranking for Word Sense BERTology (09/20/2021)
Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense Disambiguation (12/16/2022)
Adapting BERT for Word Sense Disambiguation with Gloss Selection Objective and Example Sentences (09/24/2020)
A Novel Neural Sequence Model with Multiple Attentions for Word Sense Disambiguation (09/04/2018)
Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords (09/23/2021)
