Incorporating Word Sense Disambiguation in Neural Language Models

06/15/2021
by Jan Philip Wahle et al.

We present two supervised (pre-)training methods to incorporate gloss definitions from lexical resources into neural language models (LMs). The training not only improves our models' performance for Word Sense Disambiguation (WSD) but also benefits general language understanding tasks, while adding almost no parameters. We evaluate our techniques with seven different neural LMs and find that XLNet is more suitable for WSD than BERT. Our best-performing methods exceed state-of-the-art WSD techniques on the SemCor 3.0 dataset by 0.5 and increase BERT's performance on the GLUE benchmark by 1.1.
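The abstract does not spell out how the gloss supervision is wired into the LM. As a hedged illustration only, the sketch below shows one common way to use gloss definitions from a lexical resource for WSD: encode each (context sentence, candidate gloss) pair with a sequence-pair classifier and pick the highest-scoring sense. The model name, the binary matching objective, and the rank_senses helper are assumptions for illustration, not the paper's own method.

```python
# Hypothetical sketch of gloss-based WSD via context-gloss pair scoring.
# This is NOT the paper's exact training setup; it illustrates the general
# idea of feeding gloss definitions to a pretrained LM.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # the paper also evaluates XLNet, among others

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Binary head: does this gloss match the target word's sense in context?
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def rank_senses(sentence: str, glosses: list[str]) -> int:
    """Return the index of the gloss the model scores highest for `sentence`."""
    batch = tokenizer(
        [sentence] * len(glosses),  # context repeated once per candidate gloss
        glosses,                    # gloss definition as the second segment
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**batch).logits  # shape: (num_glosses, 2)
    match_scores = logits[:, 1]         # score for the "gloss matches" class
    return int(match_scores.argmax())

# Example: disambiguating "bank" between two WordNet-style glosses.
glosses = [
    "a financial institution that accepts deposits",
    "sloping land beside a body of water",
]
best = rank_senses("She deposited the check at the bank.", glosses)
print(glosses[best])
```

As written, the classification head is untrained, so the output is arbitrary; in practice the pair classifier would be fine-tuned on sense-annotated data such as SemCor before use.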


Related research

SenseBERT: Driving Some Sense into BERT (08/15/2019)
Self-supervision techniques have allowed neural language models to advan...

A Cohesive Distillation Architecture for Neural Language Models (01/12/2023)
A recent trend in Natural Language Processing is the exponential growth ...

Language Models and Word Sense Disambiguation: An Overview and Analysis (08/26/2020)
Transformer-based language models have taken many fields in NLP by storm...

RecoBERT: A Catalog Language Model for Text-Based Recommendations (09/25/2020)
Language models that utilize extensive self-supervised pre-training from...

Improving negation detection with negation-focused pre-training (05/09/2022)
Negation is a common linguistic feature that is crucial in many language...

Evaluating BERT-based Pre-training Language Models for Detecting Misinformation (03/15/2022)
It is challenging to control the quality of online information due to th...

WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context (04/30/2020)
In this paper, we present WiC-TSV (Target Sense Verification for Words i...
