Context-gloss Augmentation for Improving Word Sense Disambiguation

by Guan-Ting Lin, et al.

The goal of Word Sense Disambiguation (WSD) is to identify the sense of a polysemous word in a specific context. Deep-learning techniques based on BERT have achieved promising results in this field, and various methods have been proposed to integrate structured knowledge to enhance performance. At the same time, a growing number of data augmentation techniques have proven useful for NLP tasks. Building on previous work that leverages BERT and WordNet knowledge, we explore different data augmentation techniques on context-gloss pairs to improve WSD performance. Our experiments show that both sentence-level and word-level augmentation are effective strategies for WSD. We also find that performance can be improved by adding hypernyms' glosses obtained from a lexical knowledge base. We compare and analyze different context-gloss augmentation techniques, and the results show that applying back translation to the gloss performs best.
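As a minimal sketch of the context-gloss pairing and hypernym-gloss augmentation the abstract describes: each candidate sense's gloss is paired with the context sentence, optionally extended with its hypernym's gloss, and the pairs are then scored by a classifier such as BERT. The `LEXICON` dictionary and the `;`-separator format below are hypothetical stand-ins; in practice the glosses and hypernym relations would come from a lexical knowledge base such as WordNet.

```python
# Hypothetical mini-lexicon standing in for WordNet:
# sense key -> (gloss, hypernym sense key or None)
LEXICON = {
    "bank.n.01": ("sloping land beside a body of water", "slope.n.01"),
    "bank.n.02": ("a financial institution that accepts deposits", "institution.n.01"),
    "slope.n.01": ("an elevated geological formation", None),
    "institution.n.01": ("an organization founded for a particular purpose", None),
}

def context_gloss_pairs(context, target_word, senses, add_hypernym=True):
    """Build one (context, gloss) pair per candidate sense of the target word.

    When add_hypernym is True, the hypernym's gloss is appended to the
    sense gloss, enriching the signal available to the downstream scorer.
    """
    pairs = []
    for sense in senses:
        gloss, hypernym = LEXICON[sense]
        if add_hypernym and hypernym is not None:
            gloss = f"{gloss} ; {LEXICON[hypernym][0]}"
        # Each pair would later be scored by a cross-encoder such as BERT;
        # the highest-scoring gloss identifies the predicted sense.
        pairs.append((f"{context} [SEP] {target_word}", gloss))
    return pairs

pairs = context_gloss_pairs(
    "He sat on the bank of the river.",
    "bank",
    ["bank.n.01", "bank.n.02"],
)
```

Sentence-level augmentations such as back translation would then be applied to the gloss side of each pair before training, yielding additional paraphrased training examples.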






