Transfer Learning and Augmentation for Word Sense Disambiguation

01/10/2021
by Harsh Kohli, et al.

Many downstream NLP tasks have shown significant improvement through continual pre-training, transfer learning and multi-task learning. State-of-the-art approaches to Word Sense Disambiguation today combine several of these techniques with information sources such as the semantic relationships and gloss definitions contained within WordNet. Our work builds upon these systems and applies data augmentation along with extensive pre-training on a variety of NLP tasks and datasets. Our transfer learning and augmentation pipeline achieves state-of-the-art single-model performance in WSD and is on par with the best ensemble results.
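The abstract names WordNet gloss definitions as one of the information sources the system draws on. As a rough illustration of how glosses are commonly used in this setting (a minimal sketch, not the authors' released pipeline), the snippet below retrieves every candidate sense and its gloss for a target lemma through NLTK's WordNet interface and pairs each gloss with the context sentence; the sentence-pair classifier that would score these pairs is an assumption here and is not specified in the abstract.

```python
# Minimal sketch: framing WSD as gloss-sentence pair scoring with WordNet
# glosses. Assumes the WordNet corpus has been fetched via
# nltk.download('wordnet'). The pairing helpers below are illustrative,
# not part of the paper's code.
from nltk.corpus import wordnet as wn


def candidate_glosses(lemma: str, pos: str = "n"):
    """Return (synset name, gloss) pairs for every candidate sense of a lemma."""
    return [(s.name(), s.definition()) for s in wn.synsets(lemma, pos=pos)]


def build_pairs(sentence: str, lemma: str, pos: str = "n"):
    """Pair the context sentence with each candidate gloss.

    A downstream classifier (assumed, not shown) would score each pair and
    select the highest-scoring sense for the target word.
    """
    return [
        {"context": sentence, "gloss": gloss, "sense": sense}
        for sense, gloss in candidate_glosses(lemma, pos)
    ]


if __name__ == "__main__":
    pairs = build_pairs("He sat on the bank of the river.", "bank")
    for p in pairs[:3]:
        print(p["sense"], "->", p["gloss"])
```

Under this formulation, data augmentation and transfer learning of the kind described in the abstract would operate on these context-gloss pairs, though the exact augmentation strategy is not detailed here.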

Related research

11/28/2019 · Data Augmentation for Deep Transfer Learning
Current approaches to deep learning are beginning to rely heavily on tra...

08/03/2023 · Curricular Transfer Learning for Sentence Encoded Tasks
Fine-tuning language models in a downstream task is the standard approac...

10/03/2022 · Characterization of effects of transfer learning across domains and languages
With ever-expanding datasets of domains, tasks and languages, transfer l...

05/21/2021 · Training Bi-Encoders for Word Sense Disambiguation
Modern transformer-based neural architectures yield impressive results i...

10/18/2017 · Towards a Seamless Integration of Word Senses into Downstream NLP Applications
Lexical ambiguity can impede NLP systems from accurate understanding of ...

09/10/2021 · Investigating Numeracy Learning Ability of a Text-to-Text Transfer Model
The transformer-based pre-trained language models have been tremendously...

12/14/2022 · SMSMix: Sense-Maintained Sentence Mixup for Word Sense Disambiguation
Word Sense Disambiguation (WSD) is an NLP task aimed at determining the ...
