FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary

02/16/2021
by Terra Blevins, et al.

Current models for Word Sense Disambiguation (WSD) struggle to disambiguate rare senses, despite reaching human performance on global WSD metrics. This stems from a lack of data for both modeling and evaluating rare senses in existing WSD datasets. In this paper, we introduce FEWS (Few-shot Examples of Word Senses), a new low-shot WSD dataset automatically extracted from example sentences in Wiktionary. FEWS has high sense coverage across different natural language domains and provides: (1) a large training set that covers many more senses than previous datasets and (2) a comprehensive evaluation set containing few- and zero-shot examples of a wide variety of senses. We establish baselines on FEWS with knowledge-based and neural WSD approaches and present transfer learning experiments demonstrating that models additionally trained with FEWS better capture rare senses in existing WSD datasets. Finally, we find humans outperform the best baseline models on FEWS, indicating that FEWS will support significant future work on low-shot WSD.
