Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality

06/08/2021
by Mina Lee, et al.

We release a new benchmark for lexical substitution, the task of finding appropriate substitutes for a target word in a context. To assist humans with writing, lexical substitution systems can suggest words that humans cannot easily think of. However, existing benchmarks depend on human recall as the only source of data, and therefore lack coverage of the substitutes that would be most helpful to humans. Furthermore, annotators often provide substitutes of low quality, which are not actually appropriate in the given context. We collect higher-coverage and higher-quality data by framing lexical substitution as a classification problem, guided by the intuition that it is easier for humans to judge the appropriateness of candidate substitutes than conjure them from memory. To this end, we use a context-free thesaurus to produce candidates and rely on human judgement to determine contextual appropriateness. Compared to the previous largest benchmark, our Swords benchmark has 4.1x more substitutes per target word for the same level of quality, and its substitutes are 1.5x more appropriate (based on human judgement) for the same number of substitutes.
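
To make the data-collection recipe concrete, below is a minimal sketch of the "generate candidates from a context-free thesaurus, then judge appropriateness in context" framing described in the abstract. It is not the paper's actual pipeline: WordNet (via NLTK) stands in for whatever thesaurus the authors used, and `judge_appropriateness` is a hypothetical placeholder for the human annotation step (or a model-based scorer).

```python
# Sketch of lexical substitution framed as classification over
# (context, target, candidate) triples. Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn


def thesaurus_candidates(target: str) -> set[str]:
    """Collect context-free substitute candidates for `target` from WordNet
    (a stand-in for the thesaurus used in the paper)."""
    candidates = set()
    for synset in wn.synsets(target):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != target.lower():
                candidates.add(name)
    return candidates


def judge_appropriateness(context: str, target: str, candidate: str) -> bool:
    """Placeholder for the contextual judgement collected in the benchmark:
    given the sentence `context`, is `candidate` an appropriate substitute
    for `target`? In the paper this judgement comes from human annotators."""
    raise NotImplementedError


def collect_substitutes(context: str, target: str) -> list[str]:
    """Keep each thesaurus candidate judged appropriate in the given context."""
    return [
        c for c in thesaurus_candidates(target)
        if judge_appropriateness(context, target, c)
    ]
```

The point of the sketch is the division of labor: candidate coverage comes from the thesaurus rather than human recall, and humans (or a model) only make the easier yes/no appropriateness decision.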

Related research

LSBert: A Simple Framework for Lexical Simplification (06/25/2020)
Lexical simplification (LS) aims to replace complex words in a given sen...

LexSubCon: Integrating Knowledge from Lexical Resources into Contextual Embeddings for Lexical Substitution (07/11/2021)
Lexical substitution is the task of generating meaningful substitutes fo...

Context vs Target Word: Quantifying Biases in Lexical Semantic Datasets (12/13/2021)
State-of-the-art contextualized models such as BERT use tasks such as Wi...

A Word-Complexity Lexicon and A Neural Readability Ranking Model for Lexical Simplification (10/12/2018)
Current lexical simplification approaches rely heavily on heuristics and...

Word-level Lexical Normalisation using Context-Dependent Embeddings (11/13/2019)
Lexical normalisation (LN) is the process of correcting each word in a d...

Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers (06/01/2018)
We study the role of linguistic context in predicting quantifiers (`few'...