Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers

06/01/2018
by Sandro Pezzelle, et al.

We study the role of linguistic context in predicting quantifiers ('few', 'all'). We collect crowdsourced data from human participants and test various models in a local (single-sentence) and a global (multi-sentence) context condition. Models significantly outperform humans in the former setting and are only slightly better in the latter. While human performance improves with more linguistic context (especially on proportional quantifiers), model performance suffers. Models are very effective at exploiting lexical and morpho-syntactic patterns; humans are better at genuinely understanding the meaning of the (global) context.
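The task can be framed as filling a masked quantifier slot with one label from a fixed quantifier inventory, given either the target sentence alone (local condition) or the target sentence embedded in its surrounding passage (global condition). The sketch below illustrates one way such a setup could look, assuming a toy quantifier inventory, made-up sentences, a hypothetical <qnt> mask token, and a simple bag-of-words classifier from scikit-learn; it is not the authors' model or data.

# Illustrative sketch (not the authors' code or data): quantifier prediction as
# classification over a fixed inventory, with the masked slot marked as <qnt>.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training sentences for the local (single-sentence) condition.
local_train = [
    ("<qnt> of the students passed the exam .", "most"),
    ("<qnt> of the seats were taken .", "all"),
    ("<qnt> of the emails turned out to be spam .", "few"),
    ("<qnt> of the shops were open on Sunday .", "some"),
]

texts, labels = zip(*local_train)
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Global (multi-sentence) condition: the same kind of target sentence,
# but presented together with its surrounding context.
global_test = (
    "The box office opened at nine . "
    "<qnt> of the tickets were sold within an hour . "
    "The organisers added an extra show ."
)
print(model.predict([global_test])[0])

In this toy framing, the local and global conditions differ only in how much text the classifier sees; the paper's finding is that extra context helps humans but tends to hurt models that rely on surface lexical and morpho-syntactic cues.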


