Do language models make human-like predictions about the coreferents of Italian anaphoric zero pronouns?

08/30/2022
by James A. Michaelov, et al.

Some languages allow arguments to be omitted in certain contexts. Yet human language comprehenders reliably infer the intended referents of these zero pronouns, in part because they construct expectations about which referents are more likely. We ask whether neural language models also construct the same expectations. We test whether 12 contemporary language models display expectations that reflect human behavior when exposed to sentences with zero pronouns drawn from five behavioral experiments conducted in Italian by Carminati (2005). We find that three models - XGLM 2.9B, 4.5B, and 7.5B - capture the human behavior from all the experiments, while the others capture only some of the results. This suggests that human expectations about coreference can be derived from exposure to language alone, and also points to features of language models that allow them to better reflect human behavior.
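The comparison the abstract describes can be sketched in terms of surprisal: a model "prefers" the coreferent whose continuation it assigns lower surprisal (higher probability). The snippet below is a minimal illustration, not the paper's evaluation code; the Italian prompt and the per-token probabilities are invented for illustration, whereas the actual study scores continuations with models such as XGLM.

```python
import math

def surprisal(token_probs):
    """Total surprisal (in bits) of a sequence, given per-token probabilities."""
    return sum(-math.log2(p) for p in token_probs)

# Hypothetical per-token probabilities for two continuations of an Italian
# prompt containing a zero pronoun, e.g. "Marta scrive spesso a Piera quando ..."
# (the values below are illustrative only, not real model outputs).
subject_reading = [0.20, 0.15, 0.30]   # zero pronoun = Marta (subject antecedent)
object_reading  = [0.05, 0.10, 0.20]   # zero pronoun = Piera (object antecedent)

# Lower surprisal means the model expects that coreferent more strongly.
print(surprisal(subject_reading) < surprisal(object_reading))  # prints True
```

A model whose surprisal pattern matches the human preference (here, the well-documented subject bias for Italian null pronouns) would count as capturing the behavioral result for that item.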

Related research

- Collateral facilitation in humans and language models (11/09/2022): Are the predictions of humans and language models affected by similar th...
- Expectations over Unspoken Alternatives Predict Pragmatic Inferences (04/07/2023): Scalar inferences (SI) are a signature example of how humans interpret l...
- 'Rarely' a problem? Language models exhibit inverse scaling in their predictions following 'few'-type quantifiers (12/16/2022): Language Models appear to perform poorly on quantification. We ask how b...
- Multilingual Language Models Predict Human Reading Behavior (04/12/2021): We analyze if large language models are able to predict patterns of huma...
- Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations (06/10/2019): Deep learning sequence models have led to a marked increase in performan...
- Automated Interviewer or Augmented Survey? Collecting Social Data with Large Language Models (09/18/2023): Qualitative methods like interviews produce richer data in comparison wi...
- Connecting Neural Response Measurements and Computational Models of Language: A Non-comprehensive Guide (03/10/2022): Understanding the neural basis of language comprehension in the brain ha...
