Language Models Explain Word Reading Times Better Than Empirical Predictability

02/02/2022
by Markus J. Hofmann, et al.

Though there is a strong consensus that word length and frequency are the most important single-word features determining visual-orthographic access to the mental lexicon, there is less agreement on how best to capture syntactic and semantic factors. The traditional approach in cognitive reading research assumes that word predictability from sentence context is best captured by cloze completion probability (CCP) derived from human performance data. We review recent research suggesting that probabilistic language models provide deeper explanations for syntactic and semantic effects than CCP. We then compare CCP with (1) symbolic n-gram models, which consolidate syntactic and semantic short-range relations by computing the probability of a word given the two preceding words; (2) topic models, which rely on subsymbolic representations to capture long-range semantic similarity from word co-occurrence counts in documents; and (3) recurrent neural networks (RNNs), whose subsymbolic units are trained to predict the next word given all preceding words in the sentence. To examine lexical retrieval, these models were used to predict single fixation durations and gaze durations, capturing rapidly successful and standard lexical access, respectively, and total viewing times, capturing late semantic integration. Linear item-level analyses showed that all language models correlated more strongly with all eye-movement measures than CCP did. We then examined non-linear relations between the different types of predictability and reading times using generalized additive models. N-gram and RNN probabilities of the present word predicted reading performance more consistently than topic models or CCP.
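As a minimal illustration of the simplest of these predictability measures, the sketch below estimates trigram probabilities, i.e. the probability of a word given its two preceding words, by maximum-likelihood counts over a toy corpus. This is not the authors' implementation; the study's n-gram models were trained on large corpora and would typically use smoothing, which is omitted here for brevity.

```python
from collections import defaultdict

# Illustrative trigram estimator: P(word | two preceding words) by maximum
# likelihood over a toy corpus. The corpus and function names are hypothetical;
# real n-gram models are trained on large corpora and use smoothing.

def train_trigram(sentences):
    """Count trigrams and their two-word contexts over tokenized sentences."""
    trigram_counts = defaultdict(int)
    context_counts = defaultdict(int)
    for tokens in sentences:
        padded = ["<s>", "<s>"] + tokens + ["</s>"]
        for i in range(2, len(padded)):
            context = (padded[i - 2], padded[i - 1])
            trigram_counts[(context, padded[i])] += 1
            context_counts[context] += 1
    return trigram_counts, context_counts

def trigram_prob(word, context, trigram_counts, context_counts):
    """Relative frequency of `word` after `context`; 0.0 for unseen contexts."""
    total = context_counts.get(context, 0)
    return trigram_counts.get((context, word), 0) / total if total else 0.0

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
tri, ctx = train_trigram(corpus)
print(trigram_prob("mat", ("on", "the"), tri, ctx))  # 0.5 in this toy corpus
```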

