Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal

12/21/2022
by Byung-Doh Oh, et al.

Transformer-based large language models are trained to make predictions about the next word by aggregating representations of previous tokens through their self-attention mechanism. In the field of cognitive modeling, such attention patterns have recently been interpreted as embodying the process of cue-based retrieval, in which attention over multiple targets is taken to generate interference and latency during retrieval. Under this framework, this work first defines an entropy-based predictor that quantifies the diffuseness of self-attention, as well as distance-based predictors that capture the incremental change in attention patterns across timesteps. Moreover, following recent studies that question the informativeness of attention weights, we also experiment with alternative methods for incorporating vector norms into attention weights. Regression experiments using predictors calculated from the GPT-2 language model show that these predictors deliver a substantially better fit to held-out self-paced reading and eye-tracking data over a rigorous baseline including GPT-2 surprisal. Additionally, the distance-based predictors generally demonstrate higher predictive power, with effect sizes of up to 6.59 ms per standard deviation on self-paced reading times (compared to 2.82 ms for surprisal) and 1.05 ms per standard deviation on eye-gaze durations (compared to 3.81 ms for surprisal).
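
For a concrete sense of how such predictors might be computed, the following is a minimal sketch (not the authors' released code), assuming GPT-2 accessed through the Hugging Face Transformers library. The final-layer, head-averaged aggregation and the choice of L1 (Manhattan) distance are illustrative assumptions; the paper's norm-incorporating variants are not reproduced here.

    # Minimal sketch: attention-entropy and attention-distance predictors
    # from GPT-2 attention weights. Layer/head aggregation is an assumption.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("The quick brown fox jumps over the lazy dog",
                       return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)

    # out.attentions: one (batch, heads, seq, seq) tensor per layer.
    # Average over heads in the final layer as an illustrative choice.
    attn = out.attentions[-1].mean(dim=1).squeeze(0)  # (seq, seq)

    # Entropy-based predictor: Shannon entropy of each timestep's attention
    # distribution over preceding tokens, quantifying its diffuseness.
    eps = 1e-12
    entropy = -(attn * (attn + eps).log2()).sum(dim=-1)  # (seq,)

    # Distance-based predictor: incremental change between consecutive
    # timesteps' attention distributions. Causal masking zero-fills each row
    # beyond its own position, so consecutive rows are directly comparable;
    # L1 (Manhattan) distance is used here as one plausible choice.
    dist = torch.zeros(attn.size(0))
    dist[1:] = (attn[1:] - attn[:-1]).abs().sum(dim=-1)

Per-token values such as entropy and dist would then enter a regression model of reading times alongside GPT-2 surprisal, mirroring the evaluation described in the abstract.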
