Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?

07/20/2021
by James A. Michaelov, et al.

Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to be better than language models with other architectures, such as recurrent neural networks, at predicting metrics used to assess human language comprehension. Based on how well they predict the N400, a neural signal associated with processing difficulty, we propose and provide evidence for one possible explanation: their predictions are affected by the preceding context in a way analogous to the effect of semantic facilitation in humans.
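
In this line of work, the quantity typically compared against N400 amplitude is word surprisal, the negative log probability a language model assigns to a word given its preceding context: contextually supported words receive low surprisal and also tend to elicit reduced N400 amplitude. The snippet below is a minimal sketch of how per-word surprisal can be computed from a transformer language model; it is not the authors' code, and it assumes the Hugging Face transformers library, an off-the-shelf GPT-2 checkpoint, and illustrative example sentences rather than the paper's actual stimuli.

```python
# Minimal sketch: per-word surprisal from a transformer LM (assumes GPT-2
# via Hugging Face transformers; not the authors' models or stimuli).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisal(context: str, word: str) -> float:
    """Surprisal (in bits) of `word` given the preceding `context`."""
    context_ids = tokenizer.encode(context, return_tensors="pt")     # (1, n)
    # Prepend a space so the word is tokenized as it appears mid-sentence.
    word_ids = tokenizer.encode(" " + word)                          # list of ints
    ids = torch.cat([context_ids, torch.tensor([word_ids])], dim=1)  # (1, n + m)
    with torch.no_grad():
        logits = model(ids).logits                                   # (1, n + m, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    # Logits at position i predict the token at position i + 1, so the first
    # word token (at index n) is scored by the distribution at index n - 1.
    n = context_ids.shape[1]
    total_log_prob = sum(
        log_probs[0, n + i - 1, tok].item() for i, tok in enumerate(word_ids)
    )
    return -total_log_prob / math.log(2)  # convert nats to bits

# A contextually supported vs. an unexpected continuation of the same frame:
# the supported word should receive lower surprisal.
print(word_surprisal("The day was breezy so the boy went outside to fly a", "kite"))
print(word_surprisal("The day was breezy so the boy went outside to fly a", "brick"))
```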

05/19/2020
Comparing Transformers and RNNs on predicting human sentence processing data
Recurrent neural networks (RNNs) have long been an architecture of inter...

08/20/2022
Cognitive Modeling of Semantic Fluency Using Transformers
Can deep language models be explanatory models of human cognition? If so...

09/02/2021
So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements
More predictable words are easier to process - they are read faster and ...

10/09/2020
How well does surprisal explain N400 amplitude under different experimental conditions?
We investigate the extent to which word surprisal can be used to predict...

04/29/2022
Developmental Negation Processing in Transformer Language Models
Reasoning using negation is known to be difficult for transformer-based ...

05/12/2022
Predicting Human Psychometric Properties Using Computational Language Models
Transformer-based language models (LMs) continue to achieve state-of-the...

06/06/2021
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
We present a targeted, scaled-up comparison of incremental processing in...