Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?

07/20/2021
by James A. Michaelov, et al.

Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to be better than language models with other architectures, such as recurrent neural networks, at predicting metrics used to assess human language comprehension. Based on how well they predict the N400, a neural signal associated with processing difficulty, we propose and provide evidence for one possible explanation: their predictions are affected by the preceding context in a way analogous to the effect of semantic facilitation in humans.


Related research

05/19/2020
Comparing Transformers and RNNs on predicting human sentence processing data
Recurrent neural networks (RNNs) have long been an architecture of inter...

08/20/2022
Cognitive Modeling of Semantic Fluency Using Transformers
Can deep language models be explanatory models of human cognition? If so...

07/14/2023
Are words equally surprising in audio and audio-visual comprehension?
We report a controlled study investigating the effect of visual informat...

10/09/2020
How well does surprisal explain N400 amplitude under different experimental conditions?
We investigate the extent to which word surprisal can be used to predict...

10/22/2022
A Comprehensive Comparison of Neural Networks as Cognitive Models of Inflection
Neural networks have long been at the center of a debate around the cogn...

08/15/2023
Using Artificial Populations to Study Psychological Phenomena in Neural Models
The recent proliferation of research into transformer based natural lang...

05/17/2023
Predicting Side Effect of Drug Molecules using Recurrent Neural Networks
Identification and verification of molecular properties such as side eff...
