Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context

02/28/2023
by Alexandre Pasquiou, et al.

A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical level (word processing) and at the supra-lexical level (sentence and discourse processing). To what extent are these regions separated or intertwined? To address this question, we trained a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus from which we selectively removed either syntactic or semantic information. We then assessed to what extent these information-restricted models could predict the time courses of fMRI signals recorded from humans listening to naturalistic text. We also manipulated the size of the context provided to GPT-2 in order to determine the integration windows of the brain regions involved in supra-lexical processing. Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary considerably across regions. Furthermore, we found an asymmetry between the two hemispheres: semantic and syntactic processing are more dissociated in the left hemisphere than in the right, and the left and right hemispheres show greater sensitivity to short and long contexts, respectively. Information-restricted NLP models thus shed new light on the spatial organization of syntactic processing, semantic processing and compositionality.
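The core method described here is an encoding analysis: activations from an (information-restricted) language model serve as regressors for predicting voxel-wise fMRI time courses, and the quality of the prediction indicates a region's sensitivity to the information the model retains. The following is a minimal sketch of such an analysis in Python, assuming the Hugging Face transformers and scikit-learn libraries. It is not the authors' pipeline; the text and fMRI variables are placeholders, and a real analysis would convolve the token-level features with a hemodynamic response function and resample them to the scan times before regression.

import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def activations(text: str, max_context: int = 1024) -> np.ndarray:
    """Last-layer hidden states, one row per token.

    `max_context` caps how many tokens the model sees, a crude stand-in
    for the context-size manipulation described in the abstract.
    """
    ids = tokenizer(text, return_tensors="pt",
                    truncation=True, max_length=max_context)
    with torch.no_grad():
        out = model(**ids)
    return out.last_hidden_state.squeeze(0).numpy()   # (n_tokens, 768)

# Placeholder data: fake BOLD signal with one sample per token. In a real
# analysis, features are HRF-convolved and aligned to acquisition times.
X = activations("The little prince sat down on a stone and looked at the sky.")
n_voxels = 500
y = np.random.randn(X.shape[0], n_voxels)

# Ridge regression mapping activations to voxels (RidgeCV handles the
# multi-output case), with the penalty chosen by cross-validation.
encoder = RidgeCV(alphas=np.logspace(-1, 4, 6)).fit(X, y)

# Voxel-wise fit: correlation between predicted and observed signal.
# Computed on held-out runs, this correlation map is the sensitivity map.
pred = encoder.predict(X)
r = np.array([np.corrcoef(pred[:, v], y[:, v])[0, 1]
              for v in range(n_voxels)])

Comparing such correlation maps across models trained on syntax-only versus semantics-only corpora, or across context lengths passed as max_context, is the general logic behind the contrasts reported in the abstract.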



research · 05/03/2022
Neural Language Taskonomy: Which NLP Tasks are the most Predictive of fMRI Brain Activity?
Several popular Transformer based language models have been found to be ...

research · 06/18/2017
Lexical representation explains cortical entrainment during speech comprehension
Results from a recent neuroimaging study on spoken sentence comprehensio...

research · 06/06/2021
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
We present a targeted, scaled-up comparison of incremental processing in...

research · 03/02/2021
Decomposing lexical and compositional syntax and semantics with deep language models
The activations of language transformers like GPT2 have been shown to li...

research · 03/02/2018
Syntax-Aware Language Modeling with Recurrent Neural Networks
Neural language models (LMs) are typically trained using only lexical fe...

research · 08/29/2018
A Neural Model of Adaptation in Reading
It has been argued that humans rapidly adapt their lexical and syntactic...

research · 07/06/2023
Agentivity and Telicity in GilBERTo: Cognitive Implications
The goal of this study is to investigate whether a Transformer-based neu...
