Understanding language-elicited EEG data by predicting it from a fine-tuned language model

04/02/2019
by Dan Schwartz, et al.

Electroencephalography (EEG) recordings of brain activity taken while participants read or listen to language are widely used within the cognitive neuroscience and psycholinguistics communities as a tool to study language comprehension. Several time-locked, stereotyped EEG responses to word presentations -- known collectively as event-related potentials (ERPs) -- are thought to be markers for semantic or syntactic processes that take place during comprehension. However, the characterization of each individual ERP in terms of what features of a stream of language trigger the response remains controversial. Improving this characterization would make ERPs a more useful tool for studying language comprehension. We take a step towards better understanding these ERPs by fine-tuning a language model to predict them. This new approach to analysis shows for the first time that all of the ERPs are predictable from embeddings of a stream of language; prior work has found only two of the ERPs to be predictable. In addition to this analysis, we examine which ERPs benefit from sharing parameters during joint training. We find that two pairs of ERPs previously identified in the literature as being related to each other benefit from joint training, while several other pairs that benefit from joint training suggest potential relationships. Extensions of this analysis that further examine what kinds of information in the model embeddings relate to each ERP have the potential to elucidate the processes involved in human language comprehension.
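The joint-training idea in the abstract -- one model predicting several ERP components at once from language-model embeddings, with some parameters shared across ERPs -- can be sketched in miniature. The following is an illustrative toy, not the authors' actual architecture or data: the embeddings and ERP amplitudes are synthetic, the dimensions are arbitrary, and a shared linear projection plus per-ERP linear heads stands in for the real fine-tuned model.

```python
# Toy sketch of joint ERP prediction with parameter sharing.
# All names, sizes, and data are illustrative assumptions.
import random

random.seed(0)

EMB_DIM, HIDDEN, N_ERPS, N_WORDS = 8, 4, 3, 100

def randvec(n, scale):
    return [random.gauss(0, scale) for _ in range(n)]

# Synthetic stand-ins for per-word LM embeddings (X) and per-word
# ERP amplitudes (Y); real data would come from EEG recordings.
true_w = [randvec(EMB_DIM, 1.0) for _ in range(N_ERPS)]
X = [randvec(EMB_DIM, 1.0) for _ in range(N_WORDS)]
Y = [[sum(a * b for a, b in zip(w, x)) + random.gauss(0, 0.1)
      for w in true_w] for x in X]

# Shared projection V (used by every ERP) plus one head per ERP --
# a minimal analogue of "sharing parameters during joint training".
V = [randvec(EMB_DIM, 0.1) for _ in range(HIDDEN)]
U = [randvec(HIDDEN, 0.1) for _ in range(N_ERPS)]
lr = 0.01

def forward(x):
    h = [sum(V[i][j] * x[j] for j in range(EMB_DIM)) for i in range(HIDDEN)]
    preds = [sum(U[k][i] * h[i] for i in range(HIDDEN)) for k in range(N_ERPS)]
    return h, preds

def mse():
    return sum((p - y[k]) ** 2
               for x, y in zip(X, Y)
               for k, p in enumerate(forward(x)[1])) / (N_WORDS * N_ERPS)

loss_before = mse()
for _ in range(200):                    # plain per-sample gradient descent
    for x, y in zip(X, Y):
        h, preds = forward(x)
        errs = [p - t for p, t in zip(preds, y)]
        # gradient w.r.t. the shared projection, using the current heads
        gV = [[sum(errs[k] * U[k][i] for k in range(N_ERPS)) * x[j]
               for j in range(EMB_DIM)] for i in range(HIDDEN)]
        for k in range(N_ERPS):
            for i in range(HIDDEN):
                U[k][i] -= lr * errs[k] * h[i]
        for i in range(HIDDEN):
            for j in range(EMB_DIM):
                V[i][j] -= lr * gV[i][j]
loss_after = mse()
print(loss_after < loss_before)
```

Because every ERP head reads from the same shared projection, improving the fit for one ERP reshapes features available to the others -- the mechanism by which related ERPs can benefit (or fail to benefit) from joint training in the paper's analysis.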


