Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction

by Mariya Toneva et al.

How meaning is represented in the brain remains one of the big open questions in neuroscience. Does a word (e.g., bird) always have the same representation, or does the task under which the word is processed alter that representation (answering "can you eat it?" versus "can it fly?")? The brain activity of subjects who read the same word while performing different semantic tasks has been shown to differ across tasks, but it is still not understood how the task itself contributes to this difference. In the current work, we study magnetoencephalography (MEG) recordings of participants tasked with answering questions about concrete nouns. We investigate the effect of the task (i.e., the question being asked) on the processing of the concrete noun by predicting the millisecond-resolution MEG recordings as a function of both the semantics of the noun and the semantics of the task. Using this approach, we test several hypotheses about task-stimulus interactions by comparing the zero-shot predictions they make for novel tasks and nouns not seen during training. We find that incorporating the task semantics significantly improves the prediction of MEG recordings across participants. The improvement occurs 475–550 ms after the participants first see the word, which corresponds to what is considered the end of semantic processing for a word. These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
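The prediction setup described in the abstract (regressing MEG recordings on a joint representation of the noun and the task, evaluated zero-shot on held-out nouns and tasks) can be sketched as follows. This is a minimal illustration, not the paper's exact model: the dimensions, the random embeddings, and the choice of concatenation plus ridge regression as the task-stimulus hypothesis are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 nouns, 4 tasks, 50-d noun embeddings,
# 10-d task embeddings, 102 MEG sensors at one time window.
n_nouns, n_tasks = 8, 4
d_word, d_task, n_sensors = 50, 10, 102

word_emb = rng.normal(size=(n_nouns, d_word))   # stand-in noun semantics
task_emb = rng.normal(size=(n_tasks, d_task))   # stand-in task semantics

# Design matrix: one row per (noun, task) pair; features are the
# concatenation of noun and task embeddings -- one simple hypothesis
# for how stimulus and task combine.
pairs = [(i, j) for i in range(n_nouns) for j in range(n_tasks)]
X = np.array([np.concatenate([word_emb[i], task_emb[j]]) for i, j in pairs])

# Simulated MEG responses for each pair (stand-in for real recordings).
Y = rng.normal(size=(len(pairs), n_sensors))

# Zero-shot split: hold out one noun and one task entirely, so every
# test pair involves a noun or a task never seen during training.
held_noun, held_task = 0, 0
train = [k for k, (i, j) in enumerate(pairs) if i != held_noun and j != held_task]
test = [k for k, (i, j) in enumerate(pairs) if i == held_noun or j == held_task]

# Ridge regression fit on training pairs only.
lam = 1.0
Xtr, Ytr = X[train], Y[train]
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ Ytr)

# Predict sensor responses for the unseen noun/task combinations;
# competing hypotheses would be compared by their accuracy on this set.
Y_pred = X[test] @ W
print(Y_pred.shape)
```

In this toy split, 11 of the 32 (noun, task) pairs are held out, and each hypothesized feature space would be scored by how well its zero-shot predictions match the held-out recordings.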





Code Repositories


Official repository for the paper "Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction"

