Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction

09/17/2020
by Mariya Toneva, et al.

How meaning is represented in the brain is still one of the big open questions in neuroscience. Does a word (e.g., bird) always have the same representation, or does the task under which the word is processed alter its representation (answering "can you eat it?" versus "can it fly?")? The brain activity of subjects who read the same word while performing different semantic tasks has been shown to differ across tasks. However, it is still not understood how the task itself contributes to this difference. In the current work, we study magnetoencephalography (MEG) brain recordings of participants tasked with answering questions about concrete nouns. We investigate the effect of the task (i.e., the question being asked) on the processing of the concrete noun by predicting the millisecond-resolution MEG recordings as a function of both the semantics of the noun and the task. Using this approach, we test several hypotheses about the task-stimulus interactions by comparing the zero-shot predictions made by these hypotheses for novel tasks and nouns not seen during training. We find that incorporating the task semantics significantly improves the prediction of MEG recordings, across participants. The improvement occurs 475-550 ms after the participants first see the word, which corresponds to what is considered to be the ending time of semantic processing for a word. These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
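The zero-shot encoding approach described above can be sketched in a few lines: a linear model is fit to map stimulus features (here, concatenated word and task embeddings) to MEG responses, and is then evaluated on word/task combinations held out of training. This is a minimal illustrative sketch, not the paper's actual pipeline; the feature dimensions, ridge solver, and toy data are all assumptions.

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def predict_meg(word_emb, task_emb, W):
    """Predict MEG sensor activity from concatenated word + task features."""
    x = np.concatenate([word_emb, task_emb])
    return x @ W

# Toy data (hypothetical sizes): 40 training (word, task) pairs,
# 30-dim word features, 10-dim task features, 50 MEG targets.
rng = np.random.default_rng(0)
n, dw, dt, dm = 40, 30, 10, 50
words = rng.normal(size=(n, dw))
tasks = rng.normal(size=(n, dt))
X = np.hstack([words, tasks])
W_true = rng.normal(size=(dw + dt, dm))
Y = X @ W_true + 0.1 * rng.normal(size=(n, dm))

W = fit_ridge(X, Y, alpha=1.0)

# Zero-shot: predict MEG activity for a word and task never seen together
# (or at all) during training, using only their feature vectors.
new_word = rng.normal(size=dw)
new_task = rng.normal(size=dt)
pred = predict_meg(new_word, new_task, W)
print(pred.shape)  # (50,)
```

In the paper's setting, predictions from competing hypotheses (e.g., word features alone versus word plus task features) would be compared against held-out recordings at each time window, which is how the 475-550 ms task effect is localized in time.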


Related research

- Language models and brain alignment: beyond word-level semantics and prediction (12/01/2022)
- ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing using Large Language Models (12/21/2022)
- Shared High Value Research Resources: The CamCAN Human Lifespan Neuroimaging Dataset Processed on the Open Science Grid (10/14/2017)
- Does injecting linguistic structure into language models lead to better alignment with brain recordings? (01/29/2021)
- Cross-view Brain Decoding (04/18/2022)
- Net2Brain: A Toolbox to compare artificial vision models with human brain responses (08/20/2022)
