
Does injecting linguistic structure into language models lead to better alignment with brain recordings?

by Mostafa Abdou, et al.

Neuroscientists evaluate deep neural networks for natural language processing as candidate models of how language is processed in the brain. These models are often trained without explicit linguistic supervision, yet have been shown to learn some linguistic structure nonetheless (Manning et al., 2020), potentially calling into question the relevance of symbolic linguistic theories for modeling such cognitive processes (Warstadt and Bowman, 2020). We evaluate, across two fMRI datasets, whether language models align better with brain recordings when their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency annotations or Minimal Recursion Semantics annotations, we find that alignment improves significantly for one of the datasets; for the other, the results are more mixed. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding the range of possible scientific inferences a neuroscientist could make, and opens up new opportunities for cross-pollination between computational neuroscience and linguistics.




Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains

In this paper, we define and apply representational stability analysis (...

Vector Symbolic Architectures answer Jackendoff's challenges for cognitive neuroscience

Jackendoff (2002) posed four challenges that linguistic combinatoriality...

Net2Brain: A Toolbox to compare artificial vision models with human brain responses

We introduce Net2Brain, a graphical and command-line user interface tool...

Infusing Finetuning with Semantic Dependencies

For natural language processing systems, two kinds of evidence support t...

Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction

How meaning is represented in the brain is still one of the big open que...

Language models and brain alignment: beyond word-level semantics and prediction

Pretrained language models that have been trained to predict the next wo...

Connecting Neural Response Measurements to Computational Models of Language: a non-comprehensive guide

Understanding the neural basis of language comprehension in the brain ha...