Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps

07/07/2022
by   Alexandre Pasquiou, et al.

Neural Language Models (NLMs) have made tremendous advances in recent years, achieving impressive performance on a variety of linguistic tasks. Capitalizing on this, studies in neuroscience have begun to use NLMs to study neural activity in the human brain during language processing. However, many questions remain unanswered about which factors determine the ability of a neural language model to capture brain activity (i.e., its 'brain score'). Here, we take first steps in this direction and examine the impact of test loss, training corpus, and model architecture (comparing GloVe, LSTM, GPT-2, and BERT) on the prediction of functional Magnetic Resonance Imaging (fMRI) timecourses of participants listening to an audiobook. We find (1) that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words, with the untrained LSTM outperforming the transformer-based models, as it is less affected by context; (2) that training NLMs improves brain scores in the same brain regions irrespective of the model's architecture; (3) that perplexity (test loss) is not a good predictor of brain score; and (4) that training data have a strong influence on the outcome and, notably, that off-the-shelf models may lack the statistical power to detect brain activations. Overall, we outline the impact of model-training choices and suggest good practices for future studies aiming to explain the human language system using neural language models.
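For readers unfamiliar with the 'brain score' metric, the sketch below illustrates the standard encoding-model recipe it rests on: a cross-validated regularized linear regression from model activations to fMRI timecourses, scored by the correlation between predicted and observed voxel signals. This is a minimal sketch, not the authors' pipeline; the names `brain_score`, `X`, and `Y` are illustrative, and the feature extraction and exact regression and scoring details used in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def brain_score(X, Y, n_splits=5, alphas=np.logspace(-1, 4, 6)):
    """Cross-validated, voxel-wise correlation between ridge predictions
    and observed fMRI signal.

    X: (n_volumes, n_features) model activations aligned to the scans
       (e.g., hidden states convolved with a haemodynamic response;
       that alignment step is not shown here).
    Y: (n_volumes, n_voxels) preprocessed fMRI timecourses.
    """
    scores = np.zeros((n_splits, Y.shape[1]))
    for i, (train, test) in enumerate(KFold(n_splits=n_splits).split(X)):
        # Fit one ridge model per voxel on the training folds
        model = RidgeCV(alphas=alphas).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Pearson r per voxel, via the mean product of z-scored signals
        pred_z = (pred - pred.mean(0)) / pred.std(0)
        true_z = (Y[test] - Y[test].mean(0)) / Y[test].std(0)
        scores[i] = (pred_z * true_z).mean(0)
    return scores.mean(0)  # one score per voxel, averaged over folds
```

Perplexity, the test loss referenced in finding (3), is simply the exponential of a model's average per-token cross-entropy on held-out text; the result above says that lowering it does not systematically translate into a higher brain score.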


