Modeling structure-building in the brain with CCG parsing and large language models

10/28/2022
by Miloš Stanojević et al.

To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad-coverage tools from natural-language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context-free grammars (CFGs), yet such formalisms are not sufficiently expressive for human languages. Combinatory Categorial Grammars (CCGs) are sufficiently expressive, directly compositional models of grammar whose flexible constituency affords incremental interpretation. In this work we evaluate whether the more expressive CCG provides a better model than a CFG for human neural signals collected with fMRI while participants listened to an audiobook story. We further test between variants of CCG that differ in how they handle optional adjuncts. These evaluations are carried out against a baseline that includes estimates of next-word predictability from a Transformer neural network language model. This comparison reveals unique contributions of CCG structure-building, predominantly in the left posterior temporal lobe: CCG-derived measures offer a superior fit to neural signals compared to those derived from a CFG. These effects are spatially distinct from bilateral superior temporal effects that are unique to predictability. Neural effects of structure-building are thus separable from predictability during naturalistic listening, and those effects are best characterized by a grammar whose expressive power is motivated on independent linguistic grounds.
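
The predictability baseline mentioned above is typically operationalized as word-by-word surprisal under a language model. As a minimal sketch, assuming an off-the-shelf GPT-2 model accessed through the Hugging Face transformers library (the paper's actual model and preprocessing may differ), per-token surprisal can be computed as below; word-level values for regression against neural signals would then sum the surprisals of each word's sub-word pieces.

```python
# Minimal sketch of a next-word predictability (surprisal) estimate from a
# Transformer language model. GPT-2 and the Hugging Face API are illustrative
# assumptions, not necessarily the model used in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(text):
    """Return (token, surprisal-in-bits) pairs for every token after the first,
    where surprisal is -log2 p(token | left context)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Position i predicts token i+1: align logits[:-1] with ids[1:].
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nats = -logprobs[torch.arange(targets.numel()), targets]
    bits = nats / torch.log(torch.tensor(2.0))
    return list(zip(tokenizer.convert_ids_to_tokens(targets), bits.tolist()))

for tok, s in token_surprisals("After the storm the harbour was silent."):
    print(f"{tok:>12s}  {s:6.2f} bits")
```

In analyses of this kind, such predictability estimates, together with structure-building counts derived from CCG or CFG parses, are convolved with a hemodynamic response function and entered as regressors against the fMRI signal; the sketch above covers only the predictability predictor.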
