
Syntactic Persistence in Language Models: Priming as a Window into Abstract Language Representations

by Arabella Sinclair et al.
University of Amsterdam

We investigate the extent to which modern neural language models are susceptible to syntactic priming: the phenomenon where the syntactic structure of a sentence makes the same structure more probable in a follow-up sentence. We explore how priming can be used to study the nature of the syntactic knowledge acquired by these models. We introduce a novel metric and release Prime-LM, a large corpus in which we control for various linguistic factors that interact with priming strength. We find that recent large Transformer models indeed show evidence of syntactic priming, but also that the syntactic generalisations learned by these models are to some extent modulated by semantic information. We report surprisingly strong priming effects when priming with multiple sentences, each with different words and meaning but with identical syntactic structure. We conclude that the syntactic priming paradigm is a highly useful additional tool for gaining insights into the capacities of language models.
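The priming paradigm described above can be operationalised by comparing a language model's log-probability of a target sentence after a structurally congruent prime versus an incongruent one. The following is a minimal sketch of that comparison; the function name, the example log-probabilities, and the exact formulation of the effect (a simple log-probability difference) are illustrative assumptions, not the paper's actual metric.

```python
def priming_effect(logp_after_congruent: float, logp_after_incongruent: float) -> float:
    """Hypothetical priming-effect score: the gain in target log-probability
    when the preceding prime shares the target's syntactic structure.
    A positive value indicates priming (the congruent prime made the
    target structure more likely under the model)."""
    return logp_after_congruent - logp_after_incongruent

# Illustrative (made-up) log-probabilities for a double-object target
# sentence, e.g. "The chef gave the waiter a menu", scored by some LM
# after a double-object prime vs. a prepositional-object prime.
effect = priming_effect(logp_after_congruent=-12.3, logp_after_incongruent=-14.8)
print(effect > 0)  # True: the model assigns the target more probability after the congruent prime
```

In practice the log-probabilities would come from an autoregressive LM scoring the target tokens conditioned on the prime, and the effect would be averaged over many prime/target pairs that differ lexically but share structure.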



