Syntactic Persistence in Language Models: Priming as a Window into Abstract Language Representations

09/30/2021
by Arabella Sinclair, et al.

We investigate the extent to which modern neural language models are susceptible to syntactic priming: the phenomenon whereby the syntactic structure of a sentence makes that same structure more probable in a follow-up sentence. We explore how priming can be used to study the nature of the syntactic knowledge acquired by these models. We introduce a novel metric and release Prime-LM, a large corpus in which we control for various linguistic factors that interact with priming strength. We find that recent large Transformer models indeed show evidence of syntactic priming, but also that the syntactic generalisations learned by these models are to some extent modulated by semantic information. We report surprisingly strong priming effects when priming with multiple sentences, each with different words and meaning but with identical syntactic structure. We conclude that the syntactic priming paradigm is a highly useful additional tool for gaining insight into the capacities of language models.
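As an illustration of the priming setup described above, the sketch below measures a simple priming effect with an off-the-shelf causal language model: the difference in log-probability that the model assigns to a target sentence after a structurally congruent prime versus a structurally incongruent one. The choice of model (GPT-2 via Hugging Face transformers), the ditransitive example sentences, and this difference-of-log-probabilities measure are illustrative assumptions for this sketch, not the metric or corpus introduced in the paper.

```python
# Illustrative sketch (not the paper's metric): a priming effect measured as the
# difference in target log-probability after a congruent vs. incongruent prime.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def target_logprob(prime: str, target: str) -> float:
    """Sum of log-probabilities of the target tokens, conditioned on the prime."""
    prime_ids = tokenizer(prime, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + target, return_tensors="pt").input_ids
    input_ids = torch.cat([prime_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probabilities predicted from each position for the *next* token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    # Positions whose next-token predictions correspond to the target tokens.
    start = prime_ids.size(1) - 1
    token_lps = log_probs[start:start + target_ids.size(1)].gather(
        1, target_ids[0].unsqueeze(1)
    )
    return token_lps.sum().item()

# Hypothetical ditransitive example: prepositional-object (PO) vs. double-object
# (DO) primes, followed by a PO target with different words and meaning.
po_prime = "The teacher handed a book to the student."
do_prime = "The teacher handed the student a book."
po_target = "The chef gave a spoon to the waiter."

effect = target_logprob(po_prime, po_target) - target_logprob(do_prime, po_target)
print(f"Priming effect (congruent - incongruent log-prob): {effect:.3f}")
```

A positive value under this measure would indicate that the congruent (PO) prime raises the probability of the PO target relative to the incongruent (DO) prime, i.e. a syntactic priming effect in the sense studied above.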

Related research

Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models (09/23/2019)
Neural language models (LMs) perform well on tasks that require sensitiv...

Probing for Incremental Parse States in Autoregressive Language Models (11/17/2022)
Next-word predictions from autoregressive neural language models show re...

Probing for Constituency Structure in Neural Language Models (04/13/2022)
In this paper, we investigate to which extent contextual neural language...

The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations (11/07/2015)
We introduce a new test of how well language models capture meaning in c...

Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge (07/22/2021)
Prior research has explored the ability of computational models to predi...

What can Neural Referential Form Selectors Learn? (08/15/2021)
Despite achieving encouraging results, neural Referring Expression Gener...

Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities (10/21/2022)
Humans exhibit garden path effects: When reading sentences that are temp...