Probing for Incremental Parse States in Autoregressive Language Models

11/17/2022
by Tiwalayo Eisape, et al.

Next-word predictions from autoregressive neural language models show remarkable sensitivity to syntax. This work evaluates the extent to which this behavior arises from a learned ability to maintain implicit representations of incremental syntactic structure. We extend work on syntactic probing to the incremental setting and present several probes for extracting incomplete syntactic structure (operationalized as parse states from a stack-based parser) from autoregressive language models. We find that our probes can be used to predict model preferences on ambiguous sentence prefixes, and to causally intervene on model representations and steer model behavior. This suggests that implicit incremental syntactic inferences underlie next-word predictions in autoregressive neural language models.
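To make the probing setup concrete, here is a minimal sketch (not the authors' released code) of how such a probe could be trained: an autoregressive LM's hidden states are read out token by token, and a linear classifier is fit to predict a per-token parse-state label. GPT-2, the single linear layer, and the random placeholder labels are all illustrative assumptions; in the paper's setting, labels would come from the parse states of an incremental stack-based parser.

```python
# Sketch: fit a linear probe from autoregressive LM hidden states to
# per-token parse-state labels. GPT-2 and the random placeholder labels
# are assumptions for illustration; real labels would be derived from
# an incremental stack-based parser run over the same sentences.
import torch
import torch.nn as nn
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

sentence = "The horse raced past the barn fell"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    # (seq_len, hidden_dim) hidden states, one vector per input token
    hidden = model(**inputs).last_hidden_state.squeeze(0)

num_states = 5  # hypothetical number of distinct parse-state classes
# Placeholder supervision: random labels, one per token. Replace with
# genuine parser states for a real probing experiment.
labels = torch.randint(0, num_states, (hidden.size(0),))

probe = nn.Linear(hidden.size(-1), num_states)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

for _ in range(200):
    logits = probe(hidden)  # (seq_len, num_states)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("predicted parse states:", probe(hidden).argmax(-1).tolist())
```

The causal-intervention result can be sketched in the same spirit: shift a hidden state along a probe-derived direction and observe the effect on the next-word distribution. The layer index, steering coefficient, and random stand-in direction below are all hypothetical choices, not the paper's values.

```python
# Sketch: steer GPT-2's next-word distribution by shifting the final
# token's hidden state at one layer along a (here random, stand-in)
# probe direction. Layer 8 and the coefficient 5.0 are arbitrary.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
lm.eval()

direction = torch.randn(lm.config.n_embd)
direction = direction / direction.norm()  # unit-norm steering direction

def steer(module, module_in, module_out):
    # A GPT-2 block returns a tuple whose first element is the hidden states.
    hidden = module_out[0]
    hidden[:, -1, :] += 5.0 * direction  # push the final token's state
    return (hidden,) + module_out[1:]

handle = lm.transformer.h[8].register_forward_hook(steer)
inputs = tokenizer("When the dog scratched the vet", return_tensors="pt")
with torch.no_grad():
    next_logits = lm(**inputs).logits[0, -1]
print("steered next word:", tokenizer.decode(next_logits.argmax().item()))
handle.remove()  # restore the unmodified model
```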

research · 09/30/2021
Syntactic Persistence in Language Models: Priming as a Window into Abstract Language Representations
We investigate the extent to which modern, neural language models are su...

research · 06/06/2021
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
We present a targeted, scaled-up comparison of incremental processing in...

research · 09/23/2019
Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
Neural language models (LMs) perform well on tasks that require sensitiv...

research · 04/10/2020
Overestimation of Syntactic Representation in Neural Language Models
With the advent of powerful neural language models over the last few yea...

research · 11/05/2018
Do RNNs learn human-like abstract word order preferences?
RNN language models have achieved state-of-the-art results on various ta...

research · 04/13/2022
Probing for Constituency Structure in Neural Language Models
In this paper, we investigate to which extent contextual neural language...

research · 06/11/2018
Finding Syntax in Human Encephalography with Beam Search
Recurrent neural network grammars (RNNGs) are generative models of (tree...
