Do RNNs learn human-like abstract word order preferences?

11/05/2018
by Richard Futrell, et al.

RNN language models have achieved state-of-the-art results on various tasks, but what exactly they represent about syntax remains unclear. Here we investigate whether RNN language models learn human-like word order preferences in syntactic alternations. We collect language model surprisal scores for controlled sentence stimuli exhibiting major syntactic alternations in English: heavy NP shift, particle shift, the dative alternation, and the genitive alternation. We show that RNN language models reproduce human preferences in these alternations based on NP length, animacy, and definiteness. We also collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment to directly manipulate the predictors of syntactic alternations. The RNNs' performance patterns with the human acceptability ratings and is not matched by an n-gram baseline model. Our results show that RNNs learn the abstract features of weight, animacy, and definiteness that underlie soft constraints on syntactic alternations.
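The core quantity behind these results, surprisal, can be computed from any autoregressive language model. The sketch below is illustrative only, not the authors' code: it substitutes GPT-2 via the Hugging Face transformers library for the paper's RNN LMs, and the minimal-pair sentences are hypothetical. A model's word order preference shows up as the surprisal difference between the two variants of an alternation.

```python
# Minimal sketch: total sentence surprisal, -sum_i log2 p(w_i | w_<i), in bits.
# GPT-2 stands in here for the RNN LMs scored in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_surprisal(sentence: str) -> float:
    """Total surprisal of a sentence under the model, in bits."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # (1, seq_len, vocab)
    # Log-probability assigned to each actual token given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    return float(-token_lp.sum() / torch.log(torch.tensor(2.0)))

# Hypothetical particle-shift minimal pair: the signed surprisal
# difference indicates which variant the model prefers.
v1 = sentence_surprisal("The girl picked up the heavy wooden stick.")
v2 = sentence_surprisal("The girl picked the heavy wooden stick up.")
print(v1 - v2)  # negative => lower surprisal for (= preference for) v1
```

On controlled stimuli like the paper's, such surprisal differences can then be related to NP length, animacy, and definiteness to test whether the model tracks the same soft constraints as humans.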



Related research:

- What do RNN Language Models Learn about Filler-Gap Dependencies? (08/31/2018)
- Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment (05/01/2020)
- Probing for Incremental Parse States in Autoregressive Language Models (11/17/2022)
- Overestimation of Syntactic Representation in Neural Language Models (04/10/2020)
- Indicatements that character language models learn English morpho-syntactic units and regularities (08/31/2018)
- A Data-Oriented Model of Literary Language (01/12/2017)
- Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks (09/22/2022)
