Colorless green recurrent networks dream hierarchically

03/29/2018
by Kristina Gulordava et al.

Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our evaluation nonsensical sentences where RNNs cannot rely on semantic or lexical cues ("The colorless green ideas I ate with the chair sleep furiously"), and, for Italian, we compare model performance to human intuitions. Our language-model-trained RNNs make reliable predictions about long-distance agreement and do not lag much behind human performance. These results support the hypothesis that RNNs are not just shallow-pattern extractors, but also acquire deeper grammatical competence.
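The evaluation protocol the abstract describes reduces to a simple probability comparison: present the language model with a sentence prefix whose subject is separated from its verb by intervening material, and check whether the model assigns higher probability to the verb form that agrees with the subject than to the mismatched form. Below is a minimal sketch of that comparison, assuming a word-level LSTM language model in PyTorch; the class, vocabulary, and test item are illustrative stand-ins, not the paper's actual models or stimuli (the authors' trained checkpoints and Wikipedia training corpora are not reproduced here).

import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Word-level LSTM language model: embedding -> LSTM -> logits over vocab."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        states, _ = self.lstm(self.embed(tokens))
        return self.out(states)  # next-token logits at every position

def candidate_logprobs(model, vocab, prefix, candidates):
    """Log-probability assigned to each candidate verb right after the prefix."""
    ids = torch.tensor([[vocab[w] for w in prefix]])
    with torch.no_grad():
        next_token_logits = model(ids)[0, -1]
        logp = torch.log_softmax(next_token_logits, dim=-1)
    return {w: logp[vocab[w]].item() for w in candidates}

# Toy vocabulary and a nonsense item in the spirit of the paper's stimuli:
# lexical/semantic cues are removed, but the syntax still demands agreement.
words = ["the", "colorless", "green", "ideas", "i", "ate", "with",
         "chair", "sleep", "sleeps"]
vocab = {w: i for i, w in enumerate(words)}

model = LSTMLanguageModel(len(vocab))  # untrained stand-in; the paper trains on Wikipedia text

prefix = ["the", "colorless", "green", "ideas", "i", "ate",
          "with", "the", "chair"]
scores = candidate_logprobs(model, vocab, prefix, ("sleep", "sleeps"))
# The head noun "ideas" is plural, so a model that tracks the long-distance
# dependency should prefer "sleep" over the singular distractor "sleeps".
print(scores, "correct:", scores["sleep"] > scores["sleeps"])

With the untrained model above the comparison is essentially a coin flip; the paper's finding is that language models trained on Italian, English, Hebrew, and Russian corpora get such comparisons right well above chance, even on nonsensical items.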


Related research

03/15/2019
Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages
How do typological properties such as word order and morphological case ...

07/18/2018
Distinct patterns of syntactic agreement errors in recurrent networks and humans
Determining the correct form of a verb in context requires an understand...

01/06/2021
Can RNNs learn Recursive Nested Subject-Verb Agreements?
One of the fundamental principles of contemporary linguistics states tha...

02/25/2018
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks
Syntactic rules in human language usually refer to the hierarchical stru...

06/09/2023
Language Models Can Learn Exceptions to Syntactic Rules
Artificial neural networks can generalize productively to novel contexts...

02/29/2016
Representation of linguistic form and function in recurrent neural networks
We present novel methods for analyzing the activation patterns of RNNs f...

09/05/2018
RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency
Recurrent neural networks (RNNs) are the state of the art in sequence mo...
