
What Syntactic Structures block Dependencies in RNN Language Models?

by Ethan Wilcox et al.
University of California, Irvine
Harvard University

Recurrent Neural Networks (RNNs) trained on a language modeling task have been shown to acquire a number of non-local grammatical dependencies with some success. Here, we provide new evidence that RNN language models are sensitive to hierarchical syntactic structure by investigating the filler-gap dependency and the constraints on it known as syntactic islands. Previous work is inconclusive about whether RNNs learn to attenuate their expectations for gaps in island constructions specifically, or in any sufficiently complex syntactic environment. This paper gives new evidence for the former by providing control studies that were previously lacking. We demonstrate that two state-of-the-art RNN models are able to maintain the filler-gap dependency through unbounded sentential embeddings and are also sensitive to the hierarchical relationship between the filler and the gap. Next, we demonstrate that the models maintain possessive pronoun gender expectations through island constructions; this control rules out the possibility that island constructions block all information flow in these networks. We also evaluate three previously untested island constraints: coordination islands, left branch islands, and sentential subject islands. The models learn left branch islands, learn coordination islands in a graded fashion, but fail to learn sentential subject islands. Through these controls and new tests, we provide evidence that model behavior is driven by finer-grained expectations than gross syntactic complexity, but also that the models are conspicuously un-humanlike in some of their performance characteristics.
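The paradigm behind these results is a surprisal comparison: the same post-gap material is scored under minimally different prefixes that cross the presence of a wh-filler with the presence of a gap. Below is a minimal sketch of that wh-licensing-interaction computation, assuming the HuggingFace transformers library with GPT-2 as a stand-in scorer (the paper itself evaluates two RNN language models, not GPT-2); the example items and the `region_surprisal` helper are illustrative, not taken from the paper's materials.

```python
# Illustrative sketch of a wh-licensing-interaction measurement (not the paper's code).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def region_surprisal(prefix: str, region: str) -> float:
    """Summed surprisal (in bits) of `region` given `prefix` under the LM."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    region_ids = tokenizer(" " + region, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, region_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    surprisal = 0.0
    for i in range(region_ids.shape[1]):
        pos = prefix_ids.shape[1] + i  # index of the region token being scored
        # The prediction for the token at `pos` comes from the logits at `pos - 1`.
        surprisal -= log_probs[0, pos - 1, ids[0, pos]].item() / math.log(2)
    return surprisal

# Hypothetical 2x2 design crossing [+/- wh-filler] with [+/- gap];
# the scored region follows the (potential) gap site.
conditions = {
    ("+filler", "+gap"): ("I know what the lion devoured", "at sunrise."),
    ("+filler", "-gap"): ("I know what the lion devoured the gazelle", "at sunrise."),
    ("-filler", "+gap"): ("I know that the lion devoured", "at sunrise."),
    ("-filler", "-gap"): ("I know that the lion devoured the gazelle", "at sunrise."),
}
surprisals = {c: region_surprisal(prefix, region) for c, (prefix, region) in conditions.items()}

# Wh-licensing interaction: a filler should make a gap less surprising and a filled
# object more surprising; an interaction near zero inside an island construction
# suggests the model has attenuated the filler-gap dependency there.
interaction = (surprisals[("-filler", "+gap")] - surprisals[("+filler", "+gap")]) \
            - (surprisals[("-filler", "-gap")] - surprisals[("+filler", "-gap")])
print({c: round(s, 2) for c, s in surprisals.items()},
      "licensing interaction:", round(interaction, 2))
```

To probe a given island constraint, the same four-way comparison is run with the gap placed inside the island construction; the size of the licensing interaction there, relative to a non-island baseline, indicates how strongly the model's gap expectations have been attenuated.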



