
What Syntactic Structures block Dependencies in RNN Language Models?

05/24/2019
by Ethan Wilcox, et al.
University of California, Irvine
MIT
Harvard University

Recurrent Neural Networks (RNNs) trained on a language modeling task have been shown to acquire a number of non-local grammatical dependencies with some success. Here, we provide new evidence that RNN language models are sensitive to hierarchical syntactic structure by investigating the filler-gap dependency and constraints on it, known as syntactic islands. Previous work is inconclusive about whether RNNs learn to attenuate their expectations for gaps in island constructions in particular or in any sufficiently complex syntactic environment. This paper gives new evidence for the former by providing control studies that have been lacking so far. We demonstrate that two state-of-the-art RNN models are able to maintain the filler-gap dependency through unbounded sentential embeddings and are also sensitive to the hierarchical relationship between the filler and the gap. Next, we demonstrate that the models are able to maintain possessive pronoun gender expectations through island constructions; this control case rules out the possibility that island constructions block all information flow in these networks. We also evaluate three untested island constraints: coordination islands, left branch islands, and sentential subject islands. Models are able to learn left branch islands and learn coordination islands gradiently, but fail to learn sentential subject islands. Through these controls and new tests, we provide evidence that model behavior is due to finer-grained expectations than gross syntactic complexity, but also that the models are conspicuously un-humanlike in some of their performance characteristics.
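The experiments summarized above rest on a surprisal-based design: a model's expectation for a gap should strengthen when a wh-filler appears upstream, and an island boundary should erase that effect. As a rough illustration of how such a contrast can be computed, here is a minimal sketch. It is not the paper's code: `token_surprisals` is a hypothetical placeholder for whatever language model is being probed (the paper tests RNN language models), and the example sentences are illustrative, not the paper's stimuli.

```python
# Minimal sketch of a 2x2 filler/gap surprisal contrast, in the spirit of the
# experiments described in the abstract. Nothing here is the paper's code:
# `token_surprisals` is a hypothetical hook for any language model that can
# return per-token surprisals, and the example sentences are made up.

from typing import Callable, Dict, List, Sequence, Tuple

Condition = Tuple[str, str]  # e.g. ('+filler', '+gap')

def token_surprisals(sentence: str) -> List[float]:
    """Placeholder: return -log p(token | preceding tokens) for each token.
    Replace the body with a call to a real language model."""
    raise NotImplementedError("plug in a language model here")

def region_surprisal(surprisals: Sequence[float], region: slice) -> float:
    """Summed surprisal over a critical region (here: the post-gap words)."""
    return float(sum(surprisals[region]))

def licensing_interaction(score: Callable[[str], List[float]],
                          sents: Dict[Condition, str],
                          region: slice) -> float:
    """Difference-in-differences over the four filler/gap conditions.
    A positive value means a preceding filler makes the gap less costly than
    it would otherwise be, i.e. the model links filler and gap; an island
    effect shows up as this value collapsing toward zero inside the island."""
    s = {cond: region_surprisal(score(sent), region)
         for cond, sent in sents.items()}
    gap_cost_without_filler = s[('-filler', '+gap')] - s[('-filler', '-gap')]
    gap_cost_with_filler = s[('+filler', '+gap')] - s[('+filler', '-gap')]
    return gap_cost_without_filler - gap_cost_with_filler

# Illustrative quadruple; the critical region is the sentence-final material
# ("yesterday ."), which follows the (filled or unfilled) object position.
example = {
    ('+filler', '+gap'): "I know what the guest bought yesterday .",
    ('+filler', '-gap'): "I know what the guest bought the gift yesterday .",
    ('-filler', '+gap'): "I know that the guest bought yesterday .",
    ('-filler', '-gap'): "I know that the guest bought the gift yesterday .",
}

# With a real model plugged into `token_surprisals`, the interaction for a
# non-island embedding like the one above should come out clearly positive:
# interaction = licensing_interaction(token_surprisals, example, slice(-2, None))
```

Running the same contrast with the gap placed inside an island construction, versus a matched non-island embedding, is the kind of comparison the paper uses to ask whether the models attenuate their gap expectations there specifically or in any sufficiently complex environment.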


Related Research

08/31/2018  What do RNN Language Models Learn about Filler-Gap Dependencies?
RNN language models have achieved state-of-the-art perplexity results an...

03/03/2019  Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
State-of-the-art LSTM language models trained on large corpora learn seq...

06/10/2019  Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations
Deep learning sequence models have led to a marked increase in performan...

09/10/2019  Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study
Neural language models have achieved state-of-the-art performances on ma...

09/05/2018  RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency
Recurrent neural networks (RNNs) are the state of the art in sequence mo...

11/05/2018  Do RNNs learn human-like abstract word order preferences?
RNN language models have achieved state-of-the-art results on various ta...