Structural Supervision Improves Learning of Non-Local Grammatical Dependencies

03/03/2019
by Ethan Wilcox et al.

State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail, and have been shown to acquire a number of non-local grammatical dependencies with some success. Here we investigate whether supervision with hierarchical structure enhances learning of a range of grammatical dependencies, a question previously addressed only for subject-verb agreement. Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models with that of Recurrent Neural Network Grammars (RNNGs) (Dyer et al., 2016), which represent hierarchical syntactic structure and use neural control to deploy it in left-to-right processing, on two classes of non-local grammatical dependencies in English -- negative polarity licensing and filler-gap dependencies -- tested in a range of configurations. Using the same training data for both models, we find that the RNNG outperforms the LSTM on both types of dependency and even learns many of the island constraints on the filler-gap dependency. Structural supervision thus provides data-efficiency advantages over purely string-based training of neural language models in acquiring human-like generalizations about non-local grammatical dependencies.
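The controlled psycholinguistic method referenced in the abstract comes down to comparing per-word surprisals across minimal pairs: a model that has learned a dependency should assign lower surprisal to the critical word when its licensor is present. Below is a minimal sketch of that measurement, assuming a generic autoregressive PyTorch language model. The TinyLSTMLM class, the toy vocabulary ids, and the npi_position index are illustrative assumptions, not the authors' actual models or stimuli.

```python
import math
import torch
import torch.nn.functional as F

class TinyLSTMLM(torch.nn.Module):
    """Stand-in word-level LSTM LM with the usual [batch, time, vocab] output."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        self.lstm = torch.nn.LSTM(dim, dim, batch_first=True)
        self.out = torch.nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

def token_surprisals(model, token_ids):
    """Per-token surprisal -log2 P(w_t | w_<t) in bits, for positions t >= 1."""
    with torch.no_grad():
        logits = model(token_ids)                    # [1, T, V]
    log_probs = F.log_softmax(logits, dim=-1)
    targets = token_ids[:, 1:]                       # words being predicted
    # The prediction for position t lives in the logits at position t-1.
    lp = log_probs[:, :-1, :].gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return (-lp / math.log(2.0)).squeeze(0)         # [T-1] surprisals in bits

model = TinyLSTMLM().eval()

# Toy id sequences standing in for an NPI minimal pair, e.g.
#   "no  students ... have ever passed"   (licensed)
#   "the students ... have ever passed"   (unlicensed)
licensed   = torch.tensor([[5, 11, 12, 13, 42, 7]])
unlicensed = torch.tensor([[6, 11, 12, 13, 42, 7]])
npi_position = 4                                     # index of "ever" (id 42)

# A model with human-like NPI knowledge should show a positive licensing
# effect: higher surprisal at the NPI when no licensor precedes it. (This
# untrained toy model will not; it only demonstrates the measurement.)
diff = (token_surprisals(model, unlicensed)[npi_position - 1]
        - token_surprisals(model, licensed)[npi_position - 1])
print(f"licensing effect at NPI: {diff.item():.3f} bits")
```

The same comparison extends to the filler-gap and island-constraint conditions by varying which regions of the stimulus contain the filler and the gap, and reading off surprisal at the critical region.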

Related research

05/24/2019 · What Syntactic Structures block Dependencies in RNN Language Models?
Recurrent Neural Networks (RNNs) trained on a language modeling task hav...

09/22/2021 · Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
Prior work has shown that structural supervision helps English language ...

04/30/2020 · Attribution Analysis of Grammatical Dependencies in LSTMs
LSTM language models have been shown to capture syntax-sensitive grammat...

11/04/2016 · Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
The success of long short-term memory (LSTM) neural networks in language...

10/12/2020 · Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
Humans can learn structural properties about a word from minimal experie...

10/31/2018 · GraphIE: A Graph-Based Framework for Information Extraction
Most modern Information Extraction (IE) systems are implemented as seque...

05/03/2020 · Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models
LSTM-based recurrent neural networks are the state-of-the-art for many n...
