Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study

09/10/2019
by Aixiu An, et al.

Neural language models have achieved state-of-the-art performance on many NLP tasks, and have recently been shown to learn a number of hierarchically sensitive syntactic dependencies between individual words. However, equally important for language processing is the ability to combine words into phrasal constituents and to use constituent-level features to drive downstream expectations. Here we investigate neural models' ability to represent constituent-level features, using coordinated noun phrases as a case study. We assess whether different neural language models trained on English and French represent phrase-level number and gender features, and whether they use those features to drive downstream expectations. Our results suggest that models use a linear combination of NP constituent number to drive CoordNP/verb number agreement. This behavior is highly regular and even sensitive to local syntactic context; however, it differs crucially from observed human behavior. Models have less success with gender agreement. Models trained on large corpora perform best, and there is no obvious advantage for models trained using explicit syntactic supervision.
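
As an illustration of the kind of number-agreement test described above, the sketch below compares the surprisal a pretrained language model assigns to a singular versus a plural verb after a coordinated noun phrase subject. This is a minimal sketch only: GPT-2, the HuggingFace transformers library, and the example stimuli are stand-ins chosen here for convenience, not the models or materials evaluated in the paper.

```python
# Minimal, illustrative sketch (not the authors' code): compare the surprisal a
# pretrained LM assigns to singular vs. plural verb forms after a coordinated NP.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(prefix: str, continuation: str) -> float:
    """Surprisal (in bits) of `continuation` given `prefix` under the model."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, cont_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    total = 0.0
    offset = prefix_ids.size(1)
    for i in range(cont_ids.size(1)):
        # logits at position j predict the token at position j + 1
        token_logp = log_probs[0, offset + i - 1, cont_ids[0, i]].item()
        total -= token_logp / math.log(2)  # convert nats to bits
    return total

# Coordinated NP subject: human speakers strongly prefer the plural verb here.
prefix = "The boy and the girl"
print("singular verb:", surprisal(prefix, " is in the garden"))
print("plural verb:  ", surprisal(prefix, " are in the garden"))
```

If the model represents the coordinated phrase itself as plural, the plural verb should receive lower surprisal than the singular one in this kind of comparison.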
