Exploring the Syntactic Abilities of RNNs with Multi-task Learning

06/12/2017
by Emile Enguehard, et al.

Recent work has explored the syntactic abilities of RNNs using the subject-verb agreement task, which diagnoses sensitivity to sentence structure. RNNs performed this task well in common cases, but faltered in complex sentences (Linzen et al., 2016). We test whether these errors are due to inherent limitations of the architecture or to the relatively indirect supervision provided by most agreement dependencies in a corpus. We trained a single RNN to perform both the agreement task and an additional task, either CCG supertagging or language modeling. Multi-task training led to significantly lower error rates, in particular on complex sentences, suggesting that RNNs have the ability to evolve more sophisticated syntactic representations than shown before. We also show that easily available agreement training data can improve performance on other syntactic tasks, in particular when only a limited amount of training data is available for those tasks. The multi-task paradigm can also be leveraged to inject grammatical knowledge into language models.
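The setup described in the abstract can be pictured as a single recurrent encoder feeding two task-specific output layers, trained jointly. Below is a minimal PyTorch sketch of that idea, not the authors' code: the class name MultiTaskRNN, the layer sizes, and the loss weight aux_weight are illustrative assumptions, and the auxiliary task shown is language modeling (the CCG supertagging variant would swap the LM head for a supertag classifier).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiTaskRNN(nn.Module):
        """Shared LSTM encoder with two heads: verb number (agreement) and next word (LM).
        A sketch of the multi-task idea in the abstract; names and sizes are assumptions."""
        def __init__(self, vocab_size, embed_dim=50, hidden_dim=50, aux_weight=0.5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # shared encoder
            self.agreement_head = nn.Linear(hidden_dim, 2)     # singular vs. plural verb
            self.lm_head = nn.Linear(hidden_dim, vocab_size)   # auxiliary language-model head
            self.aux_weight = aux_weight

        def forward(self, tokens):
            # tokens: (batch, seq_len) word indices
            hidden, _ = self.rnn(self.embed(tokens))           # (batch, seq_len, hidden_dim)
            return hidden

        def loss(self, tokens, verb_positions, verb_numbers, next_tokens):
            hidden = self.forward(tokens)
            # Agreement loss: predict the verb's number from the state just before the verb.
            batch_idx = torch.arange(hidden.size(0))
            pre_verb = hidden[batch_idx, verb_positions - 1]
            agreement_loss = F.cross_entropy(self.agreement_head(pre_verb), verb_numbers)
            # Auxiliary LM loss: predict each following word from the current state.
            lm_logits = self.lm_head(hidden).transpose(1, 2)   # (batch, vocab, seq_len)
            lm_loss = F.cross_entropy(lm_logits, next_tokens)
            return agreement_loss + self.aux_weight * lm_loss

Summing the two losses with a fixed weight is one simple way to realize the multi-task training the paper describes; the key point is that the recurrent encoder is shared, so supervision from the auxiliary task can shape the syntactic representations used for agreement.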
