Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models

05/03/2020
by Kaiji Lu, et al.

LSTM-based recurrent neural networks are the state of the art for many natural language processing (NLP) tasks. Despite their performance, it is unclear whether, or how, LSTMs learn structural features of natural languages such as subject-verb number agreement in English. Lacking this understanding, the generality of LSTM performance on this task and their suitability for related tasks remain uncertain. Further, errors cannot be properly attributed to a lack of structural capability, training data omissions, or other exceptional faults. We introduce *influence paths*, a causal account of structural properties as carried by paths across gates and neurons of a recurrent neural network. The approach refines the notion of influence (the subject's grammatical number influences the grammatical number of the subsequent verb) into a set of gate- or neuron-level paths. The set localizes and segments the concept (e.g., subject-verb agreement), its constituent elements (e.g., the subject), and related or interfering elements (e.g., attractors). We exemplify the methodology on a widely studied multi-layer LSTM language model, demonstrating how it accounts for subject-verb number agreement. The results offer both a finer and a more complete view of an LSTM's handling of this structural aspect of the English language than prior results based on diagnostic classifiers and ablation.
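The abstract's core move, refining an aggregate influence (of the subject's number on the verb's number) into neuron-level components, can be illustrated with a simple first-order stand-in. Below is a minimal PyTorch sketch, not the authors' implementation: it unrolls an untrained toy LSTM over a sentence containing an attractor ("car"), backpropagates the plural-versus-singular verb logit difference, and scores each cell-state unit at the subject position by gradient times activation. The vocabulary, model sizes, and the gradient-times-activation score are illustrative assumptions; the paper's actual method computes path-specific influence quantities across the LSTM's gates.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and model sizes; all weights are random and untrained,
# so the printed units are illustrative only.
VOCAB = {"the": 0, "boy": 1, "boys": 2, "near": 3, "car": 4, "is": 5, "are": 6}
V, D_EMB, D_HID = len(VOCAB), 16, 32

emb = nn.Embedding(V, D_EMB)
cell = nn.LSTMCell(D_EMB, D_HID)   # unrolled manually to expose each cell state
out = nn.Linear(D_HID, V)

# "the boys near the car ___": the verb must agree with "boys", not "car".
tokens = ["the", "boys", "near", "the", "car"]
ids = torch.tensor([VOCAB[w] for w in tokens])

h = torch.zeros(1, D_HID)
c = torch.zeros(1, D_HID)
cell_states = []
for t in range(len(tokens)):
    h, c = cell(emb(ids[t]).unsqueeze(0), (h, c))
    c.retain_grad()            # keep the gradient for this intermediate cell state
    cell_states.append(c)

logits = out(h)[0]
# Aggregate influence target: the model's preference for the plural verb.
number_score = logits[VOCAB["are"]] - logits[VOCAB["is"]]
number_score.backward()

# Refine the aggregate influence into neuron-level components at the subject
# position (t=1, "boys"): gradient x activation per cell-state unit.
subj = 1
influence = (cell_states[subj].grad * cell_states[subj]).detach()[0]
print("top number-carrying cell units:", influence.abs().topk(5).indices.tolist())
```

With a trained language model in place of the random one, consistently high-scoring units at the subject position would be candidate carriers of grammatical number; the paper sharpens this kind of aggregate attribution into distinct paths through the LSTM's gates.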



Related research

08/24/2018 · Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information
How do neural language models keep track of number agreement between sub...

05/29/2018 · LSTMs Exploit Linguistic Attributes of Data
While recurrent neural networks have found success in a variety of natur...

09/11/2018 · Can LSTM Learn to Capture Agreement? The Case of Basque
Sequential neural networks models are powerful tools in a variety of Nat...

11/02/2020 · Abstracting Influence Paths for Explaining (Contextualization of) BERT Models
While "attention is all you need" may be proving true, we do not yet kno...

03/03/2019 · Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
State-of-the-art LSTM language models trained on large corpora learn seq...

10/10/2020 · Can RNNs trained on harder subject-verb agreement instances still perform well on easier ones?
The main subject and the associated verb in English must agree in gramma...

01/06/2021 · Can RNNs learn Recursive Nested Subject-Verb Agreements?
One of the fundamental principles of contemporary linguistics states tha...
