On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages

11/08/2020
by Satwik Bhattamishra, et al.

While recurrent models have been effective in NLP tasks, their performance on context-free languages (CFLs) has been found to be quite weak. Given that CFLs are believed to capture important phenomena such as hierarchical structure in natural languages, this discrepancy in performance calls for an explanation. We study the performance of recurrent models on Dyck-n languages, a particularly important and well-studied class of CFLs. We find that while recurrent models generalize nearly perfectly if the lengths of the training and test strings are from the same range, they perform poorly if the test strings are longer. At the same time, we observe that recurrent models are expressive enough to recognize Dyck words of arbitrary length in finite precision if their depths are bounded. Hence, we evaluate our models on samples generated from Dyck languages with bounded depth and find that they are indeed able to generalize to much longer strings. Since natural language datasets have nested dependencies of bounded depth, this may help explain why recurrent models perform well in modeling hierarchical dependencies in natural language data despite prior work indicating poor generalization on Dyck languages. We perform probing studies to support our results and provide comparisons with Transformers.
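The bounded-depth notion discussed above can be made concrete with a small example. Below is a minimal sketch, not taken from the paper, of a stack-based Dyck-n recognizer that also reports a word's maximum nesting depth; the function name `is_dyck` and the particular bracket pairs are illustrative assumptions.

```python
# Minimal sketch (not from the paper): a stack-based recognizer for Dyck-n
# words that also reports the maximum nesting depth reached, illustrating
# the "bounded depth" restriction. The bracket alphabet and the name
# `is_dyck` are hypothetical choices for illustration.

def is_dyck(word, pairs=("()", "[]", "{}")):
    """Return (is_member, max_depth) for a string over n bracket types."""
    openers = {p[0]: p[1] for p in pairs}   # open bracket -> matching close
    closers = {p[1] for p in pairs}
    stack, max_depth = [], 0
    for ch in word:
        if ch in openers:
            stack.append(openers[ch])       # remember the expected closer
            max_depth = max(max_depth, len(stack))
        elif ch in closers:
            if not stack or stack.pop() != ch:
                return False, max_depth     # unmatched or mismatched closer
        else:
            return False, max_depth         # symbol outside the alphabet
    return len(stack) == 0, max_depth       # unmatched openers also fail

# Example: a Dyck-3 word of depth 2 vs. an ill-nested string.
print(is_dyck("([]){}"))   # (True, 2)
print(is_dyck("([)]"))     # (False, 2)
```

Restricting training and test samples to words whose reported depth stays below a fixed bound, while letting their length grow, corresponds to the bounded-depth evaluation setting described in the abstract.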
