Understanding Robust Generalization in Learning Regular Languages

by   Soham Dan, et al.

A key feature of human intelligence is the ability to generalize beyond the training distribution, for instance, parsing longer sentences than seen in the past. Currently, deep neural networks struggle to generalize robustly to such shifts in the data distribution. We study robust generalization in the context of using recurrent neural networks (RNNs) to learn regular languages. We hypothesize that standard end-to-end modeling strategies cannot generalize well to systematic distribution shifts and propose a compositional strategy to address this. We compare an end-to-end strategy that maps strings to labels with a compositional strategy that predicts the structure of the deterministic finite-state automaton (DFA) that accepts the regular language. We theoretically prove that the compositional strategy generalizes significantly better than the end-to-end strategy. In our experiments, we implement the compositional strategy via an auxiliary task where the goal is to predict the intermediate states visited by the DFA when parsing a string. Our empirical results support our hypothesis, showing that auxiliary tasks can enable robust generalization. Interestingly, the end-to-end RNN generalizes significantly better than the theoretical lower bound, suggesting that it is able to achieve at least some degree of robust generalization.
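To make the contrast concrete, here is a minimal illustrative sketch (not code from the paper): a toy DFA for the regular language "binary strings with an even number of 1s". It shows the two supervision signals the abstract compares: the end-to-end signal (only the final accept/reject label) and the compositional auxiliary signal (the full sequence of intermediate DFA states visited while parsing the string). The state names and transition table are invented for this example.

```python
# Toy DFA over the alphabet {0, 1} accepting strings with an even number of 1s.
# q0 = "even number of 1s so far" (accepting), q1 = "odd number of 1s so far".
TRANSITIONS = {
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q1", ("q1", "1"): "q0",
}
START, ACCEPTING = "q0", {"q0"}

def run_dfa(string):
    """Return (accepted, states_visited).

    `accepted` is the end-to-end label; `states_visited` is the
    intermediate-state sequence used as the auxiliary target.
    """
    state = START
    visited = [state]  # record every state the DFA passes through
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
        visited.append(state)
    return state in ACCEPTING, visited

accepted, states = run_dfa("1011")
# "1011" contains three 1s (odd), so the DFA rejects it
print(accepted, states)  # False ['q0', 'q1', 'q1', 'q0', 'q1']
```

In the paper's auxiliary-task setup, an RNN trained on strings would be asked to predict labels analogous to `states_visited` at each time step, rather than only the final `accepted` bit.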
