Memorize or generalize? Searching for a compositional RNN in a haystack

02/18/2018
by Adam Liska, et al.

Neural networks are very powerful learning systems, but they do not readily generalize from one task to another. This is partly because they do not learn in a compositional way, that is, by discovering skills that are shared by different tasks and recombining them to solve new problems. In this paper, we explore the compositional generalization capabilities of recurrent neural networks (RNNs). We first propose the lookup table composition domain as a simple setup to test compositional behaviour, and show that it is theoretically possible for a standard RNN to learn to behave compositionally in this domain when trained with standard gradient descent and provided with additional supervision. We then remove this additional supervision and perform a search over a large number of model initializations to investigate the proportion of RNNs that still converge to a compositional solution. We find that a small but non-negligible proportion of RNNs reach partial compositional solutions even without special architectural constraints. This suggests that a combination of gradient descent and evolutionary strategies directly favouring the minority of models that develop more compositional approaches might suffice to lead standard RNNs towards compositional solutions.
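To make the lookup table composition domain concrete, the sketch below illustrates the kind of task the abstract describes: atomic tasks are lookup tables (bijections) over short bit strings, and composed tasks require applying two tables in sequence. The specific choices here (3-bit strings, the composition order, and the dictionary representation) are illustrative assumptions, not details taken from the paper.

```python
import itertools
import random

# Illustrative sketch of the lookup table composition domain (assumed details:
# 3-bit strings, composition applied left-to-right).
BITS = 3
strings = [''.join(bits) for bits in itertools.product('01', repeat=BITS)]

def random_table(rng):
    """Return a random bijection over all 3-bit strings as a dict."""
    shuffled = strings[:]
    rng.shuffle(shuffled)
    return dict(zip(strings, shuffled))

rng = random.Random(0)
t1, t2 = random_table(rng), random_table(rng)

x = '001'
atomic = t1[x]         # atomic task: apply a single table
composed = t2[t1[x]]   # composed task: apply t1, then t2

print(f"t1({x}) = {atomic}")
print(f"t2(t1({x})) = {composed}")
```

A compositional learner that has mastered t1 and t2 individually should handle the composed task without seeing its input-output pairs during training; a learner that merely memorizes must be shown the composed pairs explicitly.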

