On the Realization of Compositionality in Neural Networks

06/04/2019
by Joris Baan, et al.

We present a detailed comparison of two types of sequence-to-sequence models trained to perform a compositional task. The models are architecturally identical at inference time, but differ in the way that they are trained: our baseline model is trained with a task-success signal only, while the other model receives additional supervision on its attention mechanism (Attentive Guidance), which has been shown to be an effective method for encouraging more compositional solutions (Hupkes et al., 2019). We first confirm that the models with Attentive Guidance indeed infer more compositional solutions than the baseline, by training them on the lookup table task presented by Liška et al. (2019). We then conduct an in-depth analysis of the structural differences between the two model types, focusing in particular on the organisation of the parameter space and the hidden-layer activations, and find noticeable differences in both aspects. Guided networks focus more on the components of the input rather than the sequence as a whole, and develop small functional groups of neurons with specific purposes that use their gates more selectively. Results from parameter heat maps, component swapping and graph analysis also indicate that guided networks exhibit a more modular structure, with a small number of specialized, strongly connected neurons.
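To make the training difference concrete, the sketch below shows one way an Attentive Guidance style auxiliary loss can be combined with the usual sequence-to-sequence objective: in addition to cross-entropy on the output tokens, the decoder's attention distribution at each step is pushed toward a gold source position. This is a minimal, hypothetical PyTorch-style illustration; the tensor names, the `ag_weight` parameter and the exact form of the guidance term are assumptions for the example, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of adding an attention-supervision
# term, in the spirit of Attentive Guidance, to a standard seq2seq loss.
import torch
import torch.nn.functional as F

def guided_loss(decoder_logits, target_tokens, attn_weights, gold_alignment,
                ag_weight=1.0):
    """Task loss plus a term that pushes the decoder's attention
    distribution toward a target (gold) alignment at every decoding step.

    decoder_logits:  (batch, tgt_len, vocab)    raw output scores
    target_tokens:   (batch, tgt_len)           gold output token indices
    attn_weights:    (batch, tgt_len, src_len)  attention distributions (rows sum to 1)
    gold_alignment:  (batch, tgt_len)           source position each decoding
                                                step should attend to
    """
    # Ordinary sequence-to-sequence cross-entropy on the output tokens.
    task_loss = F.cross_entropy(
        decoder_logits.reshape(-1, decoder_logits.size(-1)),
        target_tokens.reshape(-1),
    )
    # Guidance term: cross-entropy between the attention distribution and the
    # desired source position, i.e. -log p(attend to the gold position).
    attn_log = torch.log(attn_weights.clamp_min(1e-9))
    guidance_loss = F.nll_loss(
        attn_log.reshape(-1, attn_log.size(-1)),
        gold_alignment.reshape(-1),
    )
    return task_loss + ag_weight * guidance_loss
```

At inference time the guidance term plays no role, which is why the guided and baseline models can remain architecturally identical and differ only in the solutions their training converges to.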

Related research

05/20/2018 · Learning compositionally through attentive guidance
In this paper, we introduce Attentive Guidance (AG), a new mechanism to ...

06/04/2019 · Transcoding compositionally: using attention to find more generalizable solutions
While sequence-to-sequence models have shown remarkable generalization p...

10/06/2022 · Compositional Generalisation with Structured Reordering and Fertility Layers
Seq2seq models have been shown to struggle with compositional generalisa...

04/29/2020 · Normalizing Compositional Structures Across Graphbanks
The emergence of a variety of graph-based meaning representations (MRs) ...

09/29/2020 · Think before you act: A simple baseline for compositional generalization
Contrarily to humans who have the ability to recombine familiar expressi...

02/18/2018 · Memorize or generalize? Searching for a compositional RNN in a haystack
Neural networks are very powerful learning systems, but they do not read...

11/28/2017 · Crossmodal Attentive Skill Learner
This paper presents the Crossmodal Attentive Skill Learner (CASL), integ...
