The compositionality of neural networks: integrating symbolism and connectionism

by Dieuwke Hupkes, et al.

Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests that are formulated on a task-independent level. In particular, we provide tests to investigate (i) whether models systematically recombine known parts and rules; (ii) whether models can extend their predictions beyond the lengths they have seen in the training data; (iii) whether models' composition operations are local or global; (iv) whether models' predictions are robust to synonym substitutions; and (v) whether models favour rules or exceptions during training. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set, which we dub PCFG SET, and apply the resulting tests to three popular sequence-to-sequence models: a recurrent, a convolution-based, and a transformer model. We provide an in-depth analysis of the results that uncovers the strengths and weaknesses of these three architectures and points to potential areas of improvement.
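To make the evaluation paradigm concrete, a PCFG SET-style task can be pictured as evaluating nested string-edit functions: the input is a composed expression, the target is the sequence it evaluates to. The sketch below is a toy illustration under assumed simplified operations (`copy`, `reverse`, `echo`, `append` are stand-ins, not the authors' exact function inventory or generator):

```python
# Toy sketch of PCFG SET-style input-output pairs: inputs are nested
# string-edit functions over token sequences; the target is the result
# of evaluating the composition. Function names here are illustrative
# assumptions, not the paper's exact inventory.

def copy(seq):
    """Identity: return the sequence unchanged."""
    return seq

def reverse(seq):
    """Mirror the sequence."""
    return seq[::-1]

def echo(seq):
    """Repeat the final element of the sequence."""
    return seq + seq[-1:]

def append(a, b):
    """Binary operation: concatenate two sequences."""
    return a + b

# A composed input like "echo ( append ( reverse A B , C ) )"
# evaluates by applying the functions inside-out:
a, b = ["A", "B"], ["C"]
out = echo(append(reverse(a), b))
print(out)  # ['B', 'A', 'C', 'C']
```

A systematicity test would then hold out particular combinations of these functions at training time, while a productivity test would hold out compositions deeper than those seen in training.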





Compositional generalization through meta sequence-to-sequence learning

People can learn a new concept and use it compositionally, understanding...

Sequence-Level Mixed Sample Data Augmentation

Despite their empirical success, neural networks still have difficulty c...

On Compositionality in Neural Machine Translation

We investigate two specific manifestations of compositionality in Neural...

A Study of Compositional Generalization in Neural Models

Compositional and relational learning is a hallmark of human intelligenc...

The paradox of the compositionality of natural language: a neural machine translation case study

Moving towards human-like linguistic performance is often argued to requ...

Compositional Processing Emerges in Neural Networks Solving Math Problems

A longstanding question in cognitive science concerns the learning mecha...