A Memory-Augmented Neural Network Model of Abstract Rule Learning

12/13/2020
by Ishan Sinha, et al.

Human intelligence is characterized by a remarkable ability to infer abstract rules from experience and apply these rules to novel domains. As such, designing neural network algorithms with this capacity is an important step toward the development of deep learning systems with more human-like intelligence. However, doing so is a major outstanding challenge, one that some argue will require neural networks to use explicit symbol-processing mechanisms. In this work, we focus on neural networks' capacity for arbitrary role-filler binding, the ability to associate abstract "roles" with context-specific "fillers," which many have argued is an important mechanism underlying the ability to learn and apply rules abstractly. Using a simplified version of Raven's Progressive Matrices, a hallmark test of human intelligence, we introduce a sequential formulation of a visual problem-solving task that requires this form of binding. Further, we introduce the Emergent Symbol Binding Network (ESBN), a recurrent neural network model that learns to use an external memory as a binding mechanism. This mechanism enables symbol-like variable representations to emerge through the ESBN's training process without the need for explicit symbol-processing machinery. We empirically demonstrate that the ESBN successfully learns the underlying abstract rule structure of our task and perfectly generalizes this rule structure to novel fillers.
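To make the binding idea concrete, here is a minimal sketch (not the authors' implementation) of an external key-value memory that separates abstract "role" keys from context-specific "filler" embeddings: keys and fillers are bound at write time, and a query filler retrieves its associated key by similarity-weighted attention. All names and dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

class BindingMemory:
    """Sketch of role-filler binding via an external key-value memory.

    Abstract 'role' keys (filler-independent) are bound to
    context-specific 'filler' embeddings at write time. At read time,
    attention over stored fillers returns a blend of the bound keys,
    so downstream computation operates only on abstract keys.
    """

    def __init__(self):
        self.keys = []    # abstract role vectors (e.g., from a controller)
        self.values = []  # filler embeddings (e.g., from an encoder)

    def write(self, key, value):
        # Bind one abstract key to one concrete filler embedding.
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(np.asarray(value, dtype=float))

    def read(self, query_value):
        # Compare the query filler against stored fillers by dot product,
        # then return the attention-weighted sum of the bound keys.
        query_value = np.asarray(query_value, dtype=float)
        sims = np.array([v @ query_value for v in self.values])
        weights = softmax(sims)
        return sum(w * k for w, k in zip(weights, self.keys))

# Illustrative usage: two fillers bound to two distinct role keys.
memory = BindingMemory()
memory.write([1.0, 0.0], [10.0, 0.0, 0.0])  # role A <- filler A
memory.write([0.0, 1.0], [0.0, 10.0, 0.0])  # role B <- filler B
retrieved = memory.read([10.0, 0.0, 0.0])    # query with filler A
```

Because retrieval passes only the abstract keys onward, the same keys can be reused with entirely novel fillers, which is the property the abstract attributes to the ESBN's generalization.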


