A theory of the superposition principle for randomized connectionist representations in neural networks

07/05/2017
by E. Paxon Frady et al.

To understand cognitive reasoning in the brain, it has been proposed that symbols and compositions of symbols are represented by activity patterns (vectors) in a large population of neurons. Formal models implementing this idea [Plate 2003], [Kanerva 2009], [Gayler 2003], [Eliasmith 2012] include a reversible superposition operation for representing an entire set of symbols, or an ordered sequence of symbols, with a single vector. If the representation space is high-dimensional, large sets of symbols can be superposed and individually retrieved. However, crosstalk noise limits retrieval accuracy and information capacity. To understand information processing in the brain and to design artificial neural systems for cognitive reasoning, a theory of this superposition operation is essential. Here, such a theory is presented. The superposition operations in different existing models are mapped to linear neural networks with unitary recurrent matrices, in which retrieval accuracy can be analyzed by a single equation. We show that networks representing information in superposition can achieve a channel capacity of about half a bit per neuron, a significant fraction of the total available entropy. Going beyond existing models, superposition operations with recency effects are proposed that avoid catastrophic forgetting when representing the history of infinite data streams. These novel models correspond to recurrent networks with non-unitary matrices or with nonlinear neurons, and can be analyzed and optimized with an extension of our theory.
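As an illustration of the mechanism the abstract describes, the following is a minimal sketch in Python/NumPy (not the authors' code): symbols are random bipolar vectors, a set of symbols is superposed by summation into a single trace, retrieval selects the codebook vectors most similar to the trace, and a leaky version of the trace (decay factor lam < 1, standing in for a non-unitary, contracting recurrent matrix) gives the recency-weighted variant. All variable names and parameter values are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N, D, M = 1000, 100, 20          # neurons, codebook size, stored symbols (assumed values)

    codebook = rng.choice([-1.0, 1.0], size=(D, N))   # random bipolar symbol vectors
    stored = rng.choice(D, size=M, replace=False)     # which symbols get superposed

    # Plain superposition: a single trace vector holds the whole set of symbols.
    trace = codebook[stored].sum(axis=0)

    # Retrieval by similarity: the M codebook vectors closest to the trace should
    # be the stored ones; crosstalk from the other superposed vectors is the noise
    # that limits accuracy as M grows relative to N.
    scores = codebook @ trace
    recovered = np.argsort(scores)[-M:]
    accuracy = len(set(recovered) & set(stored)) / M
    print(f"plain superposition: {accuracy:.2f} of the stored symbols recovered")

    # Recency-weighted superposition: leaky integration of a symbol stream, a
    # stand-in for a recurrent network with a contracting (non-unitary) matrix.
    lam = 0.8                        # decay factor; smaller values forget faster
    trace = np.zeros(N)
    for idx in stored:               # present the symbols as a sequence
        trace = lam * trace + codebook[idx]

    scores = codebook @ trace
    rank = int(np.sum(scores > scores[stored[-1]]))
    print(f"most recent symbol ranks #{rank + 1} of {D} by similarity to the trace")

With these assumed settings the crosstalk noise is small relative to the signal, so recovery is near-perfect; increasing the number of superposed symbols relative to the number of neurons degrades accuracy, which is the trade-off the theory in the paper quantifies.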

research · 02/28/2018
A theory of sequence indexing and working memory in recurrent neural networks
To accommodate structured approaches of neural computation, we propose a...

research · 04/13/2023
Emergence of Symbols in Neural Networks for Semantic Understanding and Communication
The capacity to generate meaningful symbols and effectively employ them ...

research · 02/17/2022
Entropic Associative Memory for Manuscript Symbols
Manuscript symbols can be stored, recognized and retrieved from an entro...

research · 09/14/2020
Variable Binding for Sparse Distributed Representations: Theory and Applications
Symbolic reasoning and neural networks are often considered incompatible...

research · 05/17/2021
Livewired Neural Networks: Making Neurons That Fire Together Wire Together
Until recently, artificial neural networks were typically designed with ...

research · 03/29/2019
Out-of-the box neural networks can support combinatorial generalization
Combinatorial generalization - the ability to understand and produce nov...

research · 09/30/2020
Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information
Vector Symbolic Architectures belong to a family of related cognitive mo...
