Variable Binding for Sparse Distributed Representations: Theory and Applications

09/14/2020
by E. Paxon Frady, et al.

Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap. However, classical VSAs and neural networks are still considered incompatible: VSAs encode symbols by dense pseudo-random vectors, where information is distributed throughout the entire neuron population, whereas neural networks encode features locally, often forming sparse vectors of neural activation. Following Rachkovskij (2001) and Laiho et al. (2015), we explore symbolic reasoning with sparse distributed representations. The core operations in VSAs are dyadic operations between vectors that express variable binding and the representation of sets; these algebraic manipulations enable VSAs to represent and process data structures in a vector space of fixed dimensionality. Using techniques from compressed sensing, we first show that variable binding between dense vectors in VSAs is mathematically equivalent to tensor product binding between sparse vectors, an operation which increases dimensionality. This result implies that dimensionality-preserving binding for general sparse vectors must include a reduction of the tensor matrix into a single sparse vector. We investigate two options for such sparsity-preserving variable binding. The first, for general sparse vectors, extends earlier proposals that reduce the tensor product into a vector, such as circular convolution. The second, block-wise circular convolution, is defined only for sparse block-codes. Our experiments reveal that variable binding for block-codes has ideal properties, whereas binding for general sparse vectors, like previous proposals, works but is lossy. We demonstrate a VSA with sparse block-codes in two example applications, cognitive reasoning and classification, and discuss its relevance for neuroscience and neural networks.
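To make the first binding option concrete, here is a minimal NumPy sketch (ours, not the authors' code) of circular-convolution binding for general sparse vectors. Circular convolution sums the anti-diagonals of the outer product of two vectors, so it is a dimensionality-preserving reduction of tensor product binding. The vector format, the parameters N and n_active, and the codebook cleanup step are illustrative assumptions; the example shows that unbinding is only approximate, so retrieval needs a cleanup against known vectors.

```python
import numpy as np

# Illustrative parameters (our choice, not from the paper).
N = 500          # vector dimensionality
rng = np.random.default_rng(1)

def random_sparse(n_active=25):
    """A generic sparse vector: n_active random +/-1 entries, normalized."""
    v = np.zeros(N)
    idx = rng.choice(N, size=n_active, replace=False)
    v[idx] = rng.choice([-1.0, 1.0], size=n_active)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution via FFT: collapses the N x N outer product a b^T
    into an N-dimensional vector by summing its anti-diagonals."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Circular correlation: only an approximate inverse of bind for
    non-unitary vectors, which is why this binding is lossy."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

# Cleanup memory: compare the noisy unbinding result against a codebook
# of known vectors and pick the best match.
codebook = np.stack([random_sparse() for _ in range(100)])
a, b = random_sparse(), codebook[7]
noisy_b = unbind(bind(a, b), a)
print(np.argmax(codebook @ noisy_b))  # expected: 7, recovered despite the noise
```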

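The second option, block-wise circular convolution for sparse block-codes, can be sketched even more compactly. In a block-code, the vector is partitioned into K blocks of length L with exactly one active unit per block, so circular convolution within each block reduces to adding the two active indices modulo L. The compact index-array representation and the parameter choices below are our assumptions, not the paper's; the point is that this binding is exact and sparsity-preserving.

```python
import numpy as np

# Illustrative parameters (our choice): K blocks of length L, N = K * L.
K, L = 10, 50
rng = np.random.default_rng(0)

def random_block_code():
    """One active unit per block; we store only the K active indices."""
    return rng.integers(0, L, size=K)

def bind(a, b):
    """Block-wise circular convolution of one-hot blocks:
    the active indices simply add modulo the block length."""
    return (a + b) % L

def unbind(c, a):
    """Block-wise circular correlation: the exact inverse of bind."""
    return (c - a) % L

def similarity(a, b):
    """Fraction of blocks whose active units coincide
    (the cosine similarity of the underlying 0/1 vectors)."""
    return np.mean(a == b)

a, b = random_block_code(), random_block_code()
c = bind(a, b)
print(similarity(unbind(c, a), b))          # 1.0: unbinding is exact
print(similarity(c, a), similarity(c, b))   # ~1/L: c is dissimilar to both inputs
```

The contrast between the two sketches mirrors the abstract's conclusion: block-wise binding is exact and preserves sparsity, while convolution over general sparse vectors is lossy and requires a cleanup stage.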