Superposed Episodic and Semantic Memory via Sparse Distributed Representation

10/21/2017
by Rod Rinkus et al.

The ability to perceive, learn, and use generalities, similarities, and classes, i.e., semantic memory (SM), is central to cognition. Machine learning (ML), neural network, and AI research has been driven primarily by tasks requiring such abilities. However, another central facet of cognition, single-trial formation of permanent memories of experiences, i.e., episodic memory (EM), has received relatively little attention. Only recently has EM-like functionality been added to Deep Learning (DL) models, e.g., the Neural Turing Machine and Memory Networks. However, in these cases: a) EM is implemented as a separate module, which entails substantial data movement (and thus time and power) between the DL net itself and the EM; and b) individual items are stored localistically within the EM, precluding the exponential representational efficiency of distributed over localist coding. We describe Sparsey, an unsupervised, hierarchical, spatial/spatiotemporal associative memory model that differs fundamentally from mainstream ML models, most crucially in its use of sparse distributed representations (SDRs), or cell assemblies, which admit an extremely efficient, single-trial learning algorithm that maps input similarity into code-space similarity (measured as intersection). SDRs of individual inputs are stored in superposition, and because similarity is preserved, the pattern of intersections over the assigned codes reflects the similarity, i.e., statistical, structure of the inputs, of all orders, not simply pairwise. Thus SM, i.e., a generative model, is built as a computationally free side effect of storing episodic memory traces of individual inputs, either spatial patterns or sequences. We report initial results on MNIST and on the Weizmann video event recognition benchmarks. While we have not yet attained state-of-the-art (SOTA) classification accuracy, learning takes only minutes on a single CPU.
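The core mechanism, assigning sparse distributed codes so that more similar inputs receive codes with larger intersection, and storing all codes in superposition in a single set of weights, can be illustrated concretely. Below is a minimal Python sketch of that idea, not Sparsey's actual Code Selection Algorithm: the coding field is modeled as Q winner-take-all modules of K cells each (a code is one winner per module), and winner selection is made noisier for novel inputs and more deterministic for familiar ones. All names, structures, and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN = 64   # size of binary input vector
Q = 20      # number of winner-take-all (WTA) modules in the coding field
K = 8       # cells per module; a code = one winner per module (Q active cells)

# Afferent weights, one (K x N_IN) matrix per module; Hebbian, start at 0.
W = np.zeros((Q, K, N_IN))

def familiarity(x):
    # Max normalized input summation per module, averaged over modules:
    # ~1.0 for an input already stored, near 0.0 for a novel one.
    u = W @ x                              # (Q, K) input summations
    return float(np.mean(u.max(axis=1)) / max(x.sum(), 1.0))

def assign_code(x, learn=True):
    # Pick one winner per module. Familiar inputs -> low noise, winners
    # chosen by max summation (reinstating prior codes); novel inputs ->
    # high noise, yielding new, well-separated codes. This maps input
    # similarity into code-space intersection.
    G = familiarity(x)
    u = (W @ x) / max(x.sum(), 1.0)        # normalized summations in [0, 1]
    T = 0.01 + (1.0 - G)                   # novelty-contingent "temperature"
    p = np.exp(u / T)
    p /= p.sum(axis=1, keepdims=True)
    winners = np.array([rng.choice(K, p=p[q]) for q in range(Q)])
    if learn:                              # single-trial, superposed storage
        for q in range(Q):
            W[q, winners[q]] = np.maximum(W[q, winners[q]], x)
    return winners

def overlap(c1, c2):
    return int(np.sum(c1 == c2))           # shared winners, out of Q

x1 = (rng.random(N_IN) < 0.2).astype(float)   # random sparse input
x2 = x1.copy()
flip = rng.choice(N_IN, 6, replace=False)
x2[flip] = 1.0 - x2[flip]                     # perturbed copy of x1
x3 = (rng.random(N_IN) < 0.2).astype(float)   # unrelated input

c1, c2, c3 = assign_code(x1), assign_code(x2), assign_code(x3)
print("overlap(x1,x2):", overlap(c1, c2), " overlap(x1,x3):", overlap(c1, c3))
```

In this toy version, re-presenting a stored input drives familiarity toward 1, winner selection becomes nearly deterministic, and the prior code is reinstated; a novel input drives familiarity toward 0 and receives a largely random, hence well-separated, new code. The printed overlaps should show the similar pair (x1, x2) sharing substantially more winners than the unrelated pair (x1, x3), which is the similarity-to-intersection mapping the abstract describes.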

research
10/19/2020

Efficient Similarity-Preserving Unsupervised Learning using Modular Sparse Distributed Codes and Novelty-Contingent Noise

There is increasing realization in neuroscience that information is repr...
research
11/12/2016

Sparsey: Event Recognition via Deep Hierarchical Sparse Distributed Codes

Visual cortex's hierarchical, multi-level organization is captured in ma...
research
02/11/2021

Artificial Intelligence Advances for De Novo Molecular Structure Modeling in Cryo-EM

Cryo-electron microscopy (cryo-EM) has become a major experimental techn...
research
11/12/2020

EM-X-DL: Efficient Cross-Device Deep Learning Side-Channel Attack with Noisy EM Signatures

This work presents a Cross-device Deep-Learning based Electromagnetic (E...
research
08/26/2019

Spatiotemporal PET reconstruction using ML-EM with learned diffeomorphic deformation

Patient movement in emission tomography deteriorates reconstruction qual...
research
10/02/2017

DeepER -- Deep Entity Resolution

Entity Resolution (ER) is a fundamental problem with many applications. ...
research
12/13/2017

Evolving Unsupervised Deep Neural Networks for Learning Meaningful Representations

Deep Learning (DL) aims at learning the meaningful representations. A me...
