Brain-like combination of feedforward and recurrent network components achieves prototype extraction and robust pattern recognition

Associative memory has long been a prominent candidate for the computation performed by the massively recurrent neocortical networks. Attractor networks implementing associative memory have offered a mechanistic explanation for many cognitive phenomena. However, attractor memory models are typically trained on orthogonal or random patterns to avoid interference between stored memories, which makes them infeasible for naturally occurring complex, correlated stimuli such as images. We approach this problem by combining a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule. The resulting network model incorporates many known biological properties: unsupervised learning, Hebbian plasticity, sparse distributed activations, sparse connectivity, and a columnar and laminar cortical architecture. We evaluate the synergistic effects of the feedforward and recurrent network components in complex pattern recognition tasks on the MNIST handwritten-digits dataset. We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations. The associative memory is also shown to extract prototypes from the training data and to make the representations robust to severely distorted input. We argue that several aspects of the proposed integration of feedforward and recurrent computations are particularly attractive from a machine learning perspective.
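The architecture described above lends itself to a compact illustration. The following Python sketch is not the paper's implementation and rests on strong simplifying assumptions: a competitive winner-take-all Hebbian encoder stands in for the Hebbian-Bayesian (BCPNN) rule, a Hopfield-style covariance rule stands in for the attractor network's plasticity, and the layer sizes, learning rates, and toy data are all hypothetical. It shows only the structural idea: a feedforward stage produces sparse hidden codes, and a recurrent stage trained on those codes performs pattern completion on distorted inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, K_ACTIVE = 784, 256, 16  # input size, hidden units, active units per code

def encode(x, W):
    """Feedforward pass: project the input, then keep only the K most
    active hidden units, yielding a sparse distributed representation."""
    support = W @ x
    code = np.zeros(N_HID)
    code[np.argsort(support)[-K_ACTIVE:]] = 1.0
    return code

def train_feedforward(X, epochs=3, lr=0.05):
    """Unsupervised Hebbian-style learning of the feedforward weights:
    the winning hidden units move their weights toward the current input
    (competitive learning; a crude stand-in for the BCPNN rule)."""
    W = rng.normal(scale=0.1, size=(N_HID, N_IN))
    for _ in range(epochs):
        for x in X:
            winners = np.argsort(W @ x)[-K_ACTIVE:]
            W[winners] += lr * (x - W[winners])
    return W

def train_attractor(codes):
    """Store the hidden codes in a recurrent weight matrix with a
    Hopfield-style covariance rule; overlapping codes from similar
    inputs tend to merge into a single prototype attractor."""
    C = codes - codes.mean(axis=0)
    J = C.T @ C / len(codes)
    np.fill_diagonal(J, 0.0)  # no self-connections
    return J

def recall(code, J, steps=20):
    """Recurrent attractor dynamics: repeatedly keep the K units with
    the strongest recurrent support, completing a distorted code
    toward the nearest stored attractor."""
    s = code.copy()
    for _ in range(steps):
        support = J @ s
        s = np.zeros_like(s)
        s[np.argsort(support)[-K_ACTIVE:]] = 1.0
    return s

# Toy usage with random inputs standing in for MNIST digits.
X = rng.random((200, N_IN))
W = train_feedforward(X)
codes = np.array([encode(x, W) for x in X])
J = train_attractor(codes)

distorted = encode(X[0] + 0.5 * rng.standard_normal(N_IN), W)
restored = recall(distorted, J)
print("overlap with clean code:", restored @ codes[0] / K_ACTIVE)
```

In the paper's model, the columnar architecture mentioned in the abstract suggests modular competition within column-like groups of units rather than the single global winner-take-all used in this sketch, and both stages share the same Hebbian-Bayesian plasticity; the two stages here are deliberately reduced to their simplest Hebbian analogues.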
