A neuro-inspired architecture for unsupervised continual learning based on online clustering and hierarchical predictive coding

10/22/2018
by   Constantine Dovrolis, et al.

We propose that the Continual Learning desiderata can be achieved through a neuro-inspired architecture, grounded in Mountcastle's cortical column hypothesis. The proposed architecture is built from a single repeated module, called a Self-Taught Associative Memory (STAM), which models the function of a cortical column. STAMs are arranged in multi-level hierarchies involving feedforward, lateral, and feedback connections. STAM networks learn in an unsupervised manner, through a combination of online clustering and hierarchical predictive coding. This short paper presents only the architecture and its connections to neuroscience; a mathematical formulation and experimental results will appear in an extended version of this paper.
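The abstract does not give the STAM learning rule, but its "online clustering" component can be illustrated with a minimal sketch: each input either updates its nearest centroid (a running-average step) or, if no centroid is close enough, spawns a new one. The class name `OnlineClusterer` and the parameters `theta` (novelty threshold) and `alpha` (learning rate) are illustrative assumptions, not details from the paper.

```python
import numpy as np

class OnlineClusterer:
    """Sketch of online clustering: an input updates the nearest
    centroid, or creates a new centroid when it is too far from all
    existing ones. `theta` and `alpha` are hypothetical parameters."""

    def __init__(self, theta=1.0, alpha=0.1):
        self.theta = theta        # novelty threshold for spawning a centroid
        self.alpha = alpha        # step size for the running-average update
        self.centroids = []       # learned cluster centers

    def observe(self, x):
        """Process one input; return the index of its centroid."""
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            k = int(np.argmin(dists))
            if dists[k] <= self.theta:
                # move the winning centroid a fraction alpha toward x
                self.centroids[k] += self.alpha * (x - self.centroids[k])
                return k
        # input is novel: allocate a new centroid for it
        self.centroids.append(x.copy())
        return len(self.centroids) - 1
```

Because new centroids are allocated only for novel inputs, previously learned clusters are not overwritten when the input distribution shifts, which is the basic mechanism such architectures use to avoid catastrophic forgetting.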


