Brain-like approaches to unsupervised learning of hidden representations – a comparative study

Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The saliency and separability of the hidden representations learned on the MNIST dataset are studied using an external classifier, and compared with those of other unsupervised learning methods, including restricted Boltzmann machines and autoencoders.
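The evaluation protocol described above can be illustrated with a minimal sketch: train an unsupervised feature extractor, then probe how separable its hidden representations are with an external classifier. The sketch below uses scikit-learn's BernoulliRBM (one of the baselines mentioned in the abstract; BCPNN itself has no implementation here) and the small load_digits set as a stand-in for MNIST. All hyperparameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

# Small stand-in for MNIST; pixel intensities scaled to [0, 1] for the binary RBM.
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Unsupervised stage: learn hidden representations without using the labels.
rbm = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)
H_train = rbm.fit_transform(X_train)
H_test = rbm.transform(X_test)

# Supervised probe: an external classifier scores the separability of the
# hidden representations.
clf = LogisticRegression(max_iter=1000).fit(H_train, y_train)
print("probe accuracy on hidden representations:", clf.score(H_test, y_test))

# Baseline: the same classifier on raw pixels, for comparison.
raw = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy on raw pixels:", raw.score(X_test, y_test))
```

Any unsupervised model with a feature-extraction step (an autoencoder's encoder, or a BCPNN layer) can be swapped into the same probe to compare representations on equal footing.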
