A Normative Theory of Adaptive Dimensionality Reduction in Neural Networks

11/30/2015
by Cengiz Pehlevan, et al.

To make sense of the world, our brains must analyze high-dimensional datasets streamed by our sensory organs. Because such analysis begins with dimensionality reduction, modelling early sensory processing requires biologically plausible online dimensionality reduction algorithms. Recently, we derived such an algorithm, termed similarity matching, from a Multidimensional Scaling (MDS) objective function. However, in the existing algorithm, the number of output dimensions is set a priori by the number of output neurons and cannot be changed. Because the number of informative dimensions in sensory inputs is variable, there is a need for adaptive dimensionality reduction. Here, we derive biologically plausible dimensionality reduction algorithms that adapt the number of output dimensions to the eigenspectrum of the input covariance matrix. We formulate three objective functions which, in the offline setting, are optimized by the projections of the input dataset onto its principal subspace scaled by the eigenvalues of the output covariance matrix. In turn, the output eigenvalues are computed as i) soft-thresholded, ii) hard-thresholded, iii) equalized thresholded eigenvalues of the input covariance matrix. In the online setting, we derive the three corresponding adaptive algorithms and map them onto the dynamics of neuronal activity in networks with biologically plausible local learning rules. Remarkably, in the last two networks, neurons are divided into two classes which we identify with principal neurons and interneurons in biological circuits.
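To illustrate the offline setting described above, here is a minimal NumPy sketch of the first variant: project centered data onto its top-k principal subspace and rescale each component so that the output covariance eigenvalues are the soft-thresholded input eigenvalues. The function name and the threshold parameter `alpha` are illustrative assumptions, not the paper's notation, and this is only the offline computation, not the online neural-network algorithm.

```python
import numpy as np

def soft_threshold_projection(X, k, alpha):
    """Offline sketch: project centered data onto its principal subspace,
    scaling components so the output covariance eigenvalues equal the
    soft-thresholded input eigenvalues. X has shape (features, samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)      # center each feature
    C = Xc @ Xc.T / Xc.shape[1]                 # input covariance matrix
    evals, evecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:k]         # top-k principal directions
    lam, V = evals[order], evecs[:, order]
    lam_out = np.maximum(lam - alpha, 0.0)      # soft-thresholded output eigenvalues
    # rescale projections so Cov(Y) has eigenvalues lam_out
    scale = np.sqrt(lam_out / np.maximum(lam, 1e-12))
    return scale[:, None] * (V.T @ Xc)          # output, shape (k, samples)
```

Components whose input eigenvalue falls below `alpha` are scaled to zero, so the effective number of output dimensions adapts to the eigenspectrum rather than being fixed at k.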


Related research

- 12/11/2016: Self-calibrating Neural Networks for Dimensionality Reduction
- 10/16/2018: Biologically Plausible Online Principal Component Analysis Without Recurrent Neural Dynamics
- 02/10/2021: A Neural Network with Local Learning Rules for Minor Subspace Analysis
- 10/23/2020: A simple normative network approximates local non-Hebbian learning in the cortex
- 11/30/2015: Optimization theory of Hebbian/anti-Hebbian networks for PCA and whitening
- 08/06/2018: Efficient Principal Subspace Projection of Streaming Data Through Fast Similarity Matching
- 03/23/2017: Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks?
