A network of spiking neurons for computing sparse representations in an energy efficient way
Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with that of existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is also stable against time-varying noise; specifically, the representation error decays as 1/sqrt(t) for Gaussian white noise.
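The mechanism described in the abstract can be illustrated with a minimal simulation: each node integrates an analog feedforward drive (the gradient-descent-like step) and, upon crossing a firing threshold, emits a quantized spike that inhibits the other nodes through the dictionary's Gram matrix (the coordinate-descent-like step). The firing rates then estimate the sparse code. This is a hedged sketch of that idea, not the paper's implementation; the dictionary, threshold `theta`, time step `dt`, and simulation length are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 8, 16                          # stimulus dimension, number of dictionary atoms
D = rng.normal(size=(m, n))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms

a_true = np.zeros(n)                  # sparse ground-truth coefficients (illustrative)
a_true[[2, 5, 11]] = [1.0, 0.7, 0.5]
x = D @ a_true                        # stimulus to be encoded

b = D.T @ x                           # feedforward drive to each node
G = D.T @ D                           # Gram matrix: lateral interaction weights
theta = 0.1                           # firing threshold (assumed sparsity knob)

dt, steps = 0.01, 50_000
u = np.zeros(n)                       # analog internal variables (membrane potentials)
counts = np.zeros(n)                  # quantized external variables (spike counts)

for _ in range(steps):
    u += dt * b                       # gradient-descent-like integration of the drive
    fired = u > theta                 # threshold crossing -> emit one spike
    if fired.any():
        counts += fired
        u -= G @ fired.astype(float)  # spikes inhibit all nodes (G_ii = 1 acts as reset)

a_hat = counts / (steps * dt)         # firing rates estimate the sparse code
rel_err = np.linalg.norm(x - D @ a_hat) / np.linalg.norm(x)
```

In this scheme the only signals exchanged between nodes are binary spikes, so the channels are low-bandwidth, while each node's membrane potential stays analog and private, matching the hybrid analog/quantized structure the abstract attributes to HDA.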