Spikes as regularizers

11/18/2016
by Anders Søgaard, et al.

We present SPIRAL (Spike Regularized Adaptive Learning), a confidence-based single-layer feed-forward learning algorithm that relies on an encoding of activation spikes. We adaptively update a weight vector based on confidence estimates and activation offsets relative to previous activity. Updates are regularized proportionally to item-level confidence and weight-specific support, loosely inspired by the observation from neurophysiology that high spike rates are sometimes accompanied by low temporal precision. Our experiments suggest that SPIRAL is more robust and less prone to overfitting than both the averaged perceptron and AROW.
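The abstract does not give SPIRAL's exact update rule, but the general idea it describes, damping perceptron-style updates in proportion to per-weight support, can be sketched as follows. The class name `SpiralSketch`, the learning-rate parameter, and the `1 / (1 + support)` damping rule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

class SpiralSketch:
    """Illustrative sketch (not the paper's exact algorithm): a
    perceptron-style learner whose updates shrink as per-weight
    support grows, loosely mirroring the confidence-based
    regularization described in the abstract."""

    def __init__(self, n_features, lr=1.0):
        self.w = np.zeros(n_features)
        self.support = np.zeros(n_features)  # times each weight was updated
        self.lr = lr

    def predict(self, x):
        return 1 if self.w @ x >= 0 else -1

    def update(self, x, y):
        if self.predict(x) != y:
            # Damp updates for well-supported weights (hypothetical rule):
            # weights that have been adjusted often move less per mistake.
            scale = self.lr / (1.0 + self.support)
            self.w += scale * y * x
            self.support += (x != 0).astype(float)
```

In this sketch, frequently updated weights become increasingly conservative, which is one simple way to realize "regularizing updates proportionally to weight-specific support"; the paper's actual rule also incorporates item-level confidence, which is omitted here.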


