Supervised Learning in Temporally-Coded Spiking Neural Networks with Approximate Backpropagation

07/27/2020
by Andrew Stephan, et al.

In this work we propose a new supervised learning method for temporally-encoded multilayer spiking networks to perform classification. The method employs a reinforcement signal that mimics backpropagation but is far less computationally intensive; apart from this signal, the weight update at each layer requires only local data. We also employ a rule capable of producing specific output spike trains: by setting the target spike time for key high-value neurons equal to the actual spike time minus a slight offset, the actual spike time is driven as early as possible. In simulated MNIST handwritten digit classification, two-layer networks trained with this rule matched the performance of a comparable backpropagation-based non-spiking network.
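The timing-target rule in the abstract can be sketched in a few lines. The following is a hypothetical illustration, not the paper's implementation: the names (`EARLY_OFFSET`, `set_targets`, `local_update`) and the delta-rule-style update standing in for the reinforcement signal are all assumptions made for clarity.

```python
import numpy as np

# Illustrative sketch only; EARLY_OFFSET and both functions are assumed names,
# not from the paper.
EARLY_OFFSET = 0.1  # slight negative offset applied to the target spike time


def set_targets(actual_times, label):
    """Target = actual spike time, except the high-value (correct-class)
    neuron is targeted slightly earlier, so training keeps pushing its
    spike to be as early as possible."""
    targets = actual_times.copy()
    targets[label] = actual_times[label] - EARLY_OFFSET
    return targets


def local_update(weights, pre_spike_times, actual_times, targets, lr=0.01):
    """Delta-rule-style weight update using only local data: presynaptic
    spike times and each neuron's timing error (a stand-in for the
    paper's reinforcement signal)."""
    # error > 0 means the neuron fired later than desired, so strengthen
    # weights from inputs that fired before its spike (causal inputs)
    # to pull the spike earlier.
    error = actual_times - targets                                    # (n_out,)
    causal = (pre_spike_times[None, :] < actual_times[:, None]).astype(float)
    return weights + lr * error[:, None] * causal
```

Under this sketch, only the labeled neuron accumulates a nonzero error, and only its weights from earlier-firing inputs are strengthened, which matches the abstract's claim that updates need no non-local data beyond the reinforcement signal.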


Related research

- S4NN: temporal backpropagation for spiking neural networks with one spike per neuron (10/21/2019). We propose a new supervised learning rule for multilayer spiking neural ...
- STDP enhances learning by backpropagation in a spiking neural network (02/21/2021). A semi-supervised learning method for spiking neural networks is propose...
- Temporally Efficient Deep Learning with Spikes (06/13/2017). The vast majority of natural sensory data is temporally redundant. Video...
- BP-STDP: Approximating Backpropagation using Spike Timing Dependent Plasticity (11/12/2017). The problem of training spiking neural networks (SNNs) is a necessary pr...
- Linear Constraints Learning for Spiking Neurons (03/10/2021). Encoding information with precise spike timings using spike-coded neuron...
- An Online Unsupervised Structural Plasticity Algorithm for Spiking Neural Networks (12/04/2015). In this article, we propose a novel Winner-Take-All (WTA) architecture e...
