Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State

09/29/2021
by   Mingqing Xiao, et al.

Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware. However, supervised training of SNNs remains a hard problem due to the discontinuity of the spiking neuron model. Most existing methods imitate the backpropagation framework and feedforward architectures of artificial neural networks, and either use surrogate derivatives or compute gradients with respect to spiking times to deal with the problem. These approaches either accumulate approximation errors or propagate information only through existing spikes, and they usually require information propagation along time steps, with large memory costs and biological implausibility. In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation. First, we show that the average firing rates of SNNs with feedback connections gradually evolve to an equilibrium state over time, which follows a fixed-point equation. Then, by viewing the forward computation of feedback SNNs as a black-box solver for this equation and leveraging implicit differentiation on the equation, we can compute gradients for the parameters without considering the exact forward procedure. In this way, the forward and backward procedures are decoupled, and the problem of non-differentiable spiking functions is avoided. We also briefly discuss the biological plausibility of implicit differentiation, which only requires computing another equilibrium. Extensive experiments on MNIST, Fashion-MNIST, N-MNIST, CIFAR-10, and CIFAR-100 demonstrate the superior performance of our method for feedback models with fewer neurons and parameters in a small number of time steps. Our code is available at https://github.com/pkuxmq/IDE-FSNN.
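The core idea of the abstract — treat the forward pass as a black-box solver for a fixed-point equation and backpropagate through the equilibrium with the implicit function theorem — can be illustrated on a toy system. The sketch below is not the paper's SNN firing-rate dynamics; the contractive map `f`, the linear loss, and all variable names are illustrative stand-ins. At an equilibrium `a* = f(a*, W)`, differentiating the fixed-point equation gives `dL/dW = λᵀ ∂f/∂W`, where `λ` solves the adjoint system `(I - J)ᵀ λ = dL/da*` and `J = ∂f/∂a` at `a*` — no unrolling of the forward iterations is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Small weight scale keeps the toy map a contraction, so the
# fixed-point iteration converges to a unique equilibrium.
W = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)
x = rng.standard_normal(n)          # constant input
g = rng.standard_normal(n)          # dL/da* for a toy linear loss L = g . a*

def f(a, W):
    # Illustrative fixed-point map standing in for averaged firing-rate dynamics.
    return np.tanh(W @ a + x)

def equilibrium(W, iters=500):
    # Black-box forward solver: plain fixed-point iteration.
    a = np.zeros(n)
    for _ in range(iters):
        a = f(a, W)
    return a

a_star = equilibrium(W)
s = 1.0 - np.tanh(W @ a_star + x) ** 2        # elementwise tanh'
J = s[:, None] * W                            # Jacobian df/da at the equilibrium
lam = np.linalg.solve((np.eye(n) - J).T, g)   # adjoint solve (I - J)^T lam = dL/da*
dL_dW = np.outer(lam * s, a_star)             # dL/dW_ij = lam_i * s_i * a*_j
```

Note that the backward pass touches only the equilibrium point and one linear solve; the number of forward iterations is irrelevant to memory cost, which is the decoupling the abstract describes.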


Related research

- 02/01/2023 — SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks
- 08/21/2023 — SpikingBERT: Distilling BERT to Train Spiking Language Models Using Implicit Differentiation
- 02/02/2023 — Energy Efficient Training of SNN using Local Zeroth Order Method
- 09/19/2022 — State-driven Implicit Modeling for Sparsity and Robustness in Neural Networks
- 10/09/2022 — Online Training Through Time for Spiking Neural Networks
- 05/20/2022 — EXODUS: Stable and Efficient Training of Spiking Neural Networks
- 03/15/2023 — DeblurSR: Event-Based Motion Deblurring Under the Spiking Representation
