Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation

by   Nitin Rathi, et al.
Purdue University
Yale University

Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes), which can potentially lead to higher energy efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. Such converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets for both VGG and ResNet architectures. We achieve top-1 accuracy of 65.19% for the ImageNet dataset with 250 time steps, which is 10X faster compared to converted SNNs with similar accuracy.




1 Introduction

In recent years, Spiking Neural Networks (SNNs) have shown promise towards enabling low-power machine intelligence with event-driven neuromorphic hardware. Founded on bio-plausibility, the neurons in an SNN compute and communicate information through discrete binary events (or ‘spikes’), a significant shift from standard artificial neural networks (ANNs), which process data in a real-valued (or analog) manner. This binary all-or-nothing spike-based communication, combined with sparse temporal processing, is precisely what makes SNNs a low-power alternative to conventional ANNs. With all their appeal for power efficiency, training SNNs still remains a challenge. The discontinuous and non-differentiable nature of a spiking neuron (generally modeled as leaky-integrate-and-fire (LIF) or integrate-and-fire (IF)) makes gradient-descent-based backpropagation difficult. Practically, SNNs still lag behind ANNs, in terms of performance or accuracy, in traditional learning tasks. Consequently, there have been several works over the past few years that propose different learning algorithms or learning rules for implementing deep convolutional SNNs for complex visual recognition tasks (Wu et al., 2019; Hunsberger and Eliasmith, 2015; Cao et al., 2015). Of all the techniques, conversion from ANN-to-SNN (Diehl et al., 2016, 2015; Sengupta et al., 2019; Hunsberger and Eliasmith, 2015) has yielded state-of-the-art accuracies matching deep ANN performance on the ImageNet dataset for complex architectures such as VGG (Simonyan and Zisserman, 2014) and ResNet (He et al., 2016). In conversion, we train an ANN with ReLU neurons using gradient descent and then convert it to an SNN with IF neurons using suitable threshold balancing (Sengupta et al., 2019). But SNNs obtained through conversion incur a large inference latency, measured as the total number of time steps required to process a given input image (SNNs process Poisson rate-coded input spike trains, wherein each pixel in an image is converted to a Poisson-distributed spike train with spiking frequency proportional to the pixel value). The term ‘time step’ denotes the unit of time required to process a single input spike across all layers and represents the network latency. The large latency translates to higher energy consumption during inference, thereby diminishing the efficiency improvements of SNNs over ANNs. To reduce the latency, spike-based backpropagation rules have been proposed that perform end-to-end gradient-descent training on spike data. In spike-based backpropagation methods, the non-differentiability of the spiking neuron is handled either by approximating the spiking neuron model as continuous and differentiable (Huh and Sejnowski, 2018) or by defining a surrogate gradient as a continuous approximation of the real gradient (Wu et al., 2018; Bellec et al., 2018; Neftci et al., 2019). Spike-based SNN training reduces the overall latency (Lee et al., 2019) but requires more training effort (in terms of total training iterations) than conversion approaches. A single feed-forward pass in an ANN corresponds to multiple forward passes in an SNN, proportional to the number of time steps. In spike-based backpropagation, the backward pass requires the gradients to be integrated over the total number of time steps, which increases the computation and memory complexity. The multiple-iteration training effort with exploding memory requirement (for backward-pass computations) has limited the applicability of spike-based backpropagation methods to small datasets (like CIFAR10) on simple few-layered convolutional architectures.

In this work, we propose a hybrid training technique which combines ANN-SNN conversion and spike-based backpropagation, reducing the overall latency as well as the training effort required for convergence. We use ANN-SNN conversion as an initialization step, followed by incremental spike-based backpropagation training (which converges to optimal accuracy within a few epochs due to the precursory initialization). Essentially, our hybrid approach of taking a converted SNN and incrementally training it using backpropagation yields improved energy efficiency as well as higher accuracy than models obtained with only conversion or only spike-based backpropagation from scratch.

In summary, this paper makes the following contributions:

  • We introduce a hybrid, computationally-efficient training methodology for deep SNNs. We use the weights and firing thresholds of an SNN converted from an ANN as the initialization for spike-based backpropagation. We then train this initialized network with spike-based backpropagation for a few epochs to perform inference at a reduced latency (fewer time steps).

  • We propose a novel spike-timing-dependent backpropagation (STDB, a variant of standard spike-based backpropagation) that computes the surrogate gradient using the neuron's spike time. The parameter update is triggered by the occurrence of a spike, and the gradient is computed from the time difference between the current time step and the most recent time step at which the neuron generated an output spike. This is motivated by Hebb's principle, which states that the plasticity of a synapse depends on the spiking activity of the neurons connected to it.

  • Our hybrid approach with the novel surrogate gradient descent allows training of large-scale SNNs without the exploding memory requirement of spike-based backpropagation. We evaluate our hybrid approach on large SNNs (VGG and ResNet-like architectures) on the ImageNet and CIFAR datasets and show near iso-accuracy compared to similar ANNs and converted SNNs at lower compute cost and energy.

Figure 1: Surrogate gradient of the spiking neuron activation function (Eq. 11). The gradient is computed for each neuron from the time difference between the current simulation time and the neuron's last spike time. For example, if a neuron spikes at time t_s, its gradient is maximum at t_s and gradually decreases for later time steps. If the same neuron spikes again at a later time, its previous spike history is overwritten, and the gradient computation from that point onward considers only the most recent spike. This avoids the overhead of storing the entire spike history in memory.

2 Spike Timing Dependent Backpropagation (STDB)

In this section, we describe the spiking neuron model, derive the equations for the proposed surrogate gradient based learning, present the weight initialization method for SNN, discuss the constraints applied for ANN-SNN conversion, and summarize the overall training methodology.

2.1 Leaky Integrate and Fire (LIF) Neuron Model

The neuron model defines the dynamics of the neuron's internal state and the trigger for it to generate a spike. The differential equation

τ_m (dU/dt) = −(U − U_rest) + R·I    (1)

is widely used to characterize the leaky-integrate-and-fire (LIF) neuron model, where U is the internal state of the neuron referred to as the membrane potential, U_rest is the resting potential, R and I are the input resistance and current, respectively, and τ_m is the membrane time constant. The above equation is valid while the membrane potential is below the threshold value (v). The neuron generates an output spike when U exceeds v, and U is then reduced to the reset potential. This representation is described in the continuous domain and is more suitable for biological simulations. We modify the equation to be evaluated in a discrete manner in the PyTorch framework

(Wu et al., 2018). The iterative model for a single post-neuron is described by

u_i^t = λ u_i^{t−1} + Σ_j w_{ij} o_j^t − v o_i^{t−1}    (2)

o_i^{t−1} = 1 if u_i^{t−1} > v, and 0 otherwise    (3)

where u is the membrane potential, subscripts i and j represent the post- and pre-neuron, respectively, superscript t is the time step, λ is a constant (< 1) responsible for the leak in the membrane potential, w_{ij} is the weight connecting the pre- and post-neuron, o is the binary output spike, and v is the firing threshold potential. The right-hand side of Equation 2 has three terms: the first term computes the leak in the membrane potential from the previous time step, the second term integrates the input from the previous layer and adds it to the membrane potential, and the third term, which is outside the summation, reduces the membrane potential by the threshold value if a spike was generated. This is known as a soft reset, as the membrane potential is lowered by v, in contrast to a hard reset where the membrane potential is set back to the reset value. The soft reset enables the spiking neuron to carry forward the excess potential above the firing threshold to the following time step, thereby minimizing information loss.
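For concreteness, the iterative LIF update with soft reset can be sketched in a few lines of plain Python. This is a minimal illustration of the dynamics described above, with illustrative names and constants (leak λ = 0.95, threshold v = 1.0), not the authors' PyTorch implementation; the reset is applied within the step, which is equivalent to carrying the −v·o_i^{t−1} term into the next update.

```python
def lif_step(u_prev, pre_spikes, weights, leak=0.95, v_th=1.0):
    """One time step of the iterative LIF model for a single post-neuron.

    u_prev     : membrane potential from the previous time step
    pre_spikes : binary outputs o_j^t of the pre-neurons
    weights    : synaptic weights w_ij
    """
    # Leak the previous potential and integrate the weighted input spikes.
    u = leak * u_prev + sum(w * o for w, o in zip(weights, pre_spikes))
    spike = 1 if u > v_th else 0
    # Soft reset: subtract the threshold rather than resetting to zero,
    # so the excess potential carries over to the next time step.
    u -= v_th * spike
    return u, spike
```

With two active inputs of weight 0.6, the potential reaches 1.2 in one step, a spike is emitted, and the excess 0.2 is retained rather than discarded.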

Input: Trained ANN model (M_A), SNN model (M_S), Input (X)

// Copy ANN weights to SNN
for l=1 to L do
      M_S.W[l] ← M_A.W[l]
end for
// Initialize threshold voltage v[l] to 0 for every layer
for l=1 to L-1 do
       for t=1 to T do
             for k=1 to l do
                   if k < l then
                         // Forward pass through layer k with spiking neurons (Algorithm 3)
                   else
                         A ← W[k]·O[k-1]   // Pre-nonlinearity activation
                         if max(A) > v[k] then
                               v[k] ← max(A)
                         end if
                   end if
             end for
       end for
end for
Algorithm 1 ANN-SNN conversion: initialization of weights and threshold voltages

Input: Input (X), network model (M_S)

for l=1 to L do
       if M_S[l] is a convolution or linear layer then
             u[l] ← 0
             t_s[l] ← −∞
       else if M_S[l] is a dropout layer then
             // Generate the dropout map that will be fixed for all time steps
             mask[l] ← random binary map
       else if M_S[l] is an average-pooling layer then
             // Reduce the width and height after the average pooling layer
       end if
end for
Algorithm 2 Initialize the neuron parameters: membrane potential (u), last spike time (t_s), dropout mask (mask). The initialization is performed once for every mini-batch.

2.2 Spike Timing Dependent Backpropagation (STDB) Learning Rule

The neuron dynamics (Equation 2) show that the neuron's state at a particular time step recurrently depends on its state in previous time steps. This introduces implicit recurrent connections in the network (Neftci et al., 2019). Therefore, the learning rule has to perform temporal credit assignment along with spatial credit assignment. Credit assignment refers to the process of assigning credit or blame to the network parameters according to their contribution to the loss function. Spatial credit assignment identifies structural network parameters (like weights), whereas temporal credit assignment determines which past network activities contributed to the loss function. Gradient-descent learning solves both credit assignment problems: spatial credit assignment is performed by distributing the error spatially across all layers using the chain rule of derivatives, and temporal credit assignment is done by unrolling the network in time and performing backpropagation through time (BPTT) using the same chain rule (Werbos and others, 1990). In BPTT, the network is unrolled for all time steps and the final output is computed as the sum of outputs from each time step. The loss function is defined on the summed output.

The dynamics of the neuron in the output layer are described by Equation 4, where the leak is removed (λ = 1) and the neuron only integrates the input without firing. This eliminates the difficulty of defining the loss function on spike count (Lee et al., 2019).

u_i^t = u_i^{t−1} + Σ_j w_{ij} o_j^t    (4)

The number of neurons in the output layer equals the number of categories in the classification task. The output of the network is passed through a softmax layer that produces a probability distribution. The loss function is defined as the cross-entropy between the true output and the network's predicted distribution:

L = −Σ_i y_i log(p_i),   p_i = e^{u_i^T} / Σ_k e^{u_k^T}

where L is the loss function, y the true output, p the prediction, T the total number of time steps, u^T the accumulated membrane potential of the output-layer neurons over all time steps, and the sum over k runs over the categories in the task. For deeper networks and large numbers of time steps, a truncated version of the BPTT algorithm is used to avoid memory issues. In the truncated version, the loss is computed at some time step t' < T based on the potential accumulated till t'. The loss is backpropagated to all layers and the loss gradients are computed and stored. At this point, the history of the computational graph is cleared to save memory. The loss gradients computed at later time steps (t' < t ≤ T) are summed together with the gradient at t' to get the final gradient. The optimizer updates the parameters at T based on the sum of the gradients. Gradient-descent learning minimizes the loss function by backpropagating the error and updating the parameters opposite to the direction of the derivative. The derivative of the loss function w.r.t. the membrane potential of the neurons in the final layer is

∂L/∂u_i^T = p_i − y_i
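The output-layer pipeline of accumulated potentials, softmax, and cross-entropy admits a compact numerical sketch. The function below is illustrative (names and structure are our own, not the authors' code), but the gradient it returns is the standard softmax-cross-entropy derivative p_i − y_i described above.

```python
import math

def output_loss_and_grad(u_accum, target):
    """u_accum: membrane potentials accumulated over all T time steps,
    one entry per output neuron; target: index of the true class."""
    exps = [math.exp(u) for u in u_accum]
    z = sum(exps)
    probs = [e / z for e in exps]           # softmax over accumulated potentials
    loss = -math.log(probs[target])         # cross-entropy with one-hot target
    # Derivative of the loss w.r.t. each accumulated potential: p_i - y_i.
    grad = [p - (1.0 if i == target else 0.0) for i, p in enumerate(probs)]
    return loss, grad
```

For equal accumulated potentials over two classes, the loss is log 2 and the gradient pushes the true class's potential up and the other down.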
Input: Mini-batch of input (X) - target (Y) pairs, network model (M_S), initial weights (W), threshold voltage (v)

Initialize u, t_s, mask [Algorithm 2]
// Forward propagation
for t=1 to T do
       O[0] ← PoissonGenerator(X)
       for l=1 to L-1 do
             if M_S[l] is a convolution or linear layer then
                   // Accumulate the output of the previous layer in u[l], soft reset when a spike occurs
                   u[l] ← λ·u[l] + W[l]·O[l-1] − v[l]·o[l]
                   // Generate the output (o[l]) if u[l] exceeds v[l]
                   o[l] ← (u[l] > v[l])
                   // Store the latest spike times for each neuron
                   t_s[l] ← t for the neurons with o[l] = 1
             else if M_S[l] is a dropout layer then
                   O[l] ← O[l-1] ⊙ mask[l]
             else if M_S[l] is an average-pooling layer then
                   O[l] ← AvgPool(O[l-1])
             end if
       end for
       u[L] ← u[L] + W[L]·O[L-1]   // Output layer accumulates without firing
end for
// Backward Propagation
Compute ∂L/∂u[L] from the cross-entropy loss function using BPTT
for t=T to 1 do
       for l=L-1 to 1 do
             Compute ∂L/∂W[l] based on whether M_S[l] is linear, conv, pooling, etc.
       end for
end for
Algorithm 3 Training an SNN with the surrogate gradient computed from spike timing. The network is composed of L layers. The training proceeds with mini-batches of inputs.

To compute the gradient at the current time step, the membrane potential at the last time step (u^{t−1} in Equation 4) is treated as an input quantity. Therefore, gradient descent updates the network parameters of the output layer as

W_L ← W_L − η Σ_{t=1}^{T} (∂L/∂u^T)(∂u^T/∂W_L^t)

where η is the learning rate and W_L^t represents the copy of the weight used for the computation at time step t. In the output layer the neurons do not generate spikes, and hence the issue of non-differentiability is not encountered. The update of the hidden-layer parameters is governed by

∂L/∂W_l^t = (∂L/∂o_l^t)(∂o_l^t/∂u_l^t)(∂u_l^t/∂W_l^t)

where o_l^t = f(u_l^t) is the thresholding function (Equation 3), whose derivative w.r.t. u is zero everywhere and undefined at the time of a spike. The challenge of the discontinuous spiking nonlinearity is resolved by introducing a surrogate gradient, a continuous approximation of the real gradient:

∂o/∂u = α e^{−β Δt}    (11)

where α and β are constants and Δt is the time difference between the current time step (t) and the last time step at which the post-neuron generated a spike (t_s). It is an integer whose range is from zero to the total number of time steps (T):

Δt = t − t_s

The values of α and β are selected depending on the value of T. If T is large, β is lowered to reduce the exponential decay so that a spike can contribute to the gradients of later time steps. The value of α is also reduced for large T because the gradient can propagate through many time steps; the gradient is summed at each time step, and thus a large α may lead to exploding gradients. The surrogate gradient can be pre-computed for all values of Δt and stored in a look-up table for faster computation. The parameter updates are triggered by spiking activity, but the error gradients remain non-zero for time steps following the spike time. This enables the algorithm to avoid the ‘dead neuron’ problem, where no learning happens when there is no spike. Fig. 1 shows the activation gradient for different values of Δt; the gradient decreases exponentially for neurons that have not been active for a long time. In Hebbian models of biological learning, the parameter update is activity dependent. This is experimentally observed in the spike-timing-dependent plasticity (STDP) learning rule, which modulates the weights of a pair of neurons that spike within a time window (Song et al., 2000).
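The surrogate gradient and its look-up table can be sketched as follows. The constants α = 0.3, β = 0.01, and T = 100 here are illustrative choices for the sketch, not the paper's reported settings; the key point is that, because Δt is an integer in [0, T], the gradient values can be precomputed once and fetched during the backward pass.

```python
import math

ALPHA, BETA, T = 0.3, 0.01, 100  # illustrative hyperparameters

def stdb_gradient(t, t_s):
    """Surrogate gradient alpha * exp(-beta * dt), where dt = t - t_s
    is the time elapsed since the neuron's most recent output spike."""
    return ALPHA * math.exp(-BETA * (t - t_s))

# Precompute the gradient for every possible dt and store it in a table,
# avoiding repeated exponential evaluations in the backward pass.
GRAD_TABLE = [stdb_gradient(dt, 0) for dt in range(T + 1)]
```

The gradient is maximal (α) at the spike time and decays exponentially afterwards, which matches the shape plotted in Fig. 1.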

Figure 2: Residual architecture for SNN

3 SNN Weight Initialization

A prevalent method of constructing SNNs for inference is ANN-SNN conversion (Diehl et al., 2015; Sengupta et al., 2019). Since the network is trained with analog activations, it does not suffer from the non-differentiability issue and can leverage the training techniques of ANNs. The conversion process has a major drawback: it suffers from long inference latency, as mentioned in Section 1. As there is no provision to optimize the parameters after conversion based on spiking activity, the network cannot leverage the temporal information of the spikes. In this work, we propose to use the conversion process as an initialization technique for STDB. The converted weights and thresholds serve as a good initialization for the optimizer, and the STDB learning rule is then applied for temporal and spatial credit assignment.

Algorithm 1 explains the ANN-SNN conversion process. The threshold voltages in the SNN need to be adjusted based on the ANN weights. Sengupta et al. (2019) showed two ways to achieve this: weight-normalization and threshold-balancing. In weight-normalization the weights are scaled by a normalization factor and the threshold is set to 1, whereas in threshold-balancing the weights are unchanged and the threshold is set to the normalization factor. Both have a similar effect, and either can be used to set the threshold. We employ the threshold-balancing method, and the normalization factor is calculated as the maximum output of the corresponding convolution/linear layer in the SNN. The maximum is calculated over a mini-batch of inputs for all time steps.
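The threshold-balancing pass for a single layer reduces to a running maximum over the layer's pre-nonlinearity outputs. The sketch below is an assumed minimal form of that computation (the function name and input representation are illustrative): `weighted_outputs` stands for the per-time-step lists of weighted input sums observed for a mini-batch.

```python
def balance_threshold(weighted_outputs):
    """Set a layer's firing threshold to the maximum pre-nonlinearity
    output seen over a mini-batch across all time steps.

    weighted_outputs: iterable of per-time-step lists of weighted sums.
    """
    v_th = float("-inf")
    for step_outputs in weighted_outputs:
        v_th = max(v_th, max(step_outputs))
    return v_th
```

Run sequentially from the first hidden layer to the last, with earlier layers already spiking against their fixed thresholds, this reproduces the layer-by-layer assignment of Algorithm 1.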

There are several constraints imposed on training the ANN for the conversion process (Sengupta et al., 2019; Diehl et al., 2015). The neurons are trained without the bias term, because the bias term in an SNN has an indirect effect on the threshold voltage, which increases the difficulty of threshold balancing and makes the process more prone to conversion loss. The absence of the bias term eliminates the use of Batch Normalization (Ioffe and Szegedy, 2015) as a regularizer in the ANN, since it biases the input of each layer to have zero mean. As an alternative, Dropout (Srivastava et al., 2014) is used as a regularizer for both ANN and SNN training. The implementation of Dropout in SNNs is further discussed in Section 5. The pooling operation is widely used in ANNs to reduce the convolution map size. There are two popular variants: max pooling and average pooling (Boureau et al., 2010). Max (average) pooling outputs the maximum (average) value in the kernel space of the neuron's activations. In an SNN, the activations are binary, and performing max pooling would result in significant information loss for the next layer, so we adopt average pooling for both the ANN and the SNN (Diehl et al., 2015).

4 Network Architectures

In this section, we describe the changes made to the VGG (Simonyan and Zisserman, 2014) and residual architecture (He et al., 2016) for hybrid learning and discuss the process of threshold computation for both the architectures.

4.1 VGG Architecture

The threshold balancing is performed for all layers except the input and output layers of the VGG architecture. For every hidden convolution/linear layer, the maximum input (the input to a neuron is the weighted sum of the spikes from its pre-neurons) is computed over all time steps and set as the threshold for that layer. The threshold assignment is done sequentially as described in Algorithm 1. The threshold computation for all layers cannot be performed in parallel (in one forward pass) because in the forward method (Algorithm 3) we need the threshold at each time step to decide whether a neuron should spike.

4.2 Residual Architecture

Residual architectures introduce shortcut connections between layers that are not adjacent to each other. In order to minimize the ANN-SNN conversion loss, various considerations were made by Sengupta et al. (2019). The original residual architecture proposed by He et al. (2016) uses an initial convolution layer with a wide kernel (7×7, stride 2). For conversion, this is replaced by a pre-processing block consisting of a series of three convolution layers (3×3, stride 1) with dropout layers in between (Fig. 2). The threshold-balancing mechanism is applied only to these three layers, and the layers in the basic blocks have unity threshold.

Architecture ANN ANN-SNN Conversion () ANN-SNN Conversion (reduced time steps) Hybrid Training (ANN-SNN Conversion + STDB)
VGG5
VGG9
VGG16
ResNet8
ResNet20
VGG11
ResNet34
VGG16
Table 1: Classification results (Top-1) for the CIFAR10, CIFAR100 and ImageNet datasets. Column 1 shows the network architecture. Column 2 shows the ANN accuracy when trained under the constraints described in Section 3. Column 3 shows the SNN accuracy when converted from an ANN with threshold balancing. Column 4 shows the performance of the same converted SNN with fewer time steps and adjusted thresholds. Column 5 shows the performance after training the Column-4 network with STDB for less than 20 epochs.

5 Overall Training Algorithm

Algorithm 1 defines the process to initialize the parameters (weights, thresholds) of the SNN based on ANN-SNN conversion. Algorithms 2 and 3 show the mechanism of training the SNN with STDB. Algorithm 2 initializes the neuron parameters for every mini-batch, whereas Algorithm 3 performs the forward and backward propagation and computes the credit assignment. The threshold voltage for all neurons in a layer is the same and is not altered during training. For each dropout layer we initialize a mask for every mini-batch of inputs. The function of dropout is to randomly drop a certain number of inputs in order to avoid overfitting. In the case of an SNN, inputs are represented as a spike train, and we want to keep the dropped units the same for the entire duration of the input. Thus, a random mask is initialized (Algorithm 2) for every mini-batch, and the input is element-wise multiplied with the mask to generate the output of the dropout layer (Lee et al., 2019). The Poisson generator function outputs a Poisson spike train with rate proportional to the pixel value in the input. A random number is generated at every time step for each pixel in the input image. The random number is compared with the normalized pixel value, and if the random number is less than the pixel value, an output spike is generated. This results in a Poisson spike train with rate equivalent to the pixel value when averaged over a long time. The weighted sum of the input is accumulated in the membrane potential of the first convolution layer. The STDB function compares the membrane potential with the threshold of that layer to generate an output spike. For the neurons that output a spike, the corresponding entry in the last-spike-time record is updated with the current time step. The last spike time is initialized with a large negative number (Algorithm 2) to denote that, at the beginning, the last spike happened at negative-infinity time. This is repeated for all layers until the last layer.
For the last layer, the inputs are accumulated over all time steps and passed through a softmax layer to compute the multi-class probability. The cross-entropy loss function is defined on the output of the softmax, and the weights are updated by performing the temporal and spatial credit assignment according to the STDB rule.
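The Poisson generator described above can be sketched directly: at every time step, each pixel emits a spike if a fresh random number falls below its normalized intensity, so the observed firing rate approaches the pixel value over a long window. The function name and the fixed seed are illustrative, not the authors' implementation.

```python
import random

def poisson_encode(pixels, num_steps, seed=0):
    """Rate-code normalized pixel intensities (values in [0, 1]) into a
    binary spike train of length num_steps."""
    rng = random.Random(seed)
    # At each step, a pixel spikes iff a uniform random draw is below
    # its intensity, giving a firing rate proportional to the pixel value.
    return [[1 if rng.random() < p else 0 for p in pixels]
            for _ in range(num_steps)]
```

A pixel of intensity 0 never spikes, a pixel of intensity 1 spikes every step, and intermediate intensities spike at a proportional rate.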

6 Experiments

We tested the proposed training mechanism on image classification tasks from the CIFAR (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) datasets. The results are summarized in Table 1.

CIFAR10: The dataset consists of 60,000 labeled images in 10 categories, divided into a training set (50,000) and a testing set (10,000). The images are of size 32×32 with RGB channels.

CIFAR100: The dataset is similar to CIFAR10 except that it has 100 categories.

ImageNet: The dataset comprises labeled high-resolution images: about 1.2 million training images and 50,000 validation images in 1,000 categories.

7 Energy-Delay Product Analysis of SNNs

A single spike in an SNN consumes a constant amount of energy (Cao et al., 2015). A first-order analysis of the energy-delay product of an SNN therefore depends on the number of spikes and the total number of time steps. Fig. 3 shows the average number of spikes in each layer when evaluated on samples from the CIFAR10 test set for the VGG16 architecture. The average is computed by summing all the spikes in a layer over all time steps and dividing by the number of neurons in that layer; an average of n spikes in a layer implies that, over the simulation period, each neuron in that layer spikes n times on average across all input samples. Higher spiking activity corresponds to lower energy efficiency. The average number of spikes is compared for a converted SNN and an SNN trained with conversion-and-STDB. The SNN trained with conversion-and-STDB has fewer average spikes over all layers under iso conditions (time steps, threshold voltages, inputs, etc.) and achieves higher accuracy than the converted SNN. Converted SNNs simulated for larger numbers of time steps further degrade the energy-delay product with minimal increase in accuracy (Sengupta et al., 2019).
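The layer-wise spike statistic underlying this analysis is a simple ratio: total spikes emitted over the full simulation divided by the layer's neuron count. The sketch below assumes an illustrative record format (one binary spike vector per time step per layer), not the authors' logging code.

```python
def average_spikes_per_layer(layer_records):
    """layer_records: one list per layer; each list holds, per time step,
    the binary spike vector of that layer's neurons.

    Returns, per layer, total spikes divided by the number of neurons,
    i.e. the average spike count per neuron over the simulation."""
    averages = []
    for record in layer_records:
        total_spikes = sum(sum(step) for step in record)
        num_neurons = len(record[0])
        averages.append(total_spikes / num_neurons)
    return averages
```

For a two-neuron layer that emits spikes [1, 0] and then [1, 1] over two time steps, the average is 3/2 = 1.5 spikes per neuron.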

Figure 3: Average number of spikes for each layer in a VGG16 architecture for purely converted SNN and SNN trained with hybrid technique. The converted SNN and SNN trained with hybrid technique achieve an accuracy of 89.20% and 91.87%, respectively, for the randomly selected 1500 samples from the test set. Both the networks were inferred for 100 time steps and ‘v’ represents the threshold voltage for each layer obtained during the conversion process (Algorithm 1).

8 Related Work

Bohte et al. (2000) proposed a method to directly train an SNN by keeping track of the membrane potential of spiking neurons only at spike times and backpropagating the error at spike times based only on the membrane potential. This method is not suitable for networks with sparse activity due to the ‘dead neuron’ problem: no learning happens when the neurons do not spike. In our work, one spike is needed for learning to start, but the gradient contribution continues in later time steps, as shown in Fig. 1. Zenke and Ganguli (2018) derived a surrogate-gradient method based on the membrane potential of a spiking neuron at a single time step only. The error was backpropagated at only one time step, and only the input at that time step contributed to the gradient. This method neglects the effect of earlier spike inputs. In our approach, the error is backpropagated at every time step and the weight update is performed on the gradients summed over all time steps. Shrestha and Orchard (2018) proposed a gradient function similar to the one proposed in this work. They used the difference between the membrane potential and the threshold to compute the gradient, compared to the difference in spike timing used in this work. The membrane potential is a continuous value, whereas the spike time is an integer value bounded by the number of time steps; therefore, gradients that depend on spike time can be pre-computed and stored in a look-up table for faster computation. They evaluated their approach on shallow architectures with two convolution layers on the MNIST dataset, whereas in this work we train deep SNNs with multiple stacked layers for complex classification tasks. Wu et al. (2018) performed backpropagation through time on SNNs with a surrogate gradient defined on the membrane potential, as a piece-wise linear or exponential function of the potential.
The other surrogate gradients proposed in the literature are all computed on the membrane potential (Neftci et al., 2019). Lee et al. (2019) approximated the neuron output as a continuous low-pass-filtered spike train and used this approximated continuous value to perform backpropagation. Most of the works in the literature on direct training of SNNs or on conversion-based methods have been evaluated on shallow architectures for simple classification problems. In Table 2 we compare our model with the models that reported accuracy on the CIFAR10 and ImageNet datasets. Wu et al. (2019) achieved convergence in very few time steps by using a dedicated encoding layer to capture the input precision; it is beyond the scope of this work to compute the hardware and energy implications of such an encoding layer. Our model performs better than all other models at far fewer time steps.

Model Dataset Training Method Architecture Accuracy Time-steps
Hunsberger and Eliasmith (2015) CIFAR10 ANN-SNN Conversion 2Conv, 2Linear
Cao et al. (2015) CIFAR10 ANN-SNN Conversion 3Conv, 2Linear
Sengupta et al. (2019) CIFAR10 ANN-SNN Conversion VGG16
Lee et al. (2019) CIFAR10 Spiking BP VGG9
Wu et al. (2019) CIFAR10 Surrogate Gradient 5Conv, 2Linear
This work CIFAR10 Hybrid Training VGG16    
Sengupta et al. (2019) ImageNet ANN-SNN Conversion VGG16
This work ImageNet Hybrid Training VGG16
Table 2: Comparison of our work with other SNN models on the CIFAR10 and ImageNet datasets

9 Conclusions

The direct training of SNNs with backpropagation is computationally expensive and slow, whereas ANN-SNN conversion suffers from high latency. To address this issue, we proposed a hybrid training technique for deep SNNs. We took an SNN converted from an ANN and used its weights and thresholds as the initialization for spike-based backpropagation. We then performed spike-based backpropagation on this initialized network to obtain an SNN that performs well with fewer time steps. The number of epochs required to train the SNN was also reduced by having a good initial starting point. The resulting trained SNN had higher accuracy and fewer spikes per inference compared to purely converted SNNs at a reduced number of time steps. The backpropagation through time was performed with a surrogate gradient defined using the neuron's spike time, which captured the temporal information and helped reduce the number of time steps. We tested our algorithm on the CIFAR and ImageNet datasets and achieved state-of-the-art performance with fewer time steps.


This work was supported in part by the National Science Foundation, in part by Vannevar Bush Faculty Fellowship, and in part by C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA.


References

  • G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass (2018) Long short-term memory and learning-to-learn in networks of spiking neurons. In Advances in Neural Information Processing Systems, pp. 787–797.
  • S. M. Bohte, J. N. Kok, and J. A. La Poutré (2000) SpikeProp: backpropagation for networks of spiking neurons. In ESANN, pp. 419–424.
  • Y. Boureau, J. Ponce, and Y. LeCun (2010) A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118.
  • Y. Cao, Y. Chen, and D. Khosla (2015) Spiking deep convolutional neural networks for energy-efficient object recognition. International Journal of Computer Vision 113 (1), pp. 54–66.
  • J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In CVPR09.
  • P. U. Diehl, D. Neil, J. Binas, M. Cook, S. Liu, and M. Pfeiffer (2015) Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–8.
  • P. U. Diehl, G. Zarrella, A. Cassidy, B. U. Pedroni, and E. Neftci (2016) Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In 2016 IEEE International Conference on Rebooting Computing (ICRC), pp. 1–8.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • D. Huh and T. J. Sejnowski (2018) Gradient descent for spiking neural networks. In Advances in Neural Information Processing Systems, pp. 1433–1443.
  • E. Hunsberger and C. Eliasmith (2015) Spiking deep networks with LIF neurons. arXiv preprint arXiv:1510.08829.
  • S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
  • A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Technical report, Citeseer.
  • C. Lee, S. S. Sarwar, and K. Roy (2019) Enabling spike-based backpropagation in state-of-the-art deep neural network architectures. arXiv preprint arXiv:1903.06379.
  • E. O. Neftci, H. Mostafa, and F. Zenke (2019) Surrogate gradient learning in spiking neural networks. arXiv preprint arXiv:1901.09948.
  • A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy (2019) Going deeper in spiking neural networks: VGG and residual architectures. Frontiers in Neuroscience 13.
  • S. B. Shrestha and G. Orchard (2018) SLAYER: spike layer error reassignment in time. In Advances in Neural Information Processing Systems, pp. 1412–1421.
  • K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • S. Song, K. D. Miller, and L. F. Abbott (2000) Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience 3 (9), pp. 919.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1), pp. 1929–1958.
  • P. J. Werbos (1990) Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78 (10), pp. 1550–1560.
  • Y. Wu, L. Deng, G. Li, J. Zhu, and L. Shi (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12.
  • Y. Wu, L. Deng, G. Li, J. Zhu, Y. Xie, and L. Shi (2019) Direct training for spiking neural networks: faster, larger, better. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 1311–1318.
  • F. Zenke and S. Ganguli (2018) SuperSpike: supervised learning in multilayer spiking neural networks. Neural Computation 30 (6), pp. 1514–1541.

Appendix A Comparisons with other Surrogate Gradients

Figure 4: Linear and Exponential approximation of the gradient of the spiking neuron (step function).

The transfer function of a spiking neuron is a step function; its derivative is zero everywhere except at the time of a spike, where it is not defined. To perform backpropagation with spiking neurons, several approximations of this gradient have been proposed (Bellec et al., 2018; Zenke and Ganguli, 2018; Shrestha and Orchard, 2018; Wu et al., 2018). These approximations are either linear or exponential functions of (u − V), where u is the membrane potential and V is the threshold voltage (Fig. 4). Such approximations are referred to as surrogate gradients or pseudo-derivatives. In this work, we proposed an approximation that is computed using the spike timing of the neuron (Equation 11). We compare our proposed approximation with the following surrogate gradients:

∂o/∂u = α · max(0, 1 − |(u − V)/V|)    (13)

∂o/∂u = α · e^(−β|u − V|)    (14)

where o is the binary output of the neuron, u is the membrane potential, V is the threshold potential, and α and β are constants. Equation 13 and Equation 14 represent the linear and exponential approximations of the gradient, respectively. We employed these approximations in hybrid training of a VGG9 network on the CIFAR10 dataset. All the approximations (Equations 11, 13, and 14) produced similar results in terms of accuracy and number of epochs to convergence. This shows that the term e^(−βΔt) (Equation 11) is a good replacement for e^(−β|u − V|) (Equation 14). The behaviour of Δt and |u − V| is similar: both are small close to the time of a spike and increase as we move away from the spiking event. The advantage of using Δt is that its domain is bounded by the total number of time steps (Equation 12). Hence, all possible values of the gradient can be pre-computed and stored in a table for faster access during training. This is not possible for the membrane potential, which is a real value computed from stochastic inputs and the previous state of the neuron, and is therefore not known beforehand. The exact energy benefit of this pre-computation depends on the overall system architecture, and evaluating it is beyond the scope of this paper.
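The surrogate-gradient alternatives discussed above, and the lookup-table trick enabled by the bounded domain of Δt, can be sketched in a few lines. This is a minimal illustration with placeholder values for α, β, and the threshold V (the paper's actual hyperparameters are not reproduced here):

```python
import math

# Placeholder constants; the paper's trained values of alpha, beta, and
# the per-layer threshold V are not reproduced here.
ALPHA, BETA, V_TH = 0.3, 0.01, 1.0

def linear_surrogate(u, v_th=V_TH, alpha=ALPHA):
    """Linear approximation: largest at threshold, zero far from it."""
    return alpha * max(0.0, 1.0 - abs((u - v_th) / v_th))

def exp_surrogate(u, v_th=V_TH, alpha=ALPHA, beta=BETA):
    """Exponential approximation, computed on the membrane potential."""
    return alpha * math.exp(-beta * abs(u - v_th))

def stdb_table(time_steps, alpha=ALPHA, beta=BETA):
    """STDB-style gradient depends only on delta_t = t - t_spike, which is
    bounded by the total number of time steps, so every possible value can
    be pre-computed once and looked up during training."""
    return [alpha * math.exp(-beta * dt) for dt in range(time_steps + 1)]

table = stdb_table(100)
# The gradient decays as the neuron moves further from its last spike.
assert table[0] > table[50] > table[100]
```

The membrane-potential-based surrogates must be evaluated on the fly because u is a real value that depends on stochastic inputs, whereas the Δt-based table above is computed once before training starts.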

Appendix B Comparisons of Simulation Time and Memory Requirements

The simulation time and memory requirements for ANNs and SNNs are very different. SNNs require far more resources, since they iterate over multiple time steps and must store the membrane potential of every neuron. Fig. 5 shows the training and inference time and memory requirements for the ANN, the SNN trained with backpropagation from scratch, and the SNN trained with the proposed hybrid technique. The performance was evaluated for the VGG16 architecture trained on the CIFAR10 dataset. The SNN trained from scratch and the SNN trained with hybrid conversion-and-STDB are evaluated for 100 time steps. One epoch of ANN training (inference) takes () minutes and () GB of GPU memory. On the other hand, one epoch of SNN training (inference) takes () minutes and () GB of GPU memory for the same hardware and mini-batch size. The ANN and the SNN trained from scratch reached convergence after 250 epochs. The hybrid technique requires epochs of ANN training and epochs of spike-based backpropagation. The hybrid training technique is one order of magnitude faster than training the SNN from scratch. The memory requirement of the hybrid technique is the same as that of the SNN trained from scratch, since we still need to perform fine-tuning with spike-based backpropagation.
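The structure of this cost comparison can be illustrated with a back-of-the-envelope calculation. The per-epoch times and the SNN epoch count below are made-up placeholders, not the measured values reported in Fig. 5; only the shape of the comparison follows the text:

```python
def total_minutes(epochs, minutes_per_epoch):
    """Total wall-clock training time for a given schedule."""
    return epochs * minutes_per_epoch

# Hypothetical placeholders: assume one SNN epoch (unrolled over ~100
# time steps) costs roughly 10x one ANN epoch.
ANN_EPOCH_MIN, SNN_EPOCH_MIN = 1.0, 10.0

# Training the SNN from scratch: 250 spike-based epochs (as in the text).
scratch_snn = total_minutes(250, SNN_EPOCH_MIN)

# Hybrid: full ANN training first, then a short spike-based fine-tuning
# phase (fewer than 20 epochs per the paper; 20 is used here).
hybrid = total_minutes(250, ANN_EPOCH_MIN) + total_minutes(20, SNN_EPOCH_MIN)

assert hybrid < scratch_snn  # hybrid is substantially cheaper
```

Under these assumed ratios the hybrid schedule spends most of its time in cheap ANN epochs, which is the source of the order-of-magnitude speedup; the peak memory, however, is still set by the spike-based fine-tuning phase.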

Figure 5: Training and inference time and memory for the ANN, the SNN trained with backpropagation from scratch, and the SNN trained with the hybrid technique. All values are normalized to the ANN values; the y-axis is in log scale. The performance was evaluated on one Nvidia GeForce RTX 2080 Ti TU102 GPU with GB of memory. All networks used the VGG16 architecture, the CIFAR10 dataset, 100 time steps, and a mini-batch size of . The ANN and the SNN require epochs of training from scratch, while hybrid conversion-and-STDB based training requires epochs of ANN training followed by epochs of spike-based backpropagation.