Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks

06/04/2020 ∙ by Jinseok Kim, et al. ∙ POSTECH

For gradient computation across the time domain in Spiking Neural Network (SNN) training, two different approaches have been studied independently. The first is to compute the gradients with respect to the change in spike activation (activation-based methods), and the second is to compute the gradients with respect to the change in spike timing (timing-based methods). In this work, we present a comparative study of the two methods and propose a new supervised learning method that combines them. The proposed method utilizes each individual spike more effectively by shifting spike timings as in the timing-based methods as well as by generating and removing spikes as in the activation-based methods. Experimental results showed that the proposed method achieves higher performance in terms of both accuracy and efficiency than the previous approaches.


1 Introduction

Spiking neural networks (SNNs) have been studied not only for their biological plausibility but also for the computational efficiency that stems from information processing with binary spikes (Maass, 1997). One of the unique characteristics of SNNs is that the states of the neurons at different time steps are closely related to each other. This may resemble the temporal dependency in recurrent neural networks (RNNs), but in SNNs direct influences between neurons are only through the binary spikes. Since the true derivative of the binary activation function, or thresholding function, is zero almost everywhere, SNNs pose an additional challenge for precise gradient computation unless the binary activation function is replaced by an alternative as in (Huh and Sejnowski, 2018).

Due to the difficulty of training SNNs, some recent studies employed parameters trained in non-spiking NNs in SNNs. However, this approach is only feasible by exploiting the similarity between rate-coded SNNs and non-spiking NNs (Diehl et al., 2015; Hunsberger and Eliasmith, 2015) or by abandoning several features of spiking neurons to maximize the similarity between SNNs and non-spiking NNs (Park et al., 2020; Rueckauer and Liu, 2018; Zhang et al., 2019). The unique characteristics of SNNs that enable efficient information processing can only be utilized with dedicated learning methods for SNNs. In this context, several studies have reported promising results with gradient-based supervised learning methods that take account of those characteristics (Comsa et al., 2019; Mostafa, 2017; Shrestha and Orchard, 2018; Wu et al., 2018; Zenke and Ganguli, 2018).

Previous works on gradient-based supervised learning for SNNs can be classified into two categories. The methods in the first category work around the non-differentiability of the spiking function with a surrogate derivative (Neftci et al., 2019) and compute the gradients with respect to the spike activation (Shrestha and Orchard, 2018; Wu et al., 2018; Zenke and Ganguli, 2018). The methods in the second category focus on the timings of existing spikes and compute the gradients with respect to the spike timing (Comsa et al., 2019; Mostafa, 2017; Bohte et al., 2002). We call these methods the activation-based methods and the timing-based methods, respectively. Until now, the two approaches have been considered unrelated to each other and studied independently.

The problem with previous works is that both approaches have limitations in computing accurate gradients, which become more problematic when the spike density is low. The computational cost of an SNN is known to be proportional to the number of spikes, or the firing rate (Rueckauer and Liu, 2018; Akopyan et al., 2015; Davies et al., 2018). To make the best use of the computational power of SNNs and use them more efficiently than their non-spiking counterparts, it is important to reduce the number of spikes required for inference. If there are only a few spikes in the network, the network becomes more sensitive to the change in the state of each individual spike, such as the generation of a new spike, the removal of an existing spike, or the shift of an existing spike. Training SNNs with fewer spikes therefore requires the learning method to be aware of those changes through gradient computation.

In this work, we investigated the relationship between the activation-based methods and the timing-based methods for supervised learning in SNNs. We observed that the two approaches are complementary when considering the change in the state of individual spikes. We then devised a new learning method, called the activation- and timing-based learning rule (ANTLR), that enables more precise gradient computation by combining the two methods. In experiments with a random spike-train matching task and widely used benchmarks (MNIST and N-MNIST), our method achieved higher accuracy than previous methods when the networks are forced to use fewer spikes in training.

2 Backgrounds

2.1 Neuron model

We used a discrete-time version of a leaky integrate-and-fire (LIF) neuron with the current-based synapse model. The neuronal states of postsynaptic neuron $i$ are formulated as

$$v_i[t] = \alpha_v \left(1 - s_i[t-1]\right) v_i[t-1] + \beta_v\, c_i[t] + \beta_b\, b_i \qquad (1)$$
$$c_i[t] = \alpha_c \left(1 - s_i[t-1]\right) c_i[t-1] + \beta_c \sum_j w_{ij}\, s_j[t] \qquad (2)$$
$$s_i[t] = f\!\left(v_i[t]\right) = H\!\left(v_i[t] - \theta\right) \qquad (3)$$

where $v_i[t]$ is a membrane potential, $c_i[t]$ is a synaptic current, and $s_i[t]$ is a binary spike activation. $w_{ij}$ is a synaptic weight from presynaptic neuron $j$, and $b_i$ is a trainable bias parameter. $f$ (the Heaviside step $H$) and $\theta$ are the spiking function and the threshold, respectively. $\alpha_v$ and $\alpha_c$ are the decay coefficients for the potential and the current, and $\beta_v$, $\beta_c$, and $\beta_b$ are the scale coefficients. We call this type of description the RNN-like description since the temporal dependency between variables resembles that in RNNs (Neftci et al., 2019) (Figure 1(a)). The term $(1 - s_i[t-1])$ was introduced in Equations 1 and 2 to reset both the potential and the synaptic current. Note that this model can express various types of commonly used neuron models by changing the decay coefficients (Figure A1 in Appendix A).
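As a concrete illustration, the following is a minimal NumPy sketch of the forward dynamics of Equations 1 to 3 for one layer. The function name, the default coefficient values, and the exact placement of the scale factors are illustrative assumptions rather than the implementation used in the paper.

import numpy as np

def lif_forward(spikes_in, w, b, alpha_v=0.95, alpha_c=0.95,
                beta_v=1.0, beta_c=1.0, beta_b=1.0, theta=1.0):
    """Simulate one layer of discrete-time LIF neurons with current-based
    synapses (Equations 1-3). spikes_in: [T, N_pre], w: [N_post, N_pre]."""
    T, _ = spikes_in.shape
    n_post = w.shape[0]
    v = np.zeros(n_post)                      # membrane potentials
    c = np.zeros(n_post)                      # synaptic currents
    s_prev = np.zeros(n_post)                 # spikes at the previous time step
    spikes_out = np.zeros((T, n_post))
    for t in range(T):
        # the reset term (1 - s[t-1]) zeroes both state variables after a spike
        c = alpha_c * (1.0 - s_prev) * c + beta_c * (w @ spikes_in[t])
        v = alpha_v * (1.0 - s_prev) * v + beta_v * c + beta_b * b
        s = (v >= theta).astype(float)        # binary spike activation
        spikes_out[t] = s
        s_prev = s
    return spikes_out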

Figure 1: Computational graphs representing (a) the RNN-like description and (b) the SRM-based description of our SNN model. Black solid arrows represent accumulation and decaying, black dashed arrows represent synaptic integration, red solid arrows represent the spiking function, and red dashed arrows represent reset paths.

The same neuron model can also be formulated using the spike response kernel as

$$v_i[t] = \sum_j w_{ij} \sum_{t_j^{(f)} \in \mathcal{C}_i[t]} \epsilon\!\left[t - t_j^{(f)}\right] + \beta_b\, b_i \sum_{t' = \hat{t}_i[t]+1}^{t} \alpha_v^{\,t - t'} \qquad (4)$$
$$\epsilon[\Delta t] = \beta_v\, \beta_c \sum_{n=0}^{\Delta t} \alpha_v^{\,\Delta t - n}\, \alpha_c^{\,n} \qquad (5)$$

where $t_j^{(f)}$ is a spike timing of neuron $j$, $\mathcal{C}_i[t]$ is the causal set of presynaptic spike timings arriving after the last spike of neuron $i$, and $\hat{t}_i[t]$ is the last spike timing of neuron $i$ before $t$. We call this type of description the SRM-based description as it is in the form of the Spike Response Model (SRM) (Gerstner, 1995) (Figure 1(b)). Detailed explanations on the equivalence of the two descriptions are given in Appendix B.
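A small self-contained check (under the reconstructed notation above) shows the essence of this equivalence: unrolling the RNN-like update without any reset reproduces a closed-form double-exponential kernel. The kernel expression below is a sketch for the no-reset case with unit scale coefficients, not the paper's exact formula.

import numpy as np

def psp_kernel(dt, alpha_v=0.95, alpha_c=0.95, beta_v=1.0, beta_c=1.0):
    """Membrane response dt steps after a single presynaptic spike,
    assuming no postsynaptic spike (and hence no reset) in between."""
    n = np.arange(0, dt + 1)
    return beta_v * beta_c * np.sum(alpha_v ** (dt - n) * alpha_c ** n)

# The same response produced by unrolling the RNN-like update of Equations 1-2
# with a single input spike of weight 1 and a threshold too high to be reached.
alpha_v = alpha_c = 0.95
v = c = 0.0
trace = []
for t in range(40):
    c = alpha_c * c + (1.0 if t == 0 else 0.0)   # beta_c = 1, input spike at t = 0
    v = alpha_v * v + c                          # beta_v = 1, no reset occurs
    trace.append(v)

assert all(abs(trace[t] - psp_kernel(t)) < 1e-9 for t in range(40))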

2.2 Existing gradient computation methods

2.2.1 Activation-based methods

To back-propagate the gradients to the lower layers, the activation-based methods (Huh and Sejnowski, 2018; Shrestha and Orchard, 2018; Wu et al., 2018; Zenke and Ganguli, 2018) approximate the derivative of the spiking function, which is zero almost everywhere. This is similar to what non-spiking NNs do with quantized activation functions such as the thresholding function in Binary Neural Networks (Hubara et al., 2016). The approximated derivative is called the surrogate derivative (Neftci et al., 2019), and it replaces the derivative of the spiking function wherever the latter appears in back-propagation.
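A minimal PyTorch sketch of a spiking function whose backward pass uses an exponential surrogate derivative, similar in spirit to (Shrestha and Orchard, 2018), is given below. The constant names and the surrogate shape are illustrative assumptions.

import torch

THETA, STE_ALPHA, STE_BETA = 1.0, 0.3, 1.0   # threshold and surrogate shape (illustrative)

class SpikeFunction(torch.autograd.Function):
    """Heaviside spiking nonlinearity whose backward pass uses a surrogate derivative."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= THETA).float()          # true derivative is zero almost everywhere

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # exponential surrogate: largest near the threshold, decaying away from it
        surrogate = STE_ALPHA * torch.exp(-STE_BETA * (v - THETA).abs())
        return grad_output * surrogate

spike = SpikeFunction.apply                  # usable as s = spike(v) inside the time loop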

Figure 2: Various types of back-propagation derived from different descriptions: (a) activation-based, RNN-like; (b) activation-based, SRM-based; (c) activation-based, RNN-like without reset paths; (d) timing-based, SRM-based.
RNN-like method

Since the forward pass of the RNN-like description of the neuron model resembles that of non-spiking RNNs (Figure 1(a)), back-propagation can also be treated like Back-Propagation-Through-Time (BPTT) (Werbos, 1990) (Figure 2(a); the equations are in Appendix C) (Huh and Sejnowski, 2018; Wu et al., 2018).

SRM-based method

However, from the SRM-based description of the same model (Figure 1(b)), back-propagation is derived in a slightly different way using the kernel function between each layer (Figure 2(b)) (Shrestha and Orchard, 2018). From Equation 4, we can obtain the gradient of the membrane potential of postsynaptic neuron $i$ at an arbitrary time step $t$ with respect to the spike activation of presynaptic neuron $j$ at time step $t'$ as

$$\frac{\partial v_i[t]}{\partial s_j[t']} = w_{ij}\, \epsilon\!\left[t - t'\right] \qquad (6)$$

Interestingly, we found that the SRM-based method (Figure 2(b)) is functionally equivalent to the RNN-like method except that the diagonal reset paths are removed (Figure 2(c); see Appendix D for a detailed explanation). In fact, neglecting the reset paths in back-propagation can improve the learning result as it avoids the accumulation of approximation errors. Via the reset paths (red dashed arrows in Figure 2(a)), the same gradient value recursively passes through the surrogate derivative (red solid arrows in Figure 2(a)) as many times as the number of time steps. Even though the approximation error from a single surrogate derivative is tolerable, the accumulated error can be orders of magnitude larger because the number of time steps is usually in the hundreds or more. We experimentally observed that propagating gradients via the reset paths significantly degrades training results regardless of the task and network settings. In this regard, we used the SRM-based method instead of the RNN-like method to represent the activation-based methods throughout this paper.

2.2.2 Timing-based methods

The timing-based methods (Comsa et al., 2019; Mostafa, 2017; Bohte et al., 2002) exploit the differentiable relationship between the spike timing $t^{(f)}$ and the membrane potential at the spike timing, $v[t^{(f)}]$. The local linearity assumption of the membrane potential around $t^{(f)}$ leads to $\partial t^{(f)} / \partial v[t^{(f)}] = -1 / \dot{v}[t^{(f)}]$, where $\dot{v}[t]$ is the time derivative of the membrane potential at time $t$. In this work, we used the approximated time derivative for the discrete time domain, $\dot{v}[t] \approx v[t] - v[t-1]$. Note that computing the gradient of a spike timing does not require the derivative of the spiking function $f$.
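The local-linearity argument can be spelled out in one line (in the reconstructed notation above, with $\delta v$ and $\delta t$ denoting small perturbations):

$$\theta \;=\; v\big[t^{(f)} + \delta t\big] + \delta v \;\approx\; v\big[t^{(f)}\big] + \dot{v}\big[t^{(f)}\big]\,\delta t + \delta v \;=\; \theta + \dot{v}\big[t^{(f)}\big]\,\delta t + \delta v \;\;\Longrightarrow\;\; \frac{\partial t^{(f)}}{\partial v\big[t^{(f)}\big]} = -\frac{1}{\dot{v}\big[t^{(f)}\big]}$$

That is, if the potential at the original crossing time is perturbed by $\delta v$, the threshold crossing moves by $\delta t \approx -\delta v / \dot{v}$ so that the perturbed trajectory still reaches $\theta$.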

From Equation 4 of the SRM-based description, we can obtain the gradient of the membrane potential of postsynaptic neuron $i$ at an arbitrary time step $t$ with respect to the spike timing $t_j^{(f)}$ of presynaptic neuron $j$ as

$$\frac{\partial v_i[t]}{\partial t_j^{(f)}} = -\, w_{ij}\, \dot{\epsilon}\!\left[t - t_j^{(f)}\right] \qquad (7)$$

where $\dot{\epsilon}$ is the approximated time derivative of the SRM kernel in the discrete time domain. Figure 2(d) depicts how the timing-based method propagates the gradients. Only at the time steps with spikes, the gradient with respect to the membrane potential is converted into the gradient with respect to the spike timing, which is then propagated to the lower layer with Equation 7.

3 Activation- and Timing-based Learning Rule (ANTLR)

3.1 Complementary nature of activation-based methods and timing-based methods

Calculating the gradients is to estimate how much the network output varies when the parameters or the variables are changed. One of the main findings in our study is that the activation-based and timing-based methods are complementary in the way they consider the change in the network.

The change in SNNs can be represented by the generation, the removal, and the shift of spikes. The generation or the removal of a spike is expressed as a change of the spike activation (0→1 or 1→0). The activation-based methods, which calculate the gradients with respect to the spike activations $s$, can therefore naturally consider the generations and the removals. On the other hand, the shift of a spike is expressed as a change of the spike timing $t^{(f)}$. The timing-based methods, which calculate the gradients with respect to the spike timings, easily take account of the spike shifts.

Figure 3: The spike timing shift (① → ②) can be described using the change in (a) the spike timing or (b) the spike activation. The spike activation change at the earlier time step causes the activation change at the later time step via the reset path (red arrow).

The problem with the activation-based methods is that they cannot deal with spike shifts accurately. In terms of the spike activations, a spike shift is interpreted as a pair of opposite spike activation changes with a causal relationship through the reset path (Figure 3). Because of the major role of the reset path in the spike shift, gradient computation methods based on the spike activations cannot consider the shift without precisely computing the gradients related to the reset paths. Unfortunately, as explained in Section 2.2.1, the SRM-based activation-based method does not have reset paths, so it cannot consider the spike shift at all. The RNN-like activation-based method has the reset paths, but it suffers from accuracy loss due to the errors accumulated along them. Although the shift of an individual spike does not make a huge difference to the whole network when many spikes are generated and removed, it becomes important when there are only a few spikes in the network.

The problem with the timing-based methods is that the generation and the removal of spikes cannot be described with the spike timings, so these methods cannot anticipate the change in the number of spikes in the network. Even though generations and removals happen less often than spike shifts when the parameters are updated in small steps, their influence on the network is usually more significant.

3.2 Combining activation-based gradients and timing-based gradients

To overcome the limitations of previous works, we propose a new method of back-propagation for SNNs, called the activation- and timing-based learning rule (ANTLR), that combines the activation-based gradients and the timing-based gradients. The activation-based methods and the timing-based methods back-propagate the gradients through different intermediate gradients, $\partial L / \partial s$ and $\partial L / \partial t^{(f)}$, respectively. For this reason, the two approaches have been treated as completely different. However, there is another intermediate gradient calculated in both approaches: the gradient with respect to the membrane potential, $\partial L / \partial v$. In the activation-based methods, $\partial L / \partial v$ is propagated from $\partial L / \partial s$ and carries information about the generation and the removal of spikes, whereas in the timing-based methods it is propagated from $\partial L / \partial t^{(f)}$ and carries information about the spike shift.

The main idea of ANTLR is to (1) combine the activation-based gradients and the timing-based gradients by taking a weighted sum and (2) propagate the combined gradients (Figure 4). In ANTLR, the gradients are back-propagated to the lower layers as

$$\frac{\partial L}{\partial v_j[t']} = \lambda_{\mathrm{act}} \left(\frac{\partial L}{\partial v_j[t']}\right)_{\!\mathrm{act}} + \lambda_{\mathrm{time}} \left(\frac{\partial L}{\partial v_j[t']}\right)_{\!\mathrm{time}} \qquad (8)$$
$$\left(\frac{\partial L}{\partial v_j[t']}\right)_{\!\mathrm{act}} = \frac{\partial s_j[t']}{\partial v_j[t']} \left( \frac{\partial L^{\mathrm{ext}}}{\partial s_j[t']} + \sum_i \sum_{t \ge t'} \frac{\partial L}{\partial v_i[t]} \frac{\partial v_i[t]}{\partial s_j[t']} \right) \qquad (9)$$
$$\left(\frac{\partial L}{\partial v_j[t']}\right)_{\!\mathrm{time}} = \frac{\partial t_j^{(f)}}{\partial v_j[t']} \left( \frac{\partial L^{\mathrm{ext}}}{\partial t_j^{(f)}} + \sum_i \sum_{t \ge t'} \frac{\partial L}{\partial v_i[t]} \frac{\partial v_i[t]}{\partial t_j^{(f)}} \right) \qquad (10)$$

where $L^{\mathrm{ext}}$ denotes the gradient coming directly from the loss function, the last two terms in Equation 9 are calculated using the activation-based method as in Section 2.2.1, and the last two terms in Equation 10 are calculated using the timing-based method as in Section 2.2.2.

To train SNNs using ANTLR and the other methods, we implemented CUDA-compatible gradient computation functions in PyTorch (Paszke et al., 2019); implementation details are described in Appendix E. The source code will be released later.

Figure 4: Back-propagation in both the activation-based method and the timing-based method can be described using the membrane-potential gradients $\partial L / \partial v$ of neurons at different time steps and the way they are propagated (black arrows). ANTLR combines the two methods (red arrows and blue arrows) by weighted summation at each stage.

Note that ANTLR with the setting $(\lambda_{\mathrm{act}}, \lambda_{\mathrm{time}}) = (1, 0)$ is equivalent to the activation-based method, whereas ANTLR with $(\lambda_{\mathrm{act}}, \lambda_{\mathrm{time}}) = (0, 1)$ is equivalent to the timing-based method. Therefore, ANTLR can also be regarded as a unified framework that covers the two distinct approaches. In this work, we focused on showing the fundamental benefits of combining them and used the simplest setting of equal weights. Proper values of $\lambda_{\mathrm{act}}$ and $\lambda_{\mathrm{time}}$ may depend on the situation, but further studies are needed to precisely understand their influence.
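To make the combination concrete, the following is a hypothetical PyTorch sketch of Equation 8 for a single layer. The tensor names (grad_s, grad_tf), the exponential surrogate shape, and the clamping of the time derivative are illustrative assumptions, not the released implementation.

import torch

def antlr_grad_v(grad_s, grad_tf, v, s, theta=1.0,
                 lam_act=1.0, lam_time=1.0, alpha=0.3, beta=1.0):
    # grad_s:  dL/ds per time step, shape [T, N]  (activation path)
    # grad_tf: dL/dt^(f) per time step, nonzero only where s == 1 (timing path)
    # v, s:    membrane potentials and spikes recorded in the forward pass, [T, N]
    # Returns the combined dL/dv of Equation 8 for further back-propagation.

    # activation-based part: route dL/ds through an exponential surrogate derivative
    surrogate = alpha * torch.exp(-beta * (v - theta).abs())
    grad_v_act = grad_s * surrogate

    # timing-based part: route dL/dt^(f) through dt^(f)/dv = -1 / v_dot at spike times
    v_dot = torch.cat([v[:1], v[1:] - v[:-1]], dim=0)
    grad_v_time = torch.where(s > 0,
                              -grad_tf / v_dot.clamp(min=1e-3),  # guard tiny slopes
                              torch.zeros_like(v))

    # ANTLR: weighted sum of the two gradients at the membrane-potential level
    return lam_act * grad_v_act + lam_time * grad_v_time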

3.3 Loss functions

Type | Count | Spike-train | Latency
Loss ($L$) | $\sum_o \tfrac{1}{2}\,(n_o - \hat{n}_o)^2$, with $n_o = \sum_t s_o[t]$ | $\sum_o \sum_t \tfrac{1}{2}\,\big((\kappa * s_o)[t] - (\kappa * \hat{s}_o)[t]\big)^2$ | $-\sum_o \hat{p}_o \log \mathrm{softmax}_o\!\big(-\beta\, t_o^{(f,1)}\big)$
Activation-based gradient | nonzero | nonzero | 0
Timing-based gradient | 0 | nonzero | nonzero
Compatible with | Activation, ANTLR | Activation, Timing, ANTLR | Timing, ANTLR
  • $o$ represents an index of the output neurons, $s_o$ and $\hat{s}_o$ are the output and target spike-trains, $\kappa$ represents an exponential kernel, $\beta$ is a scaling factor, $\hat{n}_o$ represents a target spike number, $t_o^{(f,1)}$ is the first spike timing of output neuron $o$, and $\hat{p}_o$ represents a target probability.

Table 1: Three different types of loss functions and the corresponding activation-based gradients and timing-based gradients. Entries marked 0 indicate that the loss provides no gradient of that type.

We used three widely used types of loss functions: count loss, spike-train loss, and latency loss (Table 1). Count loss is defined as a sum of squared errors between the output and target numbers of spikes of each output neuron. Spike-train loss is a sum of squared errors between the filtered output spike-train and the filtered target spike-train. Latency loss is defined as the cross-entropy of the softmax of negatively weighted first spike timings of the output neurons. Note that the count loss cannot provide the gradient with respect to the spike timing, whereas the latency loss cannot provide the gradient with respect to the spike activation. This makes those loss types inapplicable to certain learning methods. We want to emphasize that ANTLR can use all the loss types.
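For reference, minimal PyTorch sketches of the three loss types are shown below, written against a recorded output spike tensor. The 1/2 factors, the filter constant, and the exact latency scaling are illustrative assumptions rather than the paper's exact definitions.

import torch
import torch.nn.functional as F

def count_loss(spikes, target_counts):
    """Sum of squared errors between output and target spike counts.
    spikes: [T, N_out] binary, target_counts: [N_out]."""
    return 0.5 * ((spikes.sum(dim=0) - target_counts) ** 2).sum()

def spike_train_loss(spikes, target_spikes, kappa=0.95):
    """Sum of squared errors between exponentially filtered spike trains."""
    T = spikes.shape[0]
    filt_out = torch.zeros_like(spikes[0])
    filt_tgt = torch.zeros_like(spikes[0])
    loss = 0.0
    for t in range(T):
        filt_out = kappa * filt_out + spikes[t]
        filt_tgt = kappa * filt_tgt + target_spikes[t]
        loss = loss + 0.5 * ((filt_out - filt_tgt) ** 2).sum()
    return loss

def latency_loss(first_spike_times, target_label, beta=1.0):
    """Cross-entropy of the softmax of negatively scaled first-spike timings."""
    logits = -beta * first_spike_times          # earlier spike -> larger logit
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([target_label]))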

3.4 Estimated loss landscape

Figure 5: (a) True loss landscape; estimated loss landscapes using (b) the activation-based method, (c) the timing-based method, and (d) ANTLR with equal weights; and (e) the color scheme used for highlighting. The two axes represent the two dimensions along which we perturbed the network parameters.

We conducted a simple experiment to visualize the gradients computed by each method. A fully-connected network with two hidden layers (10-50-50-1 neurons) was trained to minimize the spike-train loss with three random input spikes for each input neuron and a single target spike for the target neuron. After reaching the global optimum of zero loss, we perturbed all trainable parameters (weights and biases) along the first two principal components of the gradient vectors used in training and measured the true loss (Figure 5(a)). The lowest point at the center (dark blue region) represents the global minimum, and the subtle loss increase around the center shows the effect of the spike timing shift. The dramatic increase of the loss in the right corner comes from the spike number change. To emphasize the subtle height difference due to the spike timing shift, we highlighted the area adjacent to the global optimum where the number of spikes does not change, using the color scheme in Figure 5(e).

Different learning methods provide different gradient values based on their distinct approaches. Using each method's gradient vector at each parameter point, we visualized the estimated loss landscape using the surface reconstruction method of Harker and O'Leary (2008); Jordan (2017) (Figure 5(b) to (d)). The results of the activation-based method (Figure 5(b)) clearly showed the steep loss change due to the spike number change, whereas the timing-based method (Figure 5(c)) could not take account of it. On the other hand, the timing-based method captured the subtle loss change due to the spike timing shift, while the activation-based method showed an almost flat loss landscape in the region without the spike number change. By combining both methods, ANTLR was able to capture both features at the same time (Figure 5(d)).
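A minimal sketch of the perturbation grid behind Figure 5 is given below, assuming a flattened parameter vector theta_star and two unit directions pc1 and pc2 obtained from PCA of the training gradients; the names and the grid radius are illustrative. For the estimated landscapes, the same grid is traversed, but each method's gradient projected onto the two directions is recorded instead, and a surface is then fitted to that gradient field with the least-squares reconstruction of Harker and O'Leary (2008).

import numpy as np

def landscape_grid(theta_star, pc1, pc2, loss_fn, radius=1.0, steps=21):
    """Evaluate loss_fn on a 2-D grid of parameter perturbations around
    theta_star along two principal-component directions pc1 and pc2."""
    xs = np.linspace(-radius, radius, steps)
    grid = np.zeros((steps, steps))
    for a, x in enumerate(xs):
        for b, y in enumerate(xs):
            grid[a, b] = loss_fn(theta_star + x * pc1 + y * pc2)
    return xs, grid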

4 Experimental results

We evaluated the practical advantages of ANTLR compared to other methods using three different tasks: (1) random spike-train matching, (2) latency-coded MNIST, and (3) N-MNIST. Hyper-parameters for training were grid-searched for each task (detailed experimental settings are in Appendix F). For the timing-based method, we added a no-spike penalty that increases the incoming synaptic weights of neurons without any spike, as in (Comsa et al., 2019).

4.1 Random spike-train matching

Figure 6: Averaged training loss over 100 trials of the random spike-train matching task with three input spikes and (a) a single target spike or (b) three target spikes. Note that the y axis is in logarithmic scale.

Using the same experimental setup as in Section 3.4, except for the varying number of target spikes and a different network size of 10-50-50-5, we measured the training loss of networks trained by the different learning methods (Figure 6). This task was used to examine the basic performance of the learning methods in a situation where each spike significantly affects the training result. During 50000 training iterations, both the activation-based method and ANTLR showed a noticeable decrease in loss, whereas the timing-based method failed to train the network as it cannot handle the spike number change. ANTLR outperformed the other methods with much faster convergence and lower loss.

4.2 Latency-coded MNIST

In this experiment, we applied latency coding to the input data of the MNIST dataset (LeCun et al., 1998) as in (Comsa et al., 2019; Mostafa, 2017). A larger intensity value of each pixel was represented by an earlier spike timing of the corresponding input neuron. We used this conversion to reduce the total number of spikes and create a situation where each learning method must take account of precise spike timing for a better result.
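A minimal sketch of this latency conversion is shown below, assuming pixel intensities normalized to [0, 1]; the 100-step horizon matches the setting used later, while the intensity threshold for silencing dim pixels is an illustrative assumption.

import numpy as np

def latency_encode(image, num_steps=100, threshold=0.1):
    # image: array of pixel intensities in [0, 1]
    # returns a binary spike tensor of shape [num_steps, num_pixels] with at most
    # one spike per pixel: larger intensity -> earlier spike, dim pixels stay silent
    intensities = image.reshape(-1)
    spikes = np.zeros((num_steps, intensities.size))
    active = intensities >= threshold
    times = np.round((1.0 - intensities[active]) * (num_steps - 1)).astype(int)
    spikes[times, np.nonzero(active)[0]] = 1.0
    return spikes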

The timing-based method and ANTLR used the latency loss, and the activation-based method used the count loss with the target spike number of 1/0 for correct/wrong labels. We also added a variant of the count loss to the total loss of ANTLR to prevent the target output neuron from being silent. Note that the target spike number for the activation-based method is much smaller than that from previous works since we applied the latency coding to the input to reduce the number of input spikes. The output class can either be determined using the output neuron emitting the most spikes (most-spike decision scheme) or the neuron emitting the earliest spike (earliest-spike decision scheme). The timing-based method and ANTLR used the earliest-spike decision scheme whereas the activation-based method used the most-spike decision scheme considering the loss types they used.

Figure 7: Test accuracy and the required number of hidden and output spikes to classify a single sample on (a) the latency-coded MNIST task and (b) the latency-coded MNIST task with the single-spike restriction. The values in the legend represent the mean and standard deviation of 16 trials.

We trained a network with a size of 784-800-10 and 100 time steps, using a mini-batch size of 16 and a 50000/10000 split of images for the training/validation datasets. The test accuracy and the number of spikes used for each sample are shown in Figure 7(a). The number of spikes required to finish a task was usually not reported in previous works, but we included it to demonstrate the efficiency of the networks trained by different methods. The results show that ANTLR achieved the highest accuracy among the compared methods. The number of spikes for the timing-based method was exceptionally higher than the others because of the no-spike penalty and its inability to remove existing spikes during training. Figure 7(b) shows a different scenario we tested, in which each neuron is restricted to emit at most one spike as in (Comsa et al., 2019; Mostafa, 2017; Bohte et al., 2002). We tested this situation to further reduce the number of spikes. However, this modification did not change the trend of the results since the number of spikes was already small in the first place.

Note that previous works reported higher accuracy results, but those results were achieved with a large number of spikes. In this study, we focus on the cases in which the networks are forced to use fewer spikes for high energy efficiency. We believe that such cases represent more desirable environments for the application of SNNs.

4.3 N-MNIST

In contrast to the MNIST dataset, which is static, the spiking version of MNIST, called N-MNIST, is a dynamic dataset that contains samples of input spikes in a 34x34 spatial domain with two channels along 300 time steps (Orchard et al., 2015). The same loss and classification settings as in Section 4.2 were used here except for the target spike number of the activation-based method, which was increased to 10/0 considering the increased number of input spikes in the N-MNIST dataset. Note that the latency loss and the earliest-spike decision scheme have not previously been used for the N-MNIST dataset, but we intentionally used them to reduce the number of spikes. We trained a network with a size of 2x34x34-800-10 using a mini-batch size of 16, and the results are shown in Figure 8(a).
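A minimal sketch of binning the event streams into the dense [300, 2, 34, 34] spike tensor described above is given below; the field names (x, y, polarity, timestamps in microseconds) follow the usual address-event convention, the window length is an illustrative assumption, and file parsing is omitted.

import numpy as np

def bin_events(x, y, polarity, timestamp_us, num_steps=300, window_us=300_000):
    """Bin address events into a dense spike tensor of shape
    [num_steps, 2, 34, 34]: one channel per polarity, one slice per time bin."""
    spikes = np.zeros((num_steps, 2, 34, 34))
    t_bin = np.clip((timestamp_us / window_us * num_steps).astype(int),
                    0, num_steps - 1)
    spikes[t_bin, polarity, y, x] = 1.0
    return spikes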

Figure 8: Test accuracy and the required number of hidden and output spikes to classify a single sample on (a) the N-MNIST task and (b) the N-MNIST task with the single-spike restriction. The values in the legend represent the mean and standard deviation of 16 trials.

Due to the large target spike number, the activation-based method required many more spikes than ANTLR. The timing-based method again used a large number of spikes because of its limitation in removing spikes. We also tested the scenario in which the single-spike restriction is applied (Figure 8(b)). Since the activation-based method had to use the target spike number of 1/0 due to the restriction, its accuracy was degraded, whereas the timing-based method showed improvement in both accuracy and efficiency. This supports the observation that the activation-based method favors the multi-spike situation and the timing-based method favors the single-spike situation.

5 Discussion and conclusion

In this work, we presented and compared the characteristics of two existing approaches to gradient-based supervised learning for SNNs and proposed a new learning method called ANTLR that combines them. The experimental results on various tasks showed that the proposed method can improve the accuracy of the network in situations where the number of spikes is constrained, by precisely considering the influence of individual spikes.

It is known that both temporal coding and rate coding play important roles in information processing in biological neurons (Gerstner et al., 2014). Interestingly, the timing-based methods are closely related to temporal coding since they explicitly consider the spike timings in gradient computation. On the other hand, the activation-based methods are more favorable to rate coding, in which the change of spike timing does not carry information. Even though we did not explicitly address the concepts of temporal coding and rate coding in this work, to the best of our knowledge, this is the first work that tries to unify the different learning methods suitable for different coding schemes.

Some other works not mentioned in this paper have also shown notable results as supervised learning methods for SNNs (Jin et al., 2018; Lee et al., 2016; Zhang and Li, 2019), but these methods cannot be classified as either activation-based or timing-based. In these methods, a scalar variable mediates the back-propagation from the whole spike-train of a postsynaptic neuron to the whole spike-train of a presynaptic neuron. This variable may be able to capture the current state of a spike-train and its influence on another neuron, but it cannot cope with changes in the spike-train such as the generation, the removal, or the timing shift of spikes during training. This limitation may not be problematic with rate coding, in which the change in the state of individual spikes does not make a huge difference, but it is a critical problem when training SNNs with fewer spikes for higher efficiency.

Broader Impact

We believe that broader impact discussion is not applicable to our work because our work is to improve the general supervised learning performance of spiking neural networks and is not related to a specific application.

Acknowledgments

This research was supported by Samsung Research Funding Center of Samsung Electronics under Project Number SRFC-TC1603-51, the MSIT (Ministry of Science and ICT), Korea, under the ICT Consilience Creative program (IITP-2019-2011-1-00783) supervised by the IITP (Institute for Information & communications Technology Promotion), and an NRF (National Research Foundation of Korea) grant funded by the Korean Government (NRF-2016-Global Ph.D. Fellowship Program).

References

  • F. Akopyan, J. Sawada, A. Cassidy, R. Alvarez-Icaza, J. Arthur, P. Merolla, N. Imam, Y. Nakamura, P. Datta, G. Nam, et al. (2015) Truenorth: design and tool flow of a 65 mw 1 million neuron programmable neurosynaptic chip. IEEE transactions on computer-aided design of integrated circuits and systems 34 (10), pp. 1537–1557. Cited by: §1.
  • S. M. Bohte, J. N. Kok, and H. La Poutre (2002) Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48 (1-4), pp. 17–37. Cited by: §1, §2.2.2, §4.2.
  • I. M. Comsa, K. Potempa, L. Versari, T. Fischbacher, A. Gesmundo, and J. Alakuijala (2019) Temporal coding in spiking neural networks with alpha synaptic function. arXiv preprint arXiv:1907.13223. Cited by: Figure A1, §1, §1, §2.2.2, §4.2, §4.2, §4.
  • M. Davies, N. Srinivasa, T. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, et al. (2018) Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38 (1), pp. 82–99. Cited by: §1.
  • P. U. Diehl, D. Neil, J. Binas, M. Cook, S. Liu, and M. Pfeiffer (2015) Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. Cited by: §1.
  • W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski (2014) Neuronal dynamics: from single neurons to networks and models of cognition. Cambridge University Press. Cited by: §5.
  • W. Gerstner (1995) Time structure of the activity in neural network models. Physical review E 51 (1), pp. 738. Cited by: §2.1.
  • M. Harker and P. O’Leary (2008) Least squares surface reconstruction from measured gradient fields. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7. Cited by: §3.4.
  • I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio (2016) Binarized neural networks. In Advances in neural information processing systems, pp. 4107–4115. Cited by: §2.2.1.
  • D. Huh and T. J. Sejnowski (2018) Gradient descent for spiking neural networks. In Advances in Neural Information Processing Systems, pp. 1433–1443. Cited by: §1, §2.2.1, §2.2.1.
  • E. Hunsberger and C. Eliasmith (2015) Spiking deep networks with lif neurons. arXiv preprint arXiv:1510.08829. Cited by: §1.
  • Y. Jin, W. Zhang, and P. Li (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. In Advances in Neural Information Processing Systems, pp. 7005–7015. Cited by: §5.
  • C. H. Jordan (2017) PyGrad2Surf. GitLab. Note: https://gitlab.com/chjordan/pyGrad2Surf/ Cited by: §3.4.
  • Y. LeCun, C. Cortes, and C. J. Burges (1998) The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist. Cited by: §4.2.
  • J. H. Lee, T. Delbruck, and M. Pfeiffer (2016) Training deep spiking neural networks using backpropagation. Frontiers in neuroscience 10, pp. 508. Cited by: §5.
  • W. Maass (1997) Networks of spiking neurons: the third generation of neural network models. Neural networks 10 (9), pp. 1659–1671. Cited by: §1.
  • H. Mostafa (2017) Supervised learning based on temporal coding in spiking neural networks. IEEE transactions on neural networks and learning systems 29 (7), pp. 3227–3235. Cited by: Figure A1, §1, §1, §2.2.2, §4.2, §4.2.
  • E. O. Neftci, H. Mostafa, and F. Zenke (2019) Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine 36 (6), pp. 51–63. Cited by: §1, §2.1, §2.2.1.
  • G. Orchard, A. Jayawant, G. K. Cohen, and N. Thakor (2015) Converting static image datasets to spiking neuromorphic datasets using saccades. Frontiers in neuroscience 9, pp. 437. Cited by: §4.3.
  • S. Park, S. Kim, B. Na, and S. Yoon (2020) T2FSNN: deep spiking neural networks with time-to-first-spike coding. arXiv preprint arXiv:2003.11741. Cited by: §1.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035. Cited by: §3.2.
  • B. Rueckauer and S. Liu (2018) Conversion of analog to spiking neural networks using sparse temporal coding. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5. Cited by: Figure A1, §1, §1.
  • S. B. Shrestha and G. Orchard (2018) SLAYER: spike layer error reassignment in time. In Advances in Neural Information Processing Systems, pp. 1412–1421. Cited by: Figure A1, Appendix E, §1, §1, §2.2.1, §2.2.1.
  • P. J. Werbos (1990) Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78 (10), pp. 1550–1560. Cited by: §2.2.1.
  • Y. Wu, L. Deng, G. Li, J. Zhu, and L. Shi (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in neuroscience 12. Cited by: §1, §1, §2.2.1, §2.2.1.
  • F. Zenke and S. Ganguli (2018) Superspike: supervised learning in multilayer spiking neural networks. Neural computation 30 (6), pp. 1514–1541. Cited by: §1, §1, §2.2.1.
  • L. Zhang, S. Zhou, T. Zhi, Z. Du, and Y. Chen (2019) TDSNN: from deep neural networks to deep spike neural networks with temporal-coding. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 1319–1326. Cited by: §1.
  • W. Zhang and P. Li (2019) Spike-train level backpropagation for training deep recurrent spiking neural networks. In Advances in Neural Information Processing Systems, pp. 7800–7811. Cited by: §5.

Appendix

Appendix A Versatility of the neuron model

In our neuron model, depending on the decay coefficients $\alpha_v$ and $\alpha_c$, the shape of the post-synaptic potential induced by a single spike can vary. Figure A1 shows some example cases of commonly used neuron models that can be implemented using our neuron model.
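A small sketch, reusing the unrolled PSP idea from Section 2.1, shows how different decay-coefficient settings reproduce the neuron types in Figure A1; the specific coefficient values are illustrative.

import numpy as np

def psp_trace(alpha_v, alpha_c, num_steps=60):
    """Membrane response to a single input spike at t = 0 for given decay
    coefficients, with unit scale coefficients and no reset."""
    v, c, trace = 0.0, 0.0, []
    for t in range(num_steps):
        c = alpha_c * c + (1.0 if t == 0 else 0.0)
        v = alpha_v * v + c
        trace.append(v)
    return np.array(trace)

# illustrative settings roughly matching Figure A1:
if_trace      = psp_trace(alpha_v=1.00, alpha_c=0.00)   # (a) IF: step-like PSP
lif_trace     = psp_trace(alpha_v=0.95, alpha_c=0.00)   # (b) LIF, instantaneous synapse
alpha_trace   = psp_trace(alpha_v=0.95, alpha_c=0.90)   # (c) alpha-like rise and decay
nonleaky_exp  = psp_trace(alpha_v=1.00, alpha_c=0.95)   # (d) non-leaky, exponential PSP
nonleaky_lin  = psp_trace(alpha_v=1.00, alpha_c=1.00)   # (e) non-leaky, linear PSP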

Figure A1: Various types of neuron models can be expressed by the neuron model we used, including (a) a simple IF neuron, (b) an LIF neuron without decaying synaptic current, (c) a biologically-plausible alpha synaptic function (Comsa et al., 2019; Shrestha and Orchard, 2018), (d) a non-leaky neuron with exponential PSP (Mostafa, 2017), and (e) a non-leaky neuron with linear PSP (Rueckauer and Liu, 2018).

Appendix B Functional equivalence of the RNN-like description and the SRM-based description of the model

From the RNN-like description of the model (Equations 1 to 3), we can infer that the post-synaptic potential induced by $s_j[t']$, the spike activation of presynaptic neuron $j$ at time step $t'$, in $v_i[t]$, the potential of a postsynaptic neuron at a later time step $t$, can be transmitted only via the synaptic current $c_i$. The current then forwards the influence to the potential at the next time step, and the influence continues through the sequence of $v_i$ and $c_i$ values along the way.

If there is no spike activation of neuron $i$ between $t'$ and $t$, this influence can reach $v_i[t]$, and by the time it reaches, the amount of the influence from $s_j[t']$ has been scaled to the decayed kernel value $w_{ij}\,\epsilon[t - t']$. If there is a spike activation of neuron $i$ between $t'$ and $t$, this influence cannot be transmitted to $v_i[t]$ since the reset term $(1 - s_i)$ cuts off the signals that $v_i$ and $c_i$ receive.

If we express this relationship between $s_j[t']$ and $v_i[t]$ with a single kernel function $\epsilon$ and the causal set $\mathcal{C}_i[t]$, it becomes the SRM-based description (Equations 4 and 5).

Appendix C RNN-like activation-based method

From the RNN-like description of the model (Equations 1 to 3), the following BPTT-like back-propagation can be derived:

$$\frac{\partial L}{\partial s_j[t]} = \frac{\partial L^{\mathrm{ext}}}{\partial s_j[t]} + \sum_i \frac{\partial L}{\partial c_i[t]} \frac{\partial c_i[t]}{\partial s_j[t]} + \frac{\partial L}{\partial v_j[t+1]} \frac{\partial v_j[t+1]}{\partial s_j[t]} + \frac{\partial L}{\partial c_j[t+1]} \frac{\partial c_j[t+1]}{\partial s_j[t]} \qquad (11)$$
$$\frac{\partial c_i[t]}{\partial s_j[t]} = \beta_c\, w_{ij} \qquad (12)$$
$$\frac{\partial v_j[t+1]}{\partial s_j[t]} = -\alpha_v\, v_j[t], \qquad \frac{\partial c_j[t+1]}{\partial s_j[t]} = -\alpha_c\, c_j[t] \qquad (13)$$
$$\frac{\partial L}{\partial v_j[t]} = \frac{\partial L}{\partial s_j[t]}\, h\!\left(v_j[t]\right) + \frac{\partial L}{\partial v_j[t+1]}\, \alpha_v \left(1 - s_j[t]\right) \qquad (14)$$
$$\frac{\partial L}{\partial c_j[t]} = \frac{\partial L}{\partial v_j[t]}\, \beta_v + \frac{\partial L}{\partial c_j[t+1]}\, \alpha_c \left(1 - s_j[t]\right) \qquad (15)$$
$$\frac{\partial s_j[t]}{\partial v_j[t]} \approx h\!\left(v_j[t]\right) \qquad (16)$$

where $L^{\mathrm{ext}}$ is the gradient coming directly from the loss and $h$ is the surrogate derivative, which results in the gradients for the parameter update as

$$\frac{\partial L}{\partial w_{ij}} = \beta_c \sum_t \frac{\partial L}{\partial c_i[t]}\, s_j[t], \qquad \frac{\partial L}{\partial b_i} = \beta_b \sum_t \frac{\partial L}{\partial v_i[t]} \qquad (17)$$

Appendix D Interpreting SRM-based activation-based back-propagation with RNN-like description

The forward passes of the RNN-like description and the SRM-based description are functionally equivalent, but the corresponding back-propagation methods derived from them are slightly different.

The SRM-based back-propagation can be summarized using the relationship between the potentials as follows:

$$\frac{\partial L}{\partial v_j[t']} = \frac{\partial s_j[t']}{\partial v_j[t']} \left( \frac{\partial L^{\mathrm{ext}}}{\partial s_j[t']} + \sum_i \sum_{t > t'} \frac{\partial L}{\partial v_i[t]}\, w_{ij}\, \epsilon[t - t'] \right) \qquad (18)$$

where the kernel function $\epsilon$ is the one defined in Equation 5.

Similar to the derivation in Appendix B, the following back-propagation formulas can provide the same functionality as the SRM-based back-propagation.

(19)
(20)
(21)
(22)
(23)
(24)
(25)

where an auxiliary gradient variable is introduced to consider the temporal dependency between the spike activations of the same neuron at different time steps.

Those formulas are almost identical to the RNN-like back-propagation (Equations 11 to 16) except for how the gradient with respect to the spike activation is propagated (Equations 13 and 22). The only difference is whether the reset paths (red dashed arrows in Figure 1(a)) are considered in back-propagation or not.

Appendix E Implementation details of the learning methods

For the activation-based method and ANTLR, we used a surrogate derivative based on an exponential function as in (Shrestha and Orchard, 2018). For the timing-based method and ANTLR, the approximated time derivatives of the membrane potential and of the SRM kernel were calculated as the differences between the values at adjacent time steps.

Algorithms 1, 2, and 3 show the detailed back-propagation procedures of the activation-based method, the timing-based method, and ANTLR, respectively; the gradient with respect to the membrane potential is written in an abbreviated form for better readability, and a weight matrix between adjacent layers is used in place of individual synaptic weights. Note that the loss gradients with respect to the output spike activations and spike timings are calculated according to the loss function used (Table 1). The formulation from Appendix D was used in all methods to reduce the total number of computations by not using the gradient with respect to the spike activation explicitly. For the same reason, we did not implement the inner for loop over past time steps (Algorithms 2 and 3) in the actual implementation and used auxiliary variables instead.

for  to 0 do
        for  to 0 do
               if  then
                      ;
                     
              else
                      ;
                     
               end if
              ;
               ;
               ;
              
        end for
       
end for
Algorithm 1 The activation-based back-propagation
for  to 0 do
        for  to 0 do
               if  then
                      ;
                     
              else
                      for  to  do
                             ;
                            
                      end for
                     
               end if
              if  then
                      ;
                     
              else
                      ;
                     
               end if
              
        end for
       
end for
Algorithm 2 The timing-based back-propagation
for  to 0 do
        for  to 0 do
               if  then
                      ;
                      ;
                     
              else
                      ;
                      for  to  do
                             ;
                            
                      end for
                     
               end if
              ;
               if  then
                      ;
                     
               end if
              ;
               ;
              
        end for
       
end for
Algorithm 3 ANTLR back-propagation

Appendix F Experimental settings

Hyper-parameters used for the loss landscape estimation (Section 3.4) and the random spike-train matching task (Section 4.1) are listed in Table A1. For the latency-coded MNIST task and the N-MNIST task, we grid-searched several hyper-parameter options and reported the results of the ones that provided the highest validation accuracy (averaged over 16 trials). Table A2 and Table A3 show the searched hyper-parameter options and the ones used for the final results.

Some of the hyper-parameters are not mentioned in the main text. grad_clip is for clipping the parameter gradients before an update. init_bias_center is a binary option that initializes the biases with large values to ease the generation of spikes in earlier training iterations. kappa_exp is the exponential filter constant used for the spike-train loss. ste_alpha and ste_beta are coefficients for the surrogate derivative described in Appendix E.

Name Value
alpha_v, alpha_i 0.95, 0.95
grad_clip 1e5
init_bias_center 0
kappa_exp 0.95
learning_rate 1e-3
optimizer ‘sgd’
ste_alpha 0.3
ste_beta 1
Table A1: Hyper-parameters used for loss landscape estimation (Section 3.4) and random spike-train matching task (Section 4.1)
Hyper-parameter Searched options Chosen for
Activation Timing ANTLR
alpha_v, alpha_i (0.95, 0.95), (0.99, 0.99) (0.99, 0.99) (0.99, 0.99) (0.99, 0.99)
beta_softmax 0.5, 1, 2 - 1 1
epoch 10 10 10 10
grad_clip 1e6, 10, 1 1e6 1e6 1e6
init_bias_center 0, 1 0 1 1
learning_rate 1e-2, 1e-3, 1e-4 1e-3 1e-4 1e-3
max_target_spikes 1 1 - -
optimizer ‘adam’ ‘adam’ ‘adam’ ‘adam’
ste_alpha 0.3, 1 1 - 1
ste_beta 1, 3 3 - 3
weight_decay 0, 1e-3, 1e-4 0 0 0
Table A2: Hyper-parameters searched and chosen for latency-coded MNIST task (Section 4.2)
Hyper-parameter Searched options Chosen for
Activation Timing ANTLR
alpha_v, alpha_i (0.95, 0.95), (0.99, 0.99) (0.99, 0.99) (0.99, 0.99) (0.99, 0.99)
beta_softmax 1/6, 1/3, 2/3 - 1/3 (1/6) 1/6
epoch 5 5 5 5
grad_clip 1e6, 10, 1 10 (1) 1 1
init_bias_center 0 0 0 0
learning_rate 1e-2, 1e-3, 1e-4 1e-3 1e-4 1e-3
max_target_spikes 1, 3, 10 (1) 10 (1) - -
optimizer ‘adam’ ‘adam’ ‘adam’ ‘adam’
ste_alpha 0.3, 1 1 - 1
ste_beta 1, 3 3 - 3
weight_decay 0, 1e-3, 1e-4 0 0 0
Table A3: Hyper-parameters searched and chosen for N-MNIST task (hyper-parameters used in the case with the single-spike coding if they are different) (Section 4.3)