I. Introduction
Neuromorphic computing, specifically Spiking Neural Networks (SNNs), has become popular as an energy-efficient alternative for implementing standard artificial intelligence tasks [indiveri2011frontiers, pfeiffer2018deep, cao2015spiking, panda2016unsupervised, sengupta2016hybrid]. Spikes, or binary events, drive communication and computation in SNNs, which not only is close to biological neuronal processing but also offers the benefit of event-driven hardware operation [ankit2017resparc, indiveri2015neuromorphic]. This makes them attractive for real-time applications where power consumption and memory bandwidth are important factors. What is lacking, however, are proper training algorithms that can make SNNs perform on par with conventional artificial neural networks (ANNs). Today, there is a plethora of work detailing different algorithms or learning rules for implementing deep convolutional spiking architectures for complex visual recognition tasks [panda2016unsupervised, diehl2015fast, lee2016training, o2013real, kheradpisheh2018stdp, masquelier2009competitive, lee2018training, lee2018deep, srinivasan2018stdp, panda2017asp, diehl2015unsupervised, masquelier2007unsupervised, hunsberger2015spiking, bellec2018long, neftci2019surrogate, mostafa2017supervised, sengupta2019going, severa2019training, srinivasan2019restocnet]. Most algorithmic proposals focus on integrating the discrete or discontinuous spiking behavior of a neuron into a supervised or unsupervised learning rule. All proposals maintain overall sparse network activity (implying low-power operation) while improving the accuracy (implying better performance) on image recognition applications, mostly benchmarked against state-of-the-art datasets like Imagenet [deng2009imagenet], CIFAR [krizhevsky2010convolutional] and MNIST [lecun2010mnist].

Collating the previous works, we can broadly categorize the SNN training methodologies into three types: 1) conversion from artificial-to-spiking models [sengupta2019going, diehl2015fast], 2) Approximate Gradient Descent (AGD) based backpropagation with spikes (or accounting for temporal events) [lee2016training, neftci2019surrogate], and 3) unsupervised Spike Timing Dependent Plasticity (STDP) based learning [diehl2015unsupervised, srinivasan2018stdp]. Each technique presents some advantages and some disadvantages. While the conversion methodology has yielded state-of-the-art accuracies for large datasets like Imagenet on complex architectures (like VGG [simonyan2014very], ResNet [he2016deep]), the latency incurred to process the rate-coded image^1 is very high [pfeiffer2018deep, sengupta2019going, lee2019enabling]. AGD training addresses the latency concerns, yielding benefits compared to conversion [lee2019enabling, bellec2018long, neftci2019surrogate]. However, AGD still lags behind conversion in terms of accuracy for larger and more complex tasks. The unsupervised STDP training, while attractive for real-time hardware implementation on several emerging and non-von Neumann architectures [ankit2017resparc, sengupta2017encoding, wang2017memristors, van2017non, perez2010neuromorphic, linares2011spike], also suffers from accuracy/scalability deficiencies.

^1 SNNs process event data obtained with rate or temporal coding instead of real-valued pixel data. Rate coding is widely used for SNN applications, where a real-valued pixel is converted to a Poisson-distributed spike train with the spiking frequency proportional to the pixel value [diehl2015unsupervised]. That is, high-valued pixels output more spikes and vice versa.

From the above discussion, we can gather that addressing the scalability, latency and accuracy issues is key to achieving successful SNN methodologies. In this paper, we precisely address each of these issues through the lens of network architecture modification, softmax classifier adaptation and network hybridization with a mix of Rectified Linear Unit/ReLU (or ANN-like) and Leaky-Integrate-and-Fire (or SNN-like) neuronal activations in different layers.
II. Related Work, Motivation and Contributions
II-A. Addressing Scalability with Backward Residual Connections
The scalability limitations of STDP/AGD approaches arise from their depth incompatibility with the deep convolutional networks that are necessary for achieving competitive accuracies. SNNs forward-propagate spiking information and thus require sufficient spike activity across all layers of a deep network to conduct training. However, previous works have shown that spiking activity decreases drastically in the deeper layers of a network (which we term vanishing spike propagation), thereby causing training issues for networks with a large number of layers [kheradpisheh2018stdp, masquelier2009competitive, lee2016training, lee2018deep, srinivasan2018stdp, panda2016unsupervised, diehl2015fast].
From the ANN literature, it is known that depth is key to achieving improved accuracy for image recognition applications [lecun2015deep, szegedy2015going]. The question then arises: can we make the spiking network architecture less deep without compromising accuracy? Kubilius et al. [kubilius2018cornet] proposed Core Object Recognition or CORnet models (with what we term backward residual connections) that transform deep feedforward ANN models into shallow recurrent models. Fig. 1 illustrates the Backward Residual (BackRes) block architecture. It is similar to a recurrent network unrolled over time, with weights shared over repeated computations of the output. Specifically, the computations in the block (say, Conv1) are performed twice before the next layer (Conv2) processes its output. At unrolling step t=0, Conv1 processes the original input, while at t=1 the same Conv1 with repeated weights processes the output of the previous step. Note, the original input is processed only once, at t=0. For t>0, the block processes its own output from the previous step. Essentially, BackRes connections enable a network to achieve a similar logical depth as a deep feedforward network without introducing additional layers. The 1-convolutional-layer block in Fig. 1(a) achieves the logical depth of the 2-convolutional-layer block shown in Fig. 1(b) and is expected to achieve near iso-accuracy with it^2. The BackRes connection brings two key advantages: 1) reduction in the total number of parameters, since the same weights are reused over multiple unrolling steps; 2) diversification of the gradient update at each unrolled step due to the different input-output combinations.

^2 There is a limit to which BackRes compensates for depth with iso-accuracy. A VGG2x8 network with 2 convolutional layers unrolled 8 times may suffer accuracy loss compared to a VGG16 network with 16 convolutional layers. But VGG2x4 may yield near iso-accuracy with VGG8. Note, VGG2x4 and VGG8 have the same logical depth of 8 convolutional layers.
Our Contribution: We utilize BackRes connections and the diversified gradients to enable training of logically deep SNN models with AGD or STDP that otherwise cannot be trained (with multiple layers) due to vanishing spike propagation. Further, we show that converting a deep ANN (with BackRes blocks) into a deep SNN necessitates the use of multiple-threshold spiking neurons per BackRes block to achieve lossless conversion. We also demonstrate that BackRes SNN models (say, VGG2x4) yield both lower memory complexity (proportional to the number of weights/parameters) and sparser network activity with decreased computational overhead (proportional to total inference energy) compared to a deep architecture (say, VGG8) of similar logical depth, across different SNN training methodologies.
II-B. Addressing Latency with Stochastic Softmax (Stochmax)
In order to incur minimal loss during pixel-to-spike conversion with rate coding^1 (generally used in all SNN experiments), the number of time steps of the spike train has to be sufficiently large. This, in turn, increases the latency of computation. Decreasing the latency implies a larger loss in image-to-spike conversion, which can result in lower accuracy.
Across all SNN training methodologies, the final classifier or output layer which yields the prediction result is usually a softmax layer similar to that of an ANN. It is general practice in SNN implementations to collect all the accumulated spiking activity over a given time duration from the penultimate layer of a deep SNN and feed it to a softmax layer that calculates the loss and prediction based on the integrated spike information [lee2019enabling, masquelier2007unsupervised, lee2016training]. While softmax-classifier-based training has produced competitive results, the latency incurred is still significantly high. The question that arises here is, 'Can we compensate for reduced latency (or higher loss during image-to-spike conversion) by improving the learning capability of the SNN through augmenting the softmax functionality?' Lee et al. [lee2018dropmax] proposed a stochastic version of the softmax function (stochmax) that drops irrelevant (non-target) classes with adaptive dropout probabilities to obtain improved accuracy in ANN implementations. Stochmax can be viewed as a stochastic attention mechanism, where the classification process at each training iteration selects a subset of classes that the network has to attend to in order to discriminate against other false classes. For instance, while training on a cat instance, it is useful to train the model with more focus on discriminating against confusing classes, such as jaguar or tiger, instead of orthogonal classes like truck or whale. Softmax, on the other hand, collectively optimizes the model for the target class (cat) against all remaining classes (jaguar, tiger, truck, whale) in an equally weighted manner, thereby not involving attentive discrimination.

Our Contribution: Given that stochmax improves intrinsic discrimination capability, we utilize this stochastic regularization effect to decrease the training/inference latency in SNN frameworks. We show how standard AGD can be integrated with the stochmax classifier functionality to learn deep SNNs. Our analysis shows that deep SNNs of 3-4 layers trained with stochmax yield higher accuracy at lower latency than softmax baselines (for AGD training).
II-C. Addressing Accuracy with Network Hybridization
It is evident that accuracy loss due to vanishing spike propagation and input pixel-to-spike coding are innate properties of SNN design that can be addressed to a certain extent but cannot be completely eliminated. In order to achieve accuracy competitive with an ANN, we believe that a hybrid approach with a partly-artificial-and-partly-spiking neural architecture is most beneficial.
Our Contribution:
We demonstrate a hybrid neural architecture for AGD training methodologies. In the case of AGD, since training is performed end-to-end in a deep network, vanishing spike propagation becomes a limiting factor for achieving high accuracy. To address this, we use ReLU-based neurons in the initial layers and spiking leaky-integrate-and-fire neurons in the latter layers, and perform end-to-end AGD backpropagation. The idea in this scheme is to extract relevant activity from the input in the initial layers with ReLU neurons. This allows the spiking neurons in the latter layers to optimize the loss function and backpropagate gradients appropriately, based on relevant information extracted from the input without information loss.
Finally, we show the combined benefits of incorporating BackRes connections with stochmax classifiers and network hybridization across different SNN training methodologies, and show latency, accuracy and compute-efficiency gains. Through this work, our goal is to communicate good practices for deploying SNN frameworks that yield competitive performance and efficiency compared to their ANN counterparts.
III. SNN: Background and Fundamentals
III-A. Input and Neuron Representation
Fig. 2(a) illustrates a basic spiking network architecture with Leaky-Integrate-and-Fire (LIF) neurons processing rate-coded inputs^1. It is evident from Fig. 2(b) that converting pixel values to binarized spike data {1: spike, 0: no spike} in the temporal domain preserves the integrity of the image over several time steps. The dynamics of an LIF spiking neuron are given by

    τ dV_mem/dt = −V_mem + Σ_i w_i x_i(t)    (1)

The membrane potential V_mem integrates incoming spikes x_i(t) through weights w_i and leaks (with time constant τ) whenever it does not receive a spike. The neuron outputs a spike event when V_mem crosses a certain threshold V_thresh. A refractory period ensues after spike generation, during which the post-neuron's membrane potential is not affected. In some cases, Integrate-and-Fire (IF) neurons, where the leak value is 0, are also used for simplicity in simulations/hardware implementations. Note, while Fig. 2 illustrates a fully-connected network, SNNs can be constructed with a convolutional hierarchy comprising multiple layers. For the sake of notation, we will refer to networks with real-valued computations/ReLU neurons as ANNs and networks with spike-based computations/LIF or IF neurons as SNNs.
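To make the encoding and neuron dynamics concrete, the following minimal PyTorch sketch rate-codes a normalized input into a Poisson spike train and simulates one layer of LIF neurons in the spirit of Eqn. (1). The function names, the weight scale, and the multiplicative reset are illustrative choices, not the exact simulation setup used in our experiments.

import torch

def poisson_encode(image, num_steps):
    # Rate coding: at each time step a pixel (in [0, 1]) fires with
    # probability proportional to its value.
    return (torch.rand(num_steps, *image.shape) < image).float()

def lif_forward(spikes_in, weights, v_thresh=1.0, tau=100.0):
    # Membrane potential integrates weighted input spikes, decays with
    # time constant tau, and resets after emitting a spike (Eqn. (1)).
    num_steps = spikes_in.shape[0]
    v_mem = torch.zeros(weights.shape[0])
    spikes_out = []
    for t in range(num_steps):
        v_mem = v_mem * (1.0 - 1.0 / tau) + weights @ spikes_in[t]
        spike = (v_mem >= v_thresh).float()
        v_mem = v_mem * (1.0 - spike)        # reset neurons that fired
        spikes_out.append(spike)
    return torch.stack(spikes_out)

# Example: 100 time steps, 784 inputs, 10 LIF neurons
x = torch.rand(784)                          # normalized pixel values
spike_train = poisson_encode(x, num_steps=100)
w = 0.05 * torch.randn(10, 784)
out = lif_forward(spike_train, w)
print(out.sum(dim=0))                        # spike count per neuron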
III-B. Training Methodology
III-B1. Conversion from ANN-to-SNN
To achieve higher accuracy with SNNs, a promising approach has been to convert ANNs trained with standard backpropagation into spiking versions. Fundamentally, the goal here is to match the input-output mapping function of the trained ANN to that of the SNN. Recent works [diehl2015fast, sengupta2019going] have proposed weight normalization and threshold balancing methods to obtain minimal loss in accuracy during the conversion process. In this work, we use the threshold balancing method [sengupta2019going] that yields almost zero-loss ANN-to-SNN conversion performance for deep VGG/ResNet-like architectures on the complex Imagenet dataset.
In threshold balancing, after obtaining the trained ANN, the first step is to generate a Poisson spike train corresponding to the entire training dataset over a large simulation duration or time period (generally 2000-2500 time steps). The Poisson spike train allows us to record the maximum summation of weighted spike input (Σ_i w_i x_i) received by the first layer of the ANN. The threshold V_thresh of the first layer is then set to this maximum summation value. After the threshold of the first layer is set, the network is again fed the input data to obtain a spike train at the first layer, which serves as the input spike stream for the second layer of the network. This process of generating a spike train and setting the V_thresh value is repeated for all layers of the network. Note, the weights remain unchanged during this balancing process. For more details on this technique, please see [sengupta2019going].
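A heavily simplified sketch of this layer-wise procedure is given below; for brevity it treats already-balanced layers as memoryless threshold units (rather than full IF neurons with membrane state) and assumes a stack of linear layers, so it illustrates the bookkeeping of [sengupta2019going] rather than reproducing it exactly.

import torch

@torch.no_grad()
def balance_thresholds(layers, data_loader, num_steps=2000):
    # For each layer in turn: drive the network with Poisson spike trains,
    # propagate spikes through the already-balanced preceding layers, and
    # set the layer's threshold to the maximum weighted spike input it
    # receives. Weights are never modified.
    thresholds = []
    for idx, layer in enumerate(layers):
        v_max = 0.0
        for x, _ in data_loader:
            # Poisson spike train for the batch (pixels assumed in [0, 1])
            spikes = (torch.rand(num_steps, *x.shape) < x).float()
            for t in range(num_steps):
                a = spikes[t]
                for prev, v_th in zip(layers[:idx], thresholds):
                    a = (prev(a) >= v_th).float()  # memoryless IF approximation
                v_max = max(v_max, layer(a).max().item())
        thresholds.append(v_max)
    return thresholds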
While the conversion approach yields high accuracy, the computation cost is large due to the high processing latency. Reducing the time period from 2000 to 100/10 time steps causes a large decline in accuracy, as balancing fails to match the output rate of the SNN to that of the ANN.
III-B2. Approximate Gradient Descent (AGD)
The thresholding functionality of the spiking neuron is discontinuous/non-differentiable, making it incompatible with gradient-descent-based learning methods. Consequently, several training methodologies have been proposed to incorporate the temporal statistics of SNNs and overcome the gradient descent challenges [panda2016unsupervised, lee2016training, o2013real, lee2018deep, bellec2018long, neftci2019surrogate]. The main idea is to approximate the spiking neuron functionality with a continuously differentiable model, or to use surrogate gradients as a relaxed version of the real gradients to conduct gradient descent training. In our work, we use the surrogate gradient approach proposed in [neftci2019surrogate].
In [neftci2019surrogate], the authors showed that the temporal statistics incorporated in SNN computations can be implemented as a recurrent neural network computation graph (in PyTorch, Tensorflow [abadi2016tensorflow] frameworks) that can be unrolled to conduct Backpropagation Through Time (BPTT) [werbos1990backpropagation]. The authors in [neftci2019surrogate] also showed that using LIF computations in forward propagation and surrogate gradient derivatives during backpropagation allows SNNs (of moderate depth) to be trained efficiently end-to-end. Using a recurrent computational graph enables the use of BPTT for appropriately assigning the gradients with the chain rule in the temporal SNN computations. Here, for a given SNN, rate-coded input spike trains are presented and the output spiking activity at the final layer (which is usually a softmax classifier) is monitored for a given time period. At the end of the time period, the loss from the final softmax layer is calculated and the corresponding gradients are backpropagated through the unrolled SNN computation graph.
Fig. 3(a) illustrates the SNN computational graph. From an implementation perspective, we can write the dynamics of an LIF neuron in discrete time as

    U_i[t+1] = α U_i[t] + I_i[t] − S_i[t] V_thresh    (2)
    S_i[t] = Θ(U_i[t] − V_thresh)    (3)

Here, the output spike train S_i[t] of neuron i at time step t is a nonlinear function of the membrane potential U_i[t], where Θ is the Heaviside step function and V_thresh is the firing threshold. I_i[t] is the net input current and α is the decay constant (typically in the range (0,1)). During backpropagation, the derivative of S_i[t] is zero everywhere except at U_i[t] = V_thresh, where it is not defined. This all-or-nothing behavior of spiking neurons stops the gradient from flowing through the chain rule, making it difficult to perform gradient descent. Following [neftci2019surrogate, bellec2018long], we approximate the gradient of S_i[t] using the surrogate derivative

    ∂S_i[t]/∂U_i[t] ≈ γ max(0, 1 − |U_i[t] − V_thresh|)    (4)

where γ is a damping factor chosen to yield stable performance during BPTT. As shown in Fig. 3(b), the surrogate gradient (Eqn. (4)) replaces the zero derivative with an approximate linear function. For more details and insights on surrogate gradient descent training, please see [neftci2019surrogate, bellec2018long]. For convenience of notation, we will use AGD to refer to surrogate gradient descent training in the remainder of the paper.
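The surrogate derivative of Eqn. (4) can be implemented as a custom autograd function, as sketched below. The damping value of 0.3 is illustrative (the value of γ is left unspecified above), and the soft-reset LIF step in the trailing comment follows Eqns. (2)-(3).

import torch

class SurrogateSpike(torch.autograd.Function):
    # Heaviside spike in the forward pass; piecewise-linear surrogate
    # derivative (Eqn. (4)) in the backward pass.
    gamma = 0.3  # damping factor (illustrative value)

    @staticmethod
    def forward(ctx, v_mem, v_thresh):
        ctx.save_for_backward(v_mem)
        ctx.v_thresh = v_thresh
        return (v_mem >= v_thresh).float()

    @staticmethod
    def backward(ctx, grad_output):
        v_mem, = ctx.saved_tensors
        # gamma * max(0, 1 - |v_mem - v_thresh|): the zero derivative is
        # replaced by a triangular function centered at the threshold
        surrogate = SurrogateSpike.gamma * torch.clamp(
            1.0 - torch.abs(v_mem - ctx.v_thresh), min=0.0)
        return grad_output * surrogate, None

spike_fn = SurrogateSpike.apply

# One discrete-time LIF step (Eqns. (2)-(3)) inside a BPTT loop:
#   u = alpha * u + i_in - s * v_thresh   (soft reset)
#   s = spike_fn(u, v_thresh)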
Using end-to-end training with spiking computations enables us to lower the computation time period to 50-100 time steps. However, these methods are limited in terms of accuracy/performance and are also not suitable for training very deep networks.
III-B3. Unsupervised STDP Learning
STDP is a correlation-based learning rule that modulates the weight between two neurons based on the correlation between pre- and post-neuronal spikes. In this work, we use a variant of the STDP model used in [diehl2015unsupervised, srinivasan2018stdp, srinivasan2019restocnet], described as

    Δw = η (exp(−(t_post − t_pre)/τ) − offset)    (5)

where Δw is the weight update, η is the learning rate, t_post and t_pre are the time instants of the post- and pre-neuronal spikes, and τ is the STDP time constant; the offset term sets the boundary between potentiation and depression. Essentially, the weight is increased if a pre-neuron triggers a post-neuron to fire within a time period specified by τ, implying strong correlation. If the spike timing difference between the pre- and post-neurons is large, the weight is decreased. In [srinivasan2019restocnet], the authors implemented a minibatch version of STDP training for training convolutional SNNs in a layer-wise manner. For training the weight kernels of the convolutional layers shared between the input and output maps, the pre/post spike timing differences are averaged across a given minibatch and the corresponding STDP updates are performed. In this work, we perform minibatch training as in [srinivasan2019restocnet, lee2018deep]. We also use the uniform threshold adaptation and dropout scheme following [srinivasan2018stdp, lee2018deep, srinivasan2019restocnet] to ensure competitive learning with STDP. For more information on the learning rule, please see [srinivasan2018stdp, lee2018deep].
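A minimal sketch of the pair-based update of Eqn. (5) on batch-averaged spike times is given below; the offset depression term and the restriction to causal (post-after-pre) pairs are simplifying assumptions of this sketch, not the exact rule of [srinivasan2019restocnet].

import torch

def stdp_update(w, t_pre, t_post, lr=1e-3, tau=20.0, offset=0.4):
    # Pair-based STDP in the spirit of Eqn. (5): potentiate when the
    # post-spike closely follows the pre-spike, depress otherwise.
    # t_pre / t_post hold (batch-averaged) spike times per neuron.
    dt = (t_post.unsqueeze(1) - t_pre.unsqueeze(0)).clamp(min=0.0)
    dw = lr * (torch.exp(-dt / tau) - offset)  # exp below offset => depression
    return w + dw

# Example: 4 post-neurons, 6 pre-neurons, spike times in [0, 100)
w = 0.5 * torch.ones(4, 6)
t_pre = torch.randint(0, 100, (6,)).float()
t_post = t_pre[:4] + torch.randint(0, 40, (4,)).float()  # post fires after pre
w = stdp_update(w, t_pre, t_post)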
Generally, a network trained with layer-wise STDP (for the convolutional layers) is appended with a classifier (separately trained with backpropagation) to perform the final prediction. The authors in [srinivasan2019restocnet] showed that unsupervised STDP learning (even with binary/probabilistic weight updates) of a deep SNN, appended with a fully-connected layer of ReLU neurons, yields reasonable accuracy. However, similar to AGD, layer-wise STDP training is not scalable and yields restrictive performance for deep multi-layered SNNs.
IV. SNNs with BackRes Connections
BackRes allows a model to perform complex computation at a larger logical depth by means of repeated unrolling. From Fig. 1, it appears that the number of output and input channels in a BackRes block need to be equal for consistency. However, given a BackRes block with 64 input channels and 128 output channels (say, in a VGG2x4 network), one can randomly drop 64 channels from the output during the unrolled computations. Selecting the top-64 channels with maximal activity, or averaging the responses of the 128 channels into 64, also yields accuracy similar to that of a baseline network (VGG8). For convenience, in our experiments we use models with the same input/output channels and convert them to BackRes blocks; a minimal sketch of such a block is shown below. Next, we discuss how to integrate BackRes connections into the different SNN training methodologies.
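The following PyTorch sketch shows a ReLU BackRes block with equal input/output channels; the 3x3 kernel and the module/parameter names are illustrative assumptions.

import torch
import torch.nn as nn

class BackResBlock(nn.Module):
    # A backward-residual (BackRes) block: one convolutional layer whose
    # output is fed back to its own input, unrolled n times so that a
    # single set of weights attains the logical depth of n stacked layers.
    def __init__(self, channels, unroll_steps):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()
        self.unroll_steps = unroll_steps

    def forward(self, x):
        for _ in range(self.unroll_steps):
            x = self.act(self.conv(x))  # same weights reused at every step
        return x

# VGG2x4-style usage: Conv1 followed by one BackRes block unrolled 4 times
conv1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
block = BackResBlock(channels=64, unroll_steps=4)
y = block(conv1(torch.randn(1, 3, 32, 32)))

During backpropagation, autograd accumulates a gradient contribution for the shared convolution at every unrolled step, which is exactly the diversified gradient update discussed in Section II-A.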
Conversion: In this methodology, the SNN is constructed from a trained ANN. Hence, the ANN has to incorporate BackRes connections with repeated ReLU computations (similar to Fig. 1), which then need to be appropriately matched to spiking neuronal rates. Fig. 4 illustrates the conversion from ReLU to IF neurons. Here, since each unrolling step yields a different output rate, we need to use multiple-threshold IF neurons, where threshold V_thresh^1 is active for unrolling step t=1 and threshold V_thresh^2 for t=2. Thus, the number of thresholds equals the number of unrolling steps n. During threshold balancing for conversion (see Section III-B1), we need to set the thresholds for each layer, as well as for each unrolling step within a layer, separately. Interestingly, we find that V_thresh increases with the unrolling step, i.e., V_thresh^2 > V_thresh^1. An increasing threshold implies less spiking activity with each unrolling step, which reduces the overall energy cost (results shown in Section VIII-A).
AGD Training: In AGD training, an SNN is trained end-to-end, with the loss calculated at the output layer, using surrogate gradient descent on LIF neurons. The thresholds of all neurons are set to a user-defined value at the beginning of training and remain constant throughout the learning process. The weight updates inherently account for balanced spiking activity given the set thresholds. Adding BackRes blocks in this case is similar to training a recurrent model with unrolled computation, that is, treating the BackRes block as a feedforward network of n layers. During backpropagation, the gradients are passed through the unrolled graph of the BackRes block, where the same weights are updated n times.
Fig. 5 illustrates the backpropagation chain-rule update. It is worth mentioning that the LIF activity varies with every unrolling step, which eventually affects the weight update value at each step. As in conversion, we find that networks with BackRes blocks and shared weights (say, VGG2x2) generally have lower spiking activity than an equivalent-depth baseline network with separate layers (say, VGG4), yielding energy improvements. This implies that the repeated computation with unrolling gives rise to diverse activity that can possibly model diverse features, thereby allowing the network to learn even with less depth. Note, the BackRes network and the baseline network have the same V_thresh through all layers when trained with AGD. Further, AGD training has scalability limitations. For instance, a 7-layered VGG network fails to learn with end-to-end surrogate gradient descent. However, a network with BackRes blocks with a real depth of 5 layers and a logical depth of 7 layers can be trained easily and, in fact, yields competitive accuracy (results shown in Section VIII-A).
STDP Training: SNNs learnt with STDP are trained layer-wise in an iterative manner. Generally, in one iteration of training (comprising T timesteps, the time period of input presentation), a layer's weights are updated k times (k ≥ 1) depending upon the total spike activity in the pre/post-layer maps and the spiking correlation (as per Eqn. (5)). Since BackRes performs repeated computations of a single layer, in this case we make n weight updates for the given layer in each iteration of STDP training. From Fig. 5, we can gather that the pre/post correlation at unrolling step t=1 corresponds to the input and the layer's output, which determines the weight updates. For t>1, the layer's output from the previous step serves as the pre-spiking activity based on which the weights are updated again. Similar to AGD training, the overall activity at the output of the BackRes block changes with each unrolling step, which diversifies and improves the network's capability to learn. We also find reduced energy cost and better scalability toward networks of large logical depth that otherwise (with real depth) could not be trained in a layer-wise manner (results shown in Section VIII-A).
V. SNNs with Stochmax
Stochmax, as noted earlier, is a stochastic version of the softmax function that allows a network to discriminate better by focusing on, or giving importance to, confusing classes. A softmax classifier is defined as

    p(y = t | x; W) = exp(o_t) / Σ_{k=1}^{K} exp(o_k)    (6)

where t is the target label for input x, K is the number of classes, and o = [o_1, ..., o_K] is the logits score generated from the last feature vector h of a neural network parameterized by W. With stochmax, the objective is to randomly drop out classes in the training phase, with the motivation of learning an ensemble of classifiers in a single training iteration. From Eqn. (6), we can see that zeroing the exponential term for class k drops that class completely, even eliminating its gradients for backpropagation. Following this, stochmax is defined as

    p(y = t | x; W) = (z_t + ε) exp(o_t) / Σ_{k=1}^{K} (z_k + ε) exp(o_k),  z_k ~ Bernoulli(ρ_k)    (7)
Here, we drop out classes with probability (1 − ρ_k) based on Bernoulli trials. Further, to encode meaningful correlations in the retain probabilities ρ, we learn them as an output of the neural network, which takes the last feature vector h as input and outputs a sigmoidal value ρ(h) in (0,1). By learning ρ, we expect that highly correlated classes can be dropped or retained together. In essence, by dropping classes, we let the network learn different subproblems at each iteration. In SNN implementations, we replace the softmax classifier (Eqn. (6)) with the stochmax function (Eqn. (7)) at the output. Generally, the classifier layer is a non-spiking layer which receives accumulated input from the previous spiking layer, integrated over the T timesteps per training iteration. The loss is then calculated from the stochmax output and used to compute the gradients and perform the weight updates.
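A sketch of the stochmax loss in PyTorch is shown below. The mask-and-renormalize form follows Eqn. (7), but gradients are not propagated through the Bernoulli sampling here (the original dropmax formulation uses a relaxation to learn the retain probabilities), and the two linear heads in the usage example are illustrative.

import torch
import torch.nn.functional as F

def stochmax_loss(logits, rho_logits, target, eps=1e-2):
    logits = logits - logits.max(dim=1, keepdim=True).values  # stability
    rho = torch.sigmoid(rho_logits)             # learned retain probabilities
    z = torch.bernoulli(rho)                    # class-drop mask (z_k = 0 drops class k)
    z = z.scatter(1, target.unsqueeze(1), 1.0)  # never drop the target class
    masked = (z + eps) * torch.exp(logits)      # Eqn. (7) numerator terms
    log_prob = torch.log(masked / masked.sum(dim=1, keepdim=True))
    return F.nll_loss(log_prob, target)

# Example: accumulated spike counts from the penultimate SNN layer -> two heads
feats = torch.randn(8, 512)                     # integrated spiking activity
w_cls, w_rho = torch.randn(512, 10), torch.randn(512, 10)
loss = stochmax_loss(feats @ w_cls, feats @ w_rho, torch.randint(0, 10, (8,)))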
It is evident that AGD training, where the loss function at the classifier is used to update the weights at all layers of a deep SNN, will be affected by this softmax-to-stochmax replacement. We find that this attentive discrimination, which implicitly models many classifiers (providing different decision boundaries) per training iteration, allows an SNN to be trained even with lower latency (fewer time steps per training iteration or input presentation) while yielding high accuracy. Lower latency implies that pixel-to-spike input coding with Poisson rates will incur more loss. However, the deficit of the input coding is rectified by the improved classification.
In conversion, the ANN is trained separately and is completely dissociated from the spiking statistics. STDP, on similar lines, has spike-based training only in the intermediate feature extractor layers. The final classifier layers (which are trained separately) are appended to the STDP-trained layers and, again, do not influence the weights or activity learnt in the previous layers. Thus, while stochmax classifier inclusion slightly improves the accuracy of both conversion/STDP methods, they remain unaffected from a latency perspective.
VI. SNNs with Hybrid ReLU-and-LIF Neurons
The objective of a partly-artificial-and-partly-spiking neural architecture is to achieve improved accuracy. For the artificial-to-spiking conversion methodology, since training is performed with ReLU neuronal units and inference with spiking integrate-and-fire neurons, network hybridization is not necessary and will not add to the overall accuracy. Most works on STDP learning already use a hybrid network architecture, where STDP performs feature extraction via greedy layer-wise training of the convolutional layers of a deep network. Then, a one-layer fully-connected ANN (with ReLU neurons) is appended to the STDP-trained layers to perform the final classification. However, STDP is limited in its capability to extract the specific input features that are key for classification. We find that strengthening the ANN hierarchy of an STDP-trained SNN (either with stochmax or by deepening the ANN with multiple layers) yields a significant improvement in accuracy.
In AGD, since learning is performed end-to-end, vanishing spike propagation restricts the training of a deep many-layered network. For instance, a VGG7 network fails to train with AGD. In fact, even with residual or skip connections, which lead to a ResNet7-like architecture, the model is difficult to train. BackRes connections are a potential solution for training logically deep networks. However, to achieve better accuracy for deeper many-layered networks, there is a need to hybridize the layers of the network with ReLU and LIF neurons.
Fig. 6 illustrates the hybrid network configuration. We have ReLU neurons in the initial layers and temporal LIF neurons in the latter layers. During forward propagation, the input processed through the ANN-block is then propagated through the SNN-block, unrolled over different timesteps as a recurrent computational graph, to calculate the loss at the final output layer (which can be a softmax/stochmax function). In the backward phase, the gradient of the loss is propagated through the recurrent graph, updating the weights of the SNN-block with the surrogate linear approximation of the LIF functionality corresponding to the activity at each time step. The loss gradients calculated through BPTT are then passed through the ANN-block (which calculates the ANN weight updates with the standard chain rule). It is worth mentioning that setting up a hybrid network in a framework like PyTorch automatically performs recurrent graph unrolling for the SNN-block and standard feedforward graph construction for the ANN-block, enabling appropriate gradient calculation and weight updates. We would also like to note that we feed the output A of the ANN-block as-is (without any rate coding) to the SNN-block. That is, the unrolled SNN graph at each timestep receives the same real-valued input A. We find that processing A directly, instead of a rate-coded version of A, yields higher accuracy at nearly the same energy or computation cost.
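The forward pass of such a hybrid network can be sketched as follows; the specific layer sizes are illustrative, and the hard threshold inside the loop would be replaced by a surrogate spike function (as in Section III-B2) during training so that gradients can flow.

import torch
import torch.nn as nn

def hybrid_forward(ann_block, snn_conv, fc, x,
                   num_steps=50, v_thresh=1.0, alpha=0.95):
    # ReLU layers extract a real-valued feature map A once; the LIF layer
    # receives the same A at every time step, and the accumulated spikes
    # feed the classifier.
    a = ann_block(x)                    # real-valued, computed once
    v_mem, acc = None, 0.0
    for _ in range(num_steps):
        i_in = snn_conv(a)              # same input A at each time step
        v_mem = i_in if v_mem is None else alpha * v_mem + i_in
        s = (v_mem >= v_thresh).float() # surrogate spike fn when training
        v_mem = v_mem - s * v_thresh    # soft reset
        acc = acc + s
    return fc(acc.flatten(1))           # accumulated spikes -> classifier

ann_block = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                          nn.AvgPool2d(2))
snn_conv = nn.Conv2d(64, 128, 3, padding=1)
fc = nn.Linear(128 * 16 * 16, 10)
logits = hybrid_forward(ann_block, snn_conv, fc, torch.randn(1, 3, 32, 32))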
VII. Experiments
We conduct a series of experiments for each optimization scheme, primarily using CIFAR10 and Imagenet data on VGG and ResNet architectures of different depths, detailing the advantages and limitations of each approach. We implemented all SNN models in PyTorch and used the same hyperparameters (such as leak time constant τ, V_thresh value, input spike rate, etc.) as used in [sengupta2019going], [neftci2019surrogate], [srinivasan2019restocnet] for conversion, surrogate gradient descent and STDP training, respectively. In all experiments, we measure the accuracy, latency, energy or total compute cost, and total parameters for a given SNN implementation and compare it to the baseline ANN counterpart. Latency is measured as the total number of timesteps required to perform an inference for one input. In the case of an ANN, the inference latency is 1, while SNN latency is the total number of timesteps T over which an input is presented to the network. Note, in all our experiments, all ANNs and SNNs are trained for different numbers of epochs/iterations until maximum accuracy is achieved in each case.
Table I: Energy per operation (45nm CMOS [han2015learning])
Operation | Energy (pJ)
32b MULT Int | 3.1
32b ADD Int | 0.1
32b MAC Int | 3.2 (E_MAC)
32b AC Int | 0.1 (E_AC)
The total compute cost is measured as the total number of floating point operations (FLOPS), which is roughly equivalent to the number of multiply-and-accumulate (MAC) or dot-product operations performed per inference per input [han2015learning, han2015deep]. In the case of an SNN, since computation is performed over binary events, only accumulate (AC) operations are required to perform a dot product (without any multiplier). Thus, the SNN/ANN FLOPS counts consider AC/MAC operations, respectively. For a particular convolutional layer of an ANN/SNN, with C_in input channels, C_out output channels, weight kernel size k x k and output map size O x O, the total FLOPS count for the ANN/SNN is

    FLOPS_ANN = O^2 × k^2 × C_in × C_out    (8)
    FLOPS_SNN = FLOPS_ANN × ζ    (9)

Note, FLOPS_SNN in Eqn. (9) is calculated per timestep and considers the net spiking activity ζ, the fraction of firing neurons per layer per timestep. In general, ζ < 1 in an SNN on account of sparse event-driven activity, whereas ζ = 1 in ANNs. For the energy calculation, we specify each MAC or AC operation at the register transfer logic (RTL) level for 45nm CMOS technology [han2015learning]. Considering 32-bit weight values, the energy consumption of a 32-bit integer MAC/AC operation (E_MAC/E_AC) is shown in Table I. The total inference energy for an ANN/SNN, considering the FLOPS count across all layers l = 1, ..., L of a network, is defined as

    E_ANN = Σ_{l=1}^{L} FLOPS_ANN,l × E_MAC    (10)
    E_SNN = Σ_{l=1}^{L} FLOPS_SNN,l × E_AC × T    (11)
For the SNN, the energy calculation accounts for the latency incurred, as the rate-coded input spike train has to be presented over T timesteps to yield the final prediction result. Note, this calculation is a rather rough estimate which does not take into account memory access energy and other hardware architectural aspects such as input sharing or weight sharing. Given that the memory access energy remains the same irrespective of SNN or ANN network topology, the overall Energy-Efficiency (EE = E_ANN/E_SNN) will remain unaffected with or without memory access consideration. Finally, to show the advantage of utilizing BackRes connections, we also compute the total number of unique parameters (i.e., the total number of weights) in a network and calculate the compression ratio that BackRes blocks yield over conventional feedforward blocks of similar logical depth.
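The energy model of Eqns. (8)-(11) reduces to a few lines of arithmetic, sketched below with illustrative layer shapes and spiking activities ζ.

E_MAC, E_AC = 3.2e-12, 0.1e-12   # Joules per 32b integer MAC / AC (Table I)

def conv_flops(out_size, k, c_in, c_out):
    # Dot-product operations for one conv layer (Eqn. (8))
    return out_size**2 * k**2 * c_in * c_out

def ann_energy(layers):
    # Eqn. (10): one pass of MAC operations
    return sum(conv_flops(*l) for l in layers) * E_MAC

def snn_energy(layers, zeta, num_steps):
    # Eqns. (9), (11): per-timestep AC operations scaled by the spiking
    # activity zeta of each layer, summed over T timesteps
    return sum(conv_flops(*l) * z for l, z in zip(layers, zeta)) \
        * num_steps * E_AC

# Illustrative 2-layer network: (output size, kernel, C_in, C_out)
layers = [(32, 3, 3, 64), (32, 3, 64, 64)]
ee = ann_energy(layers) / snn_energy(layers, zeta=[0.1, 0.05], num_steps=100)
print(f"Energy-efficiency EE = {ee:.1f}x")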
VIII. Results

VIII-A. Impact of BackRes Connections
Table II: Accuracy (%) and total #parameters on CIFAR10 for conversion-based SNNs (topologies in Table III)
Model | ANN | SNN (lower latency) | SNN (higher latency) | #Parameters
VGG7 | 88.74 | 85.88 | 88.56 | 1.2M (1x)
VGG2x4 | 86.14 | 81.99 | 86.23 | 1.09M (1.1x)
VGG3x2 | 87.34 | 83.31 | 87.15 | 1.13M (1.06x)
Table III: Network configurations (CIFAR10)
Model | Configuration | BackRes
VGG7  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  Not Applicable 
–Conv3(64,64,3x3/1)– Conv4(64,64,3x3/1)–  
–Conv5(64,64,3x3/1)–Pool(2x2/2)–Pool(2x2/2)–  
–Pool(2x2/2)–FC1(2048,512)–FC2(512,10)  
VGG2x4  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  [Conv2] repeated 4 times 
–Conv2(64,64,3x3/1)–Conv2(64,64,3x3/1)–  
–Conv2(64,64,3x3/1)–Pool(2x2/2)–Pool(2x2/2)–  
–Pool(2x2/2)–FC1(2048,512)–FC2(512,10)  
VGG3x2  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  [Conv2–Conv3] repeated 2 times 
–Conv3(64,64,3x3/1)–Conv2(64,64,3x3/1)–  
–Conv3(64,64,3x3/1)–Pool(2x2/2)–Pool(2x2/2)–  
–Pool(2x2/2)–FC1(2048,512)–FC2(512,10) 
Conv(C_in, C_out, k×k/s) denotes a convolutional layer with C_in input channels, C_out output channels, and a k×k weight kernel with stride s. Pool(p×p/s) denotes an average pooling layer with pooling window size p×p and pooling stride s. FC(X,Y) denotes a fully-connected layer with X input nodes and Y output nodes. Layers with BackRes connections and repeated computations are highlighted in red.

Table IV: Top-1/Top-5 accuracy (%), parameter compression ratio and EE gain of VGG16 vs. VGG11x2 on Imagenet (recoverable entries: 1.975x parameter compression, 3.66x EE gain for VGG11x2; per-model accuracy values not recoverable)
Table V: Network configurations (Imagenet)
Model | Configuration | BackRes
VGG16  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  Not Applicable 
–Pool(2x2/2)–Conv3(64,128,3x3/1)–Pool(2x2/2)–Conv4(128,256,3x3/1)–  
–Conv5(256,256,3x3/1)–Conv6(256,256,3x3/1)–Pool(2x2/2)–Conv7(256,512,3x3/1)–  
–Conv8(512,512,3x3/1)–Conv9(512,512,3x3/1)–Conv10(512,512,3x3/1)–Conv11(512,512,3x3/1)–  
–Conv12(512,512,3x3/1)–Conv13(512,512,3x3/1)–Pool(2x2/2)–Pool(2x2/2)–  
–FC1(25088,4096)–FC2(4096,1000)  
VGG11x2  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  [Conv5] & [Conv7–Conv8–Conv9] repeated 2 times 
–Pool(2x2/2)–Conv3(64,128,3x3/1)–Pool(2x2/2)–Conv4(128,256,3x3/1)–Conv5(256,256,3x3/1)–  
–Conv5(256,256,3x3/1)–Pool(2x2/2)–Conv6(256,512,3x3/1)–Conv7(512,512,3x3/1)–  
–Conv8(512,512,3x3/1)–Conv9(512,512,3x3/1)–Conv7(512,512,3x3/1)–Conv8(512,512,3x3/1)–  
–Conv9(512,512,3x3/1)–Pool(2x2/2)–Pool(2x2/2)–  
–FC1(25088,4096)–FC2(4096,1000) 
Table VI: STDP training results on CIFAR10 (topologies in Table VII)
Model | ANN Acc. (%) | SNN Acc. (%) | #Parameters | EE (conv-only/full-network)
ResNet2 | 78.26 | 61.02 | 18.9M | 1.64x/1.16x
ResNet3 | 80.11 | 51.1 | 28.37M | 1.81x/1.28x
ResNet2x2 | 79.39 | 63.21 | 28.35M | 10.56x/1.78x
Table VII: Network configurations for Table VI
Model | Configuration | BackRes | Skip
ResNet2  Input–Conv1(3,36,3x3/1)–Conv2(36,36,3x3/1)–  Not Applicable  InputtoConv2, Conv1toFC1 
–Pool(2x2/2)–FC1(18432,1024)–FC2(1024,10)  
ResNet3  Input–Conv1(3,36,3x3/1)–  Not Applicable  InputtoConv2, Conv1toFC1, Conv2toFC1 
–Conv2(36,36,3x3/1)–  
–Conv3(36,36,3x3/1)–Pool(2x2/2)–  
–FC1(27648,1024)–FC2(1024,10)  
ResNet2x2  Input–Conv1(3,36,3x3/1)–  [Conv2] repeated 2 times  InputtoConv2, Conv1toFC1 
–Conv2(36,36,3x3/1)–  
–Conv2(36,36,3x3/1)–Pool(2x2/2)–  
–FC1(18432,1024)–FC2(1024,10) 
First, we show the impact of incorporating BackRes connections for conversion-based SNNs. Table II compares the accuracy and total #parameters across different network topologies (described in Table III) for ANN/SNN implementations on CIFAR10 data. For ease of understanding, we provide the unrolled computation graph of networks with BackRes blocks and repeated computations in Table III. For instance, VGG2x4 refers to a network with two unique convolutional layers (Conv1, Conv2), where Conv2 receives a BackRes connection from its output and is computed 4 times before processing the next layer, as depicted in Table III. Similarly, VGG3x2 refers to a network with 3 unique convolutional layers (Conv1, Conv2, Conv3) with the Conv2-Conv3 computation repeated 2 times in the order depicted in Table III. Note, VGG2x4/VGG3x2 achieve the same logical depth as a 7-unique-layered (including fully-connected layers) VGG7 network.
In Table II, we observe that the accuracy of ANNs with BackRes connections suffers minimal loss relative to the baseline ANN-VGG7 model. The corresponding converted SNNs with BackRes connections also yield near-baseline accuracy. It is evident that SNNs with higher computation time or latency yield better accuracy. While the improvement in total #parameters is minimal here, we observe a significant improvement in energy efficiency (EE, calculated using Eqns. (10), (11)) with BackRes additions, as shown in Fig. 7. Note, the EE of the SNNs shown in Fig. 7 is plotted by taking the corresponding ANN topology as the baseline (the EE of the VGG2x4 SNN is measured with respect to the VGG2x4 ANN). The large efficiency gains observed can be attributed to the sparsity obtained with event-driven spike-based processing, as well as the repeated computation achieved with BackRes connections. In fact, we find that the net spiking activity of a given layer decreases over repeated computations (implying a 'sparsifying effect') with each unrolling step (due to the increasing threshold per unrolling, see Section IV). Consequently, VGG2x4 with repeated computation yields a larger EE than VGG3x2.
Table IV illustrates the Top-1/Top-5 accuracy, parameter compression ratio and EE benefits observed with BackRes connections on the Imagenet dataset (for the topologies shown in Table V). Note, VGG11x2 (comprising 11 unique convolutional or fully-connected layers) with BackRes connections and repeated computations achieves the same logical depth of 16 layers as VGG16. The accuracy loss in VGG11x2 (SNN) is minimal, while it yields greater EE compared to VGG16 (SNN). We also find that for complex datasets like Imagenet, lower latency yields very low accuracy with or without BackRes computations.
Next, we evaluate the benefits of adding BackRes connections for SNNs trained with STDP. As discussed earlier, in STDP training the convolutional layers of a network are trained layer-wise with LIF neurons. Then, an ANN classifier is appended to the STDP-trained layers, wherein the ANN classifier is trained separately on the overall spiking activity collected at the SNN layers. Table VI shows the accuracy, #parameters and EE benefits of SNN topologies (listed in Table VII) with respect to corresponding ANN baselines. All ANN baselines are trained end-to-end with backpropagation and require the entire CIFAR10 training dataset (50,000 labelled instances). On the other hand, all SNNs require only 5000 instances for training the convolutional layers. The fully-connected classifier (comprising the FC layers in Table VII), appended separately to the STDP-learnt layers, is then trained on the entire CIFAR10 dataset.
Table VIII: AGD training results on CIFAR10 for different latencies (topologies in Table IX)
Model | ANN Acc. (%) | SNN Acc. (%, lower T) | SNN Acc. (%, higher T) | #Parameters | EE
VGG5 | 75.86 | 71.92 | 72.77 | 2.21M | 14.75x
VGG3x2 | 74.99 | 71.07 | 71.97 | 2.18M | 16.2x
VGG7 | 72.26 | – | – | 2.3M | –
VGG3x4 | 69.52 | 74.23 | 75.01 | 2.19M | 26.44x
Table IX: Network configurations for Table VIII
Model | Configuration | BackRes
VGG5  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  Not Applicable 
–Pool(2x2/2)–Conv3(3,64,3x3/1)–Conv4(64,64,3x3/1)–  
–Pool(2x2/2)–FC1(4096,512)–FC2(512,10)  
VGG3x2  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  [Conv3] repeated 2 times 
–Pool(2x2/2)–Conv3(3,64,3x3/1)–Conv3(64,64,3x3/1)–  
–Pool(2x2/2)–FC1(4096,512)–FC2(512,10)  
VGG7  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  Not Applicable 
–Pool(2x2/2)–Conv3(3,64,3x3/1)–Conv4(64,64,3x3/1)–  
–Conv5(3,64,3x3/1)–Conv6(64,64,3x3/1)–  
–Pool(2x2/2)–FC1(4096,512)–FC2(512,10)  
VGG3x4  Input–Conv1(3,64,3x3/1)–Conv2(64,64,3x3/1)–  [Conv3] repeated 4 times 
–Pool(2x2/2)–Conv3(3,64,3x3/1)–Conv3(64,64,3x3/1)–  
–Conv3(3,64,3x3/1)–Conv3(64,64,3x3/1)–  
–Pool(2x2/2)–FC1(4096,512)–FC2(512,10) 
From Table VI, we observe that the SNN accuracy is considerably lower than the corresponding ANN accuracy. This can be attributed to the limited ability of STDP training to extract relevant features in an unsupervised manner. In fact, deepening the network from ResNet2 to ResNet3 causes a decline in accuracy, corroborating the results of previous works [srinivasan2019restocnet]. However, adding a BackRes connection in ResNet2x2, which achieves the same logical depth as ResNet3, improves the accuracy of the network while yielding significant gains in terms of EE. For EE, we show the gains considering the full network (including spiking convolutional and ReLU FC layers), as well as the gains considering only the spiking convolutional layers. The spiking layers, on account of event-driven sparse computing, exhibit higher efficiency than the full network. Interestingly, ResNet2x2 yields higher efficiency at the spiking layers, which further supports the fact that BackRes connections have a 'sparsifying' effect on the intrinsic spiking dynamics of the network. This result establishes the advantage of BackRes connections in enabling the scalability of STDP-based SNN training methodologies toward larger logical depth while yielding both accuracy and efficiency improvements.
For AGD training, BackRes additions yield both accuracy and scalability benefits. Table VIII shows the accuracy, #parameters and EE benefits of SNN topologies (listed in Table IX) for different latencies with respect to corresponding ANN baselines. Similar to the conversion/STDP results, end-to-end AGD training with spiking statistics (using surrogate gradient descent) for VGG5, and for VGG3x2 of equivalent logical depth as VGG5, yields minimal accuracy loss and large EE gains in comparison to the corresponding ANNs. However, for a network with 7-layered depth, AGD fails to train an SNN end-to-end due to vanishing forward spike propagation. Interestingly, a VGG3x4 network, with a similar logical depth of 7 layers as VGG7 and repeated computations, not only trains well with AGD but also yields higher accuracy than both the VGG7 and VGG3x4 ANN baselines. This implies that LIF neurons with spiking statistics have the potential of yielding a more diversified computation profile with BackRes unrolling than ReLU neurons. In addition to the accuracy and scalability benefits, SNNs with BackRes connections yield high EE benefits, as shown in Table VIII (due to the inherent 'sparsifying' effect), which points to their suitability for low-power hardware implementations.
Table X: Accuracy (%) of a VGG3 SNN trained with AGD at three increasing latencies T
VGG3 (stochmax) | 50.4 | 65.24 | 70.2
VGG3 (softmax) | 49.1 | 64.44 | 67.1
Table XI: AGD training with stochmax classifiers on CIFAR10
Model | Acc. (%, lower T) | Acc. (%, higher T) | EE | EE gain over softmax (Table VIII)
VGG5 | 75.26 | 75.92 | 23.83x | 1.62x
VGG3x2 | 72.62 | 73.17 | 31.88x | 1.97x
Table XII: STDP-trained ResNets with strengthened ConvNN classifiers on CIFAR10
Model | ANN Acc. (%) | SNN Acc. (%) | EE (conv-only/full-network)
ResNet2 | 83.5 | 77.92 | 1.64x/1.08x
ResNet3 | 79.85 | 76.52 | 1.81x/1.69x
ResNet2x2 | 83.2 | 80.1 | 10.56x/2.14x
Table XIII: ConvNN classifier configurations appended to the STDP-trained layers
Model | Configuration
ConvNN (for ResNet2/ResNet2x2) | Input–Conv1(72,72,3x3/1)–Conv2(72,72,3x3/1)–Pool(2x2/2)–Conv3(72,144,3x3/1)–Conv4(144,144,3x3/1)–Pool(2x2/2)–FC1(2304,1024)–FC2(1024,10)
ConvNN (for ResNet3) | Input–Conv1(108,108,3x3/1)–Conv2(108,108,3x3/1)–Pool(2x2/2)–Conv3(108,216,3x3/1)–Conv4(216,216,3x3/1)–Pool(2x2/2)–FC1(3456,1024)–FC2(1024,10)
VIII-B. Impact of Stochmax
Stochmax is essentially a classification-performance improvement technique that can also result in latency benefits. First, we show the impact of incorporating the stochmax classifier for SNNs trained with AGD. Table X compares the accuracy of a small VGG3 SNN trained with AGD for different latencies T. Here, the final layer of the VGG3 topology is implemented as a softmax or stochmax classifier. We observe a consistent improvement in accuracy for the stochmax implementations. In Table XI, we show the accuracy results for SNNs of VGG5/VGG3x2 topology with stochmax classifiers. It is evident that stochmax improves the performance compared to the softmax implementations in Table VIII. In addition to accuracy, we also observe a larger gain in energy efficiency with the stochmax implementations. We find that conducting end-to-end AGD training with the stochmax loss leads to sparser spiking activity across the different layers of a network compared to softmax. We believe this might be responsible for the efficiency gains. Further theoretical investigation is required to understand the role of loss optimization in a temporal processing landscape in decreasing the spiking activity without affecting the gradient values. The results of Tables X and XI suggest stochmax as a viable technique for practical applications where we need to obtain higher accuracy and energy benefits with constrained latency or processing time.
Inclusion of the stochmax classifier in SNNs trained with conversion/STDP results in a slight improvement in accuracy on CIFAR10 data (for the VGG7/ResNet3 topologies from Tables II and VI, respectively). Since stochmax is dissociated from the training process in both STDP and conversion, the latency and energy efficiency results are not affected. Note, all results shown in Tables II-VIII use the softmax classifier.
VIII-C. Impact of Hybridization
Table XIV: Hybrid ReLU/LIF networks trained with AGD on CIFAR10 (topologies in Table XV)
Model | ANN Acc. (%) | Hybrid SNN Acc. (%) | #Parameters | EE
VGG9 | 83.33 | 84.98 | 5.96M | 3.98x
VGG8x2 | 83.49 | 84.26 | 5.37M | 4.1x
Table XV: Network configurations for Table XIV
Model | Configuration | BackRes
VGG9  Input–Conv1(3,64,3x3/1)ReLU–  Not Applicable 
–Conv2(64,64,3x3/1)ReLU–Pool(2x2/2)–  
–Conv3(64,128,3x3/1)LIF–  
–Conv4(128,128,3x3/1)LIF–Pool(2x2/2)–  
–Conv5(128,256,3x3/1)LIF–  
–Conv6(256, 256,3x3/1)LIF–  
–Conv7(256,256,3x3/1)LIF–Pool(2x2/2)–  
–FC1(4096,1024)LIF–FC2(1024,10)  
VGG8x2  Input–Conv1(3,64,3x3/1)ReLU–  [Conv6] repeated 2 times 
–Conv2(64,64,3x3/1)ReLU–Pool(2x2/2)–  
–Conv3(64,128,3x3/1)LIF–  
–Conv4(128,128,3x3/1)LIF–Pool(2x2/2)–  
–Conv5(128,256,3x3/1)LIF–  
–Conv6(256, 256,3x3/1)LIF–  
–Conv6(256,256,3x3/1)LIF–Pool(2x2/2)–  
–FC1(4096,1024)LIF–FC2(1024,10) 
Except for conversion, both STDP and AGD training techniques fail to yield high accuracy for deeper network implementations. While BackRes connections and stochmax classifiers improve the accuracy, an SNN still lags behind its corresponding ANN in terms of performance. To improve the accuracy, we employ hybridization with partially ReLU and partially LIF neurons for the SNN implementations.
For STDP, we strengthen the classifier that is appended to the STDP-trained convolutional layers to get better accuracy. Essentially, we replace the fully-connected layers of the topologies in Table VII with a larger convolutional network (ConvNN; topology description in Table XIII). Table XII shows the accuracy and EE results for the STDP-trained ResNet topologies, now appended with the corresponding ConvNN and compared to a similar ANN baseline (say, ResNet2 ANN corresponds to an ANN with the ResNet2 topology, with the FC layers replaced by the ConvNN classifier from Table XIII). Strengthening the classifier hierarchy now results in higher accuracies, comparable to the ANN performance of Table VI, while still lagging behind the ANN baseline of similar topology. However, the accuracy gap between ANN and SNN reduces quite significantly (from Table VI to Table XII). Similar to Table VI, for EE, the gains considering only the spiking layers are greater than those of the full network.
For AGD, as discussed in Section VI, we hybridize our network with initial layers comprising ReLU neurons and latter layers comprising LIF neurons, and perform end-to-end gradient descent. Table XIV shows the accuracy and EE gain results for VGG9 and VGG8x2 models (topology descriptions in Table XV) with BackRes connections trained using hybridization on CIFAR10 data. Note, only the first two convolutional layers use ReLU activations, while the remaining layers use the LIF functionality. In addition, we use a stochmax classifier at the end, instead of softmax, to get better accuracy. Earlier, we saw that a 7-layered network could not be trained with AGD (see Table VIII). The inclusion of ReLU layers now allows a deep 9-layered network to be trained end-to-end while yielding considerable energy-efficiency gains, with slightly improved accuracy in comparison to a corresponding ANN baseline (note, the ANN baseline has ReLU activations in all layers). To have a fair comparison between ANN and SNN, the ANN baselines are trained without any batch normalization or other regularization techniques. Including batch normalization and dropout in ANN training yields accuracy that is still fairly close to that obtained with the SNN implementations. To calculate the EE gains of the hybrid SNN implementations, we consider MAC energy for the ReLU layers and AC energy for the remaining LIF layers (Table XIV). VGG8x2 achieves an equivalent logical depth as VGG9. Similar to earlier results, VGG8x2 yields a slightly higher EE benefit than VGG9 on account of the 'sparsifying' effect induced by the BackRes computations.

Table XVI shows the results of a VGG13 model (topology description in Table XVII) with hybrid ReLU/LIF neuron layers, trained on the Imagenet dataset with end-to-end gradient descent. Interestingly, for Imagenet data, we had to use ReLU neuronal activations both at the beginning and at the end of the network, as shown in Table XVII. After some trial-and-error analysis, we found that training with more LIF neuronal layers for a complex dataset like Imagenet did not yield good performance. In the case of the VGG13 network, converting the middle two layers into spiking LIF neurons yielded iso-accuracy with that of a fully-ReLU-activation-based ANN. Even with only a minor portion of the network offering sparse neuronal spiking activity, we still observe an improvement in EE with our hybrid model over the standard ANN. It is also worth mentioning that the spiking LIF neurons of the hybrid VGG13 network operate at a lower processing latency. We believe that using ReLU activations in the majority of the VGG13 network enabled us to process the spiking layers at lower latency. We can expect higher gains by adding suitable backward residual connections in the spiking layers to compensate for depth. It is evident that hybridization incurs a natural tradeoff between the number of spiking/ReLU layers, processing latency, accuracy and energy efficiency. Our analysis shows that hybridization can enable end-to-end backpropagation training for large-scale networks on complex datasets while yielding efficiency gains. Further investigation is required to evaluate the benefits of hybridization in large-scale settings by varying the tradeoff parameters.
Table XVI: Top-1/Top-5 accuracy (%) and EE gain of the hybrid VGG13 on Imagenet (recoverable entry: 1.31x EE gain; accuracy values not recoverable)
Table XVII: VGG13 hybrid ReLU/LIF configuration (Imagenet)
Model | Configuration | BackRes
VGG13  Input–Conv1(3,64,3x3/1)ReLU–  Not Applicable 
–Conv2(64,64,3x3/1)ReLU–Pool(2x2/2)–  
–Conv3(64,128,3x3/1)ReLU–  
–Conv4(128,128,3x3/1)ReLU–Pool(2x2/2)–  
–Conv5(128,256,3x3/1)ReLU–  
–Conv6(256,256,3x3/1)ReLU–Pool(2x2/2)–  
–Conv7(256, 512,3x3/1)LIF–  
–Conv8(512,512,3x3/1)LIF–Pool(2x2/2)–  
–Conv9(512,512,3x3/1)ReLU–  
–Conv10(512,512,3x3/1)ReLU–Pool(2x2/2)–  
–FC1(25088,4096)ReLU–FC2(4096,4096)  
–FC3(4096,1000) 
IX. Discussion & Conclusion
With the advent of the Internet of Things (IoT) and the necessity to embed intelligence in the devices that surround us (such as smart phones and health trackers), there is a need for novel computing solutions that offer energy benefits while yielding competitive performance. In this regard, SNNs, driven by sparse event-driven processing, hold promise for efficient hardware implementation of real-world applications. However, training SNNs for large-scale tasks still remains a challenge. In this work, we outlined the limitations of the three widely used SNN training methodologies (conversion, AGD training and STDP) in terms of scalability, latency and accuracy, and proposed novel solutions to overcome them.
We propose using backward residual (or BackRes) connections to achieve logically deep SNNs, with shared network computations and features, that can approach the accuracy of fully-deep SNNs. We show that all three training methods benefit from the inclusion of BackRes connections in the network configuration, especially gaining in terms of energy efficiency while yielding iso-accuracy with an ANN of similar configuration. We also find that BackRes connections induce a sparsifying effect on the overall network activity of an SNN, thereby expending lower energy than an equivalent-depth full-layered SNN. In summary, BackRes connections address the scalability limitations of an SNN that arise from the depth incompatibility and vanishing spike propagation of the different training techniques.
We propose using stochastic softmax (or stochmax) to improve the prediction capability of an SNN, specifically for the AGD training method that uses end-to-end spike-based backpropagation. We find a significant improvement in accuracy with stochmax inclusion, even for lower latency or processing time periods. Further, stochmax-loss-based backpropagation results in lower spiking activity than the conventional softmax loss. Combining the advantages of lower latency and sparser activity, we get higher energy-efficiency improvements with stochmax SNNs as compared to softmax SNNs. Conversion/STDP training do not benefit in terms of efficiency and latency from stochmax inclusion, since the training in these cases is performed fully/partially with ANN computations.
The third technique we propose is a hybrid architecture with partly-ReLU-and-partly-LIF computations in order to improve the accuracy obtained with the STDP/AGD training methods. We find that hybridization leads to improved accuracy at lower latency for AGD/STDP methods, even circumventing their inability to train very deep networks. The accuracies observed on CIFAR10 with STDP/AGD on hybrid SNN architectures are in fact comparable to or better than ANNs of similar configuration. We would like to note that hybridization also offers a significant energy-efficiency improvement over a fully ReLU-based ANN. In fact, using hybridization, we trained a deep VGG13 model on Imagenet data and obtained iso-accuracy with its ANN counterpart with reasonable energy-efficiency gains. There are interesting possibilities of performing distributed edge-cloud intelligence with such hybrid SNN-ANN architectures, where the SNN layers can be implemented on resource-constrained edge devices and the ANN layers on the cloud.
Finally, SNNs are a prime candidate today for enabling low-powered ubiquitous intelligence. In this paper, we show the benefit of using good practices while configuring spiking networks to overcome their inherent training limitations, while gaining in terms of energy efficiency, latency and accuracy for image recognition applications. In the future, we will investigate extending the proposed methods to training recurrent models for natural language or video processing tasks. Further, conducting reinforcement learning with the above techniques to analyze the advantages that SNNs offer is another possible direction for future work.
Acknowledgment
This work was supported in part by C-BRIC, Center for Brain-inspired Computing, a JUMP center sponsored by DARPA and SRC, by the Semiconductor Research Corporation, the National Science Foundation, Intel Corporation, the Vannevar Bush Faculty Fellowship and the U.K. Ministry of Defense under Agreement Number W911NF-16-3-0001.