Deep Networks Incorporating Spiking Neural Dynamics

12/17/2018, by Stanisław Woźniak, et al.

Neural networks have become the key technology of Artificial Intelligence (AI) that contributed to breakthroughs in several machine learning tasks, primarily owing to advances in Artificial Neural Networks (ANNs). Neural networks incorporating spiking neurons have held great promise because of their brain-inspired temporal dynamics and high power efficiency; however, scaling out and training deep Spiking Neural Networks (SNNs) have remained significant challenges, and they still fall behind ANNs in terms of accuracy on many traditional learning tasks. In this paper, we propose an alternative perspective on the spiking neuron as a particular ANN construct called the Spiking Neural Unit (SNU). Specifically, the SNU casts the stateful temporal spiking neuron, of the LIF type, as a recurrent ANN unit, analogous to LSTMs and GRUs. Moreover, by introducing the concept of a proportional reset, we generalize the LIF dynamics to a variant called the soft SNU (sSNU). The proposed family of units has a series of benefits. Firstly, ANN training methods, such as backpropagation through time, naturally apply to them and enable a simple approach for successful training of deep spiking networks. For example, a 4- or 7-layer SNN trained on a temporal version of the MNIST dataset provides higher accuracy in comparison to RNNs, LSTM- and GRU-based networks of similar architecture. Secondly, for the task of polyphonic music prediction on the JSB dataset, an sSNU-based network surpasses the state-of-the-art performance of RNNs, LSTM- and GRU-based networks. The novel family of units introduced in this paper bridges the biologically-inspired SNNs with ANNs. It provides a systematic methodology for implementing and training deep networks incorporating spiking dynamics that achieve accuracies as high as, or higher than, those of state-of-the-art ANNs. Thus, it opens a new avenue for the widespread adoption of SNNs in practical applications.


I Introduction

The research interest in neural networks has grown considerably over the recent years owing to their remarkable success in many applications. Record accuracy was obtained with deep networks applied to image classification krizhevsky_imagenet_2012 ; szegedy_going_2015 ; he_deep_2016 , and new architectures enabled solving cognitively-challenging tasks, such as multiple object detection trained end-to-end redmon_you_2016 , pixel-level segmentation of images he_mask_2017-1 , or even playing computer games based on raw screen pixels mnih_human-level_2015 . Moreover, sequence-to-sequence models enabled language translation sutskever_sequence_2014 , and the combination of convolutional layers with recurrent layers has led to language-independent end-to-end speech recognition amodei_deep_2016 . These architectures surpassed the performance of traditional domain-specific models and established neural networks as the standard approach in the industry.

Although the term neural networks elicits associations with the sophisticated functioning of the brain, the advances in the field were obtained by extending the original simple ANN paradigm of the 1950s to complex deep neural networks trained with backpropagation. The ANNs take only high-level inspiration from the structure of the brain, comprising neurons interconnected with synapses, which results in human-like performance, albeit at a much higher power budget than the 20 W required by the human brain. At the same time, the neuroscientific community – whose focus is understanding the brain dynamics – has been exploring architectures with more biologically-realistic dynamics, such as SNNs that in their simplest form consist of Leaky Integrate-and-Fire (LIF) neurons dayan_theoretical_2005 ; eliasmith_how_2013 ; gerstner_neuronal_2014 . The SNN paradigm encompasses rich temporal dynamics, due to the brain-inspired LIF neurons, and promises low power consumption, due to the use of sparse asynchronous voltage pulses, called spikes, to compute and propagate information. Thus, SNNs are considered to be the next generation of neural networks beyond ANNs maass_networks_1997 , with advantages stemming from their efficient implementation, rich dynamics and novel learning capabilities.

From an implementation perspective, the inherent characteristics of SNNs have led to highly-efficient computing architectures with collocation of memory and processing units. For example, incorporation of the design principles of the SNN paradigm in the field of neuromorphic computing mead_neuromorphic_1990 has led to the development of non-von Neumann systems with significantly increased parallelism and reduced energy consumption, demonstrated in chips such as FACETS/BrainScaleS meier_mixed-signal_2015 , Neurogrid benjamin_neurogrid:_2014 , IBM’s TrueNorth cassidy_real-time_2014 and Intel’s Loihi davies_loihi:_2018 . Moreover, recent breakthroughs in the area of memristive nanoscale devices have enabled further improvements in the area and energy efficiency of mixed digital-analog implementations of synapses and spiking neurons kuzum_synaptic_2013 ; tuma_stochastic_2016 ; wozniak_learning_2016 ; pantazi_all-memristive_2016 ; tuma_detecting_2016 .

From a neural and synaptic dynamics perspective, large-scale simulations following the neuroscientific insights were performed to explore the activity patterns of SNNs markram_blue_2006 ; izhikevich_large-scale_2008 ; ananthanarayanan_cat_2009 ; markram_human_2012 , or to address concrete cognitive problems eliasmith_large-scale_2012 ; rasmussen_spiking_2014 . However, the most appealing models eliasmith_large-scale_2012 involved complex task-specific architectures, and the execution of their dynamics could take up to 2.5 hours and 24 GB of RAM to calculate a one-second response. Owing to the lack of a holistic understanding of large-scale activity patterns and the complex design of cognitive simulations, further bottom-up research on simpler architectures is needed. Specifically, approaches such as that in which the competitive dynamics of inhibitory circuits were abstracted to Winner-Take-All (WTA) architectures maass_computational_2000 , or that in which the rich recurrent dynamics were exploited in Liquid State Machines maass_real-time_2002 , provide direct examples of cases where SNN dynamics can enhance the computational capabilities.

Finally, from a learning perspective, biologically-inspired unsupervised Hebbian learning rules, such as Spike-Timing-Dependent Plasticity (STDP) markram_regulation_1997 ; song_competitive_2000 and its extension to Fatiguing STDP (FSTDP) moraitis_fatiguing_2017 , were applied for correlation detection song_competitive_2000 ; gutig_learning_2003 ; tuma_stochastic_2016 ; wozniak_learning_2016 ; pantazi_all-memristive_2016 ; tuma_detecting_2016 ; moraitis_fatiguing_2017 or the sampling of high-frequency signals tuma_stochastic_2016 . STDP in WTA networks was applied for handwritten digit recognition querlioz_simulation_2011 ; diehl_unsupervised_2015 ; sidler_unsupervised_2017 , but yielded limited accuracy. The reason is an insufficient generalization of the internal representation of knowledge in WTA architectures, which effectively implement a nearest-neighbor (k-NN) algorithm wozniak_THESIS_2017 . Further improvements in the internal representation were obtained through feature learning bichler_extraction_2012 ; burbank_mirrored_2015 ; wozniak_IJCNN_2017 .

However, despite the great promise of SNNs in terms of efficient implementation, rich dynamics and learning capabilities, it has been unclear how to effectively train large generic networks of spiking neurons to reach the accuracy of ANNs on common machine learning tasks. The main reason was attributed to the lack of a scalable supervised SNN learning algorithm analogous to backpropagation (BP) in ANNs. It was shown that porting the weights from ANNs trained with BP to SNNs enhances their performance oconnor_real-time_2013 ; diehl_conversion_2016 ; rueckauer_conversion_2018 . Moreover, there have been multiple attempts to develop supervised learning approaches inspired by the idea of BP bohte_error-backpropagation_2002 ; anwani_normad_2015 , to explore how STDP could perform BP bengio_stdp-compatible_2017 ; tavanaei_bp-stdp:_2017 , or how to directly implement BP. Examples include: using a differentiable approximation of the LIF response function hunsberger_spiking_2015 , applying BP instead of STDP when output spikes are emitted esser_convolutional_2016 , deriving BP formulas on lowpass-filtered neuronal activities lee_training_2016 , or using the concept of backpropagation through time with differentiable approximations of temporal network dynamics huh_gradient_2017 ; bellec_long_2018 . Despite many performance improvements, these state-of-the-art architectures usually do not surpass the accuracy of ANNs on common datasets and involve sophisticated implementations.

In this paper, we take a different perspective on the relationship between ANNs and SNNs. We reflect on the nature of the spiking neural dynamics and propose to unify the SNN model with its ANN counterpart. In the first part of the paper, we focus on the LIF dynamics and provide a constructive proof that a spiking neuron can be transformed into a simple novel recurrent ANN unit called the Spiking Neural Unit (SNU), which shares many similarities with the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). Moreover, we generalize the LIF dynamics to a non-spiking case by introducing a so-called soft variant of the SNU (sSNU). In the second part, we address the learning challenges. We show that for the proposed units the existing ANN frameworks can be naturally reused for training with backpropagation through time, enabling a very easy implementation and successful learning of any SNN architecture. Finally, we demonstrate the efficacy of our approach by training deep SNNs with up to seven layers and analyzing their performance on two tasks: handwritten digit recognition and polyphonic music prediction. The results obtained from the SNU family-based networks show competitive performance compared with RNNs, LSTM- and GRU-based networks – the state-of-the-art models commonly used in tasks involving temporal data.

II Neuron models and temporal data

Conventional ANNs are feed-forward architectures implementing layers of neurons described by the equation

$$\mathbf{y} = f(W\mathbf{x} + \mathbf{b}) \qquad (1)$$

where $\mathbf{x}$ and $\mathbf{y}$ are the input and output vectors, $W$ is the weights matrix, $\mathbf{b}$ is a vector of biases and $f$ is the activation function, such as a sigmoid $f(z)=1/(1+e^{-z})$ or the rectified linear function $f(z)=\max(0,z)$, used in Rectified Linear Units (ReLUs) nair_rectified_2010 .

Such ANNs are stateless and have no inherent notion of time, yet through an appropriate transformation of the last $n$ temporal inputs into a large spatial input vector waibel_modular_1989 , it is possible to utilize these neuronal models with temporal data. However, this solution is computationally inefficient, because a larger model with $n$-times more inputs needs to be evaluated during each time step, and the system needs to implement shift buffering or delay lines to provide the required past inputs.
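
For illustration, the windowing transformation described above can be sketched in NumPy as follows (the helper name window_inputs and the example dimensions are illustrative only):

```python
import numpy as np

def window_inputs(x_stream, n):
    """Stack the last n temporal input vectors into one large spatial vector
    per time step, so that a stateless ANN can be applied to temporal data.

    x_stream: array of shape (T, d) -- T time steps of d-dimensional inputs.
    Returns an array of shape (T - n + 1, n * d)."""
    T, d = x_stream.shape
    return np.stack([x_stream[t - n + 1:t + 1].reshape(-1)
                     for t in range(n - 1, T)])

# Example: 100 steps of 4-dimensional inputs, window of n = 3
x_windowed = window_inputs(np.random.rand(100, 4), n=3)  # shape (98, 12)
```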

A Recurrent Neural Network (RNN) is an extension of an ANN that is capable of operating directly on temporal data. This is achieved through the introduction of recurrent connections between the neurons within each layer

$$\mathbf{y}_t = f(W\mathbf{x}_t + H\mathbf{y}_{t-1} + \mathbf{b}) \qquad (2)$$

where $t$ indicates the discrete time and $H$ denotes the matrix of recurrent weights. As a result, streams of temporal data may be directly fed into such networks, which maintain the temporal context in the transient activation values circulating through the recurrent connections.
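
For illustration, a minimal NumPy sketch of one time step of Eq. 2 is given below (the function name and the sigmoid default are illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_step(x_t, y_prev, W, H, b, f=sigmoid):
    """One discrete time step of a recurrent layer (cf. Eq. 2):
    y_t = f(W x_t + H y_{t-1} + b)."""
    return f(W @ x_t + H @ y_prev + b)
```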

However, the rapid transient dynamics of RNNs posed significant challenges for training these models on long sequences. For each time step, the temporal context of the entire RNN layer is combined with the new inputs and transformed by the non-linear activation function. This leads to a rapid saturation of the neuronal activations that negatively impacts the learning – known as the vanishing gradients problem. The solution was to provide a long-term temporal context that was unaffected by the non-linear interactions outside the neuronal cells. This was achieved by introducing into the neurons an internal state variable called carry $\mathbf{c}_t$. Its dynamics is controlled by surrounding trainable gates that together form a stateful Long Short-Term Memory (LSTM) unit gers_learning_1999

$$\mathbf{i}_t = \sigma(W_i\mathbf{x}_t + H_i\mathbf{y}_{t-1} + \mathbf{b}_i),\quad \mathbf{f}_t = \sigma(W_f\mathbf{x}_t + H_f\mathbf{y}_{t-1} + \mathbf{b}_f),\quad \mathbf{o}_t = \sigma(W_o\mathbf{x}_t + H_o\mathbf{y}_{t-1} + \mathbf{b}_o),$$
$$\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot g(W_c\mathbf{x}_t + H_c\mathbf{y}_{t-1} + \mathbf{b}_c),\qquad \mathbf{y}_t = \mathbf{o}_t \odot h(\mathbf{c}_t) \qquad (3)$$

where multiple RNN units are combined and indexed with $i$, $o$, $f$ to denote the input, the output, and the forget gate, respectively. Moreover, $g$ and $h$ are the input and output activation functions, and $\odot$ denotes the element-wise product. This approach became the state-of-the-art in recurrent networks.
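
A corresponding sketch of one LSTM step, consistent with Eq. 3 as written above, follows (the dictionary-based parametrization is an illustrative simplification; implementations may differ in details such as bias handling):

```python
import numpy as np

def lstm_step(x_t, y_prev, c_prev, P, g=np.tanh, h=np.tanh):
    """One LSTM step: gated update of the carry c_t and output y_t (cf. Eq. 3).
    P is a dict of per-gate weights W*, recurrent weights H* and biases b*."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    i_t = sig(P['Wi'] @ x_t + P['Hi'] @ y_prev + P['bi'])   # input gate
    f_t = sig(P['Wf'] @ x_t + P['Hf'] @ y_prev + P['bf'])   # forget gate
    o_t = sig(P['Wo'] @ x_t + P['Ho'] @ y_prev + P['bo'])   # output gate
    c_t = f_t * c_prev + i_t * g(P['Wc'] @ x_t + P['Hc'] @ y_prev + P['bc'])
    y_t = o_t * h(c_t)
    return y_t, c_t
```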

Recently, the Gated Recurrent Units (GRUs) cho_properties_2014 became a popular alternative to the LSTM units, achieving similar performance with fewer gates chung_empirical_2014 . They are formulated as

$$\mathbf{r}_t = \sigma(W_r\mathbf{x}_t + H_r\mathbf{y}_{t-1} + \mathbf{b}_r),\quad \mathbf{u}_t = \sigma(W_u\mathbf{x}_t + H_u\mathbf{y}_{t-1} + \mathbf{b}_u),$$
$$\mathbf{y}_t = \mathbf{u}_t \odot \mathbf{y}_{t-1} + (1 - \mathbf{u}_t) \odot g(W\mathbf{x}_t + H(\mathbf{r}_t \odot \mathbf{y}_{t-1}) + \mathbf{b}) \qquad (4)$$

where $\mathbf{u}_t$ is called an update gate, $\mathbf{r}_t$ is called a reset gate and $g$ is the activation function.
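
Similarly, a minimal sketch of one GRU step consistent with Eq. 4 (again with an illustrative parametrization):

```python
import numpy as np

def gru_step(x_t, y_prev, P, g=np.tanh):
    """One GRU step with update gate u_t and reset gate r_t (cf. Eq. 4)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    r_t = sig(P['Wr'] @ x_t + P['Hr'] @ y_prev + P['br'])        # reset gate
    u_t = sig(P['Wu'] @ x_t + P['Hu'] @ y_prev + P['bu'])        # update gate
    y_cand = g(P['W'] @ x_t + P['H'] @ (r_t * y_prev) + P['b'])  # candidate output
    return u_t * y_prev + (1.0 - u_t) * y_cand
```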

Meanwhile, SNNs have been developing almost independently from ANNs. The common basic Leaky Integrate-and-Fire (LIF) spiking neuron model is inherently temporal, comprising a state variable $V_m$, called the membrane potential, with the dynamics described by the differential equation gerstner_neuronal_2014

$$\tau_m \frac{\mathrm{d}V_m(t)}{\mathrm{d}t} = -V_m(t) + R\,I(t), \qquad \tau_m = RC \qquad (5)$$

where $\tau_m$ is the time constant of the neuron, $R$ and $C$ represent the resistance and the capacitance of the neuronal cell soma, and $I$ is the incoming current from the synapses. The synapses of a neuron receive spikes and modulate them by the synaptic weights to provide the input current to the neuronal cell soma. The input current is integrated into the membrane potential $V_m$. When $V_m$ crosses a firing threshold $V_{th}$ at time $t^f$, an output spike is emitted, $y(t^f)=1$, and the membrane potential is reset to the resting state $V_{\mathrm{rest}}$, often defined to be equal to $0$.

It is common to describe the membrane potential dynamics using a discrete-time approximation that is obtained from Eq. 5 assuming a discretization step $\Delta t$

$$V_{m,t} = V_{m,t-1}\left(1 - \frac{\Delta t}{\tau_m}\right) + \frac{\Delta t}{C} I_t \qquad (6)$$

Assuming that we do not consider the temporal dynamics of biologically-realistic models of synapses and dendrites gerstner_neuronal_2014 , the input current may be defined as $I_t = \sum_i w_i x_{i,t}$, i.e., the incoming spikes $x_{i,t}$ weighted by the synaptic weights $w_i$. This formulation provides a simple framework for the analysis of the LIF dynamics, commonly explored in SNN research. However, understanding how to take advantage of these temporal dynamics to build large-scale generic deep spiking networks that would achieve high accuracy on common machine learning tasks has remained an open question.
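
For illustration, a minimal NumPy sketch of the Euler-discretized LIF update of Eq. 6 is shown below (the default parameter values are arbitrary placeholders):

```python
import numpy as np

def lif_step(V_prev, x_t, w, dt=1e-3, tau=20e-3, C=1.0, V_th=1.0):
    """Euler-discretized LIF update (cf. Eqs. 5-6), with input current
    I_t = sum_i w_i * x_{i,t} and reset to V_rest = 0 after a spike."""
    I_t = np.dot(w, x_t)                            # weighted input spikes
    V = V_prev * (1.0 - dt / tau) + (dt / C) * I_t  # leak + integration
    spike = 1.0 if V >= V_th else 0.0               # threshold crossing
    if spike:
        V = 0.0                                     # reset to resting state
    return V, spike
```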

III LIF dynamics in an ANN framework

Here, we introduce a novel way of looking at SNNs that makes their temporal dynamics easier to understand and enables them to be incorporated in deep learning architectures. In particular, we propose a succinct high-level model of a spiking LIF neuron, which we call a Spiking Neural Unit (SNU). The SNU comprises two ANN neurons as subunits: an accumulation neuron, which models the membrane potential accumulation dynamics, and an output neuron, which implements the spike emission, as illustrated in Fig. 1. The integration dynamics of the membrane-potential state variable is realized through a single self-looping connection in the accumulation stage. The spike emission is realized through a neuron with a step activation function. Simultaneously, an activation of the output neuron controls the resetting of the state variable by gating the self-looping connection at the next time step. Thus, the SNU – a discrete-time abstraction of a LIF neuron – represents a construct that is directly implementable as a neural unit in ANN frameworks.

Figure 1: Spiking Neural Unit: The unit comprises an accumulation stage with a neuron with a self-looping connection, and a spiking stage with a neuron with a step-wise activation function and a gating connection performing the state reset. Dashed lines indicate connections with time-lag.

Following the standard ANN convention, the formulas that govern the computation occurring in a layer of SNUs are as follows

$$\mathbf{s}_t = g\left(W\mathbf{x}_t + l(\tau) \odot \mathbf{s}_{t-1} \odot (1 - \mathbf{y}_{t-1})\right), \qquad \mathbf{y}_t = h(\mathbf{s}_t + \mathbf{b}) \qquad (7)$$

where $\mathbf{s}_t$ is the vector of internal state variables calculated by the accumulation subunits, $\mathbf{y}_t$ is the output vector calculated by the spiking subunits, $g$ is the accumulation stage activation function, and $h$ is the output activation function. $g$ is a ReLU, i.e., the rectified linear activation function $g(z)=\max(0,z)$, based on the assumption that the membrane potential value is bounded by the resting state $V_{\mathrm{rest}}=0$. The inputs $\mathbf{x}_t$ are weighted by the synaptic weights in matrix $W$ and there is no bias term. The self-looping weight $l(\tau)$ applied to the previous state value performs a discrete-time approximation of the membrane potential decay that occurred in the time period $\Delta t$. The last term, $(1-\mathbf{y}_{t-1})$, relies on the binary output values of the spiking output to either retain the state, or reset it after spike emission. $h$ is a thresholding neuron, i.e., it has a step activation function, which returns $1$, corresponding to an output spike, if its argument is greater than or equal to $0$, and $0$ otherwise. There is no weight on the connection from $\mathbf{s}_t$ to the output neuron, but it is biased with $\mathbf{b} = -V_{th}$ to implement the spiking threshold.
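
For illustration, one forward step of a layer of SNUs according to Eq. 7 can be sketched in NumPy as follows (forward pass only; training aspects are discussed in Sec. IV):

```python
import numpy as np

def snu_layer_step(x_t, s_prev, y_prev, W, l_tau, b):
    """One step of a layer of SNUs (cf. Eq. 7):
    s_t = g(W x_t + l(tau) * s_{t-1} * (1 - y_{t-1})),  g = ReLU
    y_t = h(s_t + b),                                   h = step function."""
    s_t = np.maximum(0.0, W @ x_t + l_tau * s_prev * (1.0 - y_prev))
    y_t = (s_t + b >= 0.0).astype(x_t.dtype)  # spike (1) if s_t >= V_th, with b = -V_th
    return s_t, y_t
```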

The parameters of the SNUs are $W$, $l(\tau)$ and $V_{th}$. If $l(\tau)$ is fixed to 1, the state does not decay and the SNU corresponds to the Integrate-and-Fire (IF) neuron without the leak term. Otherwise, these parameters correspond to the parameters of the LIF neuron introduced in Eq. 6, i.e.,

$$W = \frac{\Delta t}{C}\,W^{\mathrm{syn}}, \qquad l(\tau) = 1 - \frac{\Delta t}{\tau_m}, \qquad \mathbf{b} = -V_{th} \qquad (8)$$

where $W^{\mathrm{syn}}$ denotes the matrix of the LIF synaptic weights $w_i$.

Thus, the same set of parameter values can be used in an SNU-based network, implemented by utilizing standard ANN frameworks, as well as in a native LIF-based implementation, utilizing standard SNN frameworks, or even in neuromorphic hardware. To demonstrate this, we have used TensorFlow (http://www.tensorflow.org) to produce sample plots of the spiking dynamics for a single SNU in Fig. 2. As can be seen, the state variable of the SNU increases each time an input spike arrives at the neuron, and decreases following the exponential decay dynamics. When the spiking threshold is reached, an output spike is emitted and the membrane potential is reset. These dynamics are aligned with the reference LIF dynamics, which we obtained for the corresponding parameters by running a simulation in the well-known Brian2 (http://brian2.readthedocs.io) SNN framework.
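
As an illustration of this direct implementability in ANN frameworks, a minimal TensorFlow sketch of an SNU as a Keras-compatible recurrent cell is given below (our own illustrative code, not a reference implementation; it covers the forward dynamics only and keeps l(τ) fixed for brevity):

```python
import tensorflow as tf

class SNUCell(tf.keras.layers.Layer):
    """Illustrative SNU cell for use with tf.keras.layers.RNN (forward dynamics
    only; training additionally needs a pseudo-derivative for the step function)."""

    def __init__(self, units, l_tau=0.8, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.l_tau = l_tau                # decay factor l(tau), fixed here for brevity
        self.state_size = [units, units]  # internal state s_t and last output y_t

    def build(self, input_shape):
        self.W = self.add_weight(shape=(input_shape[-1], self.units), name='W')
        self.b = self.add_weight(shape=(self.units,), name='b',
                                 initializer='zeros')  # bias b = -V_th
        super().build(input_shape)

    def call(self, x_t, states):
        s_prev, y_prev = states
        s_t = tf.nn.relu(tf.matmul(x_t, self.W)
                         + self.l_tau * s_prev * (1.0 - y_prev))
        y_t = tf.cast(s_t + self.b >= 0.0, x_t.dtype)  # step activation
        return y_t, [s_t, y_t]

# Usage example: layer = tf.keras.layers.RNN(SNUCell(250), return_sequences=True)
```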

Figure 2: Spiking dynamics: Random input spikes are fed into a single SNU with a fixed set of weights, decay factor l(τ) and threshold. The state variable corresponds to the membrane potential, and the dashed vertical lines depict the output spikes. The blue circles show the LIF dynamics obtained using the Brian2 framework.

III.1 Relaxing the SNN constraints

In SNNs, information is transmitted throughout the network with all-or-none spikes, typically modeled as binary values. As a result, the input data is binarized, and the step function is used to determine the binary neuronal outputs. However, the proposed SNU implementation within the ANN framework allows the all-or-none constraint to be relaxed, and thus allows the benefits of a variant of the SNU, called soft SNU (sSNU), to be explored. The sSNU is a member of the family of SNUs characterized by the dynamics in Eq. 7. It generalizes this dynamics to non-spiking ANNs, in which the input data does not have to be binarized and the output activation function $h$ is set to a sigmoid function. This formulation has the additional interesting property of an analog proportional reset, i.e., the magnitude of the output determines what fraction of the membrane-potential state variable is retained. Exploiting the intermediate values at all stages of processing, viz., input, reset and output, facilitates an on-par performance comparison of LIF-like dynamics with other ANN models, eliminating any potential performance loss stemming from the limited value resolution of the standard SNU.
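
The sSNU therefore differs from the SNU sketch given earlier only in the output activation; an illustrative NumPy version:

```python
import numpy as np

def ssnu_layer_step(x_t, s_prev, y_prev, W, l_tau, b):
    """Soft SNU step: the step function is replaced by a sigmoid, so the
    analog output y_t performs a proportional reset of the state."""
    s_t = np.maximum(0.0, W @ x_t + l_tau * s_prev * (1.0 - y_prev))
    y_t = 1.0 / (1.0 + np.exp(-(s_t + b)))  # sigmoid output in (0, 1)
    return s_t, y_t
```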

The sSNU concept has another interesting intuitive interpretation. In a sense, the 0 or 1 binarized output of a neuron represents its confidence in a certain hypothesis concerning the information presented at its inputs. In handling static data, all relevant information is presented simultaneously at the inputs and, as a consequence, an artificial neuron would directly output its confidence regarding this input information. However, in cases with temporal data, the information is spread over time and the LIF neurons collect it in the membrane potential wozniak_THESIS_2017 . When enough information aligned with the hypothesis has been collected, an output spike transmits this fact to the downstream neurons and restarts the process through the state reset. However, with the sSNU, a floating-point output is always transmitted. To avoid repeated retransmission of the same information to the downstream neurons, the value of the membrane potential has to be reduced. On the other hand, in order to retain certain memory, the neuron should not be fully reset at each time step. Thus, the solution provided by the sSNU is to attenuate the state variable proportionally to the output value transmitted to the downstream neurons.

III.2 Structural comparison

The temporal context of the units from the SNU family is captured through the internal state corresponding to the membrane potential of the LIF neuron. In this sense, the structure of the SNUs is similar to that of the LSTM or the GRU in that it also relies on an internal state as a means of storing temporal context. This structural similarity is visible in Figs. 3a-c, in which all the aforementioned units have an internal state that is maintained through a recurrent loop within the units’ boundaries, drawn in gray. Besides the similarities in structure, the SNUs possess unique features not present in the other models, viz., a non-linear transformation $g$ within the internal state loop, a parametrized state loop connection $l(\tau)$, a bias of the state output connection to the output activation function $h$, indicated by bold arrows in Fig. 3c, and a direct reset gate controlled by the output $\mathbf{y}_t$.

Figure 3: The structure of neural units for temporal processing: (a) LSTM drawing adapted from Greff et al. greff_lstm:_2017 ; (b) GRU and (c) SNU drawn based on the same convention. Dashed lines indicate connections with time-lag. Bold lines indicate parametrized connections.
Network structure    Feed-forward    Recurrent
Stateless units      ANN             RNN
Stateful units       SNU             LSTM/GRU
Table 1: Structural classification of ANN architectures: Classic ANNs are feed-forward networks of stateless units. In RNNs, temporal context is stored in transient recurrent activities between stateless neurons. LSTMs and GRUs extend RNNs by incorporating internal state. Feed-forward networks of SNUs follow the common SNN approach of addressing temporal tasks using the internal state only.

The SNUs can be optionally interconnected through recurrent connections, as indicated in Fig. 3c. Thus, similar to the Liquid State Machine SNN model maass_real-time_2002 and the learning-to-learn architecture bellec_long_2018 , it might be beneficial for certain tasks to extend the SNU-based networks to include the recurrent connections matrix $H$. However, the most typical architecture with SNUs is feed-forward, which creates a novel category of ANN architectures for temporal processing, as summarized in Tab. 1. Note that processing temporal data without the use of a recurrent neural network structure, but rather using only the internal state, has long been the standard approach in the SNN community wozniak_learning_2016 ; pantazi_all-memristive_2016 ; maass_computational_2000 ; song_competitive_2000 ; moraitis_fatiguing_2017 ; gutig_learning_2003 ; querlioz_simulation_2011 ; diehl_unsupervised_2015 ; sidler_unsupervised_2017 ; bichler_extraction_2012 ; wozniak_IJCNN_2017 . Thus, in the rest of the paper we focus on the classic feed-forward SNN network architectures.

The use of feed-forward stateful architectures for temporal problems has a series of profound advantages. From an implementation perspective, all-to-all connectivity between the neuronal outputs and the neuronal inputs within the same layer is not required. This may lead to highly-parallel software implementations or neuromorphic hardware designs. From a theoretical standpoint, owing to inherent temporal neural dynamics, a feed-forward network of SNUs is the simplest temporal neural network architecture with a lower number of parameters than that in RNNs, LSTM- or GRU-based networks, which may result in faster training and reduced overfitting.

In addition to the synaptic weights, the trainable parameters in SNUs may include the membrane time constant and the neuronal threshold. For a layer of $n$ neurons with $m$ inputs each, the number of parameters for an SNU layer with trainable synaptic weights and firing thresholds, but constant $l(\tau)$, is $n(m+1)$, which is equivalent to the number of parameters in the simplest feed-forward ANN. Moreover, as summarized in Tab. 2, even if a trainable time constant is considered, the number of parameters in an SNU architecture is smaller than that of an RNN, an LSTM with four fully-parametrized gates, or a GRU with three fully-parametrized gates.

Network model                   # of parameters
ANN                             nm + n
RNN                             nm + n² + n
LSTM                            4(nm + n² + n)
GRU                             3(nm + n² + n)
SNU                             nm + n
SNU with trainable l(τ)         nm + 2n
recurrent SNU                   nm + n² + 2n
Table 2: Number of parameters for common architectures in a single fully-connected layer with n neurons and m inputs to a neuron.

IV Training

SNUs provide a mapping of the spiking neural dynamics into ANN frameworks, which naturally enables the reuse of existing backpropagation training procedures. However, backpropagation requires that all parts of the network are differentiable, which is the case for the sSNU variant, but not for the step function in the standard SNU. Nevertheless, in particular cases it is possible to train non-differentiable neural networks by providing a pseudo-derivative for the non-differentiable functions bengio_estimating_2013 . In the remaining part of the paper, we follow this approach and use the derivative of a smooth activation function as the pseudo-derivative of the step function.
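
For illustration, such a pseudo-derivative can be attached to the step function in TensorFlow as sketched below (the tanh-based surrogate shown here is one illustrative choice, not necessarily the exact surrogate used in our experiments):

```python
import tensorflow as tf

@tf.custom_gradient
def spike_step(v):
    """Step activation with a pseudo-derivative for backpropagation.
    Forward: emits 1 where v >= 0, else 0.  Backward: uses the derivative
    of a smooth surrogate (here tanh, as an illustrative choice)."""
    y = tf.cast(v >= 0.0, v.dtype)
    def grad(dy):
        return dy * (1.0 - tf.math.tanh(v) ** 2)  # d/dv tanh(v)
    return y, grad
```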

Even though an SNU-based network is a feed-forward architecture, the state within the units is implemented using self-looping recurrent connections. Therefore, to train such deep networks, we follow the idea of using backpropagation through time (BPTT) werbos_generalization_1988 in SNNs huh_gradient_2017 ; bellec_long_2018 . This implies that the SNU structure is unfolded over time, i.e., the computational graph and its parameters are replicated for each time step, as illustrated in Fig. 4, and then the standard backpropagation algorithm is applied. The unfolding involves only the local state of the neuron, which is different from the common RNN architectures that require unfolding of the activations of all units in a layer through the recurrent connection matrices $H$. In practice, these details are handled transparently by the ANN frameworks that generate a computational graph and use automatic differentiation for the training, so that the entire training code is created dynamically.
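
A minimal TensorFlow sketch of one such BPTT update is shown below (the names model and loss_fn are placeholders for an SNU-based network, e.g., built from tf.keras.layers.RNN cells as sketched in Sec. III, and for the loss of Eq. 10, respectively):

```python
import tensorflow as tf

def train_step(model, optimizer, loss_fn, x_seq, target):
    """One BPTT update: the framework unrolls the per-unit state over the
    time dimension of x_seq and differentiates through the unrolled graph."""
    with tf.GradientTape() as tape:
        y_seq = model(x_seq)                 # shape (batch, time, units)
        loss = loss_fn(y_seq, target)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```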

In case standard SNUs are used for the output layer of a network, we propose to adapt the learning loss to reflect the differences in how SNNs and ANNs are assessed. For RNNs, LSTMs and GRUs, it is quite common to consider only the last output after the presentation of the entire sequence, for instance $\mathbf{y}_T$ in Fig. 4. In the case of SNNs, it is common to assess the output spiking rate of the neurons over a range of time, such as the entire output sequence in Fig. 4. To reflect this, we define the SNU spiking rate loss as the mean squared error (MSE) between the rate of the mean spiking output, calculated over an assessment period of $T$ steps, and the target firing rate $\mathbf{r}^*$

$$L = \mathrm{MSE}\!\left(\frac{1}{T\,\Delta t}\sum_{t=1}^{T}\mathbf{y}_t,\ \mathbf{r}^*\right) \qquad (9)$$

In deep learning, normalized target values in the range between 0 and 1 are used by convention. If we normalize the mean spiking rate by the maximum spiking rate of $1/\Delta t$, we obtain the normalized spiking rate loss, or simply the mean output loss for normalized targets $\mathbf{y}^*$

$$L = \mathrm{MSE}\!\left(\frac{1}{T}\sum_{t=1}^{T}\mathbf{y}_t,\ \mathbf{y}^*\right) \qquad (10)$$

that does not depend on the sampling time $\Delta t$ from the SNN discretization, and is also suitable for use with any ANN model.
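
For illustration, the mean output loss of Eq. 10 can be written as:

```python
import tensorflow as tf

def mean_output_loss(y_seq, target):
    """Mean output loss (Eq. 10): MSE between the mean output over the
    assessment period (time axis) and the normalized target in [0, 1]."""
    y_mean = tf.reduce_mean(y_seq, axis=1)  # average over time steps
    return tf.reduce_mean(tf.square(y_mean - target))
```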

Figure 4: Training of an SNU: The recurrent connections that are modeling the state are unrolled and trained with backpropagation.

IV.1 Handwritten digit recognition

We evaluated the performance of deep SNU-based networks in comparison to other popular temporal ANN models. Here, we proposed a temporal variation of the MNIST handwritten digit classification task, in which we assumed that the inputs are spikes from an asynchronous camera. We assumed that for each input image pixel belonging to the digit, defined as a pixel having a positive intensity value, the camera sends a spike at a random time instance. Thus, as illustrated in Fig. 5, the generated spikes convey jittered information about the digits. For repeatability of the results and to limit the regularization effects, the transformation from standard MNIST to the jittered MNIST was performed upfront for the entire dataset using five time steps per digit and a random seed equal to 0, so that the same inputs are presented at each epoch to all models.
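
For illustration, the described transformation can be sketched as follows (our own illustrative code; the helper name and the random-generator handling are assumptions, not the exact preprocessing script):

```python
import numpy as np

def jitter_digit(image, n_steps=5, rng=None):
    """Sketch of the jittered-MNIST transformation described above: every
    pixel with positive intensity emits one spike at a random time step.

    image: flat array of 784 pixel intensities.
    Returns a binary array of shape (n_steps, 784)."""
    if rng is None:
        rng = np.random.default_rng(0)                # fixed seed for repeatability
    spikes = np.zeros((n_steps, image.size), dtype=np.float32)
    active = np.flatnonzero(image > 0)                # pixels belonging to the digit
    spike_times = rng.integers(0, n_steps, size=active.size)
    spikes[spike_times, active] = 1.0
    return spikes
```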

Figure 5: Jittered MNIST: The input spikes arrive from an asynchronous camera, conveying jittered images of the digits.

We aimed to develop a training setting similar to the SNN convention querlioz_simulation_2011 ; sidler_unsupervised_2017 , in which the patterns from the dataset form a continuous stream, i.e., the spikes representing the current training digit come directly after the spikes of the preceding digit. Thus, the network operates with a non-zero initial state and has to identify consecutive digits without receiving explicit information about when the digit at the input has changed. However, a direct implementation of this approach would require applying BPTT to a continuous stream of 60000 training digits forming a single training input sequence for the entire dataset. Instead, to apply BPTT to the individual training digits and benefit from parallel training with batching, we trained the networks on sequences formed by first feeding a dummy random digit to initialize the network state and then presenting the training digit. The neuronal outputs of all the models were then evaluated during the training-digit presentation period using the mean output loss defined in Eq. 10.
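
An illustrative sketch of how such a training sequence can be assembled (the helper name and the mask-based bookkeeping are our own illustrative choices):

```python
import numpy as np

def make_training_sequence(train_spikes, dummy_spikes):
    """Build one training sequence: a random 'dummy' digit first (to start
    from a non-zero network state), followed by the actual training digit.
    The loss is evaluated only on the second (training-digit) period."""
    seq = np.concatenate([dummy_spikes, train_spikes], axis=0)  # (T_dummy + T_train, 784)
    assess_mask = np.zeros(seq.shape[0], dtype=bool)
    assess_mask[dummy_spikes.shape[0]:] = True  # assessment period = training digit
    return seq, assess_mask
```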

We trained 3-, 4- and 7-layer network architectures with 784-250-10, 784-256-256-10 and 784-256-256-256-256-256-10 neurons, respectively. The networks were homogeneous – all the neurons were of the same kind: SNUs or sSNUs, with part of the neuronal parameters trainable and the rest fixed; RNNs with sigmoidal neurons; LSTMs with sigmoidal activation functions; or GRUs with sigmoidal activation functions. For each epoch, the training set of 60000 MNIST images was presented with a batch size of 15. The learning was performed using Stochastic Gradient Descent (SGD). To assess the consistency of the results, the networks were executed for 10 different random weight initializations and the mean test accuracy was calculated.

Firstly, we analyzed the impact of the network depth on the performance of an SNN implemented with SNUs. The evolution of the test accuracy is plotted in Fig. 6. A 3-layer SNU-based network achieved a mean test accuracy of 96.8%, whereas with a 4-layer architecture the accuracy increased to 97.41%. A further slight improvement was achieved with a 7-layer architecture that increased the test accuracy to 97.46%. These results indicate that deep networks incorporating spiking neural dynamics have a high potential for addressing machine learning tasks that involve temporal data efficiently and with high accuracy.

Secondly, we compared the performance of the units from the SNU family with the state-of-the-art temporal ANNs. The learning curves of the best performing RNNs, GRU- and LSTM-based networks are depicted in Fig. 7. The best test accuracy was obtained with 3- or 4-layer networks and was similar for all of them. The exact accuracy values are reported in Tab. 3. As can be seen, the 4- or 7-layer SNU-based network architectures surpassed these state-of-the-art temporal ANN models in test accuracy. Moreover, the soft variant of the SNU in a 4-layer configuration achieved the highest mean accuracy of 97.96%.

Figure 6: Learning curves for SNNs implemented with SNUs: the increase in depth of the architecture improves the test accuracy.
Figure 7: Learning curves for conventional RNNs, GRU- and LSTM-based networks superimposed on those of 4-layer SNU- and sSNU-based networks.
Network         Mean accuracy   Maximum accuracy
GRU 3-layer     0.9694          0.9708
LSTM 4-layer    0.9699          0.9719
RNN 4-layer     0.9708          0.9718
SNU 4-layer     0.9741          0.9754
sSNU 4-layer    0.9796          0.9802
Table 3: Detailed comparison for the jittered MNIST: the test accuracy obtained from 10 trials.

IV.2 Polyphonic music prediction

To further validate the performance of SNUs, we considered the task of polyphonic music prediction. We used the dataset of Johann Sebastian Bach’s (JSB) chorales that comprises over seven hours of music in 382 pieces, in the form provided by Boulanger-Lewandowski et al. boulanger-lewandowski_modeling_2012 . The piano notes ranging from A0 to C8 were coded as 88-dimensional binary vectors, in which ones correspond to a note being played. They were sequentially fed into a network, which had to predict at each time step the set of notes to be played in the consecutive time step. The goal was to minimize the negative log-probability of the notes’ predictions.

We compared SNU-based networks against the state-of-the-art ANN results chung_empirical_2014 ; greff_lstm:_2017 obtained with RNNs, GRUs and LSTMs, including a No Input Activation Function (NIAF) variant of the LSTM, in which the input activation function $g$ is omitted. Nevertheless, none of these models surpass the performance of the best task-specific model boulanger-lewandowski_modeling_2012 that obtains a loss of 5.56. The standard ANN assessment architecture comprises: an input layer receiving a set of input notes; followed by a hidden layer of the analyzed units; followed by a softmax output layer predicting the next set of notes.

Network # of hidden units # of hidden layer param. Total # of parameters
RNN tanh chung_empirical_2014
GRU chung_empirical_2014
LSTM chung_empirical_2014
LSTM greff_lstm:_2017
NIAF-LSTM greff_lstm:_2017
SNU
sSNU
Table 4: Detailed comparison for the JSB dataset: the number of units in the hidden layer and the number of parameters.

We trained a feed-forward network with 150 SNUs, or sSNUs, with trainable neuronal parameters. The various network architectures and details are summarized in Tab. 4. Learning was performed using SGD for 2000 or 500 epochs, respectively, with correspondingly adjusted learning rates. The parameter adjustments were applied after the presentation of each music piece, following the state-of-the-art convention. We executed the learning for 10 different random initializations and report the mean of the lowest test negative log-probabilities.

An architecture with a single feed-forward layer of spiking SNUs performed better than a standard recurrent layer of RNN units, as illustrated in Fig. 8. However, it performed worse than the more sophisticated neural units for temporal data processing. The results improved when we used soft SNUs, which can transmit intermediate values and thus make full use of the output softmax layer. The average negative log-probability of the sSNU-based network was lower than that of GRUs chung_empirical_2014 with a similar number of parameters, and lower than the average over the best 10% of 200 trials executed for LSTMs and NIAF versions of LSTMs greff_lstm:_2017 , which require significantly more parameters. However, the best single NIAF-LSTM trial was able to achieve 8.38 greff_lstm:_2017 . In our case, the sSNUs performed consistently, with a minimum of 8.47 close to the mean of 8.49.

Figure 8: Polyphonic music prediction on the JSB test set: The average negative log-likelihood for the SNU and sSNU is plotted together with the results for other neural units reported in the literature: (a) chung_empirical_2014 , (b) greff_lstm:_2017 .

V Conclusion

For a long time, SNN and ANN research and applications have been developing separately. There has been significant effort to understand the SNN dynamics and to take advantage of their unique capabilities, albeit with limited success compared to the spectacular progress witnessed with ANNs. In this paper, we have tried to unify these neural network architectures by proposing the SNU that incorporates the spiking neural dynamics in a common ANN framework.

The transformation of the spiking model to an SNU allows SNNs to benefit from the advances in the ANN frameworks and also enables direct comparison of the dynamics of the spiking neural units with the state-of-the-art recurrent units. Moreover, with the sSNU variant, we have generalized the neural dynamics to the non-spiking case. Using this methodology, deep networks consisting of SNUs or sSNUs can be efficiently trained with BPTT. The benchmark results demonstrated that a feed-forward sSNU-based network outperforms conventional RNNs, LSTM- or GRU-based networks in temporal tasks. Therefore, the sSNU offers an alternative stateful model with the lowest number of parameters among the existing models for temporal data processing. Even though the results using the SNU-based network demonstrated slightly inferior performance compared to the sSNU-based networks, the binary input-output characteristics of the spiking communication provide an interesting implementation alternative for low-power AI applications.

The proposed SNU family opens many new avenues for future work. It enables us to explore the capabilities of biologically-inspired neural models and benefit from their low computational cost as well as their simplicity. It also provides an easy approach to training spiking networks that could increase their adoption for practical applications and would enable power-efficient neuromorphic hardware implementations. Finally, the compatibility of the SNU family with the ANN frameworks and models enables the use of existing or forthcoming ANN accelerators for SNN implementation and deployment.

References

  • (1) Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 1097–1105 (2012).
  • (2) Szegedy, C. et al. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–9 (2015).
  • (3) He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016).
  • (4) Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. You Only Look Once: Unified, real-time object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016).
  • (5) He, K., Gkioxari, G., Dollár, P. & Girshick, R. Mask R-CNN. In 2017 IEEE International Conference on Computer Vision (ICCV), 2980–2988 (2017).
  • (6) Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015).
  • (7) Sutskever, I., Vinyals, O. & Le, Q. V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 3104–3112 (2014).
  • (8) Amodei, D. et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, 173–182 (JMLR.org, New York, NY, USA, 2016).
  • (9) Dayan, P. & Abbott, L. F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (The MIT Press, 2005).
  • (10) Eliasmith, C. How to Build a Brain: A Neural Architecture for Biological Cognition (Oxford University Press, 2013).
  • (11) Gerstner, W., Kistler, W. M., Naud, R. & Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge University Press, 2014).
  • (12) Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Networks 10, 1659 – 1671 (1997).
  • (13) Mead, C. Neuromorphic electronic systems. Proceedings of the IEEE 78, 1629–1636 (1990).
  • (14) Meier, K. A mixed-signal universal neuromorphic computing system. In 2015 IEEE International Electron Devices Meeting (IEDM), 4.6.1–4.6.4 (IEEE, Washington, DC, 2015).
  • (15) Benjamin, B. V. et al. Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations. Proceedings of the IEEE 102, 699–716 (2014).
  • (16) Cassidy, A. S. et al. Real-time scalable cortical computing at 46 giga-synaptic OPS/Watt with ~100x speedup in time-to-solution and ~100,000x reduction in energy-to-solution. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 27–38 (IEEE, Piscataway, NJ, USA, 2014).
  • (17) Davies, M. et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018).
  • (18) Kuzum, D., Yu, S. & Philip Wong, H.-S. Synaptic electronics: Materials, devices and applications. Nanotechnology 24, 382001 (2013).
  • (19) Tuma, T., Pantazi, A., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Stochastic phase-change neurons. Nature Nanotechnology 11, 693–699 (2016).
  • (20) Woźniak, S., Tuma, T., Pantazi, A. & Eleftheriou, E. Learning spatio-temporal patterns in the presence of input noise using phase-change memristors. In 2016 IEEE International Symposium on Circuits and Systems (ISCAS), 365–368 (IEEE, 2016).
  • (21) Pantazi, A., Woźniak, S., Tuma, T. & Eleftheriou, E. All-memristive neuromorphic computing with level-tuned neurons. Nanotechnology 27, 355205 (2016).
  • (22) Tuma, T., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Detecting correlations using phase-change neurons and synapses. IEEE Electron Device Letters 37, 1238–1241 (2016).
  • (23) Markram, H. The Blue Brain Project. Nature Reviews Neuroscience 7, 153 (2006).
  • (24) Izhikevich, E. M. & Edelman, G. M. Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences 105, 3593–3598 (2008).
  • (25) Ananthanarayanan, R., Esser, S. K., Simon, H. D. & Modha, D. S. The cat is out of the bag. In Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, 1–12 (IEEE, 2009).
  • (26) Markram, H. The Human Brain Project. Scientific American 306, 50–55 (2012).
  • (27) Eliasmith, C. et al. A large-scale model of the functioning brain. Science 338, 1202–1205 (2012).
  • (28) Rasmussen, D. & Eliasmith, C. A spiking neural model applied to the study of human performance and cognitive decline on Raven’s Advanced Progressive Matrices. Intelligence 42, 53–82 (2014).
  • (29) Maass, W. On the computational power of Winner-Take-All. Neural Computation 12, 2519–2535 (2000).
  • (30) Maass, W., Natschläger, T. & Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural computation 14, 2531–2560 (2002).
  • (31) Markram, H., Lübke, J., Frotscher, M. & Sakmann, B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275, 213–215 (1997).
  • (32) Song, S., Miller, K. D. & Abbott, L. F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience 3, 919–926 (2000).
  • (33) Moraitis, T. et al. Fatiguing STDP: Learning from spike-timing codes in the presence of rate codes. In 2017 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2017).
  • (34) Gütig, R., Aharonov, R., Rotter, S. & Sompolinsky, H. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. The Journal of Neuroscience 23, 3697–3714 (2003).
  • (35) Querlioz, D., Bichler, O. & Gamrat, C. Simulation of a memristor-based spiking neural network immune to device variations. In 2011 International Joint Conference on Neural Networks (IJCNN), 1775–1781 (IEEE, 2011).
  • (36) Diehl, P. U. & Cook, M. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Frontiers in Computational Neuroscience 9 (2015).
  • (37) Sidler, S., Pantazi, A., Woźniak, S., Leblebici, Y. & Eleftheriou, E. Unsupervised learning using phase-change synapses and complementary patterns. In 2017 ENNS International Conference on Artificial Neural Networks (ICANN) (2017).
  • (38) Woźniak, S. Unsupervised Learning of Phase-Change-Based Neuromorphic Systems. Doctoral dissertation, EPFL (2017).
  • (39) Bichler, O., Querlioz, D., Thorpe, S. J., Bourgoin, J.-P. & Gamrat, C. Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity. Neural Networks 32, 339–348 (2012).
  • (40) Burbank, K. S. Mirrored STDP implements autoencoder learning in a network of spiking neurons. PLOS Computational Biology 11, 1–25 (2015).
  • (41) Woźniak, S., Pantazi, A., Leblebici, Y. & Eleftheriou, E. Neuromorphic system with phase-change synapses for pattern learning and feature extraction. In 2017 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2017).
  • (42) O’Connor, P., Neil, D., Liu, S.-C., Delbruck, T. & Pfeiffer, M. Real-time classification and sensor fusion with a spiking deep belief network. Frontiers in Neuroscience 7 (2013).
  • (43) Diehl, P. U., Zarrella, G., Cassidy, A., Pedroni, B. U. & Neftci, E. Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In Rebooting Computing (ICRC), IEEE International Conference on, 1–8 (IEEE, 2016).
  • (44) Rueckauer, B. & Liu, S.-C. Conversion of analog to spiking neural networks using sparse temporal coding. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS), 1–5 (IEEE, Florence, Italy, 2018).
  • (45) Bohte, S. M., Kok, J. N. & La Poutré, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48, 17–37 (2002).
  • (46) Anwani, N. & Rajendran, B. NormAD - normalized approximate descent based supervised learning rule for spiking neurons. In 2015 International Joint Conference on Neural Networks (IJCNN), 1–8 (IEEE, 2015).
  • (47) Bengio, Y., Mesnard, T., Fischer, A., Zhang, S. & Wu, Y. STDP-compatible approximation of backpropagation in an energy-based model. Neural Computation 29, 555–577 (2017).
  • (48) Tavanaei, A. & Maida, A. S. BP-STDP: Approximating backpropagation using spike timing dependent plasticity. arXiv:1711.04214 [cs] (2017). ArXiv: 1711.04214.
  • (49) Hunsberger, E. & Eliasmith, C. Spiking deep networks with LIF neurons. arXiv:1510.08829 [cs] (2015). ArXiv: 1510.08829.
  • (50) Esser, S. K. et al. Convolutional networks for fast, energy-efficient neuromorphic computing. Proceedings of the National Academy of Sciences 201604850 (2016).
  • (51) Lee, J. H., Delbruck, T. & Pfeiffer, M. Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience 10 (2016).
  • (52) Huh, D. & Sejnowski, T. J. Gradient descent for spiking neural networks. arXiv preprint arXiv:1706.04698 (2017).
  • (53) Bellec, G., Salaj, D., Subramoney, A., Legenstein, R. & Maass, W. Long short-term memory and learning-to-learn in networks of spiking neurons. arXiv:1803.09574 [cs, q-bio] (2018). ArXiv: 1803.09574.
  • (54) Nair, V. & Hinton, G. E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, ICML’10, 807–814 (Omnipress, USA, 2010).
  • (55) Waibel, A. Modular construction of Time-Delay Neural Networks for speech recognition. Neural Computation 1, 39–46 (1989).
  • (56) Gers, F. A., Schmidhuber, J. & Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Computation 12, 2451–2471 (1999).
  • (57) Cho, K., van Merrienboer, B., Bahdanau, D. & Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8), 2014 (2014).
  • (58) Chung, J., Gulcehre, C., Cho, K. & Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, December 2014 (2014).
  • (59) http://www.tensorflow.org.
  • (60) http://brian2.readthedocs.io.
  • (61) Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R. & Schmidhuber, J. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems 28, 2222–2232 (2017).
  • (62) Bengio, Y., Léonard, N. & Courville, A. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432 [cs] (2013). ArXiv: 1308.3432.
  • (63) Werbos, P. J. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks 1, 339 – 356 (1988).
  • (64) Boulanger-Lewandowski, N., Bengio, Y. & Vincent, P. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. In Proceedings of the 29th International Coference on International Conference on Machine Learning, ICML’12, 1881–1888 (Omnipress, USA, 2012).