Traditional rate-coded artificial neural networks represent an analog variable through the firing rate of the biological neuron: the output of a computational unit is a representation of that firing rate. In order to increase the computational power of the network, the neurons are structured in successive layers of computational units. Such systems are trained to recognise input patterns by searching for a suitable set of connection weights. Learning rules based on gradient descent, such as backpropagation (Rumelhart et al., 1986), have made sigmoidal neural networks (networks that use the sigmoid as the activation function) one of the most powerful and flexible computational models.
However, experimental evidence suggests that neural systems use the exact time of single action potentials to encode information (Thorpe & Imbert, 1989; Johansson & Birznieks, 2004). In Thorpe & Imbert (1989) it is argued that because of the speed of processing visual information and the anatomical structure of the visual system, processing has to be done on the basis of single spikes. In Johansson & Birznieks (2004) it is shown that the relative timing of the first spike contains important information about tactile stimuli. Further evidence suggests that the precise temporal firing pattern of groups of neurons conveys relevant sensory information (Wehr & Laurent, 1996; Neuenschwander & Singer, 1996; deCharms & Merzenich, 1996).
These findings have led to a new way of simulating neural networks based on temporal encoding with single spikes (Maass, 1997a). Investigations of the computational power of spiking neurons have illustrated that realistic mathematical models of neurons can arbitrarily approximate any continuous function, and furthermore, it has been demonstrated that networks of spiking neurons are computationally more powerful than sigmoidal neurons (Maass, 1997b). Because of the nature of spiking neuron communication, these neurons are also well suited for VLSI implementation, with significant speed advantages (Elias & Northmore, 2002).
In this paper, we present a new learning algorithm for feed-forward spiking neural networks with multiple layers. The learning rule extends the ReSuMe algorithm (Ponulak & Kasiński, 2010) to multiple layers using backpropagation of the network error. The weights are updated according to STDP and anti-STDP processes, and unlike SpikeProp (Bohte et al., 2002), the rule can be applied to neurons firing multiple spikes in all layers. Multilayer ReSuMe is thus the analogue of the backpropagation learning algorithm for rate neurons, while making use of spiking neurons. To the best of our knowledge, this is the first learning algorithm for spiking neural networks with hidden layers in which multiple spikes, with precise spike-time encoding, are considered in all layers, for both inputs and outputs.
The rest of the article is organised as follows: in the following section some of the existing supervised learning algorithms for spiking neurons are discussed. Section 3 contains a description of the generalised spiking neuron model and the derivation of the learning rule based on this neuron model for a feed-forward network with a hidden layer. In section 4 the weight modifications are analysed for a simplified network with a single output neuron. In section 5 the flexibility and power of feed-forward spiking neural networks trained with multilayer ReSuMe are demonstrated on non-linear problems and classification tasks. The spiking neural network is trained with spike timing patterns distributed over timescales in the range of tens to hundreds of milliseconds, comparable to the span of sensory and motor processing (Mauk & Buonomano, 2004). A discussion of the learning algorithm and the results is presented in section 6. The article concludes with a summary.
2 Supervised learning algorithms for spiking neurons
Although experimental studies have shown that supervised learning is present in the brain, especially in sensorimotor networks and sensory systems (Knudsen, 1994, 2002), there are no definite conclusions regarding the means by which biological neurons learn. Several learning algorithms have been proposed to explore how spiking neurons may respond to given instructions.
One such algorithm, the tempotron learning rule, was introduced by Gütig & Sompolinsky (2006); it trains neurons to discriminate between spatiotemporal sequences of spike patterns. Although the learning rule uses a gradient-descent approach, it can only be applied to single-layer networks. The algorithm trains leaky integrate-and-fire neurons to distinguish between two classes of patterns by firing at least one action potential or by remaining quiescent. While the spiking neural network successfully classifies the spike-timing patterns, the neurons do not learn to respond with precise spike-timing patterns.
Another gradient-descent-based learning rule is the SpikeProp algorithm (Bohte et al., 2002) and its extensions (Schrauwen & Van Campenhout, 2004; Xin & Embrechts, 2001; Booij & tat Nguyen, 2005; Tiňo & Mills, 2005). The algorithm is applied to feed-forward networks of neurons firing a single spike and minimises the time difference between the target spike time and the actual output spike time. The learning algorithm uses spiking neurons modelled by the Spike Response Model (Gerstner, 2001), and the derivation of the learning rule is based on the explicit dynamics of the neuron model. Although Booij & tat Nguyen (2005) have extended the algorithm to allow neurons to fire multiple spikes in the input and hidden layers, subsequent spikes in the output layer are still ignored, because the network error is represented by the time difference between the first spikes of the target and output neurons.
Some supervised learning algorithms are based on Hebb's postulate, "cells that fire together, wire together" (Hebb, 1949) (Ruf & Schmitt, 1997; Legenstein et al., 2005; Ponulak & Kasiński, 2010). ReSuMe (Ponulak & Kasiński, 2010) makes use of both Hebbian learning and gradient descent techniques. As the weight modifications are based only on the input and output spike trains and make no explicit assumptions about neural or synaptic dependencies, the algorithm can be applied to various neuron models. However, the algorithm can only be applied to a single layer of neurons or used to train readouts for reservoir networks.
The ReSuMe algorithm has also been applied to neural networks with a hidden layer, where the weights of downstream neurons are subject to multiplicative scaling (Grüning & Sporea, 2011). The simulations show that networks with one hidden layer can perform non-linear logical operations, while networks without hidden layers cannot. The ReSuMe algorithm has also been used to train the output layer of a layered feed-forward network in Glackin et al. (2011) and Wade et al. (2010), where the hidden layer acted as a frequency filter. However, the inputs and target outputs there consisted of fixed-rate spike trains.
3 Learning algorithm
In this section the new learning algorithm for feed-forward multilayer spiking neural networks is described. The learning rule is derived for networks with a single hidden layer; the algorithm can be extended to networks with more hidden layers in the same manner.
3.1 Neuron model
The input and output signals of spiking neurons are represented by the timing of spikes. A spike train is defined as a sequence of impulses fired by a particular neuron at times $t^f$. Spike trains are formalised as a sum of Dirac delta functions (Gerstner & Kistler, 2002):

$$S(t) = \sum_f \delta(t - t^f)$$
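For simulation purposes, a sum of Dirac impulses can be represented on a discrete time grid. The following sketch (all names are illustrative) encodes a spike train as an array in which each spike contributes a bin of height $1/\Delta t$, so that the array integrates to the number of spikes:

```python
import numpy as np

def spike_train_on_grid(spike_times, t_max, dt=0.1):
    """Discretise S(t) = sum_f delta(t - t^f) on a time grid of step dt.

    Each Dirac impulse becomes a single bin of height 1/dt, so that
    summing the array and multiplying by dt recovers the spike count.
    """
    s = np.zeros(int(round(t_max / dt)))
    for t_f in spike_times:
        s[int(round(t_f / dt))] += 1.0 / dt
    return s
```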
In order to analyse the relation between the input and output spike trains, we use the linear Poisson neuron model (Gütig et al., 2003; Kempter et al., 2001). Its instantaneous firing rate $R(t)$ is formally defined as the expectation of the spike train, averaged over an infinite number of trials. An estimate of the instantaneous firing rate can be obtained by averaging over a finite number of trials (Heeger, 2001):

$$R(t) \approx \frac{1}{n} \sum_{k=1}^{n} S^{(k)}(t)$$

where $n$ is the number of trials and $S^{(k)}(t)$ is the concrete spike train for each trial $k$. In the linear Poisson model the spiking activity of the postsynaptic neuron $o$ is defined by the instantaneous rate function:

$$R_o(t) = \frac{1}{n_h} \sum_{h=1}^{n_h} w_{oh} R_h(t)$$

where $n_h$ is the number of presynaptic neurons $h$. The weights $w_{oh}$ represent the strength of the connection between the presynaptic neurons $h$ and the postsynaptic neuron $o$. The instantaneous firing rate $R(t)$ will be used for the derivation of the learning algorithm due to its smoothness, and will subsequently be replaced by its discontinuous estimate, the spike train $S(t)$.
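A minimal sketch of the two definitions above, assuming discretised spike trains represented as arrays (all names are illustrative):

```python
import numpy as np

def estimated_rate(trials):
    """Estimate the instantaneous firing rate R(t) by averaging the
    discretised spike trains S^(k)(t) over a finite number of trials."""
    return np.mean(np.asarray(trials), axis=0)

def postsynaptic_rate(weights, presynaptic_rates):
    """Linear Poisson neuron: the postsynaptic rate R_o(t) is the
    weighted sum of the presynaptic rates R_h(t), normalised by the
    number of presynaptic neurons n_h."""
    weights = np.asarray(weights)
    return weights @ np.asarray(presynaptic_rates) / len(weights)
```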
3.2 Backpropagation of the network error
The learning algorithm is derived for a fully connected feed-forward network with one hidden layer. The input layer only sets the input patterns and performs no computation on them. Input, hidden and output neurons are labelled $i$, $h$ and $o$ respectively. All neurons in one layer are connected to all neurons in the subsequent layer.
The instantaneous network error is formally defined in terms of the difference between the actual instantaneous firing rate $R_o(t)$ and the target instantaneous firing rate $R_o^d(t)$ for all output neurons:

$$E(t) = \frac{1}{2} \sum_{o} \left( R_o(t) - R_o^d(t) \right)^2$$
In order to minimise the network error, the weights are modified using a process of gradient descent:

$$\Delta w_{oh}(t) = -\eta \, \frac{\partial E(t)}{\partial w_{oh}}, \qquad \Delta w_{oh} = \int_0^T \Delta w_{oh}(t) \, dt$$

where $\eta$ is the learning rate and $w_{oh}$ represents the weight between the output neuron $o$ and hidden neuron $h$. $\Delta w_{oh}(t)$ is the weight change contribution due to the error at time $t$, and the total weight change is the integral over the duration $T$ of the spike train. This is analogous to the starting point of standard backpropagation for rate neurons in discrete time. For simplicity, the learning rate will be set to $\eta = 1$ and suppressed in all following equations, as the step length of each learning iteration will be given by other learning parameters defined later on. Also, in the following, derivatives are understood in a functional sense.
Weight modifications for the output neurons
In this section we re-derive the weight-update formula of the ReSuMe learning algorithm and connect it with gradient-descent learning for linear Poisson neurons. We need this derivation as a first step towards deriving our extension of ReSuMe to subsequent layers in the next subsection. However, the derivation is also instructive in its own right, as it works out somewhat more rigorously than the original derivation (Ponulak & Kasiński, 2010) how ReSuMe and gradient descent are connected. It also makes Ponulak's statement that ReSuMe can be applied to any neuron model more explicit: this is the case whenever the neuron model can, on an appropriate time scale, be approximated well enough by a linear neuron model.
As the network error is a function of the output spike train, which in turn depends on the weight $w_{oh}$, the derivative of the error function can be expanded using the chain rule as follows:

$$\frac{\partial E(t)}{\partial w_{oh}} = \frac{\partial E(t)}{\partial R_o(t)} \, \frac{\partial R_o(t)}{\partial w_{oh}}$$

The first term of the right-hand side can be calculated from the definition of the network error, and the second from the linear neuron model:

$$\frac{\partial E(t)}{\partial R_o(t)} = R_o(t) - R_o^d(t), \qquad \frac{\partial R_o(t)}{\partial w_{oh}} = \frac{1}{n_h} R_h(t)$$

For convenience we define the backpropagated error for the output neuron $o$:

$$\delta_o(t) = R_o^d(t) - R_o(t)$$
This is similar to standard discrete-time backpropagation, but now derived as a functional derivative in continuous time. In the following we use the best estimate of the unknown instantaneous firing rate available when only a single spike train is given, namely the spike train itself, for each of the neurons involved. Thus the weights will be modified according to:

$$\Delta w_{oh}(t) = \frac{1}{n_h} \left[ S_o^d(t) - S_o(t) \right] S_h(t)$$
However, products of Dirac functions are mathematically problematic. Following Ponulak & Kasiński (2010), the non-linear product $S_o^d(t) S_h(t)$ is substituted with an STDP process. In a similar manner, $S_o(t) S_h(t)$ is substituted with an anti-STDP process (for details see Ponulak & Kasiński (2010)):

$$\Delta w_{oh}(t) = \frac{1}{n_h} \left[ S_o^d(t) - S_o(t) \right] \left( a + \int_{-\infty}^{+\infty} W(s) \, S_h(t-s) \, ds \right)$$

where $a$ is a non-Hebbian term that guarantees that the weights change in the correct direction if the output spike train contains more or fewer spikes than the target spike train.
The integration variable $s$ represents the time difference between the actual firing time of the output neuron and the firing time of the hidden neuron, and between the target firing time and the firing time of the hidden neuron, respectively. The kernel $W_-$ gives the weight change if the presynaptic spike (the spike of the hidden neuron) comes after the postsynaptic spike (the spike of the output or target neuron). The kernel $W_+$ gives the weight change if the presynaptic spike comes before the postsynaptic spike. The kernels $W_+$ and $W_-$ define the learning window $W(s)$ (Gerstner & Kistler, 2002):

$$W(s) = \begin{cases} W_+(s) = +A_+ \, e^{-s/\tau_+} & \text{if } s \ge 0 \\ W_-(s) = -A_- \, e^{+s/\tau_-} & \text{if } s < 0 \end{cases}$$

where $A_+$, $A_-$ are the amplitudes and $\tau_+$, $\tau_-$ are the time constants of the learning window. Thus the final learning formula for the weight modifications becomes:

$$\Delta w_{oh} = \frac{1}{n_h} \int_0^T \left[ S_o^d(t) - S_o(t) \right] \left( a + \int_{-\infty}^{+\infty} W(s) \, S_h(t-s) \, ds \right) dt$$
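As an illustration, the learning window and the output-layer update can be sketched directly on lists of spike times. The parameter values below are hypothetical placeholders, not the values used in the simulations:

```python
import math

# Hypothetical learning-window parameters (amplitudes and time
# constants); the values used in the paper's simulations differ.
A_PLUS, A_MINUS = 1.0, 1.0
TAU_PLUS, TAU_MINUS = 5.0, 5.0

def learning_window(s):
    """W(s): s is the postsynaptic (output or target) spike time
    minus the presynaptic (hidden) spike time."""
    if s >= 0:
        return A_PLUS * math.exp(-s / TAU_PLUS)
    return -A_MINUS * math.exp(s / TAU_MINUS)

def delta_w_output(target_spikes, output_spikes, hidden_spikes, n_h, a=0.05):
    """Weight change for one hidden->output connection: an STDP
    process pairs hidden spikes with target spikes, an anti-STDP
    process pairs them with the actual output spikes; `a` is the
    non-Hebbian term."""
    dw = 0.0
    for t_d in target_spikes:
        dw += a + sum(learning_window(t_d - t_h) for t_h in hidden_spikes)
    for t_o in output_spikes:
        dw -= a + sum(learning_window(t_o - t_h) for t_h in hidden_spikes)
    return dw / n_h
```

When the output spike train matches the target exactly, the two sums cancel and the weight is left unchanged; an output spike arriving before its target produces a negative change, consistent with the heuristic discussion in section 4.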
Weight modifications for the hidden neurons
In this section we extend the argument above to the weight changes between the input and the hidden layer. The weight modifications for the hidden neurons are calculated in a similar manner, in the negative gradient direction:

$$\Delta w_{hi}(t) = -\frac{\partial E(t)}{\partial w_{hi}}$$
The derivative of the error is expanded similarly as in equation (6) (again in the sense of functional derivatives):

$$\frac{\partial E(t)}{\partial w_{hi}} = \frac{\partial E(t)}{\partial R_h(t)} \, \frac{\partial R_h(t)}{\partial w_{hi}}$$

The first factor of the right-hand side is expanded for each output neuron using the chain rule:

$$\frac{\partial E(t)}{\partial R_h(t)} = \sum_o \frac{\partial E(t)}{\partial R_o(t)} \, \frac{\partial R_o(t)}{\partial R_h(t)}$$

The second factor of the right-hand side of the above equation is calculated from equation (3), as is the derivative of the hidden rate with respect to the weight:

$$\frac{\partial R_o(t)}{\partial R_h(t)} = \frac{1}{n_h} w_{oh}, \qquad \frac{\partial R_h(t)}{\partial w_{hi}} = \frac{1}{n_i} R_i(t)$$
The derivatives of the error with respect to the output spike train have already been calculated for the weights to the output neurons in equation (7). By combining these results:

$$\Delta w_{hi}(t) = \frac{1}{n_i n_h} R_i(t) \sum_o w_{oh} \left[ R_o^d(t) - R_o(t) \right]$$
We define the backpropagated error for layers other than the output layer:

$$\delta_h(t) = \frac{1}{n_h} \sum_o w_{oh} \, \delta_o(t)$$
Just like in standard backpropagation, the $\delta_o(t)$ are the backpropagated errors of the neurons in the preceding layer of the backward pass. By substituting the instantaneous firing rates with the spike trains as estimators, equation (23) becomes:

$$\Delta w_{hi}(t) = \frac{1}{n_i n_h} S_i(t) \sum_o w_{oh} \left[ S_o^d(t) - S_o(t) \right]$$
We now want to repeat the procedure of replacing the product of two spike trains (involving $\delta$-distributions) with an STDP process. We note first that equation (25) no longer depends on any spikes fired or not fired in the hidden layer. While there are neurobiological plasticity processes that can convey information about a transmitted spike from the affected synapses to lateral or downstream synapses (for an overview see Harris, 2008), no direct neurobiological basis is known for an STDP process between a synapse and the outgoing spikes of an upstream neuron. Therefore this substitution is to be seen as a computational analogy, and the weights will be modified according to:

$$\Delta w_{hi}(t) = \frac{1}{n_i n_h} \sum_o w_{oh} \left[ S_o^d(t) - S_o(t) \right] \left( a + \int_{-\infty}^{+\infty} W(s) \, S_i(t-s) \, ds \right)$$
The total weight change is again determined by integrating equation (26) over time. The synaptic weights between the input and hidden neurons are modified according to STDP processes between the input and target spikes and anti-STDP processes between input and output spikes.
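Under the same illustrative assumptions as for the output layer, the hidden-layer update pairs the input spikes with the target and output spikes of every output neuron, weighted by the hidden-to-output weights:

```python
import math

# Hypothetical learning-window parameters, as in the output-layer sketch.
A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS = 1.0, 1.0, 5.0, 5.0

def learning_window(s):
    """Same learning window W(s) as for the output layer."""
    if s >= 0:
        return A_PLUS * math.exp(-s / TAU_PLUS)
    return -A_MINUS * math.exp(s / TAU_MINUS)

def delta_w_hidden(target_spikes, output_spikes, input_spikes,
                   w_out, n_i, n_h, a=0.05):
    """Weight change for one input->hidden connection.

    target_spikes[o] and output_spikes[o] list the spikes of output
    neuron o; w_out[o] is the weight from the hidden neuron to output
    neuron o (its absolute value is used, as discussed below). STDP
    pairs input spikes with target spikes; anti-STDP pairs them with
    the actual output spikes."""
    dw = 0.0
    for t_ds, t_os, w in zip(target_spikes, output_spikes, w_out):
        contrib = 0.0
        for t_d in t_ds:
            contrib += a + sum(learning_window(t_d - t_i) for t_i in input_spikes)
        for t_o in t_os:
            contrib -= a + sum(learning_window(t_o - t_i) for t_i in input_spikes)
        dw += abs(w) * contrib
    return dw / (n_i * n_h)
```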
The normalisation of the weight modifications to the output neurons by the number of presynaptic connections ensures that the changes scale appropriately with the number of weights. Moreover, the learning parameters do not need to change as the network architecture changes (for example, in order to keep the firing rate of postsynaptic neurons constant as the number of presynaptic units changes, the initial weights and weight modifications must also change accordingly). The normalisation of the weight modifications to the hidden neurons by the numbers of presynaptic and postsynaptic connections ensures that the changes of the connections between the input and hidden layers are usually smaller than the changes of the connections between the hidden and output layers, which keeps the learning process stable.
The algorithm can be generalised in this manner for neural networks with multiple hidden layers. The learning rule could also be generalised for recurrent connections (e.g. using unrolling in time as in backpropagation through time (Rojas, 1996)), however in the present paper we only consider feed-forward connections. This is our extension of ReSuMe to hidden layers following from error minimisation and gradient descent.
As the learning rule for the weight modifications depends only on the presynaptic and postsynaptic spike trains and the current strength of the connections between the spiking neurons, the algorithm can be applied to various spiking neuron models, as long as the model can be sufficiently well approximated on an appropriate time scale as in equation (2). Although Ponulak & Kasiński (2010) do not explicitly use any neuron model for the derivation of the ReSuMe algorithm, implicitly a linear neuron model is assumed. The algorithm has successfully been applied to leaky integrate-and-fire neurons, Hodgkin-Huxley, and Izhikevich neuron models (Ponulak & Kasiński, 2010). Since the present learning rule is an extension of ReSuMe to neural networks with multiple layers, this is an indication that this algorithm will function with similar neuron models, as we demonstrate in the following section.
Inhibitory connections are represented by negative weights, which are updated in the same manner as positive weights. However, for the calculation of the backpropagated error of the hidden neurons in equation (31), the absolute value of the output weights will be used. This is a deviation from the gradient descent rule, but using the absolute values guarantees that the weights between the input and hidden neurons are always modified in the same direction as those between the hidden and output neurons:

$$\delta_h(t) = \frac{1}{n_h} \sum_o |w_{oh}| \, \delta_o(t)$$
Preliminary simulations have shown that this results in better convergence of the learning algorithm. There is also neurobiological evidence that LTD and LTP spread to downstream synapses (Tao et al., 2000; Fitzsimonds et al., 1997), i.e. that weight changes with the same direction propagate from upstream to downstream neurons.
If one considers a network architecture where all the neurons in one layer are connected to all neurons in the subsequent layer through multiple sub-connections with different delays $d^k$, where each sub-connection has a different weight $w_{oh}^k$ (Bohte et al., 2002), the learning rule for the weight modifications for the output neurons becomes:

$$\Delta w_{oh}^k(t) = \frac{1}{n_h} \, \delta_o(t) \left( a + \int_{-\infty}^{+\infty} W(s) \, S_h(t - d^k - s) \, ds \right)$$

where $w_{oh}^k$ is the weight between output neuron $o$ and hidden neuron $h$ delayed by $d^k$ ms. The backpropagated error for the output is then:

$$\delta_o(t) = \frac{1}{K} \left[ S_o^d(t) - S_o(t) \right]$$

where $K$ is the number of sub-connections. The learning rule for the weight modifications for any hidden layer is derived similarly as:

$$\Delta w_{hi}^k(t) = \frac{1}{n_i} \, \delta_h(t) \left( a + \int_{-\infty}^{+\infty} W(s) \, S_i(t - d^k - s) \, ds \right)$$

where $\delta_h(t)$ is the backpropagated error calculated over all possible backward paths (from all output neurons through all delayed sub-connections):

$$\delta_h(t) = \frac{1}{n_h} \sum_o \sum_{k=1}^{K} |w_{oh}^k| \, \delta_o(t + d^k)$$
The algorithm can be generalised for neural networks with multiple hidden layers and delays similarly.
3.3 Synaptic scaling
There is extensive evidence suggesting that spike-timing-dependent plasticity is not the only form of plasticity (Watt & Desai, 2010). Another plasticity mechanism, used to stabilise a neuron's activity, is synaptic scaling (Shepard et al., 2009). Synaptic scaling regulates the strength of synapses in order to keep the neuron's firing rate within a particular range. The synaptic weights are scaled multiplicatively, thereby maintaining the relative differences in strength between the inputs (Watt & Desai, 2010).
In our network, in addition to the learning rule described above, the weights are also modified by synaptic scaling in order to keep the firing rate $r$ of each postsynaptic neuron within an optimal range $[r_{\min}, r_{\max}]$. If the weights to a postsynaptic neuron cause it to fire with a rate outside this range, the weights are scaled according to the following formula (Grüning & Sporea, 2011):

$$w \leftarrow (1 + f) \, w$$

where the scaling factor $f > 0$ for $r < r_{\min}$, and $f < 0$ for $r > r_{\max}$.
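A minimal sketch of this scaling step, assuming a single hypothetical scaling magnitude f applied whenever the rate leaves the optimal range:

```python
def scale_weights(weights, rate, r_min, r_max, f=0.01):
    """Multiplicative synaptic scaling (sketch): if the postsynaptic
    firing rate leaves the optimal range [r_min, r_max], all incoming
    weights are scaled by a common factor, preserving their relative
    strengths. The magnitude f is a hypothetical parameter."""
    if rate < r_min:
        return [w * (1.0 + f) for w in weights]
    if rate > r_max:
        return [w * (1.0 - f) for w in weights]
    return list(weights)
```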
Synaptic scaling also addresses the problem of optimal weight initialisation. The initial values of the weights have a significant influence on the learning process, as values that are too large or too small may cause learning to fail (Bohte et al., 2002). Preliminary experiments show that a feed-forward network can still reliably learn simple spike trains without synaptic scaling, as long as the weights are initialised within an optimal range. However, as the target patterns contain more spikes, finding optimal initial values for the weights becomes difficult. Moreover, as the firing rate of the target neurons increases, it becomes harder to keep the firing rate of the output neurons within the target range without using minimal learning steps. The introduction of synaptic scaling thus solves the problem of weight initialisation and speeds up the learning process.
4 Heuristic discussion of the learning rule
In order to analyse the direction in which the weights change during the learning process according to equations (16) and (27), we consider a simple three-layer network. The output layer consists of a single neuron. The neurons are connected through a single sub-connection with no delay. For clarity, in this section spike trains comprise only a single spike. Let $t_o^d$ and $t_o$ denote the desired and actual spike times of the output neuron $o$, and $t_h$ and $t_i$ the spike times of the hidden neuron $h$ and the input neuron $i$ respectively. Also, for simplicity, synaptic scaling is not considered here.
We discuss only the pre-before-post case ($t_h < t_o$) in the following, and note that the post-before-pre case ($t_h > t_o$) can be discussed along the same lines with $W_+$ above replaced by $W_-$.
We discuss the following cases:
The output neuron fires a spike at time $t_o$ before the target firing time ($t_o < t_o^d$).
Weight modifications for the synapses between the output and hidden neurons. The weights are modified according to $\Delta w_{oh} \propto W_+(t_o^d - t_h) - W_+(t_o - t_h)$. Since $t_o < t_o^d$, we have $W_+(t_o - t_h) > W_+(t_o^d - t_h)$ in equation (33). This results in $\Delta w_{oh} < 0$, and thus in a decrease of this weight. If the connection is an excitatory one, it becomes less excitatory, increasing the likelihood that the output neuron fires later during the next iteration, hence minimising the difference between the actual output and the target firing time. If the connection is inhibitory, it becomes more strongly inhibitory, resulting in a later firing of the output neuron as well (see also Ponulak (2006)).
Weight modifications for the synapses between the hidden and input neurons. The weights to the hidden neurons are modified according to $\Delta w_{hi} \propto |w_{oh}| \left[ W_+(t_o^d - t_i) - W_+(t_o - t_i) \right]$. Two cases arise:

$w_{oh} > 0$: By reasoning analogous to the case above, $\Delta w_{hi} < 0$, and hence the connection becomes less excitatory or more inhibitory, again making the hidden neuron fire a bit later, and hence making it more likely that the output neuron also fires later, as the connection from the hidden to the output layer is excitatory.

$w_{oh} < 0$: For the hidden neuron the effect stays the same, hence it will fire later. As it is now more likely to fire later, its inhibitory effect comes to bear on the output neuron a bit later as well.
Cases where there is only an actual spike at $t_o$ and no desired spike, or where there is only a desired spike at $t_o^d$, can be dealt with under the above cases by setting $t_o^d \to \infty$ or $t_o \to \infty$ respectively. In addition there will be a contribution from the non-Hebbian term $a$ in equations (17) and (27), and this has the same sign as the contribution from (33) and (34).
5 Simulations
In this section several experiments are presented to illustrate the learning capabilities of the algorithm. The algorithm is applied to classic benchmarks, the XOR problem and the Iris data set, as well as to classification tasks with randomly generated patterns. The XOR problem is presented using two different encoding methods to demonstrate the flexibility of our learning algorithm. The learning rule is also applied to classification problems of spike timing patterns ranging from 100 ms to 500 ms in order to simulate sensory and motor processing in biological systems.
The network used for the following simulations is a feed-forward architecture with three layers. The neurons are described by the Spike Response Model (Gerstner, 2001) (see the appendix for a complete description).
For all simulations, an iteration consists of presenting all spike timing pattern pairs in random order. The membrane potential of all neurons in the hidden and output layers is reset to the resting potential (set to zero) when a new input pattern is presented. After each presentation of an input pattern, the weight modifications are computed for all layers and applied only once the backpropagated error has been computed for all units in the network. The summed network error is calculated for all patterns and tested against a required minimum value, depending on the experiment (see the appendix for details on the network error). This minimum value is chosen to guarantee that the network has learnt to correctly classify all the patterns with an acceptable precision.
The results are averaged over a large number of trials (50 trials unless stated otherwise), with the network being initialised with a new set of random weights every trial. On each testing trial the learning algorithm is applied for a maximum of 2000 iterations or until the network error has reached the minimum value.
The learning is considered converged if the network error has reached a minimum value, depending on the experiment. Additional constraints for the convergence of the learning algorithm are considered in Sections 5.3 to 5.5 in order to ensure the network has learnt to correctly classify all the patterns. For all simulations, the average number of iterations needed for convergence is calculated over the successful trials. The accuracy rate is defined as the percentage of correctly classified patterns, calculated over the successful trials.
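The training procedure described above can be sketched as the following generic loop, where the update and error routines are placeholders for the computations defined in section 3 (all names are illustrative):

```python
import random

def run_trial(patterns, weights, compute_updates, apply_updates,
              network_error, e_min, max_iterations=2000, seed=0):
    """One trial: each iteration presents all pattern pairs in random
    order; the weight changes for a pattern are applied only after the
    backpropagated error has been computed for all units. Returns the
    final weights, the number of iterations used, and a convergence
    flag."""
    rng = random.Random(seed)
    for iteration in range(max_iterations):
        order = list(patterns)
        rng.shuffle(order)
        for input_pattern, target_pattern in order:
            updates = compute_updates(weights, input_pattern, target_pattern)
            weights = apply_updates(weights, updates)
        if network_error(weights, patterns) <= e_min:
            return weights, iteration + 1, True   # converged
    return weights, max_iterations, False         # not converged
```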
Unless stated otherwise, the network parameters used in these simulations are: the threshold , the time constant of the spike response function ms, the time constant of after-potential kernel ms. The scaling factor is set to . The learning parameters are initialised as follows: , , ms, .
The weights were initialised with random values uniformly distributed between -0.2 and 0.8. The weights are then normalised by dividing them by the total number of sub-connections.
5.1 The XOR benchmark
In order to demonstrate and analyse the new learning rule, the algorithm is applied to the XOR problem. While this benchmark does not require generalisation, the XOR logic gate is a non-linear problem and a classical benchmark for testing a learning algorithm's ability to learn non-trivial input-output transformations (Rojas, 1996).
The input and output patterns are encoded using spike-time patterns as in Bohte et al. (2002). The signals are associated with single spikes as follows: a binary symbol "0" is associated with a late firing (a spike at 6 ms for the input pattern) and a "1" with an early firing (a spike at 0 ms for the input pattern). We also use a third input neuron that designates the reference start time, as this encoding needs an absolute reference start time to determine the latency of the firing (Sporea & Grüning, 2011). Without such a reference, two of the input patterns (0-0 and 6-6) become identical up to a time shift: the network is unable to distinguish them and would always respond with a correspondingly delayed output. Table 1 shows the input and target spike timing patterns presented to the network. The values represent the times of the spikes for each input and target neuron in ms of simulated time.
|Input [ms]||Output [ms]|
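The input side of this encoding can be sketched as follows; the target spike times are omitted here, as they are given in Table 1:

```python
# Spike-time encoding of the XOR inputs (times in ms): "1" is an early
# spike at 0 ms, "0" a late spike at 6 ms, and a third input neuron
# always fires at 0 ms as the reference start time.
EARLY, LATE = 0.0, 6.0

def encode_xor_inputs():
    patterns = []
    for a in (0, 1):
        for b in (0, 1):
            spikes = [EARLY if a else LATE,
                      EARLY if b else LATE,
                      0.0]            # reference neuron
            patterns.append((spikes, a ^ b))
    return patterns
```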
The learning algorithm was applied to a feed-forward network as described above. The input layer is composed of three neurons, the hidden layer contains five spiking neurons, and the output layer contains only one neuron. Multiple sub-connections with different delays were used for each connection in the spiking neural network. Preliminary experiments showed that 12 sub-connections with delays from 0 ms to 11 ms are sufficient to learn the XOR problem. The results are averaged over 100 trials. The network error is summed over all pattern pairs, with a minimum value for convergence of 0.2. The minimum value is chosen to ensure that the network has learnt to classify all patterns correctly, by matching the exact number of spikes of the target spike train as well as the timing of the spikes with 1 ms precision. Each spiking neuron in the network was simulated for a time window of 30 ms, with a time step of 0.1 ms. In the following we systematically vary the parameters of the learning algorithm and examine their effects.
The learning parameters
The parameters $A_+$ and $A_-$ play the role of a learning rate. Just as in the classic backpropagation algorithm for rate neurons, higher values of the learning parameters lower the number of iterations needed for convergence. In order to determine the best ratio between the two learning parameters, various values are chosen for $A_-$ while keeping $A_+$ fixed. The results are summarised in Table 2b.

The learning algorithm is able to converge for values of $A_-$ lower than $A_+$. As $A_-$ becomes equal to or higher than $A_+$, the convergence rate slowly decreases and the number of iterations needed for convergence rises significantly. The lowest average number of iterations with a high convergence rate is 137, averaged over 98% successful trials.
Number of sub-connections
The algorithm also converges when the spiking neural network has a smaller number of sub-connections. However, a lower number of delayed sub-connections results in a lower convergence rate, without necessarily a lower average number of learning iterations on the successful trials. Although more sub-connections can produce a more stable learning process, the larger number of weights that need to be coordinated makes the learning process slower. Table 3 shows the summarised results.
|Sub-connections||Successful trials [%]||Average number of iterations|
Analysis of learning process
In order to analyse the learning process, the network error and the weight vector during learning are shown in Figure 1 (12 sub-connections). Figure 1a shows the evolution of the summed network error during learning. Figure 1b shows the Euclidean distance between the weight vector solution found on a trial and the weight vectors at each learning iteration that led to this solution. The weight vectors are tested against the solution found because there can be multiple weight vector solutions. While the error graph is irregular, the weight vector graph shows that the weight vector moves steadily towards the solution. The irregularity of the network error during learning can be explained by the fact that small changes to the weights can produce an additional or missing output spike, which causes significant changes in the network error. The highest error value corresponds to the network not firing any spike for any of the four input patterns. The error graph also shows the learning rule's ability to modify the weights in order to produce the correct number of output spikes.
5.2 The Iris benchmark
Another classic benchmark for pattern recognition is Fisher's Iris flower data set (Fisher, 1936). The data set contains three classes of Iris flowers. While one of the classes is linearly separable from the other two, those two classes are not linearly separable from each other.
The three species are completely described by four measurements of the plants: the lengths and widths of the petals and sepals. Each measurement is associated with an input neuron, and the input pattern consists of the timing of a single spike. The measurements of the Iris flowers range from 0 to 8 and are fed into the spiking neural network as spike timing patterns to the input neurons. The output of the network is represented by the spike time of the output neuron, as shown in Table 4. The hidden layer contains ten spiking neurons, and each connection has between 8 and 12 delayed sub-connections, depending on the experiment.
|Species||Output spike-time [ms]|
During each trial, the input patterns are randomly divided into a training set (75% of samples) and a testing set (25% of samples) for cross-validation. During each iteration, the training set is used to calculate the weight modifications and to test whether the network has learnt the patterns. The learning is considered successful if the network error has reached a minimum average value of 0.2 for each pattern pair and 95% of the patterns in the training set are correctly classified. As in the previous experiment, this minimum value is chosen to ensure that the network has learnt to classify all patterns correctly, by matching the exact number of spikes of the target spike train as well as the timing of the spikes with 1 ms precision. Table 5 shows the summarised results on the Iris data set for different network architectures with different numbers of delayed sub-connections.
[Table 5: sub-connections, successful trials, average number of iterations, accuracy on the training set [%], accuracy on the testing set [%]]
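The per-trial cross-validation split described above might look like this; the helper name and the use of Python's random module are assumptions:

```python
import random

def split_dataset(patterns, train_frac=0.75, seed=0):
    """Randomly divide the input patterns into a training set (75% of
    samples) and a testing set (25% of samples) for one trial."""
    shuffled = list(patterns)
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

train, test = split_dataset(range(150))  # the Iris set has 150 samples
```

Re-seeding (or omitting the seed) on each trial gives a fresh random partition, as the experiment requires.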
Multi-layer ReSuMe permits the spiking neural network to learn the Iris data set using a straightforward encoding of the patterns, and results in much faster learning than SpikeProp: the average number of iterations is always lower than 200, as opposed to the population coding based on arrays of receptive fields, which requires 1000 iterations for learning (Bohte et al., 2002).
5.3 Non-linear spike train pattern classification
In this experiment the learning algorithm is tested on non-linear transformations of sequences of spikes. Again, the XOR problem is applied to a network of spiking neurons, but the logic patterns are encoded by spike trains over a group of neurons rather than single spikes (see also Grüning & Sporea (2011)).
While the encoding for the XOR logic gate problem introduced by Bohte et al. (2002) requires each neuron to fire only a single spike, the network of spiking neurons needs a large number of sub-connections with different delays to enable the hidden and output neurons to fire at the desired times. As the problem becomes more complex, such an encoding might need even more sub-connections, all of which have to be trained. The large number of weights to be trained slows down the learning process because of the large number of incoming spikes that need to be coordinated to produce the required output. This can also be seen in the previous simulations on the XOR problem, where the network with 14 terminals, although it learnt the patterns, needed almost twice as many iterations to converge as the network with 12 terminals. Moreover, it has been shown that encoding logical true and false with early and late spike times respectively also requires an additional input neuron to designate the reference start time. Without this additional input neuron, even linear problems become impossible to solve (for a complete demonstration, see Sporea & Grüning (2011)).
A more natural encoding would consist of the temporal firing patterns of groups of neurons (Wehr & Laurent, 1996; Neuenschwander & Singer, 1996; deCharms & Merzenich, 1996). In order to test such an encoding and the learning algorithm's ability to learn non-linear patterns, the XOR problem is applied once again to a spiking neural network. In this experiment the two logical values are encoded with spike trains over two groups of input neurons. This encoding necessitates neither multiple delays nor the additional input neuron. In all the following experiments, a single connection with no delay is used.
Each input logical value is associated with the spike trains of a group of 20 spiking neurons. In order to ensure some dissimilarity between the patterns, for each input neuron a spike train is generated by a pseudo-Poisson process with a constant firing rate of within a 30 ms time window. The minimum inter-spike interval is set to 3 ms. This spike train is then split into two new spike trains by randomly distributing all the spikes between them (Grüning & Sporea, 2011). The newly created spike trains represent the patterns for the logical symbols "0" and "1". The input spike trains are required to contain at least one spike.
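The generate-then-split procedure can be sketched as follows; the firing-rate value is left as a parameter because it is not given numerically in this excerpt, and all helper names are assumptions:

```python
import random

def poisson_spike_train(rate_hz, window_ms, min_isi_ms=3.0, seed=1):
    """Pseudo-Poisson spike train with a constant firing rate within a
    time window, enforcing a minimum inter-spike interval."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_hz / 1000.0)  # rate converted to spikes/ms
        if spikes and t - spikes[-1] < min_isi_ms:
            t = spikes[-1] + min_isi_ms  # push the spike out to the minimum ISI
        if t >= window_ms:
            return spikes
        spikes.append(t)

def split_train(spikes, seed=2):
    """Split one train into two by randomly distributing all its spikes:
    the patterns for the logical symbols '0' and '1'."""
    rng = random.Random(seed)
    zeros, ones = [], []
    for s in spikes:
        (zeros if rng.random() < 0.5 else ones).append(s)
    return zeros, ones

base = poisson_spike_train(rate_hz=100.0, window_ms=30.0)
zeros, ones = split_train(base)
```

Splitting one base train guarantees that the "0" and "1" patterns are dissimilar (they share no spike) while being drawn from the same underlying statistics.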
The output patterns are created similarly and are produced by a single output neuron. The spike train to be split is generated by a pseudo-Poisson process with a constant firing rate of within a 30 ms period of time. The resulting output patterns are chosen so that the spike trains contain exactly three spikes.
Apart from the minimal network error as before, an additional stopping criterion for the learning process is introduced: the network must correctly classify all four patterns. An input pattern is considered correctly classified if the output spike train is closest to the corresponding target pattern in terms of the van Rossum distance. The network error consists of the sum of van Rossum distances between the target and actual output over the four patterns as before; a minimum value of 3 ensures that the output spikes are reproduced with an acceptable precision.
In addition to the previous experiments, an absolute refractory period is set for all neurons to ms. The network is simulated over a period of 50 ms, with a time step of 0.5 ms.
In order to determine the optimal size of the hidden layer for a higher convergence rate, different network topologies have been considered. Table 6 shows the convergence rate for each network topology, with a new set of spike-timing patterns being generated every trial.
[Table 6: hidden neurons, successful trials [%], average number of iterations]
The learning rule converges at a higher rate as the number of neurons in the hidden layer increases; a larger hidden layer means that the patterns are distributed over a wider spiking activity and are easier for the output neuron to classify. A hidden layer with fewer neurons than the input layer does not result in a high convergence rate because the input patterns are not sufficiently distributed in the hidden activity. Also, more than 100 units in the hidden layer does not result in higher convergence rates, but as the number of weights increases the learning process becomes slower. Previous simulations (Grüning & Sporea, 2011) show that a neural network without a hidden layer cannot learn non-linear logical operations.
5.4 Learning sequences of temporal patterns
In this experiment, we consider the learning algorithm’s ability to train a spiking neural network with multiple input-target pattern pairs. The network is trained with random non-noisy spike train patterns and tested against noisy versions of the temporal patterns.
The input patterns are generated by a pseudo-Poisson process with a constant firing rate of within a 100 ms period of time, where the spike trains are chosen so that they contain at least one spike. In order to ensure that a solution exists, the target patterns are generated as the output of a spiking neural network initialised with a random set of weights. The target spike trains are chosen so that they contain at least two and no more than four spikes. If the output patterns were random spike trains, a solution might not be representable in the weight space of the network (Legenstein et al., 2005).
The learning is considered to have converged if the network error reaches an average value of 0.5 for each pattern pair. Apart from the minimum error, the network must also correctly classify at least 90% of the pattern pairs, where the patterns are classified according to the van Rossum distance (see the appendix for details).
The size of the hidden layer
In order to determine how the structure of the neural network influences the number of patterns that can be learnt, different architectures have been tested. In these simulations, 100 input neurons are used in order to have a distributed firing activity over the simulated time period. The output layer contains a single neuron, as in the previous simulations. The size of the hidden layer is varied from 200 to 300 neurons to determine the optimal size for storing 10 input-output pattern pairs. The results are summarised in Table 7a. The network performs better as the number of hidden neurons increases. However, a hidden layer with more than 260 neurons does not result in a higher convergence rate.
Number of patterns
The network architecture that performed best with the lowest number of neurons (260 neurons in the hidden layer) was trained with different numbers of patterns. The results are summarised in Table 7b. The network is able to store more patterns, but the convergence rate drops as the number of patterns increases. Because the target patterns are the output spike trains of a randomly initialised spiking neural network, as the number of pattern pairs increases the target spike trains necessarily become more similar. Hence, the network's responses to the input patterns become more similar and are more easily misclassified. Since the stopping criterion requires the network to correctly classify the input patterns, the convergence rate drops as the number of pattern pairs increases.
Since the target patterns are generated as the output spike trains of a network with a set of random weights, this weight vector can be considered a solution of the learning process. However, the Euclidean distance between this weight-vector solution and the weight vectors during learning increases as the learning process progresses. The learning algorithm does not find the same weight vector as the generating network, so multiple weight-vector solutions to the same problem exist (permutations of the hidden neurons being the simplest example).
After the learning has converged, the networks are also tested against noisy patterns. The noisy patterns are generated by moving each spike by an interval drawn from a Gaussian distribution with mean 0 and standard deviation between 1 and 10 ms. After the network has learnt all patterns, it is tested with a random set of 500 noisy patterns. Figure 2a shows the accuracy rate (the percentage of input patterns that are correctly classified) for the network with 260 spiking neurons in the hidden layer trained with 10 pattern pairs. The accuracy rates are similar for all the networks described above. The network is able to recognise more than 20% of the patterns (above the random performance level of 10%) even when these are distorted with 10 ms jitter.
5.5 Learning to Generalise
In this experiment, the learning algorithm is tested in the presence of noise. In the previous experiments, where the patterns were randomly generated, learning occurred in noise-free conditions. A spiking neural network is trained to recognise temporal patterns on the timescale of hundreds of milliseconds. Jitter of the spike times is introduced into the temporal patterns during learning to test the network's ability to classify time-varying patterns. Such experiments have been conducted with liquid state machines, where readout neurons have been trained with ReSuMe to respond with associated spike trains (Ponulak & Kasiński, 2010). In this paper, we show that such classification tasks can be achieved with feed-forward networks, without the need for larger networks such as reservoirs.
Three random patterns are fed into the network through 100 input spiking neurons. The hidden layer contains 210 neurons and the patterns are classified by a single output neuron. The input patterns are generated by a pseudo-Poisson process with a constant firing rate of within a 500 ms time period, where the spike trains are chosen so that they contain between 15 and 20 spikes. For the spike train generation, a minimum inter-spike interval of 5 ms is imposed. As in the previous experiment, in order to ensure that a solution exists, the target patterns are generated as the output of a spiking neural network initialised with a random set of weights. The target spike trains are chosen so that they contain at least five and no more than seven spikes. The input and target patterns are distributed over such long periods of time in order to simulate complex forms of temporal processing, such as speech recognition, that span hundreds of milliseconds (Mauk & Buonomano, 2004).
During learning, noisy versions of the input patterns are generated for each iteration by moving each spike by a time interval drawn from a Gaussian distribution with mean 0 and standard deviation varying in the range of 1 to 4 ms. The spikes in the target patterns are also shifted by a time interval drawn from a Gaussian distribution with mean 0 and standard deviation 1 ms, independent of the noise level in the input patterns.
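Generating a noisy version of a spike-timing pattern in this way is straightforward; the helper name is an assumption:

```python
import random

def jitter_spikes(spikes, sigma_ms, rng=None):
    """Shift each spike by a time interval drawn from a Gaussian
    distribution with mean 0 and standard deviation sigma_ms, then
    re-sort the train so it remains in temporal order."""
    rng = rng or random.Random()
    return sorted(t + rng.gauss(0.0, sigma_ms) for t in spikes)

noisy = jitter_spikes([10.0, 55.0, 120.0], sigma_ms=4.0)
```

The same helper, with a different sigma, serves both the input jitter (1 to 4 ms) and the target jitter (1 ms) described above.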
A minimum average error of 0.6 for each pattern pair is required for the learning to be considered successful. During each iteration, the network is tested against a new set of 30 random noisy patterns; for the learning to be considered converged, the network must also correctly classify at least 80% of these noisy patterns. The spike times of the testing patterns are shifted with the same distribution as the training patterns.
Table 8 shows the convergence rate for each experiment, where the average number of iterations is calculated over the successful trials. The table also shows the number of successful trials when the network is trained on non-noisy patterns. When the network is trained with a low amount of noise in the input patterns, the learning algorithm performs slightly better than when trained on patterns without noise. The network is able to learn even when the spike train patterns are distorted with 3 or 4 ms jitter.
[Table 8: input jitter during learning, successful trials [%], average number of iterations]
Figure 2b shows the accuracy rates of a trained network against a random set of 150 different noisy patterns, generated from the three original input patterns. The network is trained on input patterns where the spikes are moved by a Gaussian distribution with mean 0 and standard deviation 4 ms. The graph shows the accuracy rates on patterns with the spikes moved by a Gaussian distribution with mean 0 and standard deviation between 1 and 10 ms. The graph also shows the network response on the non-noisy patterns. The accuracy rates are similar for all input-pattern jitters. The network is able to recognise more than 50% of the input patterns (again above the random performance level of 33%) even when these are distorted with up to 10 ms jitter.
Multilayer ReSuMe permits training spiking neural networks with hidden layers, which brings additional computational power. On the one hand, the ReSuMe learning rule applied to a single layer (Ponulak & Kasiński, 2010) with 12 to 16 delays for each connection is not able to learn the XOR problem with the early and late timing patterns (see Section 5.1). Although the algorithm is able to change the weights in the correct direction, the network never responds with the correct output for all four input patterns; the additional hidden layer permits the network to learn the XOR problem (see Section 5.1). On the other hand, a spiking neural network with the same number of units in each layer, but with 16 sub-connections, trained with SpikeProp on the XOR patterns needs 250 iterations to converge (Bohte et al., 2002). Our simulations (not presented in this paper) with a setup similar to the experiments in Section 5.1 confirm this result. Furthermore, SpikeProp requires 16 delayed sub-connections instead of just 12, and hence more weight changes need to be computed. Also, SpikeProp only matches the time of the first target spike, ignoring any subsequent spikes; unlike SpikeProp, our learning algorithm also matches the exact number of output spikes.
Moreover, studies on SpikeProp show that the algorithm is unstable, which affects the performance of the learning process (Takase et al., 2009; Fujita et al., 2008). Our learning algorithm is based on weight modifications that depend only on the timing of the pattern pairs and not on the specific neuron dynamics, and is therefore more stable than SpikeProp (see Figure 1). This can be seen in the direct comparison on the XOR benchmark: although our algorithm also matches the exact number of spikes as well as the precise timing of the target pattern, the network learns all the patterns faster.
The learning algorithm presented here permits the use of different encoding methods with temporal patterns. In Section 5.2 the Iris data set is encoded using four input neurons, instead of the 50 neurons required by a population encoding (Bohte et al., 2002). The simpler encoding of the Iris flower dimensions allows the network to learn the patterns in five times fewer iterations than the population encoding used with SpikeProp (Bohte et al., 2002).
When moving from rate-coded neurons to spiking neurons, an important question about the encoding of patterns arises. One encoding was proposed by Bohte et al. (2002), where logical 0 and 1 are associated with the timing of early and late spikes respectively. As the input neurons' activity is very sparse, the spikes must be multiplied over the simulated time period, since ReSuMe is known to perform better with more inputs (Ponulak & Kasiński, 2010). This is achieved by having multiple sub-connections for each input neuron, each replicating the action potential with a different delay. These additional sub-connections, each with a different synaptic strength, require additional training. This encoding also requires an additional input neuron to set the reference start time (Sporea & Grüning, 2011). Moreover, when looking at the weights after the learning process, only some of the delayed sub-connections make a major contribution to the postsynaptic neuron, while others have relatively much smaller absolute values.
The alternative to this encoding is to associate the patterns with spike trains. In order to guarantee that a set of weights exists for any random target transformation without replicating the input signals, a relatively large number of input neurons must be considered. As the input pattern is distributed over several spike trains, some of the information might be redundant and not make a major contribution to the output. Moreover, such an encoding does not require an additional input neuron to designate the reference start time, as the patterns are encoded in the relative timing of the spikes. The experiment in Section 5.3 shows that this encoding can be successfully used for non-linear pattern transformations.
In the classification task in Section 5.4, where the network is trained on 10 spike-timing pattern pairs, the learning algorithm converges at a higher rate as the hidden layer increases in size. SpikeProp can also be applied to multilayer feed-forward networks, but that algorithm is limited to neurons firing a single spike (Bohte et al., 2002).
The simulations performed on classification tasks where noise was added to the spike-timing patterns show that the learning is robust to variability in spike timing. A spiking neural network trained on non-noisy patterns can recognise more than 50% of noisy patterns if the timing of the spikes is shifted by a Gaussian distribution with standard deviation up to 4 ms (see Figure 2a); when the network is trained on noisy patterns, it can recognise more than 50% of noisy patterns where the timing of the spikes is moved by a Gaussian distribution with standard deviation up to 10 ms (see Figure 2b).
Another advantage of the learning rule is the introduction of synaptic scaling. Firstly, it solves the problem of finding the optimal range for weight initialisation, a problem acknowledged as critical for the convergence of the learning (Bohte et al., 2002). Secondly, synaptic scaling maintains the firing activity of neurons in the hidden and output layers within an optimal range during the learning process. Although the firing rate of the output and hidden neurons is also adjusted by the non-correlative term in equations (16) and (27), this happens only when the output firing rate does not exactly match the target firing rate. This can cause hidden neurons to become quiescent (not firing any spike) during the learning process and to stop contributing to the activity of the output neurons. Synaptic scaling eliminates this kind of problem by enforcing a minimum firing rate of one spike.
This paper introduces a new algorithm for feed-forward spiking neural networks. The first supervised learning algorithm for feed-forward spiking neural networks, SpikeProp, only considers the first spike of each neuron, ignoring all subsequent spikes (Bohte et al., 2002). An extension of SpikeProp allows multiple spikes in the input and hidden layers, but not in the output layer (Booij & tat Nguyen, 2005). Our learning rule is, to the best of our knowledge, the first fully supervised algorithm that considers multiple spikes in all layers of the network. Although ReSuMe allows multiple spikes, that algorithm can only be applied to single-layer networks or to train readout neurons in liquid state machines (Ponulak & Kasiński, 2010). The computational power added by the hidden layer permits the networks to learn non-linear problems and complex classification tasks without using a large number of spiking neurons as liquid state machines do, and without needing a large number of input neurons in a two-layer network. Because the learning rule presented here extends the ReSuMe algorithm to multiple layers, it can in principle be applied to any neuron model, as the weight modification rules only depend on the input, output and target spike trains and not on the specific dynamics of the neuron model.
Appendix: Details of simulations
The computing units of the feed-forward network used in all simulations are described by the Spike Response Model (SRM) (Gerstner, 2001). The SRM treats the spiking neuron as a homogeneous unit that fires an action potential, or spike, when the total excitation reaches a certain threshold $\vartheta$. The neuron is characterised by a single variable, the membrane potential $u(t)$, at time $t$.
The emission of an action potential can be described by a threshold process as follows. A spike is triggered if the membrane potential of neuron $j$ reaches the threshold $\vartheta$ at time $t_j^{(f)}$:

$$u_j\bigl(t_j^{(f)}\bigr) = \vartheta.$$

In the case of a single neuron $j$ receiving input from a set $\Gamma_j$ of presynaptic neurons $i$, the state of the neuron is described as follows:

$$u_j(t) = \eta\bigl(t - \hat{t}_j\bigr) + \sum_{i \in \Gamma_j} w_{ji}\, y_i(t),$$

where $y_i$ is the spike response function of the presynaptic neuron $i$, $w_{ji}$ is the weight between neurons $i$ and $j$, and $\hat{t}_j$ is the last firing time of neuron $j$. The kernel $\eta$ includes the form of the action potential as well as the after-potential:

$$\eta(s) = -\vartheta \exp\left(-\frac{s}{\tau_m}\right) \quad \text{for } s > 0,$$

where $\tau_m$ is the membrane time constant, with $\eta(s) = 0$ for $s \le 0$.
The unweighted contribution of a single synapse to the membrane potential is given by:

$$y_i(t) = \sum_f \varepsilon\bigl(t - t_i^{(f)}\bigr),$$

where $\varepsilon$ is the spike response function, with $\varepsilon(s) = 0$ for $s \le 0$. The times $t_i^{(f)}$ represent the firing times of neuron $i$. In our case the spike response function describes a standard post-synaptic potential:

$$\varepsilon(s) = \frac{s}{\tau} \exp\left(1 - \frac{s}{\tau}\right) \quad \text{for } s > 0,$$

where $\tau$ models the membrane potential time constant and determines the rise and decay of the function.
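The SRM dynamics described above can be illustrated with a minimal Python sketch; the threshold and time-constant values are assumptions for illustration only, not the values used in the paper's simulations:

```python
import math

THETA, TAU_M, TAU = 1.0, 4.0, 4.0  # threshold and time constants (assumed values)

def eta(s):
    """Refractory kernel: a negative after-potential that decays back to
    rest with the membrane time constant; zero for s <= 0."""
    return -THETA * math.exp(-s / TAU_M) if s > 0 else 0.0

def epsilon(s):
    """Standard post-synaptic potential: zero for s <= 0, rising then
    decaying, with its peak value 1 at s = TAU."""
    return (s / TAU) * math.exp(1.0 - s / TAU) if s > 0 else 0.0

def membrane_potential(t, last_spike, firing_times, weights):
    """u_j(t): refractory term plus the weighted sum of PSPs over all
    presynaptic firing times."""
    u = eta(t - last_spike)
    for w, times in zip(weights, firing_times):
        u += w * sum(epsilon(t - tf) for tf in times)
    return u
```

A spike would be emitted whenever `membrane_potential(...)` crosses `THETA` from below, after which `last_spike` is updated and the refractory kernel pulls the potential back down.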
The network error for one pattern is defined in terms of the van Rossum distance between each output spike train and the corresponding target spike train (van Rossum, 2001). The error between the target spike train and the actual spike train is defined as the Euclidean distance of the two filtered spike trains (van Rossum, 2001). The filtered spike train is determined by an exponential function associated with the spike train:

$$f(t) = \sum_i H(t - t_i)\, e^{-(t - t_i)/\tau},$$

where $t_i$ are the times of the spikes, and $H$ is the Heaviside step function. $\tau$ is the time constant of the exponential function, chosen to be appropriate to the inter-spike interval of the output neurons (van Rossum, 2001). In the following simulations the output neurons are required to fire approximately one spike in 10 ms, thus $\tau = 10$ ms. The distance between two spike trains is the squared Euclidean distance between these two functions:

$$D^2(f, g) = \frac{1}{\tau} \int_0^{\infty} \bigl[f(t) - g(t)\bigr]^2 \, dt,$$
where the distance is calculated over a time domain that covers all the spikes in the system. The van Rossum distance is also used to determine the output class during learning and testing: an output pattern is assigned to the target pattern to which it is closest in terms of the van Rossum distance.
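A minimal sketch of this filtered distance and the nearest-target classification; the discrete time grid and the helper names are assumptions of this sketch, not part of the original definition:

```python
import math

def filtered(spikes, t, tau=10.0):
    """Filtered spike train f(t): each spike contributes an exponential
    tail exp(-(t - t_i)/tau) for t >= t_i, and nothing before it fires."""
    return sum(math.exp(-(t - ti) / tau) for ti in spikes if t >= ti)

def van_rossum(train_a, train_b, t_max=50.0, dt=0.1, tau=10.0):
    """Squared Euclidean distance between the two filtered spike trains,
    approximated on a discrete time grid covering all spikes."""
    total = sum((filtered(train_a, k * dt, tau) - filtered(train_b, k * dt, tau)) ** 2
                for k in range(int(t_max / dt)))
    return total * dt / tau

def classify(output, targets):
    """Index of the target spike train closest to the actual output in
    terms of the van Rossum distance."""
    return min(range(len(targets)), key=lambda i: van_rossum(output, targets[i]))
```

Identical trains give distance zero, and the distance grows smoothly as spikes are shifted or added, which is what makes it usable both as a network error and as a classification criterion.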
- Bohte et al. (2002) Bohte, S., Kok, J., & Poutré, H.L. (2002). Error backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48, 17 – 37.
- Booij & tat Nguyen (2005) Booij, O. & tat Nguyen, H. (2005). A gradient descent rule for spiking neurons emitting multiple spikes. Information Processing Letters, 95(6), 552 – 558.
- deCharms & Merzenich (1996) deCharms, R.C. & Merzenich, M.M. (1996). Primary cortical representation of sounds by the coordination of action-potential timing. Nature, 381, 610 - 613.
- Elias & Northmore (2002) Elias, J.G. & Northmore, D.P.M. (2002). Building silicon nervous systems with dendritic tree neuromorphs. In Maass, W., Bishop, C.M. (Eds.), Pulsed Neural Networks, MIT Press, Cambridge.
- Fisher (1936) Fisher, R.A. (1936). The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7(2), 179 - 188.
- Fitzsimonds et al. (1997) Fitzsimonds, R.M., Song, H. & Poo, M. (1997). Propagation of activity-dependent synaptic depression in simple neural networks. Nature, 388, 439 – 448.
- Fujita et al. (2008) Fujita, M., Takase, H., Kita, H., & Hayashi, T. (2008). Shape of error surfaces in SpikeProp. Proceedings of IEEE International Joint Conference Neural Networks, IJCNN08, 840 – 844.
- Gerstner (2001) Gerstner, W. (2001). A Framework for Spiking Neuron Models: The Spike Response Model. In: Moss, F., Gielen, S. (Eds.), The Handbook of Biological Physics, Vol.4 (12), 469 – 516.
- Gerstner & Kistler (2002) Gerstner, W. & Kistler, W.M. (2002). Spiking Neuron Models. Single Neurons, Populations, Plasticity, Cambridge University Press, Cambridge.
- Glackin et al. (2011) Glackin, C., Maguire, L., McDaid, L. & Sayers, H. (2011). Receptive field optimisation and supervision of a fuzzy spiking neural network. Neural Networks, 24, 247 – 256.
- Grüning & Sporea (2011) Grüning, A. & Sporea, I. (2011). Supervised Learning of Logical Operations in Layered Spiking Neural Networks with Spike Train Encoding. Submitted for publication. Preprint available online at: http://arxiv.org/abs/1112.0213.
- Gütig et al. (2003) Gütig, R., Aharonov, R., Rotter, S., & Sompolinsky, H. (2003) Learning Input Correlations through Nonlinear Temporally Asymmetric Hebbian Plasticity. The Journal of Neuroscience, 23(9), 3697 – 3714.
- Gütig & Sompolinsky (2006) Gütig, R. & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3), 420 – 428.
- Harris (2008) Harris, K.D. (2008). Stability of the fittest: organizing learning through retroaxonal signals. Trends in Neuroscience, 31(3), 130 – 136.
- Hebb (1949) Hebb, D.O. (1949). The organization of behavior, Wiley, New York.
- Heeger (2001) Heeger, D. (2001). Poisson Model of Spike Generation. Available online at: www.cns.nyu.edu/~david/handouts/poisson.pdf.
- Johansson & Birznieks (2004) Johansson, R.S. & Birznieks, I. (2004). First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nature Neuroscience, 7, 170 – 177.
- Legenstein et al. (2005) Legenstein, R., Naeger, C., & Maass, W. (2005). What can a neuron learn with spike-timing-dependent plasticity? Neural Computation, 17(11), 2337 – 2382.
- Kempter et al. (2001) Kempter, R. , Gerstner, W., & Van Hemmen, J.L. (2001). Intrinsic Stabilization of Output Rates by Spike-Based Hebbian Learning. Neural Computation, 13, 2709 – 2741.
- Knudsen (1994) Knudsen, E.I. (1994). Supervised learning in the brain. Journal of Neuroscience, 14(7), 3985 – 3997.
- Knudsen (2002) Knudsen, E.I. (2002). Instructed learning in the auditory localization pathway of the barn owl. Nature, 417(6886), 322 – 328.
- Maass (1997a) Maass, W. (1997). Networks of spiking neurons: the third generation of neural network models. Transactions of the Society for Computer Simulation International, Vol. 14 (4), 1659 – 1671.
- Maass (1997b) Maass, W. (1997). Fast sigmoidal networks via spiking neurons. Neural Computation, 9, 279 – 304.
- Mauk & Buonomano (2004) Mauk, M.D. & Buonomano, D.V. (2004). The Neural Basis of Temporal Processing. Annual Rev. Neuroscience, 27, 304 – 340.
- Neuenschwander & Singer (1996) Neuenschwander, S. & Singer, W. (1996). Long-range synchronization of oscillatory light responses in the cat retina and lateral geniculate nucleus. Nature, 379, 728 – 733.
- Ponulak (2006) Ponulak, F. (2006). ReSuMe - Proof of convergence. Available online at: http://d1.cie.put.poznan.pl/dav/fp/FP_ConvergenceProof_TechRep.pdf.
- Ponulak & Kasiński (2010) Ponulak, F. & Kasiński, A. (2010). Supervised learning in spiking neural networks with ReSuMe: Sequence learning, classification, and spike shifting. Neural Computation, 22(2), 467 – 510.
- Rojas (1996) Rojas, R. (1996). Neural Networks - A Systematic Introduction, Springer-Verlag, Berlin.
- van Rossum (2001) van Rossum, M.C. (2001). A novel spike distance. Neural Computation, 13(4), 751 – 763.
- Rostro-Gonzalez et al. (2010) Rostro-Gonzalez, H., Vasquez-Betancour, J.C., Cessac, B. & Viéville, T. (2010). Reverse-engineering in spiking neural network parameters: exact deterministic parameter estimation. INRIA Sophia Antipolis.
- Ruf & Schmitt (1997) Ruf, B. & Schmitt, M. (1997). Learning temporally encoded patterns in networks of spiking neurons. Neural Processing Letters, 5(1), 9 – 18.
- Rumelhart et al. (1986) Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). Learning internal representations by error propagation. In Rumelhart, D.E. and McClelland, J.L. (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition, Vol. 1, MIT Press, Cambridge, MA.
- Schrauwen & Van Campenhout (2004) Schrauwen, B. & Van Campenhout, J. (2004). Improving Spike-Prop: Enhancements to an Error-Backpropagation Rule for Spiking Neural Networks. Proceedings of the 15th ProRISC Workshop.
- Shepard et al. (2009) Shepard, J.D., Rumbaugh, G., Wu, J., Chowdhury, S., Plath, N., Kuhl, D., Huganir, R.L., & Worley, P.F. (2009). Arc mediates homeostatic synaptic scaling of AMPA receptors. Neuron, 52(3), 475 – 484.
- Sporea & Grüning (2011) Sporea, I. & Grüning, A. (2011). Reference Time in SpikeProp. Proceedings of IEEE International Joint Conference Neural Networks, IJCNN11, 1090 – 1092.
- Takase et al. (2009) Takase, H., Fujita, M., Kawanaka, H., Tsuruoka, S., Kita, H., & Hayashi, T. (2009). Obstacle to training SpikeProp Networks - Cause of surges in training process. Proceedings of IEEE International Joint Conference Neural Networks, IJCNN09, 3062 – 3066.
- Tiňo & Mills (2005) Tiňo, P. & Mills A.J. (2005). Learning beyond finite memory in recurrent networks of spiking neurons. In Wang, L., Chen, K., Ong, Y. (Eds.), Advances in Natural Computation - ICNC 2005, Lecture Notes in Computer Science, 666 – 675.
- Tao et al. (2000) Tao, H.W., Zhang, L.I, Bi, G.Q. & Poo, M. (2000). Selective Presynaptic Propagation of Long-Term Potentiation in Defined Neural Networks. Journal of Neuroscience, 20(9), 3233 – 3243.
- Thorpe & Imbert (1989) Thorpe, S.T. & Imbert, M. (1989). Biological constraints on connectionist modelling. In Pfeifer, R., Schreter, Z., Fogelman-Soulié, F., Steels, L. (Eds.), Connectionism in perspective, 63 – 92.
- Wade et al. (2010) Wade, J.J., McDaid, L.J., Santos, J.A. & Sayers, H.M. (2010). SWAT: A Spiking Neural Network Training Algorithm for Classification Problems. IEEE Transactions on Neural Networks, 21(11), 1817 – 1829.
- Watt & Desai (2010) Watt, A.J. & Desai, N.S. (2010). Homeostatic plasticity and STDP: keeping a neuron's cool in a fluctuating world. Frontiers in Synaptic Neuroscience, 2(5).
- Wehr & Laurent (1996) Wehr, M. & Laurent, G. (1996). Odour encoding by temporal sequences of firing in oscillating neural assemblies. Nature, 384, 162 – 166.
- Xin & Embrechts (2001) Xin, J. & Embrechts, M.J. (2001). Supervised Learning with Spiking Neuron Networks. Proceedings of IEEE International Joint Conference Neural Networks, IJCNN01, 1772 – 1777.