, among others. These are massively-parallel, low-power, analog, and/or digital systems (often a mix of the two) that are designed to simulate large-scale artificial neural networks rivalling the scale and complexity of real biological systems. Despite growing excitement, we believe that methods to fully unlock the computational power of neuromorphic hardware are lacking. This is primarily due to a theoretical gap between our traditional, discrete, von Neumann-like understanding of conventional algorithms and the continuous spike-based signal processing of real brains that is often emulated in silicon.
We use the term “neural compiler” to refer loosely to any systematic method of converting an algorithm, expressed in some high-level mathematical language, into synaptic connection weights between populations of spiking neurons. To fully leverage neuromorphic hardware for real-world applications, we require neural compilers that can account for the effects of spiking neuron models and mixed-analog-digital synapse models, and, perhaps more importantly, exploit these details in useful ways when possible. There exist various approaches to neural engineering, including those by Denève et al. [9, 10] and Memmesheimer et al. [11]. However, the Neural Engineering Framework (NEF; [12, 13]) stands apart in terms of software implementation (Nengo; [8, 14, 15, 16]), large-scale cognitive modeling [17, 18], and neuromorphic applications [1, 2, 3, 19, 20, 21, 22, 23]. Competing methods consistently exclude important details, such as refractory periods and membrane voltage leaks, from their networks [9, 11]. The NEF, on the other hand, embraces biological complexity whenever it proves computationally useful and/or improves contact with the neuroscience literature. This approach to biological modeling directly mirrors a similar need to account for the details of neuromorphic hardware when building neural networks [19, 20, 26].
Nevertheless, there are many open problems in optimizing the NEF for state-of-the-art neuromorphics. In particular, we have been working to account for more detailed dynamics in heterogeneous models of the post-synaptic current (PSC) induced by each spike, as well as delays in spike propagation [24, 26, 27, 28]. The purpose of this report is to summarize our methods, both theoretical and practical, that have progressed in this direction. There are similar challenges in extending the NEF to account for multi-compartment neuron models [25, 29], conductance-based synapses [30], and to minimize the total number of spikes – but these topics will not be addressed in this report.
2 Accounting for Synaptic Dynamics
This section provides a theoretical account of the effect of higher-order linear synapse models on the dynamics of the network, by summarizing the extensions from [24] and [26]. This yields two novel proofs of Principle III from the NEF, and generalizes the principle to include more detailed synapses, including those modeling axonal transmission delays.
2.1 Linear systems
Here we focus our attention on linear time-invariant (LTI) systems:

$$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t), \tag{1}$$

where the time-varying vector $x(t)$ represents the system state, $y(t)$ the output, $u(t)$ the input, and the time-invariant “state-space” matrices $(A, B, C, D)$ fully determine the system’s dynamics. We will omit the variable $t$ when not needed.
Principle III from the NEF states that in order to train the recurrent connection weights to implement (1)—using a continuous-time lowpass filter $\frac{1}{\tau s + 1}$ to model the PSC—we use Principle II to train the decoders for the recurrent transformation $\tau A + I$, input transformation $\tau B$, output transformation $C$, and passthrough transformation $D$ [13, pp. 221–225]. This drives the recurrent synapses with the signal $\tau \dot{x}(t) + x(t)$ so that their output is the signal $x(t)$, in effect transforming the synapses into perfect integrators with respect to $x(t)$. The vector $x(t)$ is then represented by the population of neurons via Principle I. Thus, this provides a systematic approach for training a recurrent neural network to implement any linear dynamical system. Now we show that this approach generalizes to other synaptic models.
For these purposes, the transfer function is a more useful description of the LTI system than (1). The transfer function is defined as the ratio of $Y(s)$ to $U(s)$, given by the Laplace transforms of $y(t)$ and $u(t)$ respectively. The variable $s$ denotes a complex value in the frequency domain, while $t$ is non-negative in the time domain. The transfer function is related to (1) by the following:

$$F(s) = \frac{Y(s)}{U(s)} = C\left(sI - A\right)^{-1}B + D. \tag{2}$$
The transfer function can be converted into the state-space model (1) if and only if it can be written as a proper ratio of finite polynomials in $s$. The ratio is proper when the degree of the numerator does not exceed that of the denominator. In this case, the output will not depend on future input, and so the system is ‘causal’. The order of the denominator corresponds to the dimensionality of $x$, and therefore must be finite. Both of these conditions can be interpreted as physically realistic constraints where time may only progress forward, and neural resources are finite.
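To make the relationship in (2) concrete, the transfer function can be evaluated numerically from a state-space model. The following is a minimal sketch (not part of the original derivation) for a two-dimensional single-input single-output system, with the $2 \times 2$ inverse written out by hand to avoid any library dependencies:

```python
def transfer_function(A, B, C, D, s):
    """Evaluate F(s) = C (sI - A)^-1 B + D for a 2-dimensional SISO system."""
    # Form M = sI - A for the 2x2 case.
    m00, m01 = s - A[0][0], -A[0][1]
    m10, m11 = -A[1][0], s - A[1][1]
    det = m00 * m11 - m01 * m10
    # Invert M explicitly (adjugate over determinant).
    inv = [[m11 / det, -m01 / det], [-m10 / det, m00 / det]]
    # C (sI - A)^-1 B + D, with B a column vector and C a row vector.
    Minv_B = [inv[0][0] * B[0] + inv[0][1] * B[1],
              inv[1][0] * B[0] + inv[1][1] * B[1]]
    return C[0] * Minv_B[0] + C[1] * Minv_B[1] + D

# A double integrator: x1' = x2, x2' = u, y = x1, so F(s) = 1/s^2.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0
```

Evaluating this at any complex frequency $s$ reproduces $1/s^{2}$ for the double-integrator example, confirming (2) for that system.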
In order to account for the introduction of a synaptic filter $H(s)$, we replace the integrator $\frac{1}{s}$ in (2) with $H(s)$, where $F^{H}(s) = C^{H}\left(sI - A^{H}\right)^{-1}B^{H} + D^{H}$. This new system has the transfer function $F^{H}\!\left(H(s)^{-1}\right)$. To compensate for this change in dynamics, we must invert the change-of-variables $s \mapsto H(s)^{-1}$. This means finding the required $(A^{H}, B^{H}, C^{H}, D^{H})$ such that $F^{H}\!\left(H(s)^{-1}\right)$ is equal to the desired transfer function, $F(s)$. We highlight this as the following identity:

$$F^{H}\!\left(H(s)^{-1}\right) = F(s). \tag{3}$$
For the discrete (i.e., digital synapse) case, we begin with $F(z)$ and $H(z)$ expressed as digital systems. The form of $H(z)$ is usually determined by the hardware, and $F(z)$ is usually found by a zero-order hold (ZOH) discretization of $F(s)$ using the simulation time-step ($dt$), resulting in the discrete LTI system:

$$x[t+1] = \bar{A}\,x[t] + \bar{B}\,u[t], \qquad y[t] = \bar{C}\,x[t] + \bar{D}\,u[t]. \tag{4}$$
Here, we have the same relationship as (2),

$$F(z) = \frac{Y(z)}{U(z)} = \bar{C}\left(zI - \bar{A}\right)^{-1}\bar{B} + \bar{D}.$$
Therefore, the previous discussion applies, and we must find an $F^{H}(z)$ that satisfies:

$$F^{H}\!\left(H(z)^{-1}\right) = F(z). \tag{5}$$

Then the state-space model $(\bar{A}^{H}, \bar{B}^{H}, \bar{C}^{H}, \bar{D}^{H})$ satisfying (5) with respect to the digital synapse $H(z)$ will implement the desired dynamics (4). In either case, the general problem reduces to solving this change-of-variables problem for various synaptic models. We now provide a number of results. More detailed derivations are available in [24].
Continuous Lowpass Synapse
Replacing the integrator with the standard continuous-time lowpass filter, so that $H(s) = \frac{1}{\tau s + 1}$, the identity (3) is satisfied by:

$$A^{H} = \tau A + I, \qquad B^{H} = \tau B, \qquad C^{H} = C, \qquad D^{H} = D,$$

which rederives the standard form of Principle III from the NEF [13].
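As a quick numerical sanity check (not part of the original derivation), the identity $F^{H}(H(s)^{-1}) = F(s)$ can be verified for this mapping using a one-dimensional system, where the matrix inverse reduces to scalar division; all system values below are illustrative:

```python
def F(a, b, c, d, s):
    """Transfer function c*(s - a)^-1*b + d of a scalar state-space model."""
    return c * b / (s - a) + d

# Desired scalar system (illustrative values).
tau = 0.1
a, b, c, d = -2.0, 1.5, 0.7, 0.3

# Principle III mapping for the continuous lowpass synapse H(s) = 1/(tau*s + 1).
aH = tau * a + 1.0   # recurrent transformation (tau*A + I)
bH = tau * b         # input transformation (tau*B)
cH, dH = c, d        # output and passthrough are unchanged

# Check F^H(H(s)^-1) == F(s) at an arbitrary frequency, where H(s)^-1 = tau*s + 1.
s = 3.0 + 4.0j
lhs = F(aH, bH, cH, dH, tau * s + 1.0)
rhs = F(a, b, c, d, s)
```

The two quantities agree exactly (up to rounding), since the substitution $s' = \tau s + 1$ cancels against the mapped matrices.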
Discrete Lowpass Synapse
Replacing the integrator with a discrete-time lowpass filter in the $z$-domain with time-step $dt$, where $H(z) = \frac{1-a}{z-a}$ and $a := e^{-dt/\tau}$, the identity (5) is satisfied by:

$$\bar{A}^{H} = \frac{1}{1-a}\left(\bar{A} - aI\right), \qquad \bar{B}^{H} = \frac{1}{1-a}\,\bar{B}, \qquad \bar{C}^{H} = \bar{C}, \qquad \bar{D}^{H} = \bar{D}.$$

This mapping can dramatically improve the accuracy of Principle III in digital simulations (e.g., when using a desktop computer) [24].
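The discrete mapping can be checked the same way as the continuous one. A sketch with an illustrative scalar plant, using the exact ZOH discretization of a one-dimensional system:

```python
import math

def F(a, b, c, d, z):
    """Transfer function of a scalar (1-D) discrete state-space model."""
    return c * b / (z - a) + d

dt, tau = 0.001, 0.1
alpha = math.exp(-dt / tau)  # discrete lowpass pole: H(z) = (1 - alpha)/(z - alpha)

# Continuous scalar plant (illustrative values) and its exact ZOH discretization.
a_c, b_c, c_c, d_c = -2.0, 1.5, 0.7, 0.3
abar = math.exp(a_c * dt)
bbar = (abar - 1.0) / a_c * b_c  # ZOH input matrix for a scalar system

# Discrete lowpass mapping.
aH = (abar - alpha) / (1.0 - alpha)
bH = bbar / (1.0 - alpha)

# Check F^H(H(z)^-1) == F(z), where H(z)^-1 = (z - alpha)/(1 - alpha).
z = 0.5 + 0.2j
lhs = F(aH, bH, c_c, d_c, (z - alpha) / (1.0 - alpha))
rhs = F(abar, bbar, c_c, d_c, z)
```

Again the substitution cancels exactly against the mapped matrices, so the two evaluations agree.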
Delayed Continuous Lowpass Synapse
Replacing the integrator with a continuous lowpass filter containing a pure time-delay of length $\lambda$, so that $H(s) = \frac{e^{-\lambda s}}{\tau s + 1}$, the change-of-variables $s' = H(s)^{-1} = (\tau s + 1)\,e^{\lambda s}$ is inverted by:

$$s = \frac{1}{\lambda}\,W_0(v) - \frac{1}{\tau}, \qquad v := \frac{\lambda}{\tau}\,s'\,e^{\lambda/\tau},$$

where $W_0$ is the principal branch of the Lambert-$W$ function [32].¹ This synapse model can be used to model axonal transmission time-delays due to the finite-velocity propagation of action potentials, or to model feedback delays within a broader control-theoretic context. To demonstrate the case where a pure time-delay of length $\theta$ is the desired transfer function ($F(s) = e^{-\theta s}$), we substitute the above inverse into $F$ and apply the identity $e^{-c\,W_0(v)} = \left(W_0(v)/v\right)^{c}$ to obtain the required transfer function:

$$F^{H}(s') = e^{\theta/\tau}\left(\frac{W_0(v)}{v}\right)^{\theta/\lambda}.$$

¹ This assumes that $v \geq -e^{-1}$ and $W_0(v) \geq -1$, where $v := \frac{\lambda}{\tau}\,s'\,e^{\lambda/\tau}$.
We then numerically find the Padé approximants of its Taylor series. More details and validation may be found in [24].
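The Lambert-$W$ change-of-variables can be checked numerically. A minimal sketch in pure Python, assuming a real-valued $s$ within the principal branch; `lambertw0` is a small Newton-iteration helper written here only for self-containment (a library routine such as `scipy.special.lambertw` could be used instead):

```python
import math

def lambertw0(v, tol=1e-12):
    """Principal branch of the Lambert-W function via Newton's method.
    Solves w * exp(w) = v, assuming v is real with v >= -1/e."""
    w = math.log(1.0 + v) if v > -0.3 else -0.9  # crude starting point
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - v) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Invert the change-of-variables s' = (tau*s + 1) * exp(lam*s) via
# s = W0((lam/tau) * s' * exp(lam/tau)) / lam - 1/tau.
tau, lam = 0.1, 0.05
s = 3.0                                      # arbitrary (real) test frequency
s_prime = (tau * s + 1.0) * math.exp(lam * s)
v = (lam / tau) * s_prime * math.exp(lam / tau)
s_recovered = lambertw0(v) / lam - 1.0 / tau
```

Substituting $u = \lambda s + \lambda/\tau$ shows $u\,e^{u} = v$, so $W_0(v)$ recovers $u$, and hence $s$, exactly.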
Finally, we consider any linear synapse model of the form:

$$H(s) = \frac{1}{\sum_{i=0}^{k} c_i\,s^{i}}, \tag{11}$$

for some polynomial coefficients $(c_0, \ldots, c_k)$ of arbitrary degree $k$. To the best of our knowledge, this class includes the majority of linear synapse models used in the literature. For synapses containing a polynomial numerator of some degree (e.g., considering the Taylor series expansion of the box filter), we take its Padé approximants to transform the synapse into this form within some radius of convergence.² To map onto (11), we begin by defining our solution to (3) in the form of its state-space model:

$$A^{H} = \sum_{i=0}^{k} c_i\,A^{i}, \qquad C^{H} = C, \qquad D^{H} = D, \qquad \text{with input transformation } \sum_{i=1}^{k} c_i \sum_{j=0}^{i-1} A^{i-1-j} B\,u^{(j)}. \tag{12}$$

² This is equivalent to the approach taken in [26, equations (9)–(11)].

Since $s^{i}$ is the $i$th-order differential operator, this form states that we must supply the $i$th-order input derivatives $u^{(i)}$, for all $0 \leq i \leq k-1$. To be more precise, let us first define $B_j := \left(\sum_{i=j+1}^{k} c_i\,A^{i-1-j}\right)B$. Then (12) states that the ideal state-space model must implement the input transformation as a linear combination of input derivatives, $\sum_{j=0}^{k-1} B_j\,u^{(j)}$. However, if the required derivatives are not included in the neural representation, then it is natural to use a ZOH method by assuming $u^{(i)} = 0$, for all $i \geq 1$:

$$B^{H} = B_0 = \left(\sum_{i=1}^{k} c_i\,A^{i-1}\right)B. \tag{13}$$
The same derivation also applies to the discrete-time domain, with respect to the discrete synapse (corresponding to some implementation in digital hardware):

$$H(z) = \frac{1}{\sum_{i=0}^{k} c_i\,z^{i}}. \tag{14}$$

Here, the only real difference (apart from notation) is the discrete version of (13), obtained by assuming $u[t+i] = u[t]$, for all $i \geq 1$:

$$\bar{B}^{H} = \sum_{i=1}^{k} c_i \left(\sum_{j=0}^{i-1} \bar{A}^{j}\right)\bar{B}.$$
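Before any ZOH assumption is applied, the continuous mapping is exact: driving the synapse with $\sum_i c_i x^{(i)}$, expanded into a state term and derivative-weighted input terms, reproduces the required signal. This can be checked with a scalar system and an exponential test input $u(t) = e^{pt}$ (so that $u^{(j)} = p^{j}u$); all values below are illustrative, with the coefficients of a double-exponential synapse as an example:

```python
# Coefficients of a double-exponential synapse H(s) = 1/((t1*s + 1)*(t2*s + 1)),
# expanded into the form 1 / (c0 + c1*s + c2*s^2).
t1, t2 = 0.1, 0.02
c = [1.0, t1 + t2, t1 * t2]
k = len(c) - 1

# Scalar system (illustrative values) driven by u(t) = exp(p*t), so that the
# j-th input derivative is p**j * u and x(t) = b/(p - a) * u(t) at steady state.
a, b = -2.0, 1.5
p = 0.5

aH = sum(c[i] * a**i for i in range(k + 1))            # A^H = sum_i c_i A^i
Bj = [sum(c[i] * a**(i - 1 - j) for i in range(j + 1, k + 1)) * b
      for j in range(k)]                               # B_j feeds u^(j)

x = b / (p - a)                                        # state, taking u = 1
w = aH * x + sum(Bj[j] * p**j for j in range(k))       # signal driving the synapse
target = sum(c[i] * p**i for i in range(k + 1)) * x    # sum_i c_i x^(i)
```

The two quantities agree exactly, confirming that the expansion in (12) is an identity (the ZOH variant (13) is only an approximation that discards the $j \geq 1$ terms).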
2.2 Nonlinear systems
Here we derive two theorems for nonlinear systems, by taking a different perspective that is consistent with §2.1. This generalizes the approach taken in [26], which considered the special case of a pulse-extended (i.e., time-delayed) double exponential.
We wish to implement some desired nonlinear dynamical system,

$$\dot{x}(t) = f(x(t), u(t)),$$

using (11) as the synaptic filter $H(s)$. Letting $w(t) = f^{H}(x(t), u(t))$ for some recurrent function $f^{H}$ and observing that $X(s) = H(s)\,W(s)$, we may express these dynamics in the Laplace domain:

$$W(s) = H(s)^{-1}X(s) = \left(\sum_{i=0}^{k} c_i\,s^{i}\right)X(s) \iff w(t) = \sum_{i=0}^{k} c_i\,x^{(i)}(t),$$

since $s$ is the differential operator. This proves the following theorem:

Theorem 1. The desired dynamics $\dot{x} = f(x, u)$ are implemented by a network with the synapse model (11), provided that each synapse is driven with the signal $f^{H}(x, u) = \sum_{i=0}^{k} c_i\,x^{(i)}$, where $x^{(0)} = x$, $x^{(1)} = f(x, u)$, and each higher-order derivative is obtained by repeatedly differentiating $f$.
For the discrete case, we begin with some desired nonlinear dynamics expressed over discrete time-steps:

$$x[t+1] = \bar{f}(x[t], u[t]),$$

using (14) as the synaptic filter $H(z)$, followed by an analogous theorem:

Theorem 2. The desired dynamics $x[t+1] = \bar{f}(x[t], u[t])$ are implemented by a network with the synapse model (14), provided that each synapse is driven with the signal $\bar{f}^{H}(x, u) = \sum_{i=0}^{k} c_i\,x[t+i]$, where each future state is obtained by repeatedly applying $\bar{f}$.
The proof for the discrete case is nearly identical. For the sake of completeness, let $w[t] = \bar{f}^{H}(x[t], u[t])$ for some recurrent function $\bar{f}^{H}$ and observe that $X(z) = H(z)\,W(z)$:

$$W(z) = H(z)^{-1}X(z) = \left(\sum_{i=0}^{k} c_i\,z^{i}\right)X(z) \iff w[t] = \sum_{i=0}^{k} c_i\,x[t+i],$$

since $z$ is the forwards time-shift operator. ∎
Continuous Lowpass Synapse

For $H(s) = \frac{1}{\tau s + 1}$ (i.e., $c_0 = 1$, $c_1 = \tau$), Theorem 1 gives $f^{H}(x, u) = \tau f(x, u) + x$.

Discrete Lowpass Synapse

For $H(z) = \frac{1-a}{z-a}$ with $a := e^{-dt/\tau}$, Theorem 2 gives $\bar{f}^{H}(x, u) = \frac{1}{1-a}\left(\bar{f}(x, u) - a\,x\right)$.
Continuous Double Exponential Synapse
For the double exponential synapse, $H(s) = \frac{1}{(\tau_1 s + 1)(\tau_2 s + 1)}$ (i.e., $c_0 = 1$, $c_1 = \tau_1 + \tau_2$, $c_2 = \tau_1 \tau_2$):

$$f^{H}(x, u) = x + (\tau_1 + \tau_2)\,f(x, u) + \tau_1 \tau_2 \left(\frac{\partial f}{\partial x}\,f(x, u) + \frac{\partial f}{\partial u}\,\dot{u}\right).$$

In the linear case, this simplifies to:

$$f^{H}(x, u) = \left(\tau_1 \tau_2 A^{2} + (\tau_1 + \tau_2)A + I\right)x + \left((\tau_1 + \tau_2)I + \tau_1 \tau_2 A\right)B\,u + \tau_1 \tau_2 B\,\dot{u}.$$
As in §2.1, Theorems 1 and 2 require that we differentiate the desired dynamical system. For the case of nonlinear systems, this means determining the (possibly higher-order) Jacobian(s) of $f$, as shown above for the double exponential synapse. For the special case of LTI systems, we can determine this analytically to obtain a closed-form expression. By induction it can be shown that:

$$x^{(i)} = A^{i}x + \sum_{j=0}^{i-1} A^{i-1-j}B\,u^{(j)}.$$

Then by expanding and rewriting the summations:

$$f^{H}(x, u) = \sum_{i=0}^{k} c_i\,x^{(i)} = \left(\sum_{i=0}^{k} c_i\,A^{i}\right)x + \sum_{j=0}^{k-1}\left(\sum_{i=j+1}^{k} c_i\,A^{i-1-j}\right)B\,u^{(j)},$$

which recovers (12). The discrete case is identical:

$$x[t+i] = \bar{A}^{i}x[t] + \sum_{j=0}^{i-1} \bar{A}^{i-1-j}\bar{B}\,u[t+j].$$
3 Accounting for Synaptic Heterogeneity
We now show how §2 can be applied to train efficient networks where the $j$th neuron has a distinct synaptic filter $H_j$, given by:

$$H_j(s) = \frac{1}{\sum_{i=0}^{k} c_{ij}\,s^{i}},$$

for neuron-specific coefficients $c_{ij}$.
This network architecture can be modeled in Nengo using nengolib 0.4.0 [28]. This is particularly useful for applications to neuromorphic hardware, where transistor mismatch can change the effective time-constant(s) of each synapse. To this end, we abstract the approach taken in [26]. We show this specifically for Theorem 1, but it naturally applies to all methods in this report.
Recalling the intuition behind Principle III, our approach is to separately drive each synapse with the required signal such that each PSC becomes the desired representation $x(t)$. Thus, the connection weights to the $j$th neuron should be determined by solving the decoder optimization problem for $f^{H_j}$ using the methods of §2 with respect to the synapse model $H_j$. This can be repeated for each synapse to obtain a full set of connection weights. While correct in theory, this approach displays two shortcomings in practice: (1) we must solve $m$ optimization problems, where $m$ is the number of post-synaptic neurons, and (2) there are $\mathcal{O}(nm)$ weights, which eliminates the space and time efficiency of using factorized weight matrices [16].
We can solve both issues simultaneously by taking advantage of the linear structure within $f^{H_j}$ that is shared between all $j$. Considering Theorem 1, we need to drive the $j$th synapse with the function:

$$f^{H_j}(x, u) = \sum_{i=0}^{k} c_{ij}\,x^{(i)}.$$

Let $d^{x^{(i)}}$ be the set of decoders optimized to approximate $x^{(i)}$, for all $0 \leq i \leq k$, where $x^{(1)} = f(x, u)$. By linearity, the optimal decoders used to represent each $f^{H_j}$ may be decomposed as:

$$d^{f^{H_j}} = \sum_{i=0}^{k} c_{ij}\,d^{x^{(i)}}.$$

Next, we express our estimate of each variable $x^{(i)}$ using the same activity vector $a(t)$:

$$\hat{x}^{(i)}(t) = \sum_{l=1}^{n} a_l(t)\,d_l^{x^{(i)}}.$$

Now, putting this all together, we obtain the signal that must drive the $j$th synapse:

$$w_j(t) = \sum_{i=0}^{k} c_{ij}\,\hat{x}^{(i)}(t) = \sum_{l=1}^{n} a_l(t)\sum_{i=0}^{k} c_{ij}\,d_l^{x^{(i)}}. \tag{27}$$
Therefore, we only need to solve $k+1$ optimization problems, decode the “matrix representation” $\left(\hat{x}^{(0)}, \ldots, \hat{x}^{(k)}\right)$, and then linearly combine these $k+1$ different decodings as shown in (27)—using the matrix of coefficients $(c_{ij})$—to determine the input to each synapse. This approach reclaims the advantages of using factorized connection weight matrices, at the expense of a factor $\mathcal{O}(k)$ increase in space and time requirements.
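The factorization in (27) can be sketched with randomly generated (hypothetical) decoders, coefficients, and activities — the factorized computation must agree with forming the full $n \times m$ weight matrix:

```python
import random
random.seed(0)

n, m, k = 50, 40, 2      # pre-synaptic neurons, post-synaptic neurons, synapse order

# Base decoders d^{x^(i)} for each of the k+1 "matrix representation" rows, and
# per-synapse coefficients c_ij (all randomly generated for this sketch).
d = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k + 1)]
coef = [[random.gauss(0, 1) for _ in range(m)] for _ in range(k + 1)]
act = [random.random() for _ in range(n)]  # activity vector a(t)

# Full (unfactorized) approach: m separate decoder sets, n*m weights.
full = [sum(sum(coef[i][j] * d[i][l] for i in range(k + 1)) * act[l]
            for l in range(n)) for j in range(m)]

# Factorized approach: decode the k+1 signals once, then mix with the coefficients.
xhat = [sum(d[i][l] * act[l] for l in range(n)) for i in range(k + 1)]
fact = [sum(coef[i][j] * xhat[i] for i in range(k + 1)) for j in range(m)]
```

The factorized path touches the $n$-dimensional activities only $k+1$ times per time-step, rather than once per post-synaptic neuron.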
We have reviewed three major extensions to the NEF that appear in recent publications. These methods can be used to implement linear and nonlinear systems in spiking neurons recurrently coupled with heterogeneous higher-order mixed-analog-digital synapses. This provides us with the ability to implement NEF networks in state-of-the-art neuromorphics while accounting for, and sometimes even exploiting, their nonideal nature.
While the linear and nonlinear methods can both be used to harness pure spike time-delays (due to axonal transmission) by modeling them in the synapse, the linear approach provides greater flexibility. Both extensions can first transform the time-delay into the standard form of (11) via Padé approximants, which maintains the same internal representation as the desired dynamics (within some radius of convergence). But the linear extension also allows the representation to change, since it is only concerned with maintaining the overall input-output transfer function relation. In particular, we derived an analytic solution using the Lambert-$W$ function, which allows the neural representation $x$, and even its dimensionality, to change according to the expansion of some Taylor series. The linear case is also much simpler to analyze in terms of the network-level transfer function that results from substituting one synapse model for another. For all other results that we have shown, the linear extension is consistent with the nonlinear extension, as they both maintain the desired representation by fully accounting for the dynamics in the synapse.
We thank Wilten Nicola for inspiring our derivation in §2.2 with his own phase-space derivation of Principle III using double exponential synapses for autonomous systems (unpublished). We also thank Kwabena Boahen and Terrence C. Stewart for providing the idea used in §3 to separately drive each synapse, and for improving this report through many helpful discussions.
-  A. Mundy, J. Knight, T. C. Stewart, and S. Furber, “An efficient SpiNNaker implementation of the Neural Engineering Framework,” in The 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–8, IEEE, 2015.
-  S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao, T. Stewart, C. Eliasmith, and K. Boahen, “Silicon neurons that compute,” in International Conference on Artificial Neural Networks (ICANN), pp. 121–128, Springer, 2012.
-  “Projects - The neuromorphics project - Stanford University.” http://brainstorm.stanford.edu/projects/. Accessed: 2017-08-12.
-  P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, et al., “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science, vol. 345, no. 6197, pp. 668–673, 2014.
-  J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner, “A wafer-scale neuromorphic hardware system for large-scale neural modeling,” in IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1947–1950, IEEE, 2010.
-  N. Qiao, H. Mostafa, F. Corradi, M. Osswald, F. Stefanini, D. Sumislawska, and G. Indiveri, “A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128k synapses,” Frontiers in Neuroscience, vol. 9, 2015.
-  K. Boahen, “A neuromorph’s prospectus,” Computing in Science & Engineering, vol. 19, no. 2, pp. 14–28, 2017.
-  T. Bekolay, J. Bergstra, E. Hunsberger, T. DeWolf, T. C. Stewart, D. Rasmussen, X. Choo, A. R. Voelker, and C. Eliasmith, “Nengo: A Python tool for building large-scale functional brain models,” Frontiers in Neuroinformatics, vol. 7, no. 48, 2014.
-  M. Boerlin, C. K. Machens, and S. Denève, “Predictive coding of dynamical variables in balanced spiking networks,” PLoS Comput Biol, vol. 9, no. 11, p. e1003258, 2013.
-  M. A. Schwemmer, A. L. Fairhall, S. Denève, and E. T. Shea-Brown, “Constructing precisely computing networks with biophysical spiking neurons,” The Journal of Neuroscience, vol. 35, no. 28, pp. 10112–10134, 2015.
-  D. Thalmeier, M. Uhlmann, H. J. Kappen, and R.-M. Memmesheimer, “Learning universal computations with spikes,” PLoS Comput Biol, vol. 12, no. 6, p. e1004895, 2016.
-  C. Eliasmith and C. H. Anderson, “Developing and applying a toolkit from a general neurocomputational framework,” Neurocomputing, vol. 26, pp. 1013–1018, 1999.
-  C. Eliasmith and C. H. Anderson, Neural engineering: Computation, representation, and dynamics in neurobiological systems. MIT press, 2003.
-  T. C. Stewart, B. Tripp, and C. Eliasmith, “Python scripting in the Nengo simulator,” Frontiers in Neuroinformatics, vol. 3, 2009.
-  S. Sharma, S. Aubin, and C. Eliasmith, “Large-scale cognitive model design using the Nengo neural simulator,” Biologically Inspired Cognitive Architectures, 2016.
-  J. Gosmann and C. Eliasmith, “Automatic optimization of the computation graph in the Nengo neural network simulator,” Frontiers in Neuroinformatics, vol. 11, p. 33, 2017.
-  C. Eliasmith, T. C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and D. Rasmussen, “A large-scale model of the functioning brain,” Science, vol. 338, no. 6111, pp. 1202–1205, 2012.
-  C. Eliasmith, How to build a brain: A neural architecture for biological cognition. Oxford University Press, 2013.
-  J. Dethier, P. Nuyujukian, C. Eliasmith, T. C. Stewart, S. A. Elasaad, K. V. Shenoy, and K. A. Boahen, “A brain-machine interface operating with a real-time spiking neural network control algorithm,” in Advances in Neural Information Processing Systems (NIPS), pp. 2213–2221, 2011.
-  F. Corradi, C. Eliasmith, and G. Indiveri, “Mapping arbitrary mathematical functions and dynamical systems to neuromorphic VLSI circuits for spike-based neural computation,” in IEEE International Symposium on Circuits and Systems (ISCAS), (Melbourne), 2014.
-  J. Knight, A. R. Voelker, A. Mundy, C. Eliasmith, and S. Furber, “Efficient SpiNNaker simulation of a heteroassociative memory using the Neural Engineering Framework,” in The 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, 07 2016.
-  M. Berzish, C. Eliasmith, and B. Tripp, “Real-time FPGA simulation of surrogate models of large spiking networks,” in International Conference on Artificial Neural Networks (ICANN), 2016.
-  A. Mundy, Real time Spaun on SpiNNaker. PhD thesis, University of Manchester, 2016.
-  A. R. Voelker and C. Eliasmith, “Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells,” (under review), 2017.
-  C. Eliasmith, J. Gosmann, and X.-F. Choo, “BioSpaun: A large-scale behaving brain model with complex neurons,” ArXiv, 2016.
-  A. R. Voelker, B. V. Benjamin, T. C. Stewart, K. Boahen, and C. Eliasmith, “Extending the Neural Engineering Framework for nonideal silicon synapses,” in IEEE International Symposium on Circuits and Systems (ISCAS), (Baltimore, MD), IEEE, 05 2017.
-  A. R. Voelker and C. Eliasmith, “Methods and systems for implementing dynamic neural networks,” (patent pending), 07 2016.
-  “Nengolib – Additional extensions and tools for modelling dynamical systems in Nengo.” https://github.com/arvoelke/nengolib/. Accessed: 2017-08-12.
-  P. Duggins, “Incorporating biologically realistic neuron models into the NEF,” Master’s thesis, University of Waterloo, Waterloo, ON, 2017.
-  A. Stöckel, “Point neurons with conductance-based synapses in the Neural Engineering Framework,” tech. rep., Centre for Theoretical Neuroscience, Waterloo, ON, 2017.
-  T. C. Stewart, “A technical overview of the Neural Engineering Framework,” tech. rep., Centre for Theoretical Neuroscience, Waterloo, ON, 2012.
-  R. M. Corless, G. H. Gonnet, D. E. Hare, D. J. Jeffrey, and D. E. Knuth, “On the Lambert W function,” Advances in Computational mathematics, vol. 5, no. 1, pp. 329–359, 1996.