Versatile emulation of spiking neural networks on an accelerated neuromorphic substrate

12/30/2019 ∙ by Sebastian Billaudelle, et al.

We present first experimental results on the novel BrainScaleS-2 neuromorphic architecture, based on an analog neuro-synaptic core and augmented by embedded microprocessors for complex plasticity and experiment control. The high acceleration factor of 1000 compared to biological dynamics enables the execution of computationally expensive tasks by allowing the fast emulation of long-duration experiments or rapid iteration over many consecutive trials. The flexibility of our architecture is demonstrated in a suite of five distinct experiments, which emphasize different aspects of the BrainScaleS-2 system.


I Introduction

The unifying principle behind all neuromorphic architectures lies in their attempt to emulate certain structural and dynamical aspects of biological nervous systems in order to inherit some of their well-known functional and metabolic advantages over conventional silicon substrates. However, precisely what these aspects are remains a holy grail of computational neuroscience, and our best attempts at answering this question are still rather conjectural. This state of active exploration is reflected by the broad diversity of the current neuromorphic landscape [41].

Consequently, our approach to neuromorphic engineering is explicitly geared towards building systems that can serve as scientific tools for studying this question. By adhering to a restricted set of biologically inspired principles, we enable an efficient implementation in silico with respect to both emulation speed and power consumption. Additionally, our proposed architecture emphasizes precision, scalability and, in particular, a substantial degree of flexibility. This relates not only to the wide-ranging configurability of neuro-synaptic parameters and connectivity, but most importantly to the ability to influence our circuits during emulation, which goes significantly beyond synaptic plasticity, as outlined below.

In this manuscript, we describe the core principles underlying the BrainScaleS-2 architecture, followed by the emulation of a diverse set of spiking neural networks, which we have chosen to emphasize different aspects of the chip’s operation and capabilities, as well as different computational principles that we believe are relevant for biological information processing.

II Architecture

BrainScaleS-2 is a family of mixed-signal neuromorphic systems. It is centered around an analog neural network core implementing neuron and synapse circuits that behave similarly to their biological archetypes (Fig. 1).

Fig. 1: Simplified block-level schematic of a BrainScaleS-2 ASIC. The analog neuromorphic core is surrounded by event transport logic and control logic, including controllers for full-custom configuration SRAM. Details and components that lie beyond the scope of this paper were omitted.

The neurons [2] feature leaky integrate-and-fire (LIF) dynamics with synaptic currents modeled as superpositions of spike-triggered exponential kernels. As soon as the membrane potential crosses a certain threshold, the membrane is connected by a programmable conductance to a reset potential for a finite refractory period. Additional mechanisms such as neuronal adaptation and exponential near-threshold dynamics [1] or dendritic interactions [35] enable the emulation of more complex structures and dynamics. All neurons are individually configurable via an on-chip analog parameter memory [19] and a set of digital control values.
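
For illustration, the LIF dynamics with exponential synaptic kernels described above can be approximated by a simple software model. The following Python sketch is not the hardware implementation; all parameter values are illustrative rather than calibrated chip values.

```python
import numpy as np

def simulate_lif(input_spikes, t_max=0.1, dt=1e-4, tau_m=10e-3, tau_s=5e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0, t_ref=2e-3, w=0.8):
    """Minimal LIF neuron: synaptic currents are superpositions of
    spike-triggered exponential kernels; a threshold crossing connects
    the membrane to the reset potential for a refractory period."""
    spikes_in = {int(round(t / dt)) for t in input_spikes}
    v, i_syn, ref_until = v_rest, 0.0, -1.0
    out = []
    for step in range(int(t_max / dt)):
        t = step * dt
        if step in spikes_in:
            i_syn += w                       # spike-triggered kernel onset
        i_syn *= np.exp(-dt / tau_s)         # exponential synaptic decay
        if t < ref_until:                    # refractory: clamp to reset
            v = v_reset
            continue
        v += dt / tau_m * ((v_rest - v) + i_syn)  # leaky integration
        if v >= v_thresh:                    # threshold crossing
            out.append(t)
            v = v_reset
            ref_until = t + t_ref
    return out
```

A burst of input spikes drives the membrane across threshold, while the neuron stays silent without input.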

Voltages and currents, scaled to utilize the available dynamic range, are directly represented in the respective circuits and evolve in continuous time. Leveraging the intrinsic capacitances and conductances of the technology, the time constants of neuron and synapse dynamics are rendered 1000 times smaller than typical values found in biology. This thousandfold acceleration facilitates the execution of time-consuming tasks, such as high-dimensional parameter sweeps, the investigation of learning and meta-learning, or statistical computations requiring large volumes of data [6, 4].
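
To make the practical consequence of the speedup concrete: a biological time interval maps to hardware wall-clock time by division by the acceleration factor. A trivial helper (our own illustration, not part of the system software):

```python
ACCELERATION = 1000  # hardware dynamics evolve 1000x faster than biology

def bio_to_hw_seconds(t_bio_seconds):
    """Wall-clock hardware time needed to emulate a biological interval."""
    return t_bio_seconds / ACCELERATION

# one full day of biological learning runs in under 90 s of wall-clock time
hw_time = bio_to_hw_seconds(24 * 3600)
```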

Each neuron is associated with a column of synapse circuits [13], which receive their inputs from the chip’s digital backend. Each synapse stores its weight in local SRAM, alongside a label that lets it filter afferent events tagged with the respective source address. Each synapse also implements an analog circuit for measuring pairwise correlations between pre- and postsynaptic spike events [13], enabling access to various forms of spike-timing-dependent plasticity (STDP). The analog correlation traces are accessible via column-parallel analog-to-digital converters (CADCs), which also allow the digitization of neuronal observables such as the membrane potential.

The versatility of the BrainScaleS-2 architecture is substantially augmented by the incorporation of freely programmable embedded microprocessors [13]. Together with their SIMD vector units, which are tightly coupled to the synapse arrays’ SRAM controllers and the CADCs, they form plasticity processing units (PPUs) for efficient control of synaptic plasticity. Access to the on-chip configuration bus further allows the processors to reconfigure all other components of the neuromorphic system during experiment execution. The PPUs can thus be used for a vast array of applications such as near-arbitrary learning rules, on-line circuit calibration, structural network reconfiguration, or the co-simulation of an environment capable of continuous interaction with the network running on the neuromorphic core.

III Experiments

III-A Deep learning using precise spike timing

In many applications, time and energy to solution are essential commodities. For spiking networks, optimal use of these resources often requires spiking as sparsely and as early as possible. However, the discrete nature of spikes makes it difficult to apply conventional machine learning algorithms based on differentiable loss functions.

Fig. 2: Pattern recognition with time-to-first-spike coding. A) Hierarchical network structure and neuron numbers per layer. B) Input () spike times encode input pixel brightness. Activity propagates through the hidden layer () to the label layer (). There, classification is determined by the identity of the first neuron to spike (red). C) Training/test set consisting of four patterns. D) Accuracy increase and corresponding decrease of loss during learning. E) Evolution of label neuron spike times during training for one example image.

In the time-to-first-spike coding scheme, a neuron encodes a continuous variable as the time elapsed before its first spike. The decision of a network performing a classification task is given by the first neuron to spike in the label layer (Fig. 2A,B). For such networks, an efficient gradient-descent-based learning scheme was first proposed in [28], using error backpropagation on a continuous function of output spike times.

We have generalized this method to include an exact, closed-form expression for finite membrane time constants [16, 15] and applied it to a 3-layer network emulated on BrainScaleS-2 (Fig. 2). The loss was calculated as the cross-entropy of a softmax function on negative spike times, in order to maximize the distance between correct and incorrect label layer spikes. Fig. 2D shows its evolution during training and the associated classification accuracy for a simple 4-class learning task (Fig. 2C). The evolution of the label neuron spike times for one example class is shown in Fig. 2E. The robustness of both the applied learning rule and the emulated network dynamics is evidenced by the clear separation of first-spike times.
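
The loss described above can be written compactly; a numpy sketch (spike times in arbitrary units, purely illustrative):

```python
import numpy as np

def ttfs_loss(first_spike_times, label):
    """Cross-entropy of a softmax over negative first-spike times:
    earlier spikes yield larger logits, so minimizing the loss pushes
    the correct label neuron to fire first."""
    logits = -np.asarray(first_spike_times, dtype=float)
    logits -= logits.max()                        # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()     # softmax
    return -np.log(p[label])

# the loss shrinks as the correct label neuron (index 0) fires earlier
late = ttfs_loss([2.0, 1.0, 3.0], label=0)
early = ttfs_loss([0.5, 1.0, 3.0], label=0)
```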

A particularly appealing feature of this implementation is its extreme communication sparsity, with only one input spike per input variable and at most one spike per emulated neuron before classification. After learning, the emulated network needed less than to classify an image. This duration scales proportionally to the chosen synaptic and membrane time constants, which in our case were set to . Taking into consideration relaxation times between patterns, our setup is able to handle a pattern throughput of at least , independently of emulated network size [15].

III-B Sampling-based Bayesian computation

The Bayesian brain hypothesis [9] aspires to explain how the mammalian brain can operate in a probabilistic sea of sensory data. In [30], it was shown how networks of LIF neurons can learn to perform Bayesian inference through sampling on high-dimensional data distributions [24, 8].

Fig. 3: Spike-based Bayesian inference. A) Schematic of a random spiking sampling network. B) Membrane voltages of three selected neurons and visualisation of the spike-based representation of binary random variables. C) Sampling performance after training for 500 randomly generated target distributions. D) Sampling from the learned distribution (top) and an associated conditional distribution (bottom). Orange: sampled distribution. Blue: analytically calculated target distribution. Remaining error bars are too small to visualize.

In this quintessentially spike-based framework, neurons become stochastic due to background spiking input, thereby lending themselves to the representation of binary random variables: during post-spike refractoriness, a neuron is considered to be in the state 1, and in the state 0 otherwise (Fig. 3A,B). With appropriate synaptic connections, the resulting network dynamics inherently generate a sequence of samples from the learned distribution. This enables the training of spiking networks to perform sampling-based Bayesian inference in arbitrary binary probability spaces, with applications to generative as well as discriminative problems [33, 23].
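
The refractoriness-based state coding can be made explicit in a few lines; the following sketch (function and variable names are our own) converts spike trains into the binary state vector z(t):

```python
import numpy as np

def binary_states(spike_trains, tau_ref, t_grid):
    """z_k(t) = 1 while neuron k is refractory (within tau_ref of its
    last spike), 0 otherwise -- the spike-based representation of
    binary random variables used in neural sampling."""
    z = np.zeros((len(spike_trains), len(t_grid)), dtype=int)
    for k, spikes in enumerate(spike_trains):
        for s in spikes:
            z[k, (t_grid >= s) & (t_grid < s + tau_ref)] = 1
    return z

t_grid = np.arange(6) * 0.1
z = binary_states([[0.1], [0.3]], tau_ref=0.15, t_grid=t_grid)
```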

Contrastive Hebbian learning [18] was performed with the hardware in the loop, i.e., with updates being calculated on a host PC [36, 23]. Each training step was run for of hardware time, corresponding to bio time and approximately samples.

Training was monitored using the Kullback-Leibler divergence between the sampled and target distributions (Fig. 3C). After training, the network reliably sampled from its target distribution and from associated conditional distributions (Bayesian inference, Fig. 3D). Compared to previous neuromorphic realizations of neural sampling with analog neurons [31, 23], the BrainScaleS-2 system allows unprecedented precision, while still enabling fast inference due to its thousandfold acceleration.
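
This training diagnostic is straightforward to compute; a minimal sketch of the Kullback-Leibler divergence over a discrete state space (the smoothing constant eps is our own addition, to handle states never visited by the sampler):

```python
import numpy as np

def dkl(p_target, p_sampled, eps=1e-12):
    """D_KL(p_target || p_sampled) between two discrete distributions."""
    p = np.asarray(p_target, dtype=float)
    q = np.asarray(p_sampled, dtype=float) + eps  # avoid log(0)
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# identical distributions give (near-)zero divergence; mismatch grows it
d_same = dkl([0.5, 0.5], [0.5, 0.5])
d_diff = dkl([0.9, 0.1], [0.5, 0.5])
```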

III-C Reinforcement learning

Recent advances in reinforcement learning have enabled artificial systems to achieve unprecedented performance in board and computer games [38]. As a learning principle with clear roots in neurobiology, it is also of interest as a framework for neuromorphic agents to optimize their performance through repeated interaction with an environment.

Fig. 4: Reinforcement learning with reward-modulated STDP. A) The PPU simulates a simplified version of Pong. The horizontal position of the ball serves as input for a 2-layer neural network, with the resulting output dictating the target paddle position. The network receives reward based on its aiming accuracy. B) Playing performance during learning. C) Synaptic depression automatically adapts to the excitability of neurons. D, E) Wall-clock duration and power consumption of a single iteration on BrainScaleS-2 (blue) and an equivalent software simulation using NEST (orange).

Three-factor learning rules [11] can implement reinforcement learning in spiking neural networks using a global neuromodulator and local observables such as spike rates. As already shown in [42], the BrainScaleS-2 architecture supports the implementation of an R-STDP learning rule [12, 11] in a closed-loop setup contained fully on chip. Its application to a simplified version of the Pong video game is shown in Fig. 4A. The network dynamics were emulated by the neuromorphic substrate, while the embedded plasticity processor took on a dual role. First, it simulated the game dynamics, creating a host-independent setup. Second, it calculated the plasticity updates from the synaptically stored correlation traces according to Δw ∝ (R − R̄) · e, where R is the reward, R̄ its moving average, and e an STDP-like eligibility trace.
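
The R-STDP update just described reduces to a few operations per synapse. A sketch (learning rate and values are illustrative), scaling a per-synapse eligibility trace by the reward's deviation from its moving average:

```python
import numpy as np

def rstdp_update(w, elig, reward, reward_avg, lr=0.1):
    """Reward-modulated STDP: dw = lr * (R - R_avg) * e, with e an
    STDP-like eligibility trace stored per synapse."""
    return w + lr * (reward - reward_avg) * elig

w = np.array([0.5, 0.5])
elig = np.array([1.0, -1.0])           # per-synapse correlation traces
w_good = rstdp_update(w, elig, reward=1.0, reward_avg=0.5)
w_bad = rstdp_update(w, elig, reward=0.0, reward_avg=0.5)
```

Above-average reward potentiates synapses with positive eligibility; below-average reward depresses them.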

During training, the network learned to keep the ball close to the middle of the paddle (Fig. 4B). Implicitly, the experiment also demonstrates how learning can compensate for fixed-pattern noise in the analog neuro-synaptic circuits (Fig. 4C): while the excitability of uncalibrated neurons varied significantly due to mismatch effects, synapses that would negatively impact correct tracking of the ball were systematically depressed to a subthreshold strength with respect to their postsynaptic neuron. Furthermore, this setup demonstrates the speed and power advantages of the BrainScaleS-2 architecture compared to software simulations, as shown in Fig. 4D,E.

III-D Structural plasticity

Synaptic plasticity is not limited to adjusting the strength of existing synapses; the connectome itself undergoes continuous change during the lifetime of an individual [20, 26, 21]. By constraining the number of expressed synapses to enforce a certain level of sparsity, the nervous system appears to manage its spatial and energetic budget [22]. Similar constraints apply to all physical information-processing systems, with neuromorphic ones being no exception. In particular, the synaptic fan-in of silicon neurons is often limited.

Fig. 5: Self-organizing receptive fields through structural plasticity. A) Spike trains from different sources can be injected into a single synaptic row. Each synapse filters afferent spikes according to a locally stored label. B) A network endowed with structural plasticity learns to discriminate between types of Iris flowers (dataset represented by colored dots). The receptor distribution after training is adapted to the input data distribution. C) Feature selection through structural plasticity allows the conservation of classification performance even for strongly enforced sparsity .

We implemented a synaptic update policy incorporating structural plasticity, which enables neurons to dynamically select, out of a pool of potential connections, a set of synapses that optimizes performance for a chosen task while maintaining a sparse connectome [3]. The learning rule is composed of three parts: an STDP term that potentiates correlated connections, a homeostatic regularizer that limits postsynaptic firing rates and encourages synaptic competition, and a stochastic component that induces exploration. A pruning condition is executed periodically, removing synapses with a weight below a certain threshold and randomly reassigning them.
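
The three-part rule and the pruning step can be sketched as follows (all constants, names, and the Gaussian exploration-noise model are illustrative assumptions, not the on-chip implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def plasticity_step(w, corr, rate, target_rate, lr=0.05, lam=0.02, noise=0.01):
    """Three-part update: STDP term (potentiate correlated inputs),
    homeostatic regularizer (penalize excess postsynaptic rate),
    stochastic exploration term."""
    dw = lr * corr - lam * (rate - target_rate) + noise * rng.normal(size=w.shape)
    return np.clip(w + dw, 0.0, 1.0)

def prune_and_reassign(w, labels, pool, w_min=0.05, w_init=0.1):
    """Periodic pruning: synapses below threshold are reassigned to a
    random presynaptic source by rewriting their label."""
    w, labels = w.copy(), labels.copy()
    weak = w < w_min
    labels[weak] = rng.choice(pool, size=int(weak.sum()))
    w[weak] = w_init
    return w, labels
```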

Structural plasticity is enabled by bundling presynaptic sources and injecting them into a single synapse row (Fig. 5A): as each synapse can only gate one of these sources to its home neuron, pruning and reassigning a synapse is implemented simply by changing its label. The reconfiguration is thus fully local and, in particular, does not involve time-consuming sorting of routing tables or connectivity lists [25]. If bundles are disjoint, their size n also effectively sets the synaptic sparsity to 1/n.

We applied the above algorithm to a supervised learning task, where the network was trained to classify the Iris data set [10]. We randomly placed 48 receptor neurons on the two-dimensional feature plane spanned by petal width and length. The firing rate of a receptor was set to increase with its proximity to a presented data point. In three separate scenarios, the resulting input spike trains were injected into , , and synaptic rows, leading to three different levels of sparsity: each label neuron could only see , , and of the receptors at each point in time, respectively. During training, teacher stimuli ensured that the correct label neurons were excited when an input belonging to their respective class was presented.

The emulated plasticity rule led to a self-organized reconfiguration of the label neurons’ receptive fields (Fig. 5B), as the correlation between teacher signal and receptor proximity to the presented data drove the potentiation of associated synaptic weights. For higher degrees of enforced sparsity, convergence times were longer, as the search for relevant inputs in the feature space became statistically more challenging. Ultimately, however, the learning rule enabled the network to achieve near-perfect classification in all three scenarios (Fig. 5C), demonstrating its ability to ensure a better utilization of synaptic resources without prior knowledge of the input data.

III-E Insect navigation

Recent developments in biological imaging and data processing have facilitated unprecedented insight into numerous functional aspects of insect brains [5, 40, 39]. For example, it has been shown that a structure known as the central complex is involved in navigational behavior [29]. Based on physiological data from the bee’s central complex and following [37], we emulated a network for path integration (Fig. 6A) that reproduces bees’ ability to return to their nest’s location after exploring the environment for sources of food.

Each experiment started with a spread-out phase, in which a virtual insect performed a random walk starting from a certain origin. During this phase, the modeled network had no effect on the insect’s motion but was only provided with sensory data: the absolute head orientation and the optical flow fields of a left and a right eye. In the second part of the experiment, the return phase, the insect’s motion was determined by the motor neurons, which were part of the network. The insect’s head orientation was encoded by four spike sources, each representing a cardinal direction, similar to a compass. The optical flow was similarly represented by two spike generators that fired with a rate proportional to the optical flow derived from the left and right eye, respectively (FL and FR). Moreover, the two motor neurons (ML and MR) steered the insect by providing propulsion on the left or right hand side, similar to a tank drive.
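
As a toy illustration of this sensory encoding (the text does not specify the actual tuning curves; the rectified-cosine compass tuning and the flow gain below are our own assumptions):

```python
import numpy as np

def compass_rates(heading, r_max=100.0):
    """Four spike sources encode head orientation, one per cardinal
    direction, firing at a rectified-cosine similarity to the heading."""
    cardinals = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi  # N, E, S, W
    return r_max * np.maximum(0.0, np.cos(heading - cardinals))

def flow_rates(flow_left, flow_right, gain=1.0):
    """FL and FR fire at rates proportional to the optical flow seen
    by the left and right eye, respectively."""
    return gain * flow_left, gain * flow_right

rates = compass_rates(0.0)  # heading aligned with the first cardinal axis
```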

While the model in [37] comprises 90 rate-based neurons with floating-point precision, the network on BrainScaleS-2 achieved about the same functional performance with only 18 neurons. Additionally, the short-term memory employed by the integrator neurons to store directional distance was implemented as a synaptic mechanism.

Fig. 6: Virtual insectoid agent performing path integration on BrainScaleS-2. A) Network schematic and activity histogram. The information flows from the sensory layer at the top through an integration and a steering layer to the motor neurons at the bottom. R and L indicate the right and left side, respectively. B) A typical trajectory of the virtual insect, which switches to random looping around the home position upon reaching it. C) Overlay of 100 trajectories as in B), each with a different random outbound journey.

The total flight duration was set to on the hardware, which corresponds to in biology. In that time, sensory information and steering signals were exchanged between body and brain every . During the first the insect performed a random outbound journey, after which it returned to the nest. Sample trajectories can be seen in Fig. 6B,C. The average spike rate of all neurons and spike generators was ( bio), which is in good agreement with experimental data from Drosophila [17] or locusts [27].

In this experiment, the PPU handled multiple tasks: the processing of synaptic modulations for the integrator neurons, the simulation of the environment, an emulation of all sensors including the corresponding spike stimuli, the translation of neuronal data into actions of motion, and the entire experiment control. Apart from the setup and readout phase, the experiment ran entirely self-contained on the BrainScaleS-2 system.

IV Discussion and outlook

In a post-Moore era, neuromorphic circuits represent a promising avenue for advancing the computational capabilities of silicon. This manuscript demonstrates how, by coupling the advantages of analog circuits with the flexibility of general-purpose digital computation and control, our BrainScaleS-2 architecture contributes to the research-oriented territory of the neuromorphic landscape. In this endeavor, we share a common goal with other promising architectures such as [14] and [7], which follow radically different design paradigms with advantages and drawbacks of their own.

A key aspect that we do not address above, but ultimately decides the value of such systems for computational neuroscience research, is their scalability. Following integration concepts first proposed in [34] and studied in, e.g., [32, 36], the BrainScaleS-2 architecture is explicitly designed to scale up to large, multi-chip systems. These will conserve the network-size-independent speedup and energy efficiency that we have addressed in our above experiments, thus providing access to spiking network studies that are otherwise prohibitive for simulation software running on conventional substrates.

Acknowledgements

We gratefully acknowledge funding from the European Union under grant agreements 604102, 720270, 785907 (HBP) and the Manfred Stärk Foundation.

References

  • [1] S. A. Aamir, P. Müller, G. Kiene, L. Kriener, Y. Stradmann, A. Grübl, J. Schemmel, and K. Meier (2018-10) A mixed-signal structured AdEx neuron for accelerated neuromorphic cores. IEEE Transactions on Biomedical Circuits and Systems 12 (5), pp. 1027–1037. External Links: Document, ISSN Cited by: §II.
  • [2] S. A. Aamir, Y. Stradmann, P. Müller, C. Pehle, A. Hartel, A. Grübl, J. Schemmel, and K. Meier (2018-12) An accelerated LIF neuronal network array for a large-scale mixed-signal neuromorphic architecture. IEEE Transactions on Circuits and Systems I: Regular Papers 65 (12), pp. 4299–4312. External Links: Document, ISSN Cited by: §II.
  • [3] S. Billaudelle, B. Cramer, M. A. Petrovici, K. Schreiber, D. Kappel, J. Schemmel, and K. Meier (2019) Structural plasticity on an accelerated analog neuromorphic hardware system. arXiv preprint arXiv:1912.12047. Cited by: §III-D.
  • [4] T. Bohnstingl, F. Scherr, C. Pehle, K. Meier, and W. Maass (2019) Neuromorphic hardware learns to learn. Frontiers in neuroscience 13. Cited by: §II.
  • [5] A. Chiang, C. Lin, C. Chuang, H. Chang, C. Hsieh, C. Yeh, C. Shih, J. Wu, G. Wang, Y. Chen, et al. (2011) Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Current Biology 21 (1), pp. 1–11. Cited by: §III-E.
  • [6] B. Cramer, D. Stöckel, M. Kreft, J. Schemmel, K. Meier, and V. Priesemann (2019) Control of criticality and computation in spiking neuromorphic networks with plasticity. arXiv preprint arXiv:1909.08418. Cited by: §II.
  • [7] M. Davies, N. Srinivasa, T. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, et al. (2018) Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38 (1), pp. 82–99. Cited by: §IV.
  • [8] D. Dold, I. Bytschok, A. F. Kungl, A. Baumbach, O. Breitwieser, W. Senn, J. Schemmel, K. Meier, and M. A. Petrovici (2019) Stochasticity from function—why the Bayesian brain may need no noise. Neural Networks 119, pp. 200–213. Cited by: §III-B.
  • [9] K. Doya, S. Ishii, A. Pouget, and R. P. Rao (2007) Bayesian brain: probabilistic approaches to neural coding. MIT press. Cited by: §III-B.
  • [10] R. A. Fisher (1936) The use of multiple measurements in taxonomic problems. Annals of eugenics 7 (2), pp. 179–188. Cited by: §III-D.
  • [11] N. Frémaux and W. Gerstner (2016) Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Frontiers in neural circuits 9, pp. 85. Cited by: §III-C.
  • [12] N. Frémaux, H. Sprekeler, and W. Gerstner (2010) Functional requirements for reward-modulated spike-timing-dependent plasticity. Journal of Neuroscience 30 (40), pp. 13326–13337. Cited by: §III-C.
  • [13] S. Friedmann, J. Schemmel, A. Grübl, A. Hartel, M. Hock, and K. Meier (2017) Demonstrating hybrid learning in a flexible neuromorphic hardware system. IEEE Transactions on Biomedical Circuits and Systems 11 (1), pp. 128–142. External Links: Document, ISSN 1932-4545 Cited by: §II, §II.
  • [14] S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana (2014) The SpiNNaker project. Proceedings of the IEEE 102 (5), pp. 652–665. Cited by: §IV.
  • [15] J. Göltz, A. Baumbach, S. Billaudelle, O. Breitwieser, D. Dold, L. Kriener, A. F. Kungl, W. Senn, J. Schemmel, K. Meier, et al. (2019) Fast and deep neuromorphic learning with time-to-first-spike coding. arXiv preprint arXiv:1912.11443. Cited by: §III-A, §III-A.
  • [16] J. Göltz (2019-04) Training deep networks with time-to-first-spike coding on the BrainScaleS wafer-scale system. Masterarbeit, Universität Heidelberg. External Links: Link Cited by: §III-A.
  • [17] N. W. Gouwens and R. I. Wilson (2009) Signal propagation in Drosophila central neurons. Journal of Neuroscience 29 (19), pp. 6239–6249. Cited by: §III-E.
  • [18] G. E. Hinton, T. J. Sejnowski, and D. H. Ackley (1984) Boltzmann machines: constraint satisfaction networks that learn. Carnegie-Mellon University, Department of Computer Science Pittsburgh. Cited by: §III-B.
  • [19] M. Hock, A. Hartel, J. Schemmel, and K. Meier (2013-09) An analog dynamic memory array for neuromorphic hardware. In Circuit Theory and Design (ECCTD), 2013 European Conference on, pp. 1–4. External Links: Document Cited by: §II.
  • [20] A. J. Holtmaat, J. T. Trachtenberg, L. Wilbrecht, G. M. Shepherd, X. Zhang, G. W. Knott, and K. Svoboda (2005) Transient and persistent dendritic spines in the neocortex in vivo. Neuron 45 (2), pp. 279–291. Cited by: §III-D.
  • [21] D. Kappel, S. Habenschuss, R. Legenstein, and W. Maass (2015) Synaptic sampling: a bayesian approach to neural network plasticity and rewiring. In Advances in Neural Information Processing Systems, pp. 370–378. Cited by: §III-D.
  • [22] A. Knoblauch and F. T. Sommer (2016) Structural plasticity, effectual connectivity, and memory in cortex. Frontiers in neuroanatomy 10, pp. 63. Cited by: §III-D.
  • [23] A. F. Kungl, S. Schmitt, J. Klähn, P. Müller, A. Baumbach, D. Dold, A. Kugele, N. Gürtler, E. Müller, C. Koke, et al. (2018) Generative models on accelerated neuromorphic hardware. arXiv preprint arXiv:1807.02389. Cited by: §III-B, §III-B, §III-B.
  • [24] L. Leng, R. Martel, O. Breitwieser, I. Bytschok, W. Senn, J. Schemmel, K. Meier, and M. A. Petrovici (2018) Spiking neurons with short-term synaptic plasticity form superior generative networks. Scientific reports 8 (1), pp. 10651. Cited by: §III-B.
  • [25] C. Liu, G. Bellec, B. Vogginger, D. Kappel, J. Partzsch, F. Neumärker, S. Höppner, W. Maass, S. B. Furber, R. Legenstein, et al. (2018) Memory-efficient deep learning on a SpiNNaker 2 prototype. Frontiers in neuroscience 12. Cited by: §III-D.
  • [26] Y. Loewenstein, A. Kuras, and S. Rumpel (2011) Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. Journal of Neuroscience 31 (26), pp. 9481–9488. Cited by: §III-D.
  • [27] L. C. Moreaux and G. Laurent (2007) Estimating firing rates from calcium signals in locust projection neurons in vivo. Frontiers in neural circuits 1, pp. 2. Cited by: §III-E.
  • [28] H. Mostafa (2017) Supervised learning based on temporal coding in spiking neural networks. IEEE transactions on neural networks and learning systems 29 (7), pp. 3227–3235. Cited by: §III-A.
  • [29] K. Neuser, T. Triphan, M. Mronz, B. Poeck, and R. Strauss (2008) Analysis of a spatial orientation memory in Drosophila. Nature 453 (7199), pp. 1244. Cited by: §III-E.
  • [30] M. A. Petrovici, J. Bill, I. Bytschok, J. Schemmel, and K. Meier (2016) Stochastic inference with spiking neurons in the high-conductance state. Physical Review E 94 (4), pp. 042312. Cited by: §III-B.
  • [31] M. A. Petrovici, S. Schmitt, J. Klähn, D. Stöckel, A. Schroeder, G. Bellec, J. Bill, O. Breitwieser, I. Bytschok, A. Grübl, et al. (2017) Pattern representation and recognition with accelerated analog neuromorphic systems. In 2017 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–4. Cited by: §III-B.
  • [32] M. A. Petrovici, B. Vogginger, P. Müller, O. Breitwieser, M. Lundqvist, L. Muller, M. Ehrlich, A. Destexhe, A. Lansner, R. Schüffny, et al. (2014) Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PloS one 9 (10), pp. e108590. Cited by: §IV.
  • [33] D. Probst, M. A. Petrovici, I. Bytschok, J. Bill, D. Pecevski, J. Schemmel, and K. Meier (2015) Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons. Frontiers in computational neuroscience 9, pp. 13. Cited by: §III-B.
  • [34] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner (2010) A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pp. 1947–1950. Cited by: §IV.
  • [35] J. Schemmel, L. Kriener, P. Müller, and K. Meier (2017) An accelerated analog neuromorphic hardware system emulating NMDA- and calcium-based non-linear dendrites. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2217–2226. Cited by: §II.
  • [36] S. Schmitt, J. Klähn, G. Bellec, A. Grübl, M. Güttler, A. Hartel, S. Hartmann, D. Husmann, K. Husmann, S. Jeltsch, et al. (2017) Neuromorphic hardware in the loop: training a deep spiking network on the BrainScaleS wafer-scale system. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2227–2234. Cited by: §III-B, §IV.
  • [37] T. Stone, B. Webb, A. Adden, N. B. Weddig, A. Honkanen, R. Templin, W. Wcislo, L. Scimeca, E. Warrant, and S. Heinze (2017) An anatomically constrained model for path integration in the bee brain. Current Biology 27 (20), pp. 3069–3085. Cited by: §III-E, §III-E.
  • [38] R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. MIT press. Cited by: §III-C.
  • [39] S. Takemura, Y. Aso, T. Hige, A. Wong, Z. Lu, C. S. Xu, P. K. Rivlin, H. Hess, T. Zhao, T. Parag, et al. (2017) A connectome of a learning and memory center in the adult Drosophila brain. Elife 6, pp. e26975. Cited by: §III-E.
  • [40] S. Takemura, A. Bharioke, Z. Lu, A. Nern, S. Vitaladevuni, P. K. Rivlin, W. T. Katz, D. J. Olbris, S. M. Plaza, P. Winston, et al. (2013) A visual motion detection circuit suggested by Drosophila connectomics. Nature 500 (7461), pp. 175. Cited by: §III-E.
  • [41] C. S. T. Thakur, J. Molin, G. Cauwenberghs, G. Indiveri, K. Kumar, N. Qiao, J. Schemmel, R. M. Wang, E. Chicca, J. Olson Hasler, et al. (2018) Large-scale neuromorphic spiking array processors: a quest to mimic the brain. Frontiers in neuroscience 12, pp. 891. Cited by: §I.
  • [42] T. Wunderlich, A. F. Kungl, E. Müller, A. Hartel, Y. Stradmann, S. A. Aamir, A. Grübl, A. Heimbrecht, K. Schreiber, D. Stöckel, et al. (2019) Demonstrating advantages of neuromorphic computation: a pilot study. Frontiers in Neuroscience 13, pp. 260. Cited by: §III-C.