The neuron models used in these machine learning applications are heavily simplified compared to their biological counterparts. The fact that such a simple copy of the basic architecture of the nervous system is already capable of impressive results encourages many scientists to believe that even more powerful computing devices might be built by looking more closely at nature's principles. The hope is that, even without a full understanding of the operation of the brain, studying its architecture will lead to new inspirations for the development of novel computing systems.
Testing such concepts is mostly done with numerical simulations, often using standardized software packages [4, 5]. These tools allow a step up in complexity from the Perceptron model by using spike-based neuron models. Spike-based models cover a wide range of complexities, from basic Integrate-and-Fire models up to Hodgkin-Huxley-like models incorporating a multitude of different ion-channel kinetics.
It has been shown by several research groups that prominent concepts used with Perceptron models, for example sampling-based approaches [7, 8] or deep learning using back-propagation, can be transferred successfully to spiking models. On typical benchmark problems, many spike-based implementations reach scores similar to those of their rate-based counterparts, but they execute orders of magnitude slower on the commonly used HPC platforms.
The situation gets even worse if the training methods are constrained to biologically plausible mechanisms, which require that at short timescales all information exchange is done by spikes only. This forbids, for example, the implementation of back-propagation as it is currently used in the standard deep-learning software packages. Fortunately, first ideas on how to circumvent these problems have been reported recently, where the network learns not only its objective but also the correct mapping of the error information from the output backwards to the upstream synapses.
Recent findings in biology have inspired novel models [3, 16] which include the three-dimensional structure of the neuron. The dendritic tree is no longer treated as a compartmentalized cable equation, but as a complex non-linear structure which allows multi-layer information processing and coincidence detection within a single neuron.
Including all these details in numerical simulations exacerbates the performance problems already surfacing with spiking neuron models. The work in this paper presents an alternative approach to the numerical modeling of multi-compartmental, non-linear dendrites, using physical-model-based neuromorphic hardware. It builds upon the BrainScaleS accelerated analog neuromorphic system in conjunction with the built-in hybrid plasticity extension developed for the BrainScaleS 2 system. It expands these concepts by incorporating novel circuits to mimic non-linear dendritic behavior, including an emulation of NMDA and calcium channels.
A neuromorphic chip has been presented that also contains NMDA emulation circuitry and has adapted many of the features of the BrainScaleS system, implementing them, however, in the real-time domain using sub-threshold point neurons, in contrast to the accelerated emulation used in the circuits presented in this publication. Some authors have reported neuromorphic circuits incorporating aspects of NMDA-R behavior to achieve a certain functionality. In contrast to those, our approach is not targeted at a single functional model, but aims to be a universal experimental platform. The presented implementation is also inherently scalable within the framework of the BrainScaleS system. Neuromorphic hardware concepts based on non-linear dendrites have also been reported previously, demonstrating the viability of the concept for pattern classification.
This paper presents an extension of the BrainScaleS AdEx neuron model that allows the replication of coincidence detection between basal and apical dendritic segments similar to that observed in experiments. The circuits are fully plastic and can be tuned during the experiment according to plasticity rules executed by the local plasticity processing unit (PPU), while still performing the network emulation at an acceleration factor of 1000. A first prototype chip has been submitted for manufacturing at the time of this writing.
The remaining sections of the paper are organized as follows: Section II gives an introduction to the BrainScaleS 2 accelerated analog neural network hardware implementation; Section III describes the theory as well as the circuit implementation of the multi-compartment extensions. Section IV shows simulation results demonstrating the presented circuits' capabilities. Section V discusses the built-in plasticity and possible learning algorithms. The paper closes with Section VI, presenting our conclusions and outlook.
II. Accelerated Analog Neuromorphic Hardware
The presented multi-compartment circuits are part of a larger research project which aims to develop the second-generation BrainScaleS neuromorphic hardware as part of the European Human Brain Project (HBP). The basic neuromorphic building block of every BrainScaleS system is the HICANN chip. It contains the neuron and synapse circuits as well as a digital communication network. While the first generation is implemented in standard CMOS technology, the second generation uses a smaller feature size, which enables, among other things, the inclusion of a plasticity processing unit (PPU) to implement hybrid plasticity.
Fig. 1 shows the basic structure of the BrainScaleS neuromorphic ASIC. The micro-photograph shows the current version of the BrainScaleS chip and serves only as an illustration, since the basic structure of the second-generation BrainScaleS ASIC, which is the version referred to in this paper, will be very similar. The synapses are arranged in a two-dimensional array. All synapses in a column share their output, while two adjacent rows share the same group of pre-synaptic input signals. There are 512 rows altogether; each group of two rows is connected to 64 pre-synaptic neurons by means of the digital communication network. The inputs to the upper and lower block are independent of each other. Each block can receive events from a maximum of 8192 different pre-synaptic neurons.
To be able to create neurons of different sizes, each column of synapses, together with the neuron compartment circuits in the center of the chip, has an adjustable membrane capacitance which can be connected to the neighboring compartment circuit by an electronic switch (all switches are built from standard CMOS transmission gates). A second set of switches allows connecting adjacent neuron compartments in the upper and lower block, doubling the number of available pre-synaptic inputs to the neuron. The maximum number of pre-synaptic neurons that can project to a single neuron is 16k. The pre-synaptic neurons can be located on the same or on remote chips, either on the same or on different silicon wafers.
Fig. 2 illustrates the synaptic input of the neuron with its associated synapses as well as the temporal relationship of the related signals. It shows that a column of synapses (rotated in the figure to fit the page format) is connected to the two dendritic input lines of the neuron compartment, labeled A and B. The enlargement in the lower half of Fig. 2 depicts the different functional blocks within each synapse. At its core is a memory storing the current weight of the synapse. A weight of zero means there is effectively no connection between the pre- and post-synaptic neurons, but the correlation between pre- and post-synaptic events is still being monitored (this will be covered in detail in Section V).
The neuron compartment circuit emulates the different ion channels. The voltage on the membrane capacitance reflects the momentary membrane voltage of the compartment. The conductances and capacitances are scaled such that all time constants are a factor of 1000 shorter compared to biology, hence the qualifier accelerated in the designation of the BrainScaleS model.
A time-multiplexed scheme is used to allow the high number of inputs per row. The communication network delivers pre-synaptic events at a fixed maximum rate. Each row of synapses receives pre-synaptic events from up to 64 different pre-synaptic neurons through said network. The synapses receive the events via one shared input bus per row, which transmits a pre-synaptic address to identify the pre-synaptic neuron. Each synapse stores a pre-synaptic address that is compared to the address presented on the input bus each time a pre-synaptic event is transmitted. In case of an address match, the synapse uses a pulse of precise duration to sink current from the dendritic input of the neuron compartment circuit it is connected to. The amplitude of the current pulse is proportional to the weight stored in the synapse's weight memory. Depending on a row-wise configuration setting, the synapses use one of two available dendritic inputs, named A and B in Fig. 2. This allows each neuron compartment to accommodate two different synaptic time constants and reversal potentials. The capacitance of the dendritic input line acts as an integrator of all synaptic current pulses. An adjustable resistor recharges the dendritic line capacitance continuously, thereby setting the synaptic time constant. Due to the acceleration factor, the synaptic time constant is typically about three orders of magnitude longer than the synaptic current pulses.
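The address-matching scheme described above can be illustrated with a short behavioral sketch. This is a hypothetical model for illustration only, not the hardware description: the address width, weight values and the charge-based dendrite model are assumptions.

```python
# Hypothetical behavioral sketch of the row-wise time-multiplexed event
# delivery: every synapse stores a pre-synaptic address and only responds
# when the address broadcast on the shared row bus matches.
from dataclasses import dataclass

@dataclass
class Synapse:
    address: int   # stored pre-synaptic address (0..63 per row)
    weight: int    # stored weight; 0 disables the connection
    target: str    # dendritic input line, "A" or "B" (row-wise setting)

def deliver_event(row, bus_address, dendrite_charge):
    """Deliver one pre-synaptic event broadcast on the shared row bus.

    On an address match, a synapse sinks a current pulse proportional to its
    weight from its dendritic input line (modeled here as accumulated charge).
    """
    for syn in row:
        if syn.address == bus_address and syn.weight > 0:
            dendrite_charge[syn.target] += syn.weight
    return dendrite_charge

row = [Synapse(3, 10, "A"), Synapse(3, 0, "A"), Synapse(7, 5, "B")]
charge = deliver_event(row, 3, {"A": 0.0, "B": 0.0})  # only synapse 0 matches
charge = deliver_event(row, 7, charge)                # synapse 2 matches
```

Note that the synapse with weight zero matches the first address but contributes nothing, mirroring the "monitored but disconnected" state described in Fig. 2.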
By storing not only their weight, but also part of their pre-synaptic neuron address, the content of the synapse memories defines the local network topology. To program these memories, a custom SIMD micro-processor, the PPU, is connected to the synapse array at the opposite edge of the compartment circuits. Fig. 3 illustrates this arrangement; it is described in detail in the corresponding publication.
III. Multi-Compartment Neurons
III-A. Conceptual Background
The accelerated analog neuron model presented in Section II can be used to emulate point-neuron-based network models with a biologically realistic fan-in of more than 10k pre-synaptic neurons. To use a physical model with such a high number of inputs, one has to consider the linear character of the synaptic input in the point-neuron model. Identical synapses generate the same PSP and thus make the same potential contribution to the firing of the soma. This contribution decreases with the total number of synapses, since in the BrainScaleS physical model increasing the size of the neuron automatically decreases the PSP of a single synapse. This is caused by the growth of the membrane capacitance due to the larger number of neural compartment circuits connected together.
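The scaling argument can be made concrete with a back-of-the-envelope calculation: the peak deflection of a single synaptic current pulse is roughly Q / C_total, and C_total grows with every compartment circuit switched into the neuron. The capacitance value below is made up purely for illustration.

```python
# Illustrative only: peak PSP of one synapse shrinks as the neuron is
# enlarged, because the total membrane capacitance grows with every
# compartment circuit that is switched in. The per-compartment capacitance
# is an assumed placeholder value, not a hardware specification.
def psp_amplitude_mV(charge_pC, n_compartments, c_per_compartment_pF=2.0):
    """Peak PSP deflection dV = Q / C_total for one synaptic current pulse."""
    return charge_pC / (n_compartments * c_per_compartment_pF)
```

With these placeholder numbers, a synapse that deflects a single-compartment neuron by 5 mV deflects a ten-compartment neuron by only 0.5 mV, which is why more synchronous inputs are needed as the neuron grows.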
Therefore, the more synapses a neuron has, the more pre-synaptic action potentials must arrive in synchrony to reliably relay an input pattern. On the other hand, it is desirable to use sparse coding in neural networks [35, 36]. The energy consumption of the BrainScaleS physical model is directly linked to the sparseness of the neural code used.
Arguments against the simple linear addition of all synaptic inputs can also be found in biology. It has been observed that many neo-cortical cells are subject to dense background firing from thousands of synapses, and that microscopic sources of true noise are present at each synapse.
Recent findings have shed some light on different non-linear mechanisms within the dendritic tree of the neuron, most notably the capability of the dendritic membrane to generate different spike types in distinct parts of the dendritic tree. These properties provide a possible solution to the problem outlined above. One reviewed hypothesis states: “Clustering of synaptic inputs in space (and time) improves the chances for reaching the dendritic threshold for firing a regenerative (amplified) response and provides the opportunity for faster and more frequent cooperation among synaptic contacts involved in the same computational task.” The authors furthermore elaborate: “Instead of thousands of synaptic inputs, the pyramidal cell requires only a correct set of 50 active synaptic contacts to trigger a regenerative dendritic response (e.g. NMDA/plateau potential)”. Fig. 4, taken from the same review, summarizes the three kinds of spikes one can observe within a cortical pyramidal neuron.
NMDA receptors are typically located in the thin, distal parts of the tuft, oblique and basal dendrites, where they are responsible for NMDA spikes. Since they are usually co-located with sodium channels, the resulting waveforms resemble the NMDA plateau potential (PP) shown in Fig. 4A. Triggered by a sufficiently localized synaptic input of approximately 10 to 50 pre-synaptic action potentials, the NMDA spike strongly increases the membrane conductance for a period ranging from several tens to hundreds of milliseconds. Because NMDA channels are glutamate receptors with voltage-dependent magnesium blocks, the NMDA PP is a strongly non-linear function of the pre-synaptic input. If the dendritic membrane stays below its threshold, only a sub-threshold PSP is observed.
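The voltage dependence of the magnesium block mentioned above is commonly described by the Jahr and Stevens (1990) expression, which gives the fraction of unblocked NMDA channels as a sigmoidal function of the membrane potential. The sketch below uses the standard published constants; it is background for the biology, not part of the presented circuit.

```python
# Jahr & Stevens (1990) voltage dependence of the NMDA-R magnesium block:
# the fraction of unblocked channels is near zero at hyperpolarized
# potentials and approaches one with strong depolarization, which is the
# origin of the non-linear, regenerative NMDA response.
import math

def mg_block(v_mV, mg_mM=1.0):
    """Fraction of NMDA channels not blocked by Mg2+ at potential v_mV."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * math.exp(-0.062 * v_mV))
```

At a resting potential around -70 mV almost all channels are blocked, while near 0 mV most of the block is relieved; this steep voltage dependence is what the threshold-switched channel emulation in Section III approximates.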
The presented in-silico emulation has been guided by these observations. Because the design of our physical modeling system already incorporates the interconnection of compartments to implement a scalable number of synapses per neuron, it was a natural step to include an extension that incorporates dendritic structures with active components. In the subsequent sections we focus on pyramidal neurons because of their supposed role in cortical information processing.
III-B. Circuit Implementation

The implementation of the multi-compartment concepts introduced in Section III-A into the BrainScaleS 2 neuron requires two additional features for the existing neuron circuit: an emulation of the effect of the NMDA and calcium ion channels as well as controllable inter-compartmental conductances.
The previous neuron implementations of our research group, beginning with the Spikey neuron and including our previous multi-compartment chip, are only capable of emulating a sodium-like spike. This is done by continuously comparing the membrane voltage against an adjustable threshold voltage. If the threshold voltage is crossed, a spike is generated and the membrane is connected to the reset potential by a very high conductance. This condition is held for an adjustable amount of time to generate the refractory period of the neuron. After the refractory time has passed, the connection to the reset potential is released and the membrane is again controlled by the interplay of synaptic input and leakage potentials.
Fig. 5 illustrates the operational principle of the ion channel circuit. It is based on a unified emulation circuit for the three different neuronal spike types listed in Fig. 4. The ion channel circuit uses two adjustable settings for its reversal potential as well as for its conductance. The active setting is controlled by a voltage comparator (Comp), which continuously compares the membrane voltage against an adjustable threshold. If the threshold voltage is crossed, the ion channel circuit switches to the alternate setting. The output signal of the comparator passes through a mono-flop (MF) which ensures that the ion channel circuit switches to its alternate setting for a defined period of time. In the presented implementation this time interval is controlled by a digital counter, allowing a wide dynamic range from sub-milliseconds to several hundreds of milliseconds in biological time. The ion channel itself is built from an operational transconductance amplifier (OTA), emulating the channel conductance. Electronic switches connect one of the two electrical parameter sets to the OTA. The parameters are part of the analog parameter storage memory associated with the neuron compartment circuits. This memory holds 24 analog parameters for each individual neuron compartment. In addition to parameter tuning, the values stored in the analog memory are also used to compensate process and fixed-pattern variations.
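The comparator/mono-flop/OTA interplay can be summarized in a small behavioral sketch. This is an illustrative model, not the silicon netlist: all voltages, conductances and the edge-triggered re-arming of the comparator are assumptions made for clarity.

```python
# Behavioral sketch of the unified ion-channel circuit of Fig. 5: a
# comparator triggers a mono-flop that switches the OTA between two
# (reversal potential, conductance) parameter sets for a counter-defined
# number of ticks. All parameter values are made up for illustration.
class IonChannel:
    def __init__(self, v_thresh, leak, active, hold_steps):
        self.v_thresh = v_thresh          # comparator threshold
        self.e_leak, self.g_leak = leak   # default parameter set (E, g)
        self.e_act, self.g_act = active   # alternate set, e.g. NMDA plateau
        self.hold_steps = hold_steps      # mono-flop duration in counter ticks
        self.timer = 0                    # remaining ticks in alternate mode
        self.armed = True                 # assumed edge-triggered comparator

    def current(self, v_mem):
        """Return the OTA current onto the membrane for one time step."""
        above = v_mem > self.v_thresh
        if above and self.armed and self.timer == 0:
            self.timer = self.hold_steps  # comparator fires, mono-flop starts
        self.armed = not above            # re-arm after falling below threshold
        if self.timer > 0:
            self.timer -= 1
            e, g = self.e_act, self.g_act # plateau parameter set
        else:
            e, g = self.e_leak, self.g_leak
        return g * (e - v_mem)            # conductance-based channel current

# NMDA-like configuration: crossing -55 switches to a strong conductance
# toward 0 mV for 50 ticks, after which the leak pulls the membrane back.
ch = IonChannel(v_thresh=-55.0, leak=(-70.0, 0.1), active=(0.0, 1.0),
                hold_steps=50)
v, trace = -70.0, []
for t in range(200):
    if t == 10:
        v = -50.0                         # synaptic input crosses threshold
    v += ch.current(v)                    # Euler step with C = 1, dt = 1
    trace.append(v)
```

The trace rises to a plateau near the active reversal potential and relaxes back to the leak potential once the mono-flop interval expires, qualitatively matching the transistor-level behavior described for Fig. 5.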
In Fig. 5 the circuit is configured for the emulation of NMDA channels: the threshold is set to the gating voltage of the NMDA receptor, and the ion channel emulation is switched from the leakage setting to the setting for an NMDA PP. The right half of the figure shows a transistor-level simulation of this circuit, demonstrating the effect of the voltage-gated NMDA channel on the membrane voltage: as soon as multiple PSPs pile up and reach the NMDA threshold, the conductance mode is switched and the NMDA conductance quickly pulls the membrane up to the NMDA reversal potential, where it stays until the mono-flop interval has passed and the ion channel emulation switches back to the leakage parameters, pulling the membrane back to the compartment's leakage potential.
By changing the values of the threshold, the two parameter sets of the ion channel and the time constant of the mono-flop, other kinds of spikes can be emulated as well. For example, in the sodium case the threshold equates to the firing threshold and the time constant to the refractory time. Instead of the NMDA parameters, the ion channel circuit is set to maximum conductance and the reversal potential to the reset voltage. This will be shown in more detail in Section IV.
In the current revision of the BrainScaleS 2 chip, each neuron compartment circuit contains one instance of the functional unit described above. Therefore, each compartment can now be configured to generate either NMDA, calcium or sodium spikes. All three spike types also generate digital signals that can be routed as events to other parts of the system, typically but not exclusively the pre-synaptic inputs of unrelated neurons. Thus, they also take part in the coincidence detection mechanisms used for plasticity (see Section V). Since all parameters are freely adjustable within the available ranges, settings which do not resemble biological examples can be realized as well. If no voltage-gated ion channels are required, the circuit can be disabled.
The presented models are still simplistic and do not take into account some of the known features of their biological counterparts. For example, the glutamate concentration at the distal dendrite modulates the length of the NMDA PP. Also, the channel emulation is only voltage-gated, while the real NMDA-R molecule is a glutamate receptor with a voltage-dependent magnesium block.
The second extension is an additional interconnect to create larger neurons from a set of neuron compartments. The shaded part of Fig. 6 shows the necessary components: an adjustable conductance per neuron compartment and several switches. The new shared line represents the somatic membrane. Each neuron compartment can be connected to it via an adjustable conductance which represents the conductance between the distal dendrite and the soma. Usually, not all neuron compartments within a neuron block (see Fig. 1) are supposed to be part of the same neuron. Therefore, switches are built into the somatic line at regular distances (every four compartments in Fig. 6) which allow its separation into different neurons. The somatic membrane by itself does not contain any active circuits, nor does it have an associated membrane capacitance. It acquires this functionality by a connection to a neuron compartment which is configured to have an infinite conductance between its compartmental membrane and the somatic membrane. To achieve the effect of this infinite conductance, or zero resistance, a bypass switch exists in parallel to each adjustable conductance.
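The effect of the adjustable inter-compartment conductance can be illustrated with a minimal passive two-compartment model. This simplified sketch couples the compartments directly rather than through the somatic line, and all parameter values are invented for illustration, not calibrated hardware settings.

```python
# Minimal passive sketch of two leaky compartments coupled by an adjustable
# conductance g_c, loosely corresponding to the resistor symbols in Fig. 6.
# Parameter values are illustrative placeholders.
def step(v, e_leak=-70.0, g_leak=0.05, g_c=0.2, i_ext=(0.0, 0.0), dt=1.0, c=1.0):
    """One Euler step of two coupled leaky compartments."""
    v0, v1 = v
    dv0 = (g_leak * (e_leak - v0) + g_c * (v1 - v0) + i_ext[0]) / c
    dv1 = (g_leak * (e_leak - v1) + g_c * (v0 - v1) + i_ext[1]) / c
    return (v0 + dt * dv0, v1 + dt * dv1)

v = (-70.0, -70.0)
for _ in range(100):
    v = step(v, i_ext=(1.0, 0.0))   # constant current into compartment 0 only
# the unstimulated compartment is pulled up passively through the coupling
```

Setting g_c very large approximates the closed bypass switch (both compartments share one membrane voltage), while g_c = 0 corresponds to an open switch that fully isolates them.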
Including said extensions in the basic neuron block structure illustrated in Fig. 1, the neuron can now be configured to emulate non-linear, multi-compartment neurons. By including calcium spikes it is possible to emulate more complex neurons, for example layer 5 pyramidal neurons. Fig. 7A illustrates this concept: the basic structure of a pyramidal neuron is replicated using the presented circuits. The individual compartments are connected by the adjustable inter-compartment conductances introduced in Fig. 6, depicted by resistor symbols. The neuron model consists of a set of distal tuft and basal dendrites containing NMDA receptors, giving them the ability to create NMDA PPs if the NMDA threshold is reached. All distal basal dendrites are connected to the soma, which is configured to generate sodium spikes to emulate the axon hillock. The distal tuft dendrites converge at a separate junction, emulating the apical dendrite. The apical dendrite connects to the soma via a compartment configured for calcium spike generation. This allows the electronic neuron model to detect coincidences between its basal and distal inputs, similar to the measurements of layer 5 pyramidal neurons reported experimentally. Section IV provides simulation results illustrating this mechanism.
Fig. 7B depicts the same configuration of the neuron compartment circuits, but shown as they are arranged in the physical layout of the chip (see Fig. 1). Both neuron blocks, the upper and the lower, are used. The somatic line in the upper block emulates the apical dendrite, the one in the lower block the soma. The lower left compartment is configured to generate sodium spikes. Therefore, the bypass switch for the resistor connecting it to the somatic line is closed. Its membrane capacitance becomes the somatic capacitance of the neuron. If the voltage on this capacitance crosses the sodium threshold programmed into the compartment, the neuron will fire a spike and the soma capacitance will be pulled down to the reset potential. The nmda compartments (marked red in Fig. 7B) in the lower block emulate the distal basal dendrites, the ones in the upper block the distal tuft dendrites.
Two compartments, one in the upper and one in the lower block, are used to connect the apical dendrite with the soma (shown in blue). This is accomplished by closing the vertical switch between them, which directly connects their membrane capacitances so that both calcium spike mechanisms share the same membrane voltage. This voltage is isolated from the soma as well as from the apical dendrite by an adjustable conductance each. The calcium spike generation can use either one or both of the spike mechanisms in the two compartments, which provides additional possibilities for better approximating the correct calcium waveform by combining multiple time constants and conductances. The possible neuron models are not limited to the pyramidal neuron example. For instance, it is also possible to create a branch in the apical dendrite, or to have several distinct dendrites.
IV. Simulation Results

This section provides results from circuit simulations of the neuron and its multi-compartment extensions. The simulation setup includes four neuron compartments with the corresponding multi-compartment circuits and eight synapses per compartment. Only the part of the system that is essential to the new multi-compartment functionality is simulated at transistor level, to reduce computation time. The simulated circuits match those of the prototype ASIC. The Spectre simulator (Cadence Design Systems, Inc., San Jose, CA, USA) is used with device characterization data provided by TSMC (Taiwan Semiconductor Manufacturing Company, Ltd., Hsinchu, Taiwan).
A behavioral model is implemented for the mono-flop, which triggers the start of the refractory period and the alternate conductance mode (Fig. 5) when it receives a signal from the spike comparator. In the chip, the digital configuration is stored in local SRAM, while the analog parameter memory provides currents and voltages to the respective circuits. In the simulation, the SRAM is implemented at transistor level and each cell is initialized to the required value for the corresponding neuron setup. The analog parameter memory is simulated as a behavioral model which consists of the output stage for current parameters and an effectively ideal capacitance and resistor for voltage parameters.
The chip design provides the possibility to stimulate one neuron by external current in addition to spiking input that reaches the compartments via the synapse circuits. In the simulation, current and voltage signals are provided to implement these inputs to the neurons and synapses as the ideal version of the input that is seen by the circuits during operation. In particular, the pre-synaptic enable signal and neuron address (Fig. 2) are provided to each synapse to initiate synaptic events while the current stimulus is injected into a shared input line. One neuron at a time is configured to receive input from this line.
Voltage readout for an arbitrary compartment is possible via a dedicated read-out path in the chip design. This path is not included in the simulation to reduce simulation time and the signals that are shown are recorded directly from the corresponding capacitors in the circuit.
Fig. 8A shows the basic functionality of a multi-compartment configuration. One compartment receives two inputs of different strength. The exponential term of the AdEx implementation is enabled for the compartment which receives the input. This term generates a current onto the membrane that increases as an exponential function of the membrane potential itself. It acts as a soft threshold in addition to the explicit firing threshold in each compartment (Fig. 5). The second input is sufficiently strong to cause the membrane voltage of the stimulated compartment to exceed the threshold of the exponential term in that compartment and induce a spike and reset. The reset is configured to have a short refractory period and a low reset voltage, which corresponds to typical point-neuron models with Na spikes, e.g. the AdEx model. The neighboring compartment is passively pulled up via the inter-compartment conductance. The upswing of the membrane voltage during an action potential is not captured in the implemented circuit, as is usual for low-dimensional spiking neuron models; this is particularly beneficial for a hardware implementation, as it allows a better utilization of the available voltage range for the neural sub-threshold dynamics.
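The soft-threshold current described above is the exponential term of the AdEx model of Brette and Gerstner (2005). The sketch below states the standard formula; the parameter values are illustrative placeholders, not the chip's calibration.

```python
# Exponential (soft-threshold) term of the AdEx neuron model:
#   I_exp = g_L * Delta_T * exp((V - V_T) / Delta_T)
# The current is negligible far below V_T and grows steeply as the membrane
# potential approaches the soft threshold, initiating the spike upswing.
import math

def adex_exp_current(v_mV, g_l=0.05, delta_t_mV=2.0, v_t_mV=-50.0):
    """Soft spike-initiation current of the AdEx model (illustrative units)."""
    return g_l * delta_t_mV * math.exp((v_mV - v_t_mV) / delta_t_mV)
```

At V = V_T the term equals g_L * Delta_T, and every further Delta_T of depolarization multiplies it by e, which is why a sufficiently strong input reliably triggers the spike-and-reset behavior shown in Fig. 8A.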
The refractory time, reset voltage and threshold can be configured individually for each compartment (Fig. 8B), which is central to the implementation of active dendrites (Section III). Here, the reset potential is set above the threshold and the reset duration is set to three different values. Since the reset conductance is configured to be greater than the leak conductance, this setting effectively serves as an additional positive input current to the neighboring compartment, which is pulled up passively.
A demonstration of directed coincidence detection is shown in Fig. 8C. Two compartments, one with a Na-like (short reset) and one with an NMDA-like (plateau potential) configuration, are connected by a conductance and each compartment is stimulated by distinct synaptic input. The circuit parameters are adjusted in such a way that the Na compartment emits spikes for single synaptic inputs during the high state of the NMDA compartment, but does not when the NMDA compartment is in its inactive state.
The features described above (Fig. 8A–C) are used to implement a functional behavior similar to that of layer 5 pyramidal neurons (Fig. 8D). It has been hypothesized that pyramidal neurons in the cortex act as coincidence detectors for their basal and apical inputs. This proposed mechanism employs the active nature of Ca and NMDA spikes (Fig. 4) to allow a non-linear interaction of synaptic input at opposite poles of the neuron. Fig. 8D shows how a dendritic stimulus leads to a marginal effect at the soma, while a somatic stimulus leads to a single action potential. Both inputs combined trigger a burst. This functionality is emulated using the active components in the presented circuits (Fig. 8E). Synaptic input to the NMDA compartment induces a PSP which propagates to the other compartments and is attenuated along the way. The current stimulus into the Na compartment (Fig. 8F) is set to cause a single spike. When both inputs are applied simultaneously, the voltage in both dendritic compartments crosses the respective threshold, pulling up the membrane voltage in the Na compartment and causing a burst (Fig. 8G).
This demonstrates how the presented physical implementation is configured in analogy to a biological use case, emulating the dendritic structure by a series of connected compartments and using the introduced extensions of the BrainScaleS neuron model (Section III) to emulate the active nature of the Ca and NMDA spikes in the biological reference. The simulation shows that using this structural analogy one can parameterize the circuit to achieve a functional analogy, in this case the implementation of a non-linear coincidence detection for inputs into different locations of a single neuron.
V. Plasticity and Learning

In Section III the basic concept of the BrainScaleS accelerated analog neuromorphic network chips has been presented, omitting one important aspect: plasticity. Similar to other neuromorphic devices, e.g. [45, 46], correlation measurement between pre- and post-synaptic events is used to implement learning. Fig. 2 and Fig. 3 show two key structures implementing plasticity (we restrict this section to the multi-compartment-related aspects of long-term plasticity; the BrainScaleS chips also implement short-term synaptic plasticity): the correlation sensor within each synapse and the PPU at the edge of each synapse array. The correlation sensor measures the exponentially weighted temporal difference of each pre-post and post-pre spike pair, using a nearest-neighbor pairing scheme, and stores it locally in each synapse. The PPU can read back these causal and anti-causal correlation data as well as the current weights and addresses of the synapses. It executes a software-defined algorithm to determine new weights and possibly new addresses. It can also update all parameters of a configured neuron circuit such as the one in Fig. 7, e.g. modify NMDA plateau durations, calcium threshold voltages or reset conductances. A detailed description of how these circuits interact to implement a flexible hybrid plasticity concept can be found in the corresponding publication.
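The exponentially weighted, nearest-neighbor pair measurement of the correlation sensor can be sketched as follows. The learning rate eta and time constant tau stand in for the adjustable circuit parameters and are assumptions, not hardware values.

```python
# Sketch of the exponentially weighted pair measurement performed by the
# correlation sensor (nearest-neighbor scheme). eta and tau_us are
# placeholders for the adjustable circuit parameters.
import math

def correlation_update(acc_causal, acc_anticausal, dt_us, eta=1.0, tau_us=20.0):
    """Accumulate one pre/post spike pair.

    dt_us > 0 means the pre-synaptic spike preceded the post-synaptic one
    (causal pair); dt_us < 0 is the anti-causal case. Each pair adds an
    exponentially weighted contribution to the matching accumulator.
    """
    w = eta * math.exp(-abs(dt_us) / tau_us)
    if dt_us > 0:
        return acc_causal + w, acc_anticausal
    return acc_causal, acc_anticausal + w

causal, anticausal = correlation_update(0.0, 0.0, 20.0)    # pre 20 us before post
causal, anticausal = correlation_update(causal, anticausal, -10.0)
```

The PPU would periodically read both accumulators per synapse and feed them into its software-defined weight-update rule.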
Measuring the correlation between pre- and post-synaptic events is frequently used to implement local learning in neuromorphic hardware, based on the strengthening of causal connections, i.e. synapses that were active in the time frame before the firing of the post-synaptic event. In large neurons with linear dendrites this becomes increasingly difficult, because of the diminishing effect a single synapse has on the firing of the post-synaptic event. The non-linear model using NMDA PPs may provide better signatures for plasticity. Only about 10 to 50 synapses located on a common distal dendrite are needed to evoke a plateau potential. Therefore, using the NMDA event as the post-synaptic event for plasticity provides a clear learning signal even with thousands of synapses connected to the neuron. The presented hardware model allows all spike types to be used as post-synaptic signals for plasticity, including NMDA for the synaptic columns configured as distal dendrites.
Grouping synapses onto distal dendrites with non-linear, active mechanisms solves the problem of the single synaptic event drowning in the overall synaptic input of the cell, but creates a new one: which synapses should be grouped on a dendrite? The presented neuromorphic hardware provides an efficient platform for testing algorithms using the combination of local correlation measurement and the possibility to quickly change the pre-synaptic input of a dendrite. This implements a hardware analogy to the kind of structural plasticity created by the growth of axons and dendrites and the formation of axonal boutons and dendritic spines.
Fig. 9 visualizes the basic concept. It presents an example where each row of synapses receives input from a group of neurons with similar, related information, e.g. part of an upstream layer or a subset of neighboring neurons within the layer. The PPU assigns random addresses to the synapses of such a row (rows two and three in the example). While the network emulation is continuously operating, the synapses in each column measure the correlation between the NMDA PPs and their input. The PPU monitors these correlation measurements and tags synapses with high correlation results to be established as working synapses, i.e. assigns them a non-zero weight (bold numbers), while it reassigns new random pre-synaptic input to the synapses showing weak correlation numbers. In addition, it can also reassign previously established synapses if their weight has fallen below a threshold, i.e. if their correlation has weakened over time.
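One pass of this structural plasticity loop can be sketched as follows. The function is a hypothetical illustration of the scheme, not the actual PPU firmware: the dictionary representation, threshold and address range are assumptions.

```python
# Hypothetical sketch of one PPU pass over a synapse row, following the
# scheme of Fig. 9: synapses with strong measured correlation are
# established, the rest are re-seeded with new random pre-synaptic
# addresses. Data layout and threshold are illustrative only.
import random

def structural_plasticity_step(synapses, corr, threshold, n_addresses=64, rng=random):
    for syn, c in zip(synapses, corr):
        if c >= threshold:
            syn["weight"] = max(syn["weight"], 1)        # establish synapse
        else:
            syn["weight"] = 0                            # disconnect and
            syn["address"] = rng.randrange(n_addresses)  # try a new input
    return synapses

row = [{"address": 3, "weight": 0},
       {"address": 5, "weight": 2},
       {"address": 9, "weight": 0}]
row = structural_plasticity_step(row, corr=[0.9, 0.1, 0.8],
                                 threshold=0.5, rng=random.Random(0))
```

Because only the stored addresses change, the rewiring happens entirely in the synapse memories, without any re-mapping of the network onto the hardware.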
Although a similar net result could be achieved by starting with a fully connected network and pruning unused connections by STDP, a much larger number of synapses would be needed initially, and a subsequent re-mapping of the remaining non-zero synapses onto the hardware would be necessary to realize any benefit from the pruning.
It is possible to route the post-synaptic firing signal of one compartment, for example the soma, to the synapses of different compartments. This allows the implementation of a functional analogy to back-propagating action potentials traveling from the soma to the dendrites. In a supervised learning scenario this may be used to relate different kinds of teacher signals that modulate somatic firing to the synaptic composition of the neuron. Similar models to implement biologically plausible back-propagation learning schemes have been proposed. Due to the acceleration factor of 1000, the chip can test a multitude of possible synaptic configurations per dendrite while still being faster than biological real time.
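The routing of one compartment's firing signal to the synapses of other compartments can be sketched as a simple table lookup. The routing table, the weight nudge, and the data layout are hypothetical, intended only to show how a somatic event could modulate recently active dendritic synapses in a back-propagation-like fashion.

```python
# Hypothetical sketch: when a source compartment (e.g. the soma) fires, its
# post-synaptic signal is delivered to the synapses of all target compartments
# listed in a routing table, nudging recently active synapses. The update rule
# and all names are assumptions for illustration.
def apply_post_signal(routing, compartments, dw=0.05):
    """Deliver each fired compartment's signal along the routing table."""
    for src, targets in routing.items():
        if compartments[src]["fired"]:
            for tgt in targets:
                for syn in compartments[tgt]["synapses"]:
                    if syn["active"]:        # synapse was recently active
                        syn["weight"] += dw  # strengthen the causal synapse
    return compartments
```

In a supervised setting, a teacher signal that drives somatic firing would thereby selectively reinforce the dendritic synapses that contributed to it.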
This paper presents extensions to the BrainScaleS 2 neuron model for non-linear dendrites and structured neurons using multiple compartments. The purpose of these extensions is not the emulation of the full three-dimensional structure of the neuron but its reduction to a minimal electronic model which captures the essential features of such a multi-compartment structure with active, non-linear dendrites. We think that the non-linearity created by the different kinds of spikes in combination with a flexible multi-compartment structure will significantly extend the capability of the BrainScaleS 2 system to help investigate the intersection between biologically inspired hypotheses of information processing, machine-learning derived approaches and the efficient implementation of these kinds of processing in dedicated hardware systems.
The ability of a single neuron to act as a coincidence detector and the availability of somatic spike information at distal dendrites is expected to facilitate the mapping of established machine learning approaches to large-scale spiking systems. Future computing based on neuromorphic hardware might also benefit from the presented level of biological realism, since it could help in creating efficient local learning strategies derived from the biological example. Most likely, once these strategies are identified and proven, the neuromorphic systems could be simplified again. Not all of the biological features implemented in the presented model will be needed for each application.
The fully parallel and accelerated nature of the system supports the fast investigation of spiking networks with widely differing time scales of neural and plasticity dynamics. Due to its analog implementation it retains the advantages of neuromorphic hardware, such as low power consumption and robustness against localized defects. Since it is spike-based and operates in continuous time, it may be well suited for studying spatio-temporal problems.
The multi-compartment extensions will not change the power consumption of the HICANN chip significantly. The energy needed per synaptic transmission depends strongly on network topology and activity and is of the order of magnitude of . The area used by the extensions is less than ^2 per neuron compartment. This is approximately  of the total area used by the neuron and synapse circuits.
The introduced extensions of the BrainScaleS neuron model capture essential features of complex biological structures. Although we do not yet fully understand the purpose and the function of the biological details we are optimistic that the presented models will allow insight into the functional possibilities of multi-layered networks built from multi-compartment neurons possessing non-linear active dendrites. By evaluating the behavior of such networks using a multitude of possible plasticity schemes, we expect to gain insight into which features are relevant for functional performance. Future hardware generations might utilize these insights for systems that incorporate novel nano-electronic components.
The presented circuits have all been implemented in silicon using a low-power CMOS technology and are currently being manufactured. As soon as funding allows, they will be integrated into the wafer-based BrainScaleS 2 system, which will then combine the speed of accelerated neuromorphic hardware with the substantial network size achievable by wafer-scale integration. The presented circuits and concepts should also be transferable to smaller process geometries.
In the meantime, a single chip implementation will soon be available to all interested researchers as an experimental platform for ideas inspired by biology and machine learning and to prepare the ground for future, non-Turing computing substrates.
The authors wish to express their gratitude to Andreas Grübl, Andreas Hartel, Syed Aamir, Christian Pehle, Korbinian Schreiber, Sebastian Billaudelle, Gerd Kiene, Matthias Hock, Simon Friedmann and Markus Dorn for their participation in the development of the BrainScaleS 2 ASIC and system.
They also want to thank their collaborators Sebastian Höppner from TU Dresden and Tugba Demirci from EPFL Lausanne for their contributions to the BrainScaleS 2 prototype ASIC.
This work has received funding from the European Union Seventh Framework Programme ([FP7/2007-2013]) under grant agreement no 604102 (HBP), 269921 (BrainScaleS), 243914 (Brain-i-Nets), the Horizon 2020 Framework Programme ([H2020/2014-2020]) under grant agreement 720270 (HBP) as well as from the Manfred Stärk Foundation.
VIII. Author Contribution
-  Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
-  N. Jones, “Computer science: The learning machines,” Nature, vol. 505, pp. 146–148, 2014.
-  J. Hawkins and S. Ahmad, “Why neurons have thousands of synapses, a theory of sequence memory in neocortex,” Frontiers in Neural Circuits, vol. 10, p. 23, 2016. [Online]. Available: http://journal.frontiersin.org/article/10.3389/fncir.2016.00023
-  The NEST Initiative, website, http://www.nest-initiative.org, 2008.
-  M. Hines, J. W. Moore, and T. Carnevale, “Neuron,” 2008. [Online]. Available: http://neuron.duke.edu
-  H. Markram, “The blue brain project,” Nature Reviews Neuroscience, vol. 7, no. 2, pp. 153–160, 2006.
-  M. A. Petrovici, J. Bill, I. Bytschok, J. Schemmel, and K. Meier, “Stochastic inference with spiking neurons in the high-conductance state,” Physical Review E, vol. 94, no. 4, October 2016. [Online]. Available: http://journals.aps.org/pre/abstract/10.1103/PhysRevE.94.042312
-  E. Neftci, S. Das, B. Pedroni, K. Kreutz-Delgado, and G. Cauwenberghs, “Event-driven contrastive divergence: neural sampling foundations,” Frontiers in Neuroscience, vol. 9, 2015.
-  D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” Parallel Distributed Processing: Explorations in the Microstructures of Cognition, vol. I, pp. 318–362, 1986.
-  S. M. Bohte, J. N. Kok, and H. L. Poutré, “SpikeProp: backpropagation for networks of spiking neurons,” in ESANN 2000, 8th European Symposium on Artificial Neural Networks, Bruges, Belgium, April 2000.
-  E. Hunsberger and C. Eliasmith, “Spiking deep networks with LIF neurons,” ArXiv e-prints, Oct. 2015.
-  M. Helias, S. Kunkel, G. Masumoto, J. Igarashi, J. M. Eppler, S. Ishii, T. Fukai, A. Morrison, and M. Diesmann, “Supercomputers ready for use as discovery machines for neuroscience,” Frontiers in Neuroinformatics, vol. 6, no. 26, 2012. [Online]. Available: http://www.frontiersin.org/neuroinformatics/10.3389/fninf.2012.00026/abstract
-  M. Abadi et al., “TensorFlow: Large-scale machine learning on heterogeneous distributed systems,” 2015. [Online]. Available: http://download.tensorflow.org/paper/whitepaper2015.pdf
-  A. Nokland, “Direct feedback alignment provides learning in deep neural networks,” in NIPS, 2016.
-  S. D. Antic, W.-L. Zhou, A. R. Moore, S. M. Short, and K. D. Ikonomu, “The decade of the dendritic NMDA spike,” Journal of Neuroscience Research, vol. 88, no. 14, pp. 2991–3001, 2010. [Online]. Available: http://dx.doi.org/10.1002/jnr.22444
-  M. Schiess, R. Urbanczik, and W. Senn, “Somato-dendritic synaptic plasticity and error-backpropagation in active dendrites.” PLoS ComputBiol, vol. 12, 2016.
-  A. G. Brown, “Electric current flow in excitable cells. By J. J. B. Jack, D. Noble, R. W. Tsien. Clarendon Press, Oxford, 1975,” Quarterly Journal of Experimental Physiology and Cognate Medical Sciences, vol. 61, no. 1, pp. 75–75, 1976. [Online]. Available: http://dx.doi.org/10.1113/expphysiol.1976.sp002339
-  M. E. Larkum, T. Nevian, M. Sandler, A. Polsky, and J. Schiller, “Synaptic integration in tuft dendrites of layer 5 pyramidal neurons: a new unifying principle,” Science, vol. 325, no. 5941, pp. 756–760, 2009.
-  G. Indiveri et al., “Neuromorphic silicon neuron circuits,” Frontiers in Neuroscience, vol. 5, no. 0, 2011. [Online]. Available: http://www.frontiersin.org/Journal/Abstract.aspx?s=755&name=neuromorphicengineering&ART_DOI=10.3389/fnins.2011.00073
-  J. Schemmel et al., “Live demonstration: A scaled-down version of the brainscales wafer-scale neuromorphic system,” in Proceedings of the 2012 IEEE International Symposium on Circuits and Systems (ISCAS), May 2012, pp. 702–702.
-  S. Friedmann, J. Schemmel, A. Grübl, A. Hartel, M. Hock, and K. Meier, “Demonstrating hybrid learning in a flexible neuromorphic hardware system,” IEEE Transactions on Biomedical Circuits and Systems, vol. 11, no. 1, pp. 128–142, 2017.
-  J. Schiller, G. Major, H. J. Koester, and Y. Schiller, “NMDA spikes in basal dendrites of cortical pyramidal neurons,” Nature, vol. 404, no. 6775, pp. 285–289, 2000.
-  N. Qiao, H. Mostafa, F. Corradi, M. Osswald, F. Stefanini, D. Sumislawska, and G. Indiveri, “A re-configurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128k synapses,” Frontiers in Neuroscience, vol. 9, no. 141, 2015.
-  G. Rachmuth, H. Z. Shouval, M. F. Bear, and C.-S. Poon, “A biophysically-based neuromorphic model of spike rate-and timing-dependent plasticity,” Proceedings of the National Academy of Sciences, vol. 108, no. 49, pp. E1266–E1274, 2011.
-  H. You and D. Wang, “Neuromorphic implementation of attractor dynamics in decision circuit with nmdars,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 369–372.
-  A. Banerjee, S. Kar, S. Roy, A. Bhaduri, and A. Basu, “A current-mode spiking neural classifier with lumped dendritic nonlinearity,” in 2015 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2015, pp. 714–717.
-  S. Hussain and A. Basu, “Morphological learning in multicompartment neuron model with binary synapses,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 2527–2530.
-  J. Touboul and R. Brette, “Dynamics and bifurcations of the adaptive exponential integrate-and-fire model,” Biological Cybernetics, vol. 99, no. 4, pp. 319–334, Nov 2008. [Online]. Available: http://dx.doi.org/10.1007/s00422-008-0267-4
-  M. Larkum, “A cellular mechanism for cortical associations: an organizing principle for the cerebral cortex,” Trends in neurosciences, vol. 36, no. 3, pp. 141–151, 2013.
-  S. Friedmann, “A new approach to learning in neuromorphic hardware,” Ph.D. dissertation, Ruprecht-Karls-Universität Heidelberg, 2013.
-  H. Markram et al., “Introducing the human brain project,” Procedia Computer Science, vol. 7, pp. 39 – 42, 2011, proceedings of the 2nd European Future Technologies Conference and Exhibition 2011 (FET 11). [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1877050911006806
-  J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner, “A wafer-scale neuromorphic hardware system for large-scale neural modeling,” in Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), 2010, pp. 1947–1950.
-  S. Millner, A. Grübl, K. Meier, J. Schemmel, and M.-O. Schwartz, “A VLSI implementation of the adaptive exponential integrate-and-fire neuron model,” in Advances in Neural Information Processing Systems 23, J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, Eds., 2010, pp. 1642–1650.
-  G. Palm, “Neural associative memories and sparse coding,” Neural Networks, vol. 37, pp. 165–171, 2013.
-  A. A. Faisal, L. P. Selen, and D. M. Wolpert, “Noise in the nervous system,” Nature reviews neuroscience, vol. 9, no. 4, pp. 292–303, 2008.
-  M. London and M. Häusser, “Dendritic computation,” Annu. Rev. Neurosci., vol. 28, pp. 503–532, 2005.
-  A. Losonczy and J. C. Magee, “Integrative properties of radial oblique dendrites in hippocampal ca1 pyramidal neurons,” Neuron, vol. 50, no. 2, pp. 291–307, 2006.
-  S. A. Aamir, P. Müller, A. Hartel, J. Schemmel, and K. Meier, “A highly tunable 65-nm CMOS LIF neuron for a large-scale neuromorphic system,” in Proceedings of IEEE European Solid-State Circuits Conference (ESSCIRC), 2016.
-  S. Millner, A. Hartel, J. Schemmel, and K. Meier, “Towards biologically realistic multi-compartment neuron model emulation in analog VLSI,” in Proceedings ESANN 2012, 2012.
-  M. Hock, A. Hartel, J. Schemmel, and K. Meier, “An analog dynamic memory array for neuromorphic hardware,” in Circuit Theory and Design (ECCTD), 2013 European Conference on, Sep. 2013, pp. 1–4.
-  R. Brette and W. Gerstner, “Adaptive exponential integrate-and-fire model as an effective description of neuronal activity,” J. Neurophysiol., vol. 94, pp. 3637 – 3642, 2005.
-  W. Gerstner, W. Kistler, R. Naud, and L. Paninski, Neuronal Dynamics. Cambridge University Press, 2014.
-  N. Kasabov, K. Dhoble, N. Nuntalid, and G. Indiveri, “Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition,” Neural Networks, vol. 41, pp. 188–201, 2013.
-  T. Pfeil, A.-C. Scherzer, J. Schemmel, and K. Meier, “Neuromorphic learning towards nano second precision,” in Neural Networks (IJCNN), The 2013 International Joint Conference on, Aug 2013, pp. 1–5.
-  J. Schemmel, D. Brüderle, K. Meier, and B. Ostendorf, “Modeling synaptic plasticity within networks of highly accelerated I&F neurons,” in Proceedings of the 2007 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE Press, 2007, pp. 3367–3370.
-  M. Butz and A. van Ooyen, “A simple rule for dendritic spine and axonal bouton formation can account for cortical reorganization after focal retinal lesions,” PLoS Comput Biol, vol. 9, no. 10, p. e1003259, 2013.
-  G. W. Knott, A. Holtmaat, L. Wilbrecht, E. Welker, and K. Svoboda, “Spine growth precedes synapse formation in the adult neocortex in vivo,” Nature neuroscience, vol. 9, no. 9, pp. 1117–1124, 2006.
-  EPFL and IBM, “Blue brain project,” Lausanne, 2008. [Online]. Available: http://bluebrain.epfl.ch/