I Introduction
This paper provides a comprehensive survey of the neuromorphic computing field, reviewing over 3,000 papers from a 35-year time span. It looks primarily at motivations, neuron/synapse models, algorithms and learning, applications, and advancements in hardware, and briefly touches on materials and supporting systems. Our goal is to provide a broad and historic perspective of the field to further ongoing research, as well as to provide a starting point for those new to the field.
Devising a machine that can process information faster than humans has been a driving force in computing for decades, and the von Neumann architecture has become the clear standard for such a machine. However, the inevitable comparisons of this architecture to the human brain highlight significant differences in the organizational structure, power requirements, and processing capabilities between the two. This leads to a natural question regarding the feasibility of creating alternative architectures, based on neurological models, that compare favorably to a biological brain.
Neuromorphic computing has emerged in recent years as a complementary architecture to von Neumann systems. The term neuromorphic computing was coined in 1990 by Carver Mead [1]. At the time, Mead referred to very large scale integration (VLSI) with analog components that mimicked biological neural systems as “neuromorphic” systems. More recently, the term has come to encompass implementations that are based on biologically-inspired or artificial neural networks in or using non-von Neumann architectures.
These neuromorphic architectures are notable for being highly connected and parallel, requiring low power, and collocating memory and processing. While interesting in their own right, neuromorphic architectures have received increased attention due to the approaching end of Moore’s law, the increasing power demands associated with the end of Dennard scaling, and the low bandwidth between CPU and memory known as the von Neumann bottleneck [2]. Neuromorphic computers have the potential to perform complex calculations faster, more power-efficiently, and with a smaller footprint than traditional von Neumann architectures. These characteristics provide compelling reasons for developing hardware that employs neuromorphic architectures.
Machine learning provides the second major reason for the strong interest in neuromorphic computing. The neuromorphic approach shows promise in improving overall learning performance for certain tasks, shifting attention from hardware benefits to the potential application benefits of neuromorphic computing, with the promise of developing algorithms capable of online, real-time learning similar to that performed by biological brains. Neuromorphic architectures appear to be the most appropriate platform for implementing such machine learning algorithms in the future.
The neuromorphic computing community is quite broad, including researchers from a variety of fields, such as materials science, neuroscience, electrical engineering, computer engineering, and computer science (Figure 1). Materials scientists study, fabricate, and characterize new materials to use in neuromorphic devices, with a focus on materials that exhibit properties similar to biological neural systems. Neuroscientists provide information about new results from their studies that may be useful in a computational sense, and utilize neuromorphic systems to simulate and study biological neural systems. Electrical and computer engineers work at the device level with analog, digital, mixed analog/digital, and non-traditional circuitry to build new types of devices, while also determining new systems and communication schemes. Computer scientists and computer engineers work to develop new network models inspired by both biology and machine learning, including new algorithms that can allow these models to be trained and/or learn on their own. They also develop the supporting software necessary to enable the use of neuromorphic computing systems in the real world.
The goals of this paper are to give a thirty-five-year survey of the published works in neuromorphic computing and hardware implementations of neural networks and to discuss open issues for the future of neuromorphic computing. The remainder of the paper is organized as follows: In Section II, we present a historical view of the motivations for developing neuromorphic computing and how they have changed over time. We then break down the discussion of past works in neuromorphic computing into models (Section III), algorithms and learning (Section IV), hardware implementations (Section V), and supporting components, including communication schemes and software systems (Section VI). Section VII gives an overview of the types of applications to which neuromorphic computing systems have been applied in the literature. Finally, we conclude with a forward-looking perspective for neuromorphic computing and enumerate some of the major research challenges that are left to tackle.
II Motivation
The idea of using custom hardware to implement neurally-inspired systems is as old as computer science and computer engineering itself, with both von Neumann [3] and Turing [4] discussing brain-inspired machines in the 1950s. Computer scientists have long wanted to replicate biological neural systems in computers. This pursuit has led to key discoveries in the fields of artificial neural networks (ANNs), artificial intelligence, and machine learning. The focus of this work, however, is not directly on ANNs or neuroscience itself, but on the development of non-von Neumann hardware for simulating ANNs or biological neural systems. We discuss several reasons why neuromorphic systems have been developed over the years based on motivations described in the literature. Figure 2 shows the number of works over time for neuromorphic computing and indicates that there has been a distinct rise in interest in the field over the last decade. Figure 3 shows ten of the primary motivations for neuromorphic computing in the literature and how those motivations have changed over time. These ten motivations were chosen because they were the most frequently noted in the literature, each specified by at least fifteen separate works.

Much of the early work in neuromorphic computing was spurred by the development of hardware that could perform parallel operations, inspired by observed parallelism in biological brains, but on a single chip [5, 6, 7, 8, 9]. Although there were parallel architectures available, neuromorphic systems emphasized many simple processing components (usually in the form of neurons), with relatively dense interconnections between them (usually in the form of synapses), differentiating them from other parallel computing platforms of that time. In works from this early era of neuromorphic computing, the inherent parallelism of neuromorphic systems was the most popular reason for custom hardware implementations.
Another popular reason for early neuromorphic and neural network hardware implementations was speed of computation [10, 11, 12, 13]. In particular, developers of early systems emphasized that it was possible to achieve much faster neural network computation with custom chips than what was possible with traditional von Neumann architectures, partially by exploiting their natural parallelism as mentioned above, but also by building custom hardware to complete neural-style computations. This early focus on speed foreshadowed a future of utilizing neuromorphic systems as accelerators for machine learning or neural network style tasks.
Real-time performance was also a key motivator of early neuromorphic systems. Enabled by natural parallelism and speed of computation, these devices tended to be able to complete neural network computations faster than implementations on von Neumann architectures for applications such as real-time control [14], real-time digital image reconstruction [15], and autonomous robot control [16]. In these cases, the need for faster computation was driven not by the study of the neural network architectures themselves or by training, but by application performance. This is why we have differentiated it from speed and parallelism as a motivation for the development of neuromorphic systems.
Early developers also started to recognize that neural networks may be a natural model for hardware implementation because of their inherent fault tolerance, both in the massively parallel representation and in potential adaptation or self-healing capabilities that can be present in artificial neural network representations in software [5, 17]. These were and continue to be relevant characteristics for fabricating new hardware implementations, where device and process variation may lead to imperfect fabricated devices, and where utilized devices may experience hardware errors.
The most popular motivation in present-day literature on neuromorphic systems is their potential for extremely low-power operation. Our major source of inspiration, the human brain, requires about 20 watts of power and performs extremely complex computations and tasks on that small power budget. The desire to create neuromorphic systems that consume similarly low power has been a driving force for neuromorphic computing from its conception [18, 19], but it became a prominent motivation about a decade into the field’s history.
Similarly, creating devices capable of neural network-style computations with a small footprint (in terms of device size) also became a major motivation in this decade of neuromorphic research. Both of these motivations correspond with the rise of embedded systems and microprocessors, which may require a small footprint and, depending on the application, very low power consumption.
In recent years, the primary motivation for the development of neuromorphic systems has been low power consumption. It is, by far, the most popular motivation for neuromorphic computers, as can be seen in Figure 3. Inherent parallelism, real-time performance, speed in both operation and training, and small device footprint also continue to be major motivations for the development of neuromorphic implementations. A few other motivations became popular in this period, including a rise in approaches that utilize neural network-style architectures (i.e., architectures made up of neuron- and synapse-like components) because of their fault tolerance characteristics or reliability in the face of hardware errors. This has become an increasingly popular motivation in recent years in light of the use of novel materials for implementing neuromorphic systems (see Section V-C).
Another major motivation for building neuromorphic systems in recent years has been to study neuroscience. Custom neuromorphic systems have been developed for several neuroscience-driven projects, including those created as part of the European Union’s Human Brain Project [20], because simulating relatively realistic neural behavior on a traditional supercomputer is not feasible in scale, speed, or power consumption [21]. As such, custom neuromorphic implementations are required in order to perform meaningful neuroscience simulations with reasonable effort. In the same vein, scalability has also become an increasingly popular motivation for building neuromorphic systems. Most major neuromorphic projects discuss how to cascade their devices together to reach very large numbers of neurons and synapses.
A common motivation not given explicitly in Figure 3 is the end of Moore’s Law, though most of the other listed motivations are related to the consideration of neuromorphic systems as a potential complementary architecture in the beyond Moore’s law computing landscape. Though most researchers do not expect that neuromorphic systems will replace von Neumann architectures, “building a better computer” is one of their motivations for developing neuromorphic devices; though this is a fairly broad motivation, it encompasses issues associated with traditional computers, including the looming end of Moore’s law and the end of Dennard scaling. Another common motivation for neuromorphic computing development is solving the von Neumann bottleneck [22], which arises in von Neumann architectures due to the separation of memory and processing and the gap in performance between processing and memory technologies in current systems. In neuromorphic systems, memory and processing are collocated, mitigating issues that arise with the von Neumann bottleneck.
Online learning, defined as the ability to adapt to changes in a task as they occur, has been a key motivation for neuromorphic systems in recent years. Though online learning mechanisms are not yet well understood, those present in many neuromorphic systems have the potential to perform learning tasks in an unsupervised, low-power manner. With the tremendous rise of data collection in recent years, systems that are capable of processing and analyzing this data in an unsupervised, online way will be integral in future computing platforms. Moreover, as we continue to gain an understanding of biological brains, it is likely that we will be able to build better online learning mechanisms and that neuromorphic computing systems will be natural platforms on which to implement those mechanisms.
III Models
One of the key questions associated with neuromorphic computing is which neural network model to use. The neural network model defines what components make up the network, how those components operate, and how those components interact. For example, common components of a neural network model are neurons and synapses, taking inspiration from biological neural networks. When defining the neural network model, one must also define models for each of the components (e.g., neuron models and synapse models); the component model governs how that component operates.
How is the correct model chosen? In some cases, the chosen model may be motivated by a particular application area. For example, if the goal of the neuromorphic device is to simulate biological brains for a neuroscience study at a faster scale than is possible with traditional von Neumann architectures, then a biologically realistic and/or plausible model is necessary. If the application is an image recognition task that requires high accuracy, then a neuromorphic system that implements convolutional neural networks may be best. The model itself may also be shaped by the characteristics and/or restrictions of a particular device or material. For example, memristor-based systems (discussed further in Section V-B1) have characteristics that allow for spike-timing-dependent-plasticity-like mechanisms (a type of learning mechanism discussed further in Section IV) that are most appropriate for spiking neural network models. In many other cases, the choice of the model or the level of complexity for the model is not entirely clear.

A wide variety of model types have been implemented in neuromorphic or neural network hardware systems. The models range from predominantly biologically-inspired to predominantly computationally driven. The latter models are inspired more by artificial neural network models than by biological brains. This section discusses different neuron models, synapse models, and network models that have been utilized in neuromorphic systems, and points to key portions of the literature for each type of model.
III-A Neuron Models
A biological neuron is usually composed of a cell body, an axon, and dendrites. The axon usually (though not always) transmits information away from the neuron, and is where neurons transmit output. Dendrites usually (though not always) transmit information to the cell body and are typically where neurons receive input. Neurons can receive information through chemical or electrical transmissions from other neurons. The juncture between the end of an axon of one neuron and the dendrite of another neuron that allows information or signals to be transmitted between the two neurons is called a synapse. The typical behavior of a neuron is to accumulate charge through a change in voltage potential across the neuron’s cell membrane, caused by receiving signals from other neurons through synapses. The voltage potential in a neuron may reach a particular threshold, which will cause the neuron to “fire” or, in the biological terminology, generate an action potential that will travel along a neuron’s axon to affect the charge on other neurons through synapses. Most neuron models implemented in neuromorphic systems have some concept of accumulation of charge and firing to affect other neurons, but the mechanisms by which these processes take place can vary significantly from model to model. Similarly, models that are not biologically plausible (i.e. artificial models that are inspired by neuroscience rather than mimicking neuroscience) typically do not implement axons or dendrites, although there are a few exceptions (as noted below).
Figure 4 gives an overview of the types of neuron models that have been implemented in hardware. The neuron models are given in five broad categories:

Biologically-plausible: Explicitly model the types of behavior that are seen in biological neural systems.

Biologically-inspired: Attempt to replicate behavior of biological neural systems but not necessarily in a biologically-plausible way.

Neuron+Other: Neuron models including other biologically-inspired components that are not usually included in other neuromorphic neuron models, such as axons, dendrites, or glial cells.

Integrate-and-fire: A simpler category of biologically-inspired spiking neuron models.

McCulloch-Pitts: Neuron models that are derivatives of the original McCulloch-Pitts neuron [23] used in most artificial neural network literature. For this model, the output of neuron $j$ is governed by the following equation:

$$y_j = f\left(\sum_{i=1}^{N} w_{i,j}\, x_i\right) \qquad (1)$$

where $y_j$ is the output value of neuron $j$, $f$ is an activation function, $N$ is the number of inputs into neuron $j$, $w_{i,j}$ is the weight of the synapse from neuron $i$ to neuron $j$, and $x_i$ is the output value of neuron $i$.
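As an illustration, Equation (1) can be sketched in a few lines of Python; this is a minimal, hypothetical example for exposition, not code from any surveyed implementation:

```python
import math

def mcculloch_pitts(inputs, weights, activation):
    """Compute a McCulloch-Pitts neuron output: y = f(sum_i w_i * x_i)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return activation(total)

# A perceptron uses a simple step threshold as its activation function.
step = lambda v: 1.0 if v >= 0.0 else 0.0
# A sigmoidal neuron uses a smooth squashing function instead.
sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))

print(mcculloch_pitts([1.0, 0.0, 1.0], [0.5, -1.0, 0.6], step))   # 1.0
print(mcculloch_pitts([1.0, 0.0, 1.0], [0.5, -1.0, -0.6], step))  # 0.0
```

Swapping the activation function changes the neuron's character without touching the weighted-sum structure, which is why hardware implementations (Section III-A5) focus so heavily on activation-function circuits.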
A variety of biologically-plausible and biologically-inspired neuron models have been implemented in hardware. Components that may be included in these models are: cell membrane dynamics, which govern factors such as charge leakage across the neuron’s cell membrane; ion channel dynamics, which govern how ions flow into and out of a neuron, changing its charge level; axonal models, which may include delay components; and dendritic models, which govern how pre- and postsynaptic neurons affect the current neuron. A good overview of these types of spiking neuron models is given by Izhikevich [24].
III-A1 Biologically-Plausible
The most popular biologically-plausible neuron model is the Hodgkin-Huxley model [25]. The Hodgkin-Huxley model was first introduced in 1952 and is a relatively complex neuron model, with four-dimensional nonlinear differential equations describing the behavior of the neuron in terms of the transfer of ions into and out of the neuron. Because of their biological plausibility, Hodgkin-Huxley models have been very popular in neuromorphic implementations that are trying to accurately model biological neural systems [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]. A simpler, but still biologically-plausible model is the Morris-Lecar model, which reduces the dynamics to a two-dimensional nonlinear equation [55]. It is a commonly implemented model in neuroscience and in neuromorphic systems [27, 56, 57, 58, 59, 60, 61].
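For context, the standard textbook form of the Hodgkin-Huxley model couples a membrane equation with three voltage-dependent gating variables (reproduced here for reference; this formulation is not specific to any one cited implementation):

$$C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h\, (V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 (V - E_{\text{K}}) - \bar{g}_{L} (V - E_{L})$$

$$\frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\}$$

Here $V$ is the membrane potential, $C_m$ the membrane capacitance, $\bar{g}$ and $E$ the maximal conductances and reversal potentials of the sodium, potassium, and leak channels, and $\alpha_x$, $\beta_x$ empirically fitted rate functions. The four coupled nonlinear equations are what make the model expensive to realize in hardware, motivating the simplified models discussed next.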
III-A2 Biologically-Inspired
There are a variety of neuron models that are simplified versions of the Hodgkin-Huxley model and have been implemented in hardware, including the FitzHugh-Nagumo [62, 63, 64] and Hindmarsh-Rose [65, 66, 67, 68, 69] models. These models tend to be both computationally simpler and simpler in terms of the number of parameters, but they are more biologically-inspired than biologically-plausible because they attempt to model behavior rather than emulate the physical activity of biological systems. From the perspective of neuromorphic computing hardware, simpler computation can lead to simpler implementations that are more efficient and can be realized with a smaller footprint. From the algorithms and learning perspective, a smaller number of parameters can be easier to set and/or train than a large number of parameters.
The Izhikevich spiking neuron model was developed to produce the bursting and spiking behaviors that can be elicited from the Hodgkin-Huxley model, but to do so with much simpler computation [70]. The Izhikevich model has been very popular in the neuromorphic literature because of its simultaneous simplicity and ability to reproduce biologically accurate behavior [27, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82]. The Mihalaş-Niebur neuron is another popular neuron model that tries to replicate bursting and spiking behaviors, but it does so with a set of linear differential equations [83]; it also has neuromorphic implementations [84, 85]. The quartic model has two nonlinear differential equations that describe its behavior, and also has a neuromorphic implementation [86].
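The computational simplicity of the Izhikevich model is easy to see in software: two coupled update equations and a reset rule. The sketch below uses Euler integration with Izhikevich's published "regular spiking" parameter values; the function name, time step, and constant-current drive are illustrative choices, not drawn from any cited hardware system:

```python
def izhikevich(I, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler-integrate the Izhikevich neuron for T ms; return spike times (ms).

    v' = 0.04*v^2 + 5*v + 140 - u + I,  u' = a*(b*v - u);
    on v >= 30 mV, reset v to c and add d to the recovery variable u.
    Defaults are the 'regular spiking' parameters; I is a constant input
    current in the model's dimensionless units.
    """
    v, u = c, b * c          # membrane potential and recovery variable
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record time, then reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)))  # spike count over 1 s of simulated time
```

Changing only the four parameters (a, b, c, d) yields bursting, chattering, and other firing patterns, which is precisely the property that makes the model attractive for compact hardware.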
III-A3 Neuron + Other Biologically-Inspired Mechanism
Other biologically-inspired models that do not fall into the above categories are also prevalent. They typically contain a much higher level of biological detail than most models from the machine learning and artificial intelligence literature, such as the inclusion of membrane dynamics [87, 88, 89, 90, 91], modeling ion-channel dynamics [92, 93, 94, 95, 96, 97, 98], the incorporation of axon and/or dendrite models [99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110], and glial cell or astrocyte interactions [111, 112, 113, 114, 115, 116]. Occasionally, new models are developed specifically with the hardware in mind. For example, neuron models with equations inspired by the FitzHugh-Nagumo, Morris-Lecar, Hodgkin-Huxley, or other models have been developed, with the equations updated or the models abstracted in order to allow for ease of implementation in low-power VLSI [117, 118], on FPGA [119, 120], or using static CMOS [121, 122, 123]. Similarly, other researchers have updated the Hodgkin-Huxley model to account for new hardware developments, such as the MOSFET transistor [124, 125, 126, 127, 128, 129, 130] or the single-electron transistor [131].
III-A4 Integrate-and-Fire Neurons
A simpler set of spiking neuron models belongs to the integrate-and-fire family, a set of models that vary in complexity from relatively simple (the basic integrate-and-fire) to those approaching complexity levels near that of the Izhikevich model and other more complex biologically-inspired models [132]. In general, integrate-and-fire models are less biologically realistic, but produce enough complexity in behavior to be useful in spiking neural systems. The simplest integrate-and-fire model maintains the current charge level of the neuron. The leaky integrate-and-fire model expands on the simplest implementation by adding a leak term, which causes the potential on a neuron to decay over time. It is one of the most popular models used in neuromorphic systems [133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 58, 158, 159, 160, 161, 162, 163, 164]. The next level of complexity is the general nonlinear integrate-and-fire model, including the quadratic integrate-and-fire model that is used in some neuromorphic systems [165, 166]. Another level of complexity is added with the adaptive exponential integrate-and-fire model, which is similar in complexity to the models discussed above (such as the Izhikevich model). These have also been used in neuromorphic systems [167, 168].
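The leaky integrate-and-fire dynamics can be captured in a few lines; the following is a minimal software sketch with illustrative parameter names, values, and units (not taken from any cited implementation):

```python
def lif(I, T=100.0, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0, R=1.0):
    """Euler simulation of a leaky integrate-and-fire neuron.

    tau is the membrane time constant (ms); the leak term pulls the
    membrane potential back toward v_rest, so weak input that never
    pushes v past v_th produces no spikes. Returns spike times (ms).
    """
    v = v_rest
    spikes = []
    for step in range(int(T / dt)):
        dv = (-(v - v_rest) + R * I) / tau   # leak plus input current
        v += dt * dv
        if v >= v_th:                        # threshold crossing: spike
            spikes.append(step * dt)
            v = v_reset                      # reset after firing
    return spikes
```

With R = 1 the membrane settles toward v_rest + I, so I = 1.5 drives repetitive firing while I = 0.5 stays silent, illustrating the thresholding behavior that makes the model useful despite its simplicity.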
In addition to the previous analog-style spiking neuron models, there are also implementations of digital spiking neuron models. The dynamics in a digital spiking neuron model are usually governed by a cellular automaton, as opposed to a set of nonlinear or linear differential equations. A hybrid analog/digital implementation has been created for neuromorphic implementations [169], as have implementations of resonate-and-fire [170] and rotate-and-fire [171, 172] digital spiking neurons. A generalized asynchronous digital spiking model has been created in order to allow for the exhibition of nonlinear response characteristics [173, 174]. Digital spiking neurons have also been utilized in pulse-coupled networks [175, 176, 177, 178, 179]. Finally, a neuron for a random neural network has been implemented in hardware [180].
In the following sections, the term spiking neural network will be used to describe full network models. These spiking networks may utilize any of the above neuron models in their implementation; we do not specify which neuron model is being utilized. Moreover, in some hardware implementations, such as SpiNNaker (see Section V-A1), the neuron model is programmable, so different neuron models may be realized in a single neuromorphic implementation.
III-A5 McCulloch-Pitts Neurons
Moving to more traditional artificial neural network implementations in hardware, there is a large variety of implementations of the traditional McCulloch-Pitts neuron model [23]. The perceptron is one implementation of the McCulloch-Pitts model, which uses a simple thresholding function as the activation function; because of its simplicity, it is commonly used in hardware implementations [181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191]. There has also been significant focus on creating hardware implementations of various activation functions for McCulloch-Pitts-style neurons. Different activation functions have had varying levels of success in the neural network literature, and some activation functions can be computationally intensive. This complexity in computation can lead to complexity in hardware, resulting in a variety of different activation functions and implementations that attempt to trade off complexity against the overall accuracy and computational usefulness of the model. The most popular implementations are the basic sigmoid function [192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212] and the hyperbolic tangent function [213, 214, 215, 216], but other hardware-based activation functions that have been implemented include the ramp-saturation function [194], linear [200], piecewise linear [217], step function [194, 218], multi-threshold [219, 208], the tangent sigmoid function [208], and periodic activation functions [220]. Because the derivative of the activation function is utilized in the backpropagation training algorithm [221], some circuits implement both the function itself and its derivative, for both sigmoid [222, 223, 224, 225, 226, 227] and hyperbolic tangent [224]. A few implementations have focused on creating neurons with programmable activation functions [228] or on creating building blocks to construct neurons [229].

Neuron models for other traditional artificial neural network models have also been implemented in hardware. These neuron models include binary neural network neurons [230], fuzzy neural network neurons [231], and Hopfield neural network neurons [232, 233, 234]. On the whole, a wide variety of neuron models have been implemented in hardware, and one of the decisions a user might make involves the trade-off between complexity and biological inspiration. Figure 5 gives a qualitative comparison of different neuron models in terms of those two factors.
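Several of the activation functions above, together with the derivative forms that backpropagation circuits exploit, can be sketched in software as follows (a generic illustration; hardware implementations approximate these functions in analog or digital circuitry):

```python
import math

def sigmoid(x):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    """Sigmoid derivative, expressed via the function's own output:
    f'(x) = f(x) * (1 - f(x)). Circuits that already compute f(x) can
    obtain the derivative with one extra multiply and subtract."""
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_deriv(x):
    """Hyperbolic tangent derivative: 1 - tanh(x)^2, same reuse trick."""
    return 1.0 - math.tanh(x) ** 2

def piecewise_linear(x, lo=-1.0, hi=1.0):
    """Ramp-saturation activation: linear between lo and hi, clamped outside."""
    return max(lo, min(hi, x))
```

The fact that both derivatives are cheap functions of the already-computed activation output is exactly why the cited circuits implement the function and its derivative together.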
III-B Synapse Models
Just as some neuromorphic work has focused particularly on neuron models, which occasionally also encapsulate the synapse implementation, there has also been a focus on developing synapse implementations independent of neuron models. Once again, we may separate the synapse models into two categories: biologically-inspired synapse implementations, which include synapses for spike-based systems, and synapse implementations for traditional artificial neural networks, such as feedforward neural networks. It is worth noting that synapses are typically the most abundant element in neuromorphic systems, and the element that requires the most real estate on a particular chip. For many hardware implementations, and especially for the development and use of novel materials for neuromorphic systems, the focus is typically on optimizing the synapse implementation. As such, synapse models tend to be relatively simple, unless they are attempting to explicitly model biological behavior. One popular inclusion for more complex synapse models is a plasticity mechanism, which causes the synapse’s strength or weight value to change over time. Plasticity mechanisms have been found to be related to learning in biological brains.
For more biologically-inspired neuromorphic networks, synapse implementations that explicitly model the chemical interactions of synapses, such as ion pumps or neurotransmitter interactions, have been utilized in some neuromorphic systems [235, 236, 237, 238, 239, 240, 241, 67, 242, 243, 244, 245]. Ion channels have also been implemented in neuromorphic implementations in the form of conductance-based synapse models [246, 247, 248, 249, 250, 251]. For these implementations, the detail goes above and beyond what one might see with the modeling of ion channels in neuron models such as Hodgkin-Huxley.
Implementations for spiking neuromorphic systems focus on a variety of characteristics of synapses. Neuromorphic synapses that exhibit plasticity and learning mechanisms inspired by both short-term and long-term potentiation and depression in biological synapses have been common in biologically-inspired neuromorphic implementations [252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262]. Potentiation and depression rules are specific forms of spike-timing-dependent plasticity (STDP) [263] rules. STDP rules and their associated circuitry are extremely common in neuromorphic implementations for synapses [264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 252, 276, 277, 256, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 259, 294, 295, 296, 297, 298, 299, 300]. More information on STDP as a learning algorithm and its implementations in neuromorphic systems is provided in Section IV. Synaptic responses can also be relatively complex in neuromorphic systems. Some neuromorphic synapse implementations focus on synaptic dynamics, such as the shape of the outgoing spike from the synapse or the postsynaptic potential [301, 302, 303, 304, 305]. Synapses in spiking neuromorphic systems have also been used as homeostasis mechanisms to stabilize the network’s activity, which can be an issue in spiking neural network systems [306, 307].
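The core of a pair-based STDP rule is a weight update that depends only on the relative timing of pre- and postsynaptic spikes. The sketch below shows the common exponential form; the amplitudes and time constants are illustrative defaults, not parameters from any particular cited system:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    dt = t_post - t_pre (ms). A presynaptic spike shortly before a
    postsynaptic spike (dt > 0) potentiates the synapse; the reverse
    order (dt < 0) depresses it, with the effect decaying exponentially
    as the spikes move further apart in time.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)     # long-term potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)   # long-term depression
    return 0.0
```

Because the update needs only local spike-time differences, it maps naturally onto per-synapse circuitry, which is one reason STDP is so prevalent in the implementations cited above.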
A variety of neuromorphic synaptic implementations for non-spiking neural networks have also been developed. These networks include feedforward multilayer networks [308, 309, 310, 311, 312, 313, 314], winner-take-all networks [315], and convolutional neural networks [316]. A focus on different learning rules for synapses is also common in artificial neural network-based neuromorphic systems, as it is for STDP and potentiation and depression rules in spike-based neuromorphic systems. Common learning rules in artificial neural network-based systems include Hebbian learning [317, 318, 310, 312] and least mean-square [11, 319]. Gaussian synapses have also been implemented in order to help with backpropagation learning rules [320, 321, 322].
III-C Network Models
Network models describe how different neurons and synapses are connected and how they interact. As may be intuited from the previous sections, a wide variety of neural network models have been developed for neuromorphic systems. Once again, they range from models that try to replicate biological behavior closely to much more computationally-driven, non-spiking neural networks. There are several factors to consider when selecting a network model. One factor is clearly the biological inspiration and complexity of the neuron and synapse models, as discussed in previous sections. Another factor is the topology of the network. Figure 6 shows some examples of network topologies that might be used in various network models, including biologically-inspired networks and spiking networks. Depending on the hardware chosen, the connectivity might be relatively restricted, which would restrict the topologies that can realistically be implemented. A third factor is the feasibility and applicability of existing training or learning algorithms for the chosen network model, which will be discussed in more detail in Section IV. Finally, the general applicability of a network model to a set of applications may also play a role in choosing the appropriate model.
There are a large variety of general spiking neural network implementations in hardware [323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 21, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 86, 523, 524, 525, 169, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570]. These implementations utilize a variety of neuron models, such as the various integrate-and-fire neurons listed above or the more biologically-plausible or biologically-inspired models. Spiking neural network implementations also typically include some form of STDP in the synapse implementation. Spiking models have been popular in neuromorphic implementations in part because of their event-driven nature and improved energy efficiency relative to other systems.
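As a reference point for the integrate-and-fire family mentioned above, a leaky integrate-and-fire neuron can be sketched in a few lines; the threshold, reset, and leak values below are illustrative, not taken from any cited system.

```python
import numpy as np

def lif_run(input_current, v_thresh=1.0, v_reset=0.0, leak=0.95):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential leaks toward zero each step, integrates the
    input current, and emits a spike (then resets) on crossing threshold.
    Event-driven hardware only performs work when such spikes occur.
    """
    v, spikes = v_reset, []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in      # leak, then integrate the input
        if v >= v_thresh:        # threshold crossing
            spikes.append(t)     # record the spike time
            v = v_reset          # reset the membrane potential
    return spikes

# A constant sub-threshold input produces a regular spike train.
spike_times = lif_run([0.3] * 20)
```

With this input the neuron needs four steps of integration per spike, so spikes land at t = 3, 7, 11, 15, 19; stronger input would raise the firing rate, which is the basic coding primitive in many of the cited systems.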
As such, implementations of other neural network models have been created using spiking neuromorphic systems, including spiking feedforward networks [571, 572, 573, 574, 575], spiking recurrent networks [576, 577, 578, 579, 580, 581], spiking deep neural networks [582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592], spiking deep belief networks [593], spiking Hebbian systems [594, 595, 596, 597, 598, 599, 600, 601, 602], spiking Hopfield networks or associative memories [603, 604, 605], spiking winner-take-all networks [606, 607, 608, 609, 610, 611], spiking probabilistic networks [612, 613], and spiking random neural networks [614]. In these implementations, a spiking neural network architecture is used to realize another neural network model type. Typically, the training for these methods is done on the traditional neural network type (such as the feedforward network), and the resulting network solution is then adapted to fit the spiking neuromorphic implementation. In these cases, the full properties of the spiking neural network may not be utilized.

A popular biologically-inspired network model that is often implemented using spiking neural networks is the central pattern generator (CPG). CPGs generate oscillatory motion, such as walking gaits or swimming motions in biological systems. A common use of CPGs has been in robotic motion. There are several neuromorphic systems that were built specifically to operate as CPGs [615, 616, 617, 618, 619], but CPGs are also often an application built on top of existing spiking neuromorphic systems, as is discussed in Section VII.
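One common way to adapt a network trained as a traditional model to a spiking substrate, as described above, is rate coding. The sketch below is an illustration of the idea rather than any specific cited method: an integrate-and-fire neuron driven by the weighted input of a trained ReLU unit recovers the unit's activation as a firing rate.

```python
import numpy as np

def relu_rate(w, x):
    """Analog activation of a trained ReLU unit."""
    return max(0.0, float(np.dot(w, x)))

def if_rate(w, x, steps=1000):
    """Firing rate of an integrate-and-fire neuron fed the same weighted
    input each step. The observed rate approximates the ReLU activation,
    which is the basis of rate-coded ANN-to-SNN conversion."""
    v, n_spikes = 0.0, 0
    drive = float(np.dot(w, x))
    for _ in range(steps):
        v += drive                # integrate the constant drive
        if v >= 1.0:
            n_spikes += 1
            v -= 1.0              # subtract-reset keeps residual charge
    return n_spikes / steps

w = np.array([0.4, -0.2])
x = np.array([0.9, 0.5])
# drive = 0.26, so the spike rate comes out near 0.26 spikes per step
```

A negative drive produces no spikes, matching the zero output of the ReLU; longer observation windows tighten the rate approximation at the cost of latency.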
The most popular implementation by far is feedforward neural networks, including multilayer perceptrons [620, 17, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 8, 631, 632, 633, 634, 635, 636, 637, 19, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 18, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 9, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 16, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 192, 194, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 200, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 206, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 
990, 530, 991, 202, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 207, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023]. Extreme learning machines are a special case of feedforward neural networks, where a number of the weights in the network are randomly set and never updated by a learning or training algorithm; there have been several neuromorphic implementations of extreme learning machines [1024, 1025, 1026, 1027, 1028, 1029]. Another special case of feedforward neural networks is the multilayer perceptron with delay, and those have also been implemented in neuromorphic systems [1030, 1031, 1032, 1033, 1034]. Probabilistic neural networks, yet another special case of feedforward neural networks, whose functionality is related to Bayesian calculations, have several neuromorphic implementations [1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043]. Single-layer feedforward networks that utilize radial basis functions as the activation function of the neurons have also been used in neuromorphic implementations [1044, 1045, 1046, 747, 1047, 1048, 825, 826, 1049, 1050, 530, 1051, 1052, 1053]. In recent years, with the rise of deep learning, convolutional neural networks have also seen several implementations in neuromorphic systems [1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1068, 1069, 1070, 1071, 1072, 1073, 1074, 1075].
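The extreme learning machine recipe described above can be sketched directly: random, frozen hidden weights and a one-shot least-squares readout. The hidden-layer size and the toy task below are assumptions for illustration.

```python
import numpy as np

def elm_fit(x, y, n_hidden=100, seed=1):
    """Train an extreme learning machine: the hidden weights and biases
    are random and never updated; only the linear readout is solved,
    via least squares."""
    rng = np.random.default_rng(seed)
    w_hidden = rng.normal(size=(x.shape[1], n_hidden))  # frozen
    b_hidden = rng.normal(size=n_hidden)                # frozen
    h = np.tanh(x @ w_hidden + b_hidden)                # random features
    w_out, *_ = np.linalg.lstsq(h, y, rcond=None)       # one-shot solve
    return (w_hidden, b_hidden), w_out

def elm_predict(x, params, w_out):
    w_hidden, b_hidden = params
    return np.tanh(x @ w_hidden + b_hidden) @ w_out

# Toy run: fit y = x0 * x1 on random points.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))
y = x[:, 0] * x[:, 1]
params, w_out = elm_fit(x, y)
```

The appeal for hardware is that the random hidden layer never needs weight updates, so it can be a fixed physical substrate, with only the linear readout trained.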
Recurrent neural networks are those that allow for cycles in the network, and they can have differing levels of connectivity, including all-to-all connections. Non-spiking recurrent neural networks have also been implemented in neuromorphic systems [1076, 1077, 1078, 1079, 1080, 1081, 1082, 1083, 1084, 1085, 1086, 1087, 1088, 1089, 1090, 7, 1091, 1092, 1093, 813, 829, 1094, 857, 1095, 1096, 1097, 1098, 1099]. Reservoir computing models, including liquid state machines, have become popular in neuromorphic systems [1100, 1101, 1102, 1103, 1104, 1105, 1106]. In reservoir computing models, a recurrent neural network is utilized as a “reservoir”, and the outputs of the reservoir are fed into simple feedforward networks; both spiking and non-spiking implementations exist. Winner-take-all networks, which utilize recurrent inhibitory connections to force a single output, have also been implemented in neuromorphic systems [1107, 1108, 1109]. Hopfield networks were especially common in earlier neuromorphic implementations, as is consistent with neural network research at that time [1110, 1111, 1112, 1113, 1114, 1115, 1116, 1117, 1118, 1119, 1120, 6, 1121, 1122, 759, 1123, 1124, 764, 1125, 1126, 1127, 1128, 1129, 1130, 1131, 813, 1132, 1133, 1134, 15, 1135, 1136, 1137, 5, 13, 1138, 1139, 1140, 1141, 1142, 1143, 1144, 1145, 1146, 1147, 1148], but there are also more recent implementations [838, 1149, 1150, 1151, 1152, 1153, 1154, 1155, 1156, 1157, 1158, 1159]. Similarly, associative memory-based implementations were significantly more popular in earlier neuromorphic implementations [1160, 1161, 1162, 1163, 1164, 1165, 1166, 1167, 1168, 1169, 1170, 1171, 1172, 1173, 1174, 1175, 1176, 1177, 1178, 1179, 1180, 1181, 1053, 1182].
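A minimal echo state network illustrates the reservoir idea described above: the recurrent weights are fixed after a spectral-radius rescaling, and only a linear readout of the reservoir states is trained. All hyperparameters and the delay-recall task below are illustrative assumptions.

```python
import numpy as np

def run_reservoir(u, n_res=100, seed=2):
    """Drive a fixed random recurrent reservoir with input sequence u and
    collect its states; only a linear readout of these states is trained."""
    rng = np.random.default_rng(seed)
    w_res = rng.normal(size=(n_res, n_res))
    # Rescale to spectral radius 0.9 so the reservoir has fading memory.
    w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))
    w_in = rng.normal(size=n_res)
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(w_res @ x + w_in * u_t)  # reservoir update
        states.append(x.copy())
    return np.array(states)

# Train a least-squares readout to recall the input delayed by 3 steps.
rng = np.random.default_rng(0)
u = rng.uniform(-0.5, 0.5, size=300)
states = run_reservoir(u)
target = np.roll(u, 3)  # target[t] = u[t - 3]
w_out, *_ = np.linalg.lstsq(states[10:], target[10:], rcond=None)
```

Because the reservoir itself is never trained, it maps naturally onto fixed (even analog or device-mismatched) recurrent hardware, with learning confined to the readout.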
Stochastic neural networks, which introduce a notion of randomness into the processing of a network, have been utilized in neuromorphic systems as well [1183, 1184, 1185, 1186, 1187, 1188, 1189, 1190, 1191, 1192]
. A special case of stochastic neural networks, the Boltzmann machine, has also been popular in neuromorphic systems. The general Boltzmann machine was utilized in neuromorphic systems primarily in the early 1990s
[1193, 12, 1194, 1195, 1196, 1197, 1198, 1199], but it has seen occasional implementations in more recent publications [1200, 1201, 1202, 1203]. A more common use of the Boltzmann model is the restricted Boltzmann machine, because its training time is significantly reduced when compared with a general Boltzmann machine. As such, there are several implementations of the restricted Boltzmann machine in neuromorphic systems
[1204, 1205, 1201, 1202, 1203, 1206, 1207, 1208, 1209, 1210, 1211]. Restricted Boltzmann machines are an integral component of deep belief networks, which have become more common with increased interest in deep learning and have been utilized in neuromorphic implementations [1212, 1213, 1214].

Neural network models that focus on unsupervised learning rules have also been popular in neuromorphic implementations, beyond the STDP rules implemented in most spiking neural network systems. Hebbian learning mechanisms, of which STDP is one type in spiking neural networks, are common in non-spiking implementations of neuromorphic networks
[1215, 1216, 1217, 1218, 1219, 1220, 1221, 1222, 1223, 1224, 1225, 1226, 1227, 1228, 1229, 1230, 1231]. Self-organizing maps are another form of artificial neural network that utilizes unsupervised learning rules, and they have been used in neuromorphic implementations
[1160, 1232, 1233, 1234, 1235, 1236, 1237, 759, 1238, 1239, 1240, 1241, 1242, 1243, 1244, 1245, 1246, 1053, 1247, 1248]. More discussion on unsupervised learning methods such as Hebbian learning or STDP is provided in Section IV.

The visual system has been a common inspiration for artificial neural network types, including convolutional neural networks. Two other visual-system-inspired models, cellular neural networks [1249] and pulse-coupled neural networks [1250], have been utilized in neuromorphic systems. In particular, cellular neural networks were common in early neuromorphic implementations [1251, 1252, 1253, 1254, 1255, 1256, 1257, 1258, 1259, 1260] and have recently seen a resurgence [1261, 1262, 1263, 1264, 1265, 1266, 1267, 1268], whereas pulse-coupled networks were popular in the early 2000s [1269, 1270, 1271, 1272, 1273, 1274, 1275, 1276, 1277].
Other, less common neural network and neural-network-adjacent models implemented in neuromorphic systems include cellular automata [1278, 1279, 1280, 1281, 1282], fuzzy neural networks [1283], which combine fuzzy logic and artificial neural networks, and hierarchical temporal memory [1284], a network model introduced by Hawkins in [1285].
Figure 7 gives an overview of the network models implemented in neuromorphic systems. Figure 8 shows how some of the most frequently used models in neuromorphic implementations have evolved over time. As can be seen in the figures, spiking and feedforward implementations are by far the most common, with spiking implementations seeing a rise in the last decade. General feedforward networks had begun to taper off, but the popularity and success of convolutional neural networks in deep learning has led to renewed activity in the last five years.
III-D Summary and Discussion
In terms of model selection for neuromorphic implementations, it is clear that there are a wide variety of options, and much of the ground of potential biological and artificial neural network models has been tread at least once by previous work. The choice of model will be heavily dependent on the intent of the neuromorphic system. For projects whose goal is to produce useful results for neuroscience, the models usually err on the side of biologically-plausible, or at least biologically-inspired. For systems that have been moved to hardware for a particular application, such as image processing on a remote sensor or autonomous robots, more artificial neural network-like systems that have proven capabilities in those arenas may be most applicable. It is also the case that the model may be chosen or adapted to fit particular hardware characteristics (e.g., selecting models that utilize STDP for memristors), or that the model is chosen for efficiency's sake, as is often the case for event-driven spiking neural network systems. On the whole, it is clear that most neural network models have, at some point in their history, been implemented in hardware.
IV Algorithms and Learning
Some of the major open questions for neuromorphic systems revolve around algorithms. The chosen neuron, synapse, and network models have an impact on the algorithms that can be used, as certain algorithms are specific to certain network topologies, neuron models, or other network model characteristics. Beyond that, a second issue is whether training or learning for a system should be implemented on-chip, or whether networks should be trained off-chip and then transferred to the neuromorphic implementation. A third issue is whether the algorithms should be online and unsupervised (in which case they would necessarily need to be on-chip), whether offline, supervised methods are sufficient, or whether a combination of the two should be utilized. One of the key reasons neuromorphic systems are seen as a popular complementary architecture for the post-Moore's law era is their potential for online learning; however, even the most well-funded neuromorphic projects struggle to develop algorithms for programming their hardware, either in an offline or online way. In this section, we focus primarily on on-chip algorithms, chip-in-the-loop algorithms, and algorithms that are tailored directly to the hardware implementation.
IV-A Supervised Learning
The most commonly utilized algorithm for programming neuromorphic systems is backpropagation. Backpropagation is a supervised learning method, and is not typically thought of as an online method. Backpropagation and its many variations can be used to program feedforward neural networks, recurrent neural networks (usually backpropagation through time), spiking neural networks (where often feedforward neural networks are adapted to spiking systems), and convolutional neural networks. The simplest possible approach is to utilize backpropagation offline on a traditional host machine, as there are many available software implementations that have been highly optimized. We omit citing these approaches, as they typically utilize basic backpropagation, and that topic has been covered extensively in the neural network literature
[1286]. However, there are also a large variety of implementations of on-chip backpropagation in neuromorphic systems [941, 623, 968, 942, 788, 943, 775, 970, 971, 1032, 872, 628, 629, 973, 974, 975, 630, 944, 633, 877, 945, 778, 641, 642, 643, 644, 646, 647, 650, 789, 790, 1287, 779, 657, 658, 659, 946, 661, 780, 662, 947, 948, 671, 949, 950, 951, 952, 986, 987, 1288, 896, 953, 217, 1289, 1290, 1291, 954, 902, 1052, 793, 1292, 1293, 759, 840, 691, 692, 693, 694, 695, 696, 697, 843, 955, 1089, 956, 795, 957, 1034, 575, 784, 958, 796, 959, 764, 1010, 721, 960, 1294, 961, 962, 798, 963, 964, 726, 727, 728, 1295, 965, 730, 799, 800, 738, 857, 786, 741, 801, 802, 745, 746, 1128, 1022, 803, 1296]. Several works adapt or tailor the backpropagation method to their particular hardware implementation, for example to cope with the memristive characteristics of synapses [634, 689, 1297, 1298, 1299]. Other gradient descent-based optimization methods have also been implemented on neuromorphic systems for training, and they tend to be variations of backpropagation that have been adapted or simplified in some way [812, 639, 645, 792, 1030, 1300, 1122, 1301, 709, 844, 716, 1302, 718, 719, 1303, 723]. Backpropagation methods have also been developed as chip-in-the-loop training methods [815, 686, 702, 732, 859]; in this case, most of the learning takes place on a host machine or off-chip, but the evaluation of the solution network is done on the chip. These methods can help take into account some of the device's characteristics, such as component variation.

There are a variety of issues associated with backpropagation, including that it is relatively restrictive on the types of neuron models, network models, and network topologies that can be utilized in an efficient way. It can also be difficult or costly to implement in hardware. Other approaches for on-chip supervised weight training have been utilized.
These approaches include the least-mean-squares algorithm [750, 1025, 1026, 787], weight perturbation [625, 19, 1078, 1079, 1080, 655, 669, 682, 834, 835, 1098, 698, 699, 841, 1304, 1099, 708, 845, 846, 847, 710, 712, 713, 715, 856, 736, 1148], training methods specific to convolutional neural networks [1305, 1306], and others [1307, 1308, 1309, 1310, 1311, 1029, 864, 865, 169, 1312, 804, 1313, 1314, 220, 714, 1315, 1316, 1317, 1318, 465, 1319, 1320, 1049]. Other on-chip supervised learning mechanisms are built for particular model types, such as Boltzmann machines, restricted Boltzmann machines, or deep belief networks [627, 1193, 1135, 12, 1196, 1201, 1202, 1207] and hierarchical temporal memory [1284].
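Weight perturbation, mentioned above, is attractive in hardware because it needs only loss measurements, not gradients. A minimal sketch follows, where the `loss` callable stands in for an error measurement taken on the physical device; the quadratic toy loss and step sizes are assumptions.

```python
import numpy as np

def weight_perturbation_step(w, loss, lr=0.5, delta=1e-3):
    """One weight-perturbation update: probe each weight with a small
    change, keep only the measured loss difference (no analytic
    gradients), and move the weights against the estimated slope."""
    base = loss(w)                      # one reference measurement
    grad_est = np.zeros_like(w)
    for i in range(len(w)):
        w_probe = w.copy()
        w_probe[i] += delta             # perturb a single weight
        grad_est[i] = (loss(w_probe) - base) / delta  # finite difference
    return w - lr * grad_est

# Toy run: minimize a quadratic "device error" centered at w* = (1, -2).
loss = lambda w: float((w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2)
w = np.zeros(2)
for _ in range(100):
    w = weight_perturbation_step(w, loss)
```

Because each probe is just "change a weight, re-measure the error", the scheme tolerates device variation that would invalidate an analytically computed gradient, at the cost of one measurement per weight per step.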
A set of nature-based or evolution-inspired algorithms have also been implemented for neuromorphic hardware. These implementations are popular because they do not rely on particular characteristics of a model, and off-chip methods can easily use the hardware implementation in the loop. They can also be used to optimize within the characteristics and peculiarities of a particular hardware implementation (or even of a particular hardware device instance). Off-chip nature-based implementations include differential evolution [1321, 1322, 1323, 1324]
, evolutionary or genetic algorithms
[1076, 1325, 1326, 1327, 1328, 512, 1082, 1083, 1084, 1085, 680, 1329, 1330, 1331, 1332, 1333, 1334, 700, 484, 1335, 485, 486, 487, 1092, 1336, 1337, 570], and particle swarm optimization
[1338]. We explicitly specify these off-chip methods because all of the nature-based implementations rely on evaluations of a current network solution and can utilize the chip during the training process (as a chip-in-the-loop method). There have also been a variety of implementations that include the training mechanisms on the hardware itself or in a companion hardware implementation, including both evolutionary/genetic algorithms [622, 626, 1339, 1340, 1341, 524, 1342, 1343, 1344, 1345, 1346, 1347, 1348, 1349, 554, 555, 556, 560, 1350] and particle swarm optimization [1351, 1352, 1353, 1354].

Table I. Comparison of algorithm classes across several properties discussed in Section IV-C. Only the "Algorithm Class" and "On-Line" column headers are recoverable here, so the remaining property labels are omitted and the row values are reproduced in their original order:

Back-Propagation: No, No, Yes, No, Yes, Yes, No
Evolutionary: Yes, Yes, No, No, No, Yes, Maybe
Hebbian: No, Yes, No, Yes, Maybe, No, Yes
STDP: No, Yes, Maybe, Yes, Maybe, No, Yes
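The chip-in-the-loop use of evolutionary methods described above can be sketched as a simple weight search, with the fitness function standing in for an evaluation of a candidate network loaded onto the hardware; the population sizes, mutation scale, and toy fitness below are all assumptions.

```python
import numpy as np

def evolve(fitness, n_weights, pop_size=20, n_gen=60, seed=3):
    """Minimal elitist evolutionary weight search. `fitness` plays the
    role of a chip-in-the-loop evaluation: each candidate would be
    loaded onto the device and scored there, so no gradients or model
    assumptions are needed (lower score is better)."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, n_weights))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # keep best half
        children = parents + rng.normal(scale=0.1, size=parents.shape)
        pop = np.vstack([parents, children])                 # elitism + mutation
    return pop[np.argmin([fitness(ind) for ind in pop])]

# Toy run: the "device error" is the distance from a target weight vector.
target = np.array([0.5, -1.5, 2.0])
best = evolve(lambda w: float(np.sum((w - target) ** 2)), 3)
```

Because only fitness evaluations are needed, the same loop works whether the evaluation happens in simulation or on a physical chip with all of its device-level quirks.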
IV-B Unsupervised Learning
There have been several implementations of on-chip, online, unsupervised training mechanisms in neuromorphic systems. These self-learning training algorithms will almost certainly be necessary to realize the full potential of neuromorphic implementations. Some early neuromorphic implementations of unsupervised learning were based on self-organizing maps or self-organizing learning rules [1197, 1198, 1245, 759, 1244, 1233, 1273, 1274, 1234, 1271, 1053, 1247, 1241, 1235, 1022], though there have been a few implementations in more recent years [1237, 1242, 1248, 1272]. Hebbian-type learning rules, which encompass a broad variety of rules, have been very popular as online mechanisms for neuromorphic systems, and there are variations that encompass both supervised and unsupervised learning [323, 1225, 576, 573, 496, 594, 595, 596, 1226, 1227, 1215, 642, 1143, 1221, 660, 1149, 426, 427, 598, 599, 334, 580, 600, 1144, 341, 601, 607, 342, 1355, 612, 1356, 1151, 1357, 1228, 1181, 1222, 1114, 1166, 1152, 1153, 1154, 355, 1157, 1358, 1184, 918, 362, 1053, 1359, 366, 1360, 1361, 1217, 1218, 1195, 1219, 1362, 1146, 602, 800, 1363, 1229, 1126, 1230, 385, 1223, 1231].
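A small sketch of a Hebbian-type rule: plain Hebbian updates strengthen weights between co-active neurons but grow without bound, so the example uses Oja's normalizing variant, one member of the broad Hebbian family mentioned above. The input statistics and learning rate are assumptions.

```python
import numpy as np

def oja_step(w, x, lr=0.05):
    """One Hebbian update with Oja's normalizing decay term: the weight
    grows with correlated pre/post activity (y * x) but is pulled back
    by y^2 * w, which keeps the weight norm bounded."""
    y = w @ x                        # post-synaptic activity
    return w + lr * y * (x - y * w)  # Hebb term minus Oja decay

# Toy run: inputs lie mostly along (1, 1); w converges toward that
# principal direction with (approximately) unit norm.
rng = np.random.default_rng(0)
w = np.array([0.3, 0.0])
for _ in range(2000):
    s = rng.normal()
    x = s * np.array([1.0, 1.0]) / np.sqrt(2) + 0.05 * rng.normal(size=2)
    w = oja_step(w, x)
```

The rule is fully local (each synapse needs only its own weight, its input, and the neuron's output), which is exactly the property that makes Hebbian mechanisms attractive for on-chip learning.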
Finally, perhaps the most popular online, unsupervised learning mechanism in neuromorphic systems is spike-timing-dependent plasticity [1364, 1365, 1366, 406, 388, 1367, 407, 410, 411, 1368, 1369, 497, 1307, 1370, 1371, 571, 412, 1372, 414, 1373, 1374, 1375, 502, 1376, 1377, 1378, 1379, 507, 328, 578, 329, 1380, 254, 1381, 423, 1382, 1383, 1384, 1385, 1386, 1387, 1388, 425, 579, 1389, 1390, 1064, 516, 597, 518, 519, 340, 1391, 1330, 1331, 1332, 1333, 1392, 393, 1393, 603, 1394, 1395, 343, 432, 347, 435, 436, 348, 1396, 437, 586, 1397, 1398, 1399, 1400, 1401, 1402, 352, 353, 396, 1403, 1404, 1405, 1406, 838, 1407, 1408, 1409, 442, 1410, 444, 1411, 357, 446, 447, 448, 449, 1412, 452, 1413, 1414, 1415, 358, 397, 398, 456, 1416, 1417, 1418, 1419, 1420, 1421, 1422, 458, 459, 1423, 1424, 1425, 1426, 460, 1427, 1428, 365, 461, 399, 549, 1169, 368, 1429, 552, 369, 462, 370, 371, 21, 463, 1430, 402, 1036, 1431, 1432, 1433, 1434, 1435, 1436, 1437, 558, 1438, 1439, 375, 376, 377, 380, 561, 473, 474, 475, 1440, 1441, 477, 478, 1442, 1443, 1444, 384, 1445, 1446, 1447, 566, 1448, 1449, 386, 1450, 1451, 1452, 1453], which is a form of Hebbian-like learning that has been observed in real biological systems [1454]. The general rule for STDP is that if a presynaptic neuron fires shortly before (after) the postsynaptic neuron, the synapse's weight will be increased (decreased), and the shorter the time between the two firings, the greater the magnitude of the change. There are also custom circuits for depression [409, 1455, 351, 1456, 440, 356] and potentiation [1457] in synapses in more biologically-inspired implementations. It is worth noting that, especially for STDP, wide applicability to a set of applications has not been fully demonstrated.
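The pair-based form of the STDP rule just described can be written down directly; the amplitudes and time constant below are illustrative, not taken from any particular cited implementation.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).

    Pre fires before post (delta_t > 0): potentiation.
    Pre fires after post (delta_t < 0): depression.
    The magnitude decays exponentially with the timing gap.
    """
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)   # causal pair: strengthen
    return -a_minus * np.exp(delta_t / tau)      # anti-causal pair: weaken

# Closer spike pairs produce larger weight changes, as the rule prescribes.
```

In hardware, the exponential windows are often approximated by decaying trace variables held at each synapse, so the update remains local to the pre/post spike events.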
IV-C Summary and Discussion
Spiking neural network-based neuromorphic systems have been popular for several reasons, including the power and/or energy efficiency of their event-driven computation and their closer biological inspiration relative to artificial neural networks in general. Though there have been proposed methods for training spiking neural networks, usually utilizing STDP learning rules for synaptic weight updates, we believe that the full capabilities of spiking neuromorphic systems have not yet been realized by training and learning mechanisms. As noted in Section III-C, spiking neuromorphic systems have frequently been utilized for non-spiking network models. These models are attractive because we typically know how to train them and how to best utilize them, which gives a set of applications for spiking neuromorphic systems. However, we cannot rely on these existing models to realize the full potential of neuromorphic systems. As such, the neuromorphic computing community needs to develop algorithms for spiking neural network systems that can fully realize the characteristics and capabilities of those systems. This will require a paradigm shift in the way we think about training and learning. In particular, we need to understand how to best utilize the hardware itself in training and learning, as neuromorphic hardware systems will likely allow us to explore larger-scale spiking neural networks in a more computationally and resource-efficient way than is possible on traditional von Neumann architectures.
An overview of on-chip learning algorithms is given in Figure 9. When choosing the appropriate algorithm for a neuromorphic implementation, one must consider several factors: (1) the chosen model, (2) the chosen material or device type, (3) whether learning should be on-chip, (4) whether learning should be online, (5) how fast learning or training needs to take place, (6) how successful or broadly applicable the results will be, and (7) whether the learning should be biologically-inspired or biologically-plausible. Some of these factors for various algorithms are considered in Table I. For example, backpropagation is a tried and true algorithm, has been applied to a wide variety of applications, and can be relatively fast to converge to a solution. However, if a device is particularly restrictive (e.g., in terms of connectivity or weight resolution) or has a variety of other quirks, then backpropagation requires significant adaptation to work correctly and may take significantly longer to converge. Backpropagation is also very restrictive in terms of the types of models on which it can operate. We contrast backpropagation with evolutionary-based methods, which can work with a variety of models, devices, and applications. Evolutionary methods can also be easier to implement than more analytic approaches for different neuromorphic systems. However, they can be slow to converge for complex models or applications. Additionally, both backpropagation and evolutionary methods require feedback, i.e., they are supervised algorithms. Both Hebbian learning and STDP methods can be either supervised or unsupervised; they are also biologically-inspired and biologically-plausible, making them attractive to developers who are building biologically-inspired devices. The downside to choosing Hebbian learning or STDP is that they have not been demonstrated to be widely applicable.
There is still a significant amount of work to be done within the field of algorithms for neuromorphic systems. As can be seen in Figure 8, spiking network models are on the rise. Currently, STDP is the most commonly used algorithm proposed for training spiking systems, and many spiking systems in the literature do not specify a learning or training rule at all. It is worth noting that algorithms such as backpropagation and the associated network models were developed with the von Neumann architecture in mind. Moving forward for neuromorphic systems, algorithm developers need to take into account the devices themselves and have an understanding of how these devices can be utilized most effectively for both learning and training. Moreover, algorithm developers need to work with hardware developers to discover what can be done to integrate training and learning directly into future neuromorphic devices, and to work with neuroscientists in understanding how learning is accomplished in biological systems.
V Hardware
Here we divide neuromorphic hardware implementations into three major categories: digital, analog, and mixed analog/digital platforms. These are examined at a high level, with some of the more exotic device-level components utilized in neuromorphic systems explored in greater depth. For the purposes of this survey, we maintain a high-level view of the neuromorphic system hardware considered.
V-A High-Level
There have been many proposed taxonomies for neuromorphic hardware systems [1458], but most of them divide the hardware at a high level into analog, digital, or mixed analog/digital implementations. Before diving into the neuromorphic systems themselves, it is worthwhile to note the major characteristics of analog and digital systems and how they relate to neuromorphic systems. Analog systems utilize native physical characteristics of electronic devices as part of the computation, while digital systems tend to rely on Boolean logic-based gates, such as AND, OR, and NOT, to build computation. The biological brain is an analog system and relies on physical properties for computation rather than on Boolean logic. Many of the computations in neuromorphic hardware lend themselves to the sorts of operations that analog systems naturally perform. Digital systems rely on discrete values while analog systems deal with continuous values. Digital systems are usually (but not always) synchronous or clock-based, while analog systems are usually (but not always) asynchronous; in neuromorphic systems, however, this rule of thumb often does not hold, as even the digital systems tend to be event-driven and the analog systems sometimes employ clocks for synchronization. Analog systems tend to be significantly more noisy than digital systems; however, it has been argued that because neural networks can be robust to noise and faults, they may be ideal candidates for analog implementation [1459]. Figure 10 gives a high-level breakdown of different neuromorphic hardware implementations.
V-A1 Digital
Two broad categories of digital systems are addressed here. The first is field programmable gate arrays or FPGAs. FPGAs have been frequently utilized in neuromorphic systems [1179, 1180, 1181, 1053, 1182, 1460, 1461, 1462, 1463, 1464, 1465, 1466, 1467, 1468, 1469, 1470, 1471, 1472, 1473, 1474, 1475, 1476, 1477, 1478, 1479, 1480, 1481, 1482, 1483, 1484, 1485, 1486, 1487, 1488, 1489, 1490, 1491, 1492, 1493, 1494, 1495, 1496, 1497, 1498, 1499, 617, 618, 619, 1279, 1280, 1281, 1282, 1267, 1268, 1067, 1068, 1069, 1070, 1071, 1072, 1073, 1074, 1075, 1213, 1214, 1029, 1500, 1501, 489, 490, 491, 1502, 1503, 1504, 1505, 1506, 1507, 1508, 1509, 1510, 192, 194, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 200, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 206, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 1511, 1139, 1141, 1142, 1155, 1143, 1144, 1156, 1157, 1145, 1158, 1159, 1146, 1147, 1148, 1104, 1105, 1106, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 530, 991, 202, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 207, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1032, 1033, 1034, 1512, 1513, 1514, 1515, 1351, 1352, 1339, 1340, 1341, 1354, 1342, 1343, 1344, 1345, 1346, 1347, 1348, 1350, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1287, 805, 806, 217, 1289, 1293, 1516, 1517, 1304, 1518, 1360, 1361, 1277, 1049, 1050, 1051, 1052, 1095, 1096, 1097, 1098, 1099, 1209, 1210, 1211, 1245, 1246, 1247, 1248, 492, 493, 494, 605, 495, 573, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 481, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 86, 523, 524, 525, 169, 526, 527, 
528, 529, 531, 532, 533, 534, 535, 536, 537, 538, 539, 575, 540, 541, 542, 543, 544, 545, 546, 1519, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 587, 588, 1187, 1188, 1189, 1190, 1191, 1192, 1520, 1521, 1522, 1523, 1524, 1525]
. For many of these implementations, the FPGA is utilized as a stopgap solution on the way to a custom chip implementation. In this case, the programmability of the FPGA is not part of the neuromorphic implementation itself; it is simply used to program the device as a neuromorphic system that is then evaluated. However, it is also frequently the case that the FPGA is the final implementation, and in this case, the programmability of the device can be leveraged to realize radically different network topologies, models, and algorithms. Because of their relative ubiquity, most researchers have access to at least one FPGA and can work with hardware description languages such as VHDL or Verilog to implement circuits in FPGAs. If the goal of developing a neuromorphic system is to achieve speedup over software simulations, then FPGAs can be a great choice. However, if the goal is to achieve a small, low-power system, then FPGAs are probably not the correct approach. Liu and Wang point out several advantages of FPGAs over both digital and analog ASIC implementations, including shorter design and fabrication time, reconfigurability and reusability for different applications, optimization for each problem, and easy interfacing with a host computer
[1526]. Full custom or application specific integrated circuit (ASIC) chips have also been very common for neuromorphic implementations [1170, 1527, 1410, 1528, 1529, 1530, 1204, 1205, 1260, 749, 750, 751, 752, 753, 754, 755, 756, 683, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 1283, 1531, 1532, 1533, 1534, 1535, 1536, 1537, 1118, 1119, 1120, 6, 1121, 1122, 1123, 1124, 764, 1125, 1126, 1127, 1128, 1538, 805, 806, 807, 808, 1539, 1540, 1129, 1130, 1363, 1047, 1048, 1541, 1542, 1543, 1100, 1101, 1236, 1237, 1238, 1239, 1240, 1241, 1242, 389, 390, 1212, 391, 392, 393, 394, 348, 350, 395, 396, 397, 398, 399, 400, 401, 402, 403, 1275, 477, 404, 1183, 1184, 1185, 1544, 1545, 1546, 1547, 1548, 1549, 1550, 1551, 1552, 1553, 1554, 1555, 1556, 1557, 1558, 1559, 1560, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 773, 774, 775, 776, 777, 778, 9, 779, 780, 781, 677, 782, 783, 784, 785, 786, 787, 804, 438, 1561]. IBM’s TrueNorth, one of the most popular present-day neuromorphic implementations, is a full custom ASIC design [1562, 1563, 1564, 1565, 1566, 1567, 1568, 1569]. The TrueNorth chip is partially asynchronous and partially synchronous, in that some activity does not occur with the clock, but the clock governs the basic time step in the system. A core in the TrueNorth system contains a 256x256 crossbar configuration that maps incoming spikes to neurons. The behavior of the system is deterministic, but stochastic behavior can be generated through a pseudo-random source. This stochasticity can be exactly replicated in a software simulation.
SpiNNaker, another popular neuromorphic implementation, is also a full custom digital, massively parallel system [1570, 1571, 1572, 1573, 1574, 1575, 1576, 1577, 1578, 1579, 1580, 1581, 1582, 1583, 1584, 1585, 1586, 1587, 1588, 1589, 1590, 1591, 1592, 1593]. SpiNNaker is composed of many small integer cores and a custom interconnect communication scheme that is optimized for the communication behavior of a spike-based network architecture; that is, the communication fabric is meant to handle a very large number of very small messages (spikes). The processing unit itself is very flexible and is not customized for neuromorphic computation, but each SpiNNaker chip includes instruction and data memory in order to minimize access time for frequently used data. Like TrueNorth, SpiNNaker supports the cascading of chips to form larger systems.
TrueNorth and SpiNNaker provide good examples of the extremes one can take with digital hardware implementations. TrueNorth has chosen a fixed spiking neural network model with leaky integrate-and-fire neurons and limited programmable connectivity, and there is no on-chip learning; it is highly optimized for the chosen model and topology of the network. SpiNNaker, on the other hand, is extremely flexible in its neuron model, synapse model, learning algorithm, and network topology. However, this flexibility comes at a cost in energy efficiency: as reported by Furber in [1594], TrueNorth consumes 25 pJ per connection, whereas SpiNNaker consumes 10 nJ per connection, a factor of 400 more.
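As a concrete illustration of the kind of computation a fixed-function digital design performs each tick, the following is a minimal sketch of a discrete-time leaky integrate-and-fire update using integer arithmetic, as digital implementations typically do. The parameter names and values are illustrative and do not correspond to any particular chip:

```python
def lif_step(v, spikes_in, weights, leak=1, threshold=100, v_reset=0):
    """One discrete time step of a leaky integrate-and-fire neuron.

    v: current membrane potential (an integer, as in digital hardware)
    spikes_in: list of 0/1 input spikes, one per synapse
    weights: signed integer synaptic weights
    Returns (new_v, fired).
    """
    # Integrate: accumulate weighted input spikes.
    v = v + sum(w * s for w, s in zip(weights, spikes_in))
    # Leak: decay toward zero by a fixed amount each tick.
    v = v - leak if v > 0 else v + leak if v < 0 else v
    # Threshold and reset.
    if v >= threshold:
        return v_reset, 1
    return v, 0

# Example: three synapses, two of which spike this tick.
v, fired = lif_step(v=90, spikes_in=[1, 1, 0], weights=[8, 4, -2])  # → (0, 1)
```

A fixed-function chip hard-wires exactly this update (and nothing else), while a platform like SpiNNaker would run such an update as software on its integer cores, which is where the flexibility/efficiency trade-off originates.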
V-A2 Analog
Similar to the breakdown of digital systems, we separate analog systems into programmable and custom chip implementations. Just as there are FPGAs for digital systems, there are field programmable analog arrays (FPAAs) for analog systems. For many of the same reasons that FPGAs have been utilized for digital neuromorphic implementations, FPAAs have also been utilized [481, 869, 483, 484, 1595, 485, 486, 487, 604, 1028, 488]. There have also been custom FPAAs developed specifically for neuromorphic systems, including the field programmable neural array (FPNA) [482] and the NeuroFPAA [870]. These circuits contain programmable neurons, synapses, and other components, rather than being general FPAAs for arbitrary analog circuit design.
It has been pointed out that custom analog integrated circuits and neuromorphic systems have several characteristics that make them well suited for one another. In particular, factors such as conservation of charge, amplification, thresholding, and integration are characteristics shared by analog circuitry and biological systems [1]. In fact, the original term neuromorphic referred to analog designs. Moreover, by taking inspiration from biological neural systems and how they operate, neuromorphic implementations have the potential to overcome some of the issues that have prevented analog circuits from being widely adopted, including global asynchrony and noisy, unreliable components [1459]. For both of these cases, systems such as spiking neural networks are natural applications for analog circuitry because they can operate asynchronously and can tolerate noise and unreliability.
One of the common approaches for analog neuromorphic systems is to utilize circuitry that operates in subthreshold mode, typically for power efficiency purposes [1596, 1160, 1161, 1597, 1598, 1599, 576, 1600, 325, 1162, 1163, 577, 1079, 1080, 641, 1601, 1602, 330, 647, 654, 1603, 665, 101, 1084, 1604, 1605, 334, 1606, 1394, 1607, 1608, 684, 1609, 1610, 1611, 1612, 700, 704, 1613, 704, 1274, 1411, 1089, 1614, 1615, 1616, 720, 1617, 721, 373, 1045, 727, 737, 1618, 1619]. In fact, the original neuromorphic definition by Carver Mead referred to analog circuits that operated in subthreshold mode [1]. There are a large variety of other neuromorphic analog implementations [1164, 1165, 1166, 1167, 1168, 1620, 1169, 1621, 1622, 1623, 1624, 99, 100, 1324, 1395, 1625, 1455, 1626, 1627, 1313, 1628, 1629, 1630, 1631, 1419, 1632, 1633, 1359, 1634, 1635, 1636, 1637, 1638, 1639, 1640, 1429, 1641, 1642, 1457, 1643, 1644, 1645, 1646, 1647, 1648, 1649, 1650, 1651, 1652, 1653, 1654, 1655, 1656, 1193, 12, 1194, 1195, 1196, 1251, 1252, 1253, 1254, 1255, 1256, 1257, 1258, 1259, 1657, 1658, 620, 17, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 8, 631, 632, 633, 634, 635, 636, 637, 19, 638, 639, 640, 642, 643, 644, 645, 646, 648, 649, 650, 651, 652, 653, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 685, 18, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 701, 702, 703, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 722, 723, 724, 725, 726, 728, 729, 730, 731, 732, 733, 734, 735, 736, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 1659, 1660, 1661, 1662, 1663, 1664, 1215, 1216, 1217, 1218, 1219, 1220, 1110, 1111, 1112, 1113, 1114, 1115, 1116, 1117, 1035, 1665, 1320, 1044, 1046, 747, 614, 1076, 1077, 1078, 1081, 1082, 1083, 1085, 1086, 1087, 1088, 1090, 7, 1091, 1092, 1093, 1232, 1233, 1234, 1235, 323, 324, 326, 594, 595, 571, 596, 327, 
328, 578, 329, 606, 331, 332, 579, 516, 333, 597, 598, 599, 580, 572, 335, 336, 337, 338, 339, 600, 340, 341, 601, 607, 342, 612, 613, 603, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 608, 355, 356, 357, 358, 359, 609, 360, 610, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 374, 602, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 1030, 1031, 1666, 1667, 1668, 1669, 1670, 1671, 1672, 1673, 1674, 1675, 1676, 1677, 1678, 14, 1679, 1680, 1681, 1682, 1683, 1684, 1685, 1686, 1687, 1688, 1689, 1690, 1691, 1692, 1693, 1694, 1695, 1696, 1697, 1698, 1699, 1700, 1701, 1702, 1703, 1704, 1705, 1706, 1707, 1708, 1709, 1597, 1710, 1711, 1712, 1713, 1714, 1715, 1716, 1717, 1718, 1719, 1269, 1270, 1273, 1271, 1272, 1720].
V-A3 Mixed Analog/Digital
Mixed analog/digital systems are also very common for neuromorphic systems [809, 1367, 408, 409, 810, 812, 1132, 818, 1721, 428, 1722, 430, 431, 1723, 585, 1173, 432, 830, 833, 1061, 1724, 1062, 15, 1135, 839, 440, 840, 1203, 841, 1244, 1136, 843, 441, 442, 443, 444, 1725, 1726, 450, 451, 848, 5, 456, 1094, 1727, 457, 850, 1728, 1729, 1102, 459, 460, 461, 1174, 1175, 852, 853, 1199, 462, 21, 463, 1434, 464, 1730, 465, 466, 593, 467, 16, 1731, 1732, 1733, 1734, 1735, 1130, 1736, 1737, 1738, 1739, 1740, 471, 1741, 473, 477, 478, 858, 1444, 1742, 1176, 479, 860, 1743, 1744, 1745]. Because of its natural similarity to biological systems, analog circuitry has been commonly utilized in mixed analog/digital neuromorphic systems to implement the processing components of neurons and synapses. However, there are several issues with analog systems that can be overcome by utilizing digital components, including unreliability.
In some neuromorphic systems, it has been the case that synapse weight values or some component of the memory of the system are stored using digital components, which can be less noisy and more reliable than analogbased memory components [405, 1131, 410, 411, 413, 814, 815, 816, 422, 1746, 817, 1747, 1748, 819, 820, 821, 821, 823, 824, 1749, 1750, 1751, 1752, 1186, 1243, 828, 425, 1753, 1107, 834, 835, 1754, 837, 1755, 1202, 842, 445, 844, 845, 846, 1137, 1756, 1757, 849, 851, 458, 1758, 1759, 856, 1760, 1138, 857, 859, 1223, 861]. For example, synaptic weights are frequently stored in digital memory for analog neuromorphic systems. Other neuromorphic platforms are primarily analog, but utilize digital communication, either within the chip itself, to and from the chip, or between neuromorphic chips [133, 406, 413, 1761, 415, 417, 1762, 418, 419, 420, 421, 1221, 1763, 1764, 1765, 1766, 615, 616, 1767, 1276, 826, 827, 1171, 429, 1768, 1769, 1770, 1134, 434, 435, 436, 437, 586, 447, 448, 449, 452, 1771, 13, 453, 1108, 1109, 454, 1772, 1773, 455, 854, 855, 1774, 468, 469, 470, 472, 1775, 480, 1198, 1776, 1024, 412, 813, 1372, 424, 1172, 829, 1133, 446, 1452, 1453]. Communication within and between neuromorphic chips is usually in the form of digital spikes for these implementations. Using digital components for programmability or learning mechanisms has also been common in mixed analog/digital systems [1777, 1261, 407, 412, 1201, 1201, 416, 1778, 825, 831, 832, 1779, 1780, 439, 1197, 1781, 423, 433, 1318].
Two major projects within the mixed analog/digital family are Neurogrid and BrainScaleS. Neurogrid is a primarily analog chip that is probably closest in spirit to the original definition of neuromorphic as coined by Mead [1782, 1783, 1784, 1785]. BrainScaleS is a wafer-scale implementation that has analog components [1770, 444, 1631, 1786, 459, 363, 372]. Both of these implementations fall within the mixed analog/digital family because of their digital communication frameworks. Neurogrid operates in subthreshold mode, whereas BrainScaleS operates in superthreshold mode; the developers of BrainScaleS chose superthreshold mode because it allows BrainScaleS chips to operate at a much higher rate than is possible with Neurogrid, achieving a 10,000x speedup [1594].
V-B Device-Level Components
In this section, we cover some of the nonstandard device-level or circuit-level components that are being utilized in neuromorphic systems. These include a variety of components that have traditionally been used as memory technologies, as well as elements such as optical components. Figure 11 gives an overview of the device-level components and also shows their relative popularity in the neuromorphic literature.
V-B1 Memristors
Perhaps the most ubiquitous device-level component in neuromorphic systems is the “memory resistor,” or memristor. Memristors were a theoretical circuit element proposed by Leon Chua in 1971 [1787] and “found” by researchers at HP in 2008 [1788]. The key characteristic of memristive devices is that the resistance value of the memristor depends upon its historical activity. One of the major reasons that memristors have become popular in neuromorphic computing is their relationship to synapses; namely, circuits that incorporate memristors can exhibit STDP-like behavior that is very similar to what occurs in biological synapses. In fact, it has been proposed that biological STDP can be explained by memristance [1789]. Memristors can be and have been made from a large variety of materials, some of which will be discussed in Section V-C, and these different materials can exhibit radically different characteristics. Another reason for utilizing memristors in neuromorphic systems is their potential for building energy-efficient circuitry; this has been studied extensively, with several works focused entirely on evaluating the energy consumption of memristive circuitry in neuromorphic systems [1790, 1791, 1792, 1793, 1794, 1795, 1796, 1797]. It has also been observed that neuromorphic implementations are a good fit for memristors because the inherent fault tolerance of neural network models can help mitigate the effects of memristor device variation [1798, 1799, 1800, 1801, 1802].
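The history-dependent resistance can be made concrete with the idealized linear ion-drift model commonly used in simulation studies. The sketch below is illustrative only: the device constants are placeholders, and fabricated devices deviate substantially from this ideal model.

```python
def memristor_step(x, v, dt, r_on=100.0, r_off=16e3, mu=1e-14, d=1e-8):
    """Advance the state of an idealized linear ion-drift memristor.

    x: normalized doped-region width in [0, 1] -- the 'memory' of the device
    v: voltage applied over this time step (V); dt: step length (s)
    r_on/r_off: fully-doped/undoped resistances; mu: ion mobility; d: film thickness
    Returns (new_x, new_resistance); resistance depends on the accumulated
    history of applied voltage, which is the defining memristive property.
    """
    r = r_on * x + r_off * (1.0 - x)        # current resistance
    i = v / r                                # Ohm's law
    x = x + (mu * r_on / d**2) * i * dt      # state drifts with charge flowed
    x = min(1.0, max(0.0, x))                # state is physically bounded
    return x, r_on * x + r_off * (1.0 - x)
```

Driving the device repeatedly with a positive voltage lowers its resistance, while a negative voltage raises it, which is the behavior exploited to store and update an analog synaptic weight.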
A common use of memristors in neuromorphic implementations is as part of or the entire synapse implementation (depending on the type of network) [1177, 1178, 1803, 1804, 1805, 1806, 1200, 1206, 1207, 1208, 1262, 1263, 1264, 1356, 1807, 1063, 1064, 1065, 1025, 1026, 1027, 862, 863, 864, 865, 866, 867, 868, 1284, 1149, 1150, 1151, 1152, 1153, 1154, 1808, 1809, 1357, 1810, 1036, 1103, 1811, 1812, 1813, 1814, 1815, 1816, 1817, 1818, 1456, 1819, 1820]. Sometimes the memristor is simply used as a synaptic weight storage element. In other cases, because of their plasticitylike properties, memristors have been used to implement synaptic systems that include Hebbian learning in general [1224, 1225, 1226, 1227, 1228, 1229, 1230, 1231] or STDP in particular [1364, 1365, 1369, 1370, 1371, 1373, 1374, 1375, 1376, 1377, 254, 1381, 1382, 1383, 1384, 1385, 1386, 1387, 1388, 1389, 1391, 1392, 1393, 1397, 1398, 1399, 1400, 1403, 1404, 1405, 1409, 1412, 1413, 1416, 1420, 1421, 1422, 1424, 1430, 1431, 1432, 1436, 1437, 1438, 1441, 1442, 1443, 1445, 1450, 1451]. Perhaps the most common use of a memristor in neuromorphic systems is to build a memristor crossbar to represent the synapses in the network [1821, 1366, 1822, 1368, 1823, 1307, 1308, 1824, 1309, 1310, 1311, 1825, 1826, 1379, 1827, 1828, 1380, 1288, 1829, 1830, 1312, 1831, 1832, 1833, 1834, 1401, 1402, 1835, 1301, 1836, 1837, 1838, 1406, 1407, 1839, 1840, 1841, 1842, 1843, 1844, 1845, 1846, 1847, 1848, 1358, 1414, 1302, 1415, 1417, 1849, 1418, 1850, 1851, 1294, 1423, 1425, 1426, 1316, 1317, 1428, 1852, 1853, 1854, 1435, 1855, 1856, 1319, 1857, 1858, 1859, 1860, 1861, 1512, 1513, 475, 476, 1440, 478, 1862, 1863, 1864, 1446, 1447, 1448, 1865, 1866, 1867, 1868, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1876, 1877, 1878, 1449, 1879, 1880, 1881, 1882, 1296]. Early physical implementations of memristors have been in the crossbar configuration. 
Crossbar realizations are popular in the literature mainly due to their density advantage, but also because physical crossbars have been fabricated and shown to perform well. Because a single memristor cannot represent both positive and negative weight values for a synapse (which may be required over the course of training), multi-memristor synapses have been proposed, including memristor-bridge synapses, which can realize positive, negative, and zero weight values [1883, 1884, 1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893]. Memristor-based synapse implementations that include forgetting effects have also been studied [1894, 1895]. Because of their relative ubiquity in neuromorphic systems, a set of training algorithms has been developed specifically with the characteristics of memristive systems in mind, such as dealing with non-ideal device characteristics [1896, 1378, 1897, 1898, 1899, 1386, 1388, 1329, 1330, 1331, 1332, 1333, 1314, 220, 1900, 1427, 1901, 1303, 1433, 1362, 1902, 1295, 1903, 1904, 1905].
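The appeal of the crossbar is that Ohm's law performs the multiplications and Kirchhoff's current law performs the sums of a vector-matrix product in a single analog step. A minimal numerical sketch, using a differential pair of non-negative conductances per cell as one illustrative way (among the multi-memristor approaches cited above) to realize signed weights:

```python
def crossbar_vmm(v_in, g_pos, g_neg):
    """Vector-matrix multiply as a memristor crossbar performs it.

    v_in: input voltages applied to the rows
    g_pos, g_neg: non-negative conductance matrices (columns = outputs);
    the effective signed weight of each cell is g_pos - g_neg, since a
    single memristor can only realize a non-negative conductance.
    Output currents sum along each shared column wire (Kirchhoff's law).
    """
    n_out = len(g_pos[0])
    currents = [0.0] * n_out
    for i, v in enumerate(v_in):
        for j in range(n_out):
            # Ohm's law per cell, summed on the column.
            currents[j] += v * (g_pos[i][j] - g_neg[i][j])
    return currents

# 2 inputs, 2 outputs; effective weights [[+1.0, -2.0], [0.5, 0.0]].
i_out = crossbar_vmm([1.0, 2.0],
                     g_pos=[[1.0, 0.0], [0.5, 0.0]],
                     g_neg=[[0.0, 2.0], [0.0, 0.0]])  # → [2.0, -2.0]
```

In a physical crossbar all row-column products happen simultaneously, which is the source of the density and parallelism advantages noted above.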
Memristors have also been utilized in neuron implementations [1906, 1907, 1908, 1909, 1910, 1416, 1911, 1912, 1913]. For example, memristive circuits have been used to generate complex spiking behavior [1914, 1915, 1916]. In another case, a memristor has been utilized to add stochasticity to an implemented neuron model [1208]. Memristors have also been used to implement Hodgkin-Huxley axons in hardware as part of a neuron implementation [1917, 1918].
It is worth noting that there are a variety of issues associated with using memristors for neuromorphic implementations. These include issues with memristor behavior that can seriously affect the performance of STDP [1919, 1920, 1921], sneak paths [1922], and geometry variations [1923]. It is also worth noting that a fair amount of theory about memristive neural networks has been established, including stabilization [1924, 1925, 1926, 1927, 1928, 1929, 1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943, 1944, 1945, 1946, 1947, 1948, 1949, 1950, 1951, 1952, 1953, 1954, 1955, 1956, 1957, 1958, 1959, 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982], synchronization [1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1927, 1930, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 1947, 2020, 2021, 2022, 2023, 2024, 2025, 2026, 2027, 2028, 2029, 2030, 2031, 2032, 2033, 2034, 2035, 2036, 1974, 2037, 2038, 2039, 1976, 2040, 2041, 2042, 2043, 2044, 2045, 1981, 2046], and passivity [2047, 2048, 2049, 2050, 2051, 2052, 2053, 2054, 2055, 2056, 2057, 2058, 2059, 2060, 2061, 2062, 2063] criteria for memristive neural networks. However, these works are typically done with respect to ideal memristor models and may not be realistic in fabricated systems.
V-B2 CBRAM and Atomic Switches
Conductive-bridging RAM (CBRAM) has also been utilized in neuromorphic systems. Like resistive RAM (ReRAM), which is implemented using metal-oxide memristors or memristive materials, CBRAM is a nonvolatile memory technology; it differs from resistive RAM in that it utilizes electrochemical properties to form and dissolve connections. CBRAM is fast, nanoscale, and has very low power consumption [2064]. It has been used to implement synapses [2064, 2065, 2066, 1384, 2067, 1390, 1408, 2068, 299, 1439, 1027, 2069] and neurons [2070, 457]. Similarly, atomic switches, which are nanodevices related to resistive memory or memristors, control the diffusion of metal ions to create and destroy an atomic bridge between two electrodes [2071]. Atomic switches have been fabricated for neuromorphic systems; they are typically utilized to implement synapses and have been shown to implement synaptic plasticity in a similar way to memristor-based approaches [2072, 2073, 2074, 2075, 2076, 2077, 2078, 2079].
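The synaptic plasticity these devices are used to reproduce is most often modeled with the pair-based STDP rule, under which a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened otherwise. A minimal sketch, with illustrative amplitudes and time constants:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic spike
    (t_pre < t_post), the synapse is potentiated; otherwise it is
    depressed. Times in ms; the magnitude decays exponentially with
    the spike-timing difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation
    return -a_minus * math.exp(dt / tau)       # depression
```

In device implementations, overlapping voltage pulses applied across the device play the role of the pre- and post-spike times, and the resulting conductance change approximates this curve.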
V-B3 Phase Change Memory
Phase change memory elements have been utilized in neuromorphic systems, usually to achieve high density. They are most commonly used for synapse implementations, frequently realizing synapses that exhibit STDP [2080, 2081, 2082, 2083, 2084, 2085, 2086, 2087, 261, 2088, 2089, 2090, 2091, 2092, 2093, 2094, 2095, 2096, 2097, 2098, 2099], or for synapse weight storage [2100, 2101, 2102, 2103], but they have also been used to implement both neurons and synapses [2104, 2105, 2106].
V-B4 Spin Devices
One of the proposed beyond-CMOS technologies for neuromorphic computing is spintronics (i.e., magnetic devices). Spintronic devices and components have been considered for neuromorphic implementation because they allow for a variety of tunable functionalities, are compatible with CMOS, and can be implemented at the nanoscale for high density. The types of spintronic devices utilized in neuromorphic systems include spin-transfer torque devices, spin-wave devices, and magnetic domain walls [2107, 2108, 2109, 2110, 2111]. Spintronic devices have been used to implement neurons [2112, 2113, 1880, 2114, 1853, 2115, 2116, 2117, 2118], synapses that usually incorporate a form of online learning such as STDP [2119, 2120, 2121, 2122, 2123, 2124, 2125, 2126], and full networks or network modules [2127, 2128, 2129, 2130, 2131, 2132, 2133, 2134, 2135, 2136, 2137, 2138, 2139, 2140, 2141, 2142, 2143, 2144].
V-B5 Floating-Gate Transistors
Floating-gate transistors, commonly used in digital storage elements such as flash memory [2145], have been utilized frequently in neuromorphic systems. As Aunet and Hartmann note, floating-gate transistors can be utilized as analog amplifiers, and can be used in analog, digital, or mixed-signal circuits for neuromorphic implementation [2146]. The most frequent uses for floating-gate transistors in neuromorphic systems have been either as analog memory cells for synaptic weight and/or parameter storage [2147, 2148, 2149, 2150, 2151, 2152, 2153, 2154, 2155] or as a synapse implementation that usually includes a learning mechanism such as STDP [2156, 2157, 2158, 2159, 2160, 2161, 2162, 281, 2163, 2164, 285, 2165, 2166, 259, 2167, 2168]. However, floating-gate transistors have also been used to implement a linear threshold element that could be utilized for neurons [2146], a full neuron implementation [2169], dendrite models [2170], and to estimate firing rates of silicon neurons [2171].
V-B6 Optical
Optical implementations, and implementations that include optical or photonic components, are popular for neuromorphic systems [2172, 2173, 2174, 2175, 2176, 2177, 2178, 2179, 2180, 2181]. In the early days of neuromorphic computing, optical implementations were considered because they are inherently parallel, but it was also noted that implementing storage is difficult in optical systems [2182], so optical implementations fell out of favor for several decades. In more recent years, optical implementations and photonic platforms have re-emerged because of their potential for ultrafast operation, relatively moderate complexity, and programmability [2183, 2184]. Over the course of the development of neuromorphic systems, optical and/or photonic components have been utilized to build different components within neuromorphic implementations. These include optical or optoelectronic synapses in early neuromorphic implementations [2185, 2186] and more recent optical synapses, including some using novel materials [2187, 2188, 2189, 2190, 2191]. There have also been several proposed optical or photonic neuron implementations in recent years [2192, 2193, 150, 2194, 2195, 2196, 2197].
V-C Materials for Neuromorphic Systems
One of the key areas of development in neuromorphic computing in recent years has been the fabrication and characterization of materials for neuromorphic systems. Though we are primarily focused on the computing and system components of neuromorphic computing, we also want to emphasize the variety of new materials and nanoscale devices being fabricated and characterized for neuromorphic systems by the materials science community.
Atomic switches and CBRAM are two of the common nanoscale devices that have been fabricated with different materials to produce different behaviors. A review of different types of atomic switches for neuromorphic systems is given in [2078]; common materials for atomic switches are Ag_2S [2072, 2073, 2076], Cu_2S [2074], Ta_2O_5 [2077], and WO_3-x [2079]. Different materials for atomic switches can exhibit different switching behavior under different conditions. As such, the selection of the appropriate material governs how the atomic switch will behave and will likely be application-dependent. CBRAM has been implemented using GeS_2/Ag [1384, 299, 1439, 1027, 2066, 457, 2068], HfO_2/GeS_2 [2067], Cu/Ti/Al_2O_3 [2070], Ag/Ge_0.3Se_0.7 [2198, 1408, 2069], Ag_2S [2199, 2200, 2201], and Cu/SiO_2 [2069]. As with atomic switches, the switching behavior of CBRAM devices depends upon the material selected; the stability and reliability of the device are also material-dependent.
There are a large variety of implementations of memristors. Perhaps the most popular memristor implementations are based on transition metal oxides (TMOs). For metal-oxide memristors, a large variety of materials are used, including HfO_x [2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210], TiO_x [2211, 2212, 2213, 2214, 2215, 2216], WO_x [2217, 2218, 2219, 2220, 2221], SiO_x [2222, 2223], TaO_x/TiO_x [2224, 2225], NiO_x [2226, 2227, 2228], TaO_x [2229, 2230, 2231], FeO_x [2232], AlO_x [2233, 2234], HfO_x/ZnO_x [2235], and PCMO [2236, 2237, 2238, 2239, 2240, 2241]. Different metal-oxide memristor types can produce different numbers and types of resistance states, which govern the weight values that can be “stored” on the memristor. They also have different endurance, stability, and reliability characteristics.
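Because a given device offers only a limited set of stable resistance states, weights trained in software must be snapped onto those discrete levels before being mapped to hardware. A minimal sketch of nearest-level quantization; the level count and weight range here are illustrative, not properties of any particular material:

```python
def quantize_weights(weights, n_levels=8, w_min=-1.0, w_max=1.0):
    """Snap continuous weights to the nearest of n_levels device states.

    Models mapping trained weights onto a device with a fixed number of
    evenly spaced resistance (conductance) states; values outside the
    representable range are clipped to the device limits.
    """
    step = (w_max - w_min) / (n_levels - 1)
    out = []
    for w in weights:
        w = min(w_max, max(w_min, w))        # clip to the device range
        level = round((w - w_min) / step)    # nearest discrete state
        out.append(w_min + level * step)
    return out

# A device with 5 states spanning [-1, 1] has a resolution of 0.5.
q = quantize_weights([0.30, -0.95, 0.0], n_levels=5)  # → [0.5, -1.0, 0.0]
```

The fewer the states a material supports, the larger the quantization error, which is one reason the number of achievable resistance states is a key figure of merit in the materials comparisons above.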
A variety of other materials for memristors have also been proposed. For example, spin-based magnetic tunnel junction memristors based on MgO have been proposed for implementations of both neurons and synapses [2242], though it has been noted that their limited range of resistance levels makes them less applicable for storing synaptic weights [2231]. Chalcogenide memristors [2243, 2244, 2245] have also been used to implement synapses; one of the reasons given for utilizing chalcogenide-based memristors is their ultrafast switching speed, which allows processes like STDP to take place at the nanosecond scale [2243]. Polymer-based memristors have been utilized because of their low cost and tunable performance [2246, 2211, 2247, 2248, 2249, 2250, 2251, 2252, 2253, 2254]. Organic memristors (which include organic polymers) have also been proposed [2255, 2256, 2257, 2211, 2258, 2259, 2260, 2261, 2262, 2263, 2264, 2265, 2266].
Ferroelectric materials have been considered for building analog memory for synaptic weights [2267, 2268, 2269, 2270, 2271, 2272] and for synaptic devices [2273, 2274, 2275, 2276], including those based on ferroelectric memristors [2277, 2278, 2279]. They have primarily been investigated as three-terminal synaptic devices (as opposed to other implementations, which may be two-terminal). Three-terminal synaptic devices can realize learning processes such as STDP in the device itself [2273, 2278], rather than requiring additional circuitry to implement STDP.
Graphene has more recently been incorporated in neuromorphic systems in order to achieve more compact circuits. It has been utilized for both transistors [2280, 2281, 2282] and resistors [2283] for neuromorphic implementations and in full synapse implementations [2284, 2285].
Another material considered for some neuromorphic implementations is the carbon nanotube. Carbon nanotubes have been proposed for use in a variety of neuromorphic components, including dendrites on neurons [2286, 2287, 2288, 2289, 2290], synapses [2291, 2292, 2293, 2294, 235, 2295, 2296, 2297, 2298, 2299, 2300, 2301, 2302, 2303, 2304, 2305, 2306, 2307], and spiking neurons [139, 2308, 2309, 2310]. Carbon nanotubes have been utilized because they can provide both the scale (number of neurons and synapses) and the density (in terms of synapses) that may be required for emulating or simulating biological neural systems. They have also been used to interact with living tissue, indicating that carbon-nanotube-based systems may be useful in prosthetic applications of neuromorphic systems [2297].
A variety of synaptic transistors have also been fabricated for neuromorphic implementations, including silicon-based synaptic transistors [2311, 2312] and oxide-based synaptic transistors [2313, 2314, 2315, 2316, 2317, 2318, 2319, 2320, 2321, 2322, 2323, 2324, 2325]. Organic electrochemical transistors [2326, 2327, 2328, 2329, 2330, 2331] and organic nanoparticle transistors [2332, 2333, 2334, 2335] have also been utilized to build neuromorphic components such as synapses. Similar to organic memristors, organic transistors are being pursued because of their low-cost processing and flexibility. Moreover, they are a natural fit for implementations of brain-machine interfaces or any kind of chemical or biological sensor [2326]. Interestingly, groups are also pursuing the development of transistors within polymer-based membranes that can be used in neuromorphic applications such as biosensors [2336, 2337, 2338, 2339, 2340].
There is a very large amount of fascinating work being done in the materials science community to develop devices for neuromorphic systems out of novel materials in order to build smaller, faster, and more efficient neuromorphic devices. Different materials for even a single device type can have wildly different characteristics, and these differences propagate upward through the device, high-level hardware, supporting software, and model and algorithm levels of neuromorphic systems. Thus, as a community, it is important that we understand the implications different materials may have on functionality, which will almost certainly require close collaboration with the materials science community moving forward.
V-D Summary and Discussion
In this section, we have looked at hardware implementations at the full device level, at the device component level, and at the materials level. There is a significant body of work in each of these areas. At the system level, there are fully functional neuromorphic systems, including both programmable architectures such as FPGAs and FPAAs, as well as custom chip implementations that are digital, analog, or mixed analog/digital. A wide variety of novel device components beyond the basic circuit elements used in most device development have been utilized in neuromorphic systems. The most popular new component is the memristor, but other device components are becoming popular, including other memory technologies such as CBRAM and phase change memory, as well as spin-based components, optical components, and floating-gate transistors. There are also a large variety of materials being used to develop device components, and the properties of these materials will have fundamental effects on the way future neuromorphic systems will operate.
VI Supporting Systems
In order for neuromorphic systems to be feasible as a complementary architecture for future computing, we must consider the supporting tools required for usability. Two of the key supporting systems for neuromorphic devices are communication frameworks and supporting software. In this section, we briefly discuss some of the work in these two areas.
VI-A Communication
Communication for neuromorphic systems includes both intra-chip and inter-chip communication. Perhaps the most common implementation of inter-chip communication is address event representation (AER) [2343, 2344, 2345, 2346, 2347, 2348, 1785, 2349, 2350]. In AER communication, each neuron has a unique address; when a spike that must traverse between chips is generated, the address specifies where it is routed. Custom PCI boards for AER have been implemented to optimize performance [2351, 2352]. Occasionally, inter-chip communication interfaces have their own hardware implementations, usually in the form of FPGAs [2353, 2354, 2355, 2356, 2357]. SpiNNaker’s interconnect system is one of its most innovative components; the SpiNNaker chips are interconnected in a toroidal mesh, and an AER communication strategy is used for inter-chip communication [2358, 2359, 2360, 2361, 2362, 2363, 2364, 1585, 2365, 1587, 2366]. It is this communication framework that enables SpiNNaker’s scalability, allowing tens of thousands of chips to be utilized together to simulate activity in a single network. A hierarchical version of AER utilizing a tree structure has also been implemented [2367, 2368]. One of the key properties of AER communication is that it is asynchronous. In contrast, BrainScaleS utilizes an isochronous inter-chip communication network, meaning that events occur at regular intervals [2369, 2370].
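The essence of AER is that a spike on the wire is nothing more than the identity of the neuron that fired, optionally tagged with a timestamp. A minimal sketch of packing and unpacking such an event word; the field widths are illustrative and correspond to no particular chip's format:

```python
ADDR_BITS = 16   # neuron address field width (illustrative)
TIME_BITS = 16   # timestamp field width (illustrative)

def aer_encode(address, timestamp):
    """Pack one spike event into a single word: [timestamp | address]."""
    assert 0 <= address < (1 << ADDR_BITS)
    return ((timestamp & ((1 << TIME_BITS) - 1)) << ADDR_BITS) | address

def aer_decode(word):
    """Unpack an event word back into (address, timestamp)."""
    return word & ((1 << ADDR_BITS) - 1), word >> ADDR_BITS

# Neuron 42 fires at tick 7.
word = aer_encode(42, 7)
addr, t = aer_decode(word)   # → (42, 7)
```

Because only addresses of neurons that actually fired are transmitted, AER exploits the sparsity of spiking activity: quiet neurons consume no bandwidth at all.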
AER communication has also been utilized for on-chip communication [2371, 2372], though its limits in that setting have been noted [2373]. As such, several other approaches have been used to optimize intra-chip communication. For example, in early work with neuromorphic systems, buses were utilized for some on-chip communication systems [2374, 2375]; a later on-chip communication optimization removed buses from the communication framework entirely to improve performance [879]. Vainbrand and Ginosar examined different network-on-chip architectures for neural networks, including mesh, shared bus, tree, and point-to-point, and found network-on-chip multicast to give the highest performance [2376, 2377]. Ring-based on-chip communication has also been utilized successfully [2378, 2379]. Communication systems specifically for feedforward networks have also been studied [2380, 2381, 2382].
One of the common beyond-Moore's-law technologies for improving communication performance, utilized across a variety of computing platforms (including traditional von Neumann systems), is three-dimensional (3D) integration. 3D integration has been utilized in neuromorphic systems since the early days of the field, especially for pattern recognition and object recognition tasks [2383, 2384]. In more recent applications, 3D integration has been used much as it would be in von Neumann architectures, with memory stacked on processing [1058]. It has also been utilized to stack neuromorphic chips. Through-silicon vias (TSVs) are commonly used to physically implement 3D integration in neuromorphic systems [2066, 2385, 2386, 2387, 1058], partly because neuromorphic systems help mitigate some of the issues that arise with TSVs, such as parasitic capacitance [2388]; however, other technologies have also been utilized for 3D integration, such as microbumps [2389]. 3D integration is commonly combined with a variety of other technologies in neuromorphic systems, such as memristors [2390, 2391, 2392, 866, 2393], phase change memory [2093], and CMOS-molecular (CMOL) systems [2394].
VI-B Supporting Software
Supporting software will be a vital component in making neuromorphic systems truly successful and accepted both within and outside the computing community. However, there has not been much focus on developing the appropriate tools for these systems. In this section, we discuss some efforts in developing supporting software for different neuromorphic implementations and use cases.
One important set of software tools consists of custom hardware synthesis tools [2395, 2396, 871, 628, 2397, 2398, 2399, 2400, 2401]. These synthesis tools typically take a relatively high-level description and convert it to a very low-level representation of neural circuitry that can be used to implement neuromorphic systems. They tend to generate application-specific circuits; that is, these tools work within the confines of a particular neuromorphic system while generating circuits tailored to particular applications.
A second set of software tools for neuromorphic systems are tools that are meant for programming existing neuromorphic systems. These fall into two primary categories: mapping and programming. Mapping tools usually take an existing neural network model representation, typically trained offline using existing methods such as backpropagation, and convert or map that model to a particular neuromorphic architecture [414, 2402, 2403, 2404, 2405, 2406, 2407, 2408, 2409, 2410, 2411, 1588, 1589, 2412, 2413]. These tools typically take into account restrictions associated with the hardware, such as connectivity restrictions or parameter value bounds, and make appropriate adaptations to the network representation to work within those restrictions.
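A minimal sketch of one such mapping step follows, under assumed hardware limits (a hypothetical 16-level discrete weight range and a maximum fan-in per neuron); real mapping tools must also handle routing, topology, and timing constraints.

```python
# Toy mapping step: adapt an offline-trained weight matrix to assumed
# hardware restrictions -- discrete weight levels in [-w_max, w_max]
# and a maximum fan-in per neuron. The limits are illustrative only.

def map_weights(weights, levels=16, w_max=1.0, max_fan_in=4):
    """Quantize each row to `levels` evenly spaced values and keep only
    the `max_fan_in` largest-magnitude inputs per neuron (row)."""
    mapped = []
    q = 2 * w_max / (levels - 1)  # spacing between representable levels
    for row in weights:
        # Prune connections down to the allowed fan-in.
        keep = sorted(range(len(row)), key=lambda i: -abs(row[i]))[:max_fan_in]
        new_row = [0.0] * len(row)
        for i in keep:
            w = max(-w_max, min(w_max, row[i]))              # clip to bounds
            new_row[i] = round((w + w_max) / q) * q - w_max  # snap to a level
        mapped.append(new_row)
    return mapped

# One neuron with five candidate inputs, mapped onto 3 allowed synapses.
print(map_weights([[0.83, -0.02, 1.4, 0.05, -0.61]], max_fan_in=3))
```

The result is a network that respects the hardware's parameter bounds while preserving the strongest learned connections, which is the essence of what the cited mapping tools automate.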
Programming tools, in contrast to mapping tools, are built so that a user can explicitly program a particular neuromorphic architecture [2414, 2415, 2342, 2416, 2417, 2418, 2419, 2420, 2421, 2422, 1515, 2423, 2424, 566]. These allow the user to program at a low level by setting different parameter and topology configurations, or by utilizing custom training methods built specifically for a particular neuromorphic architecture. The Corelet paradigm used in TrueNorth programming fits into this category [2425]. Corelets are pre-programmed modules that accomplish different tasks and can be used as building blocks to program networks for TrueNorth that solve more complex tasks. There have also been programming languages for neuromorphic systems such as PyNN [2426, 2427], PyNCS [2428], and even a neuromorphic instruction set architecture [2429]. These languages have been developed to allow users to describe and program neuromorphic systems at a high level.
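The building-block style can be illustrated with a toy composition sketch. This is inspired by, but is emphatically not, the actual Corelet API: the `Module` class, `compose` helper, and the two example blocks are all hypothetical.

```python
# Toy illustration of building-block programming for neuromorphic
# systems (hypothetical, NOT the TrueNorth Corelet API): small
# pre-built modules are wired together into larger programs.

class Module:
    """A hypothetical pre-programmed block wrapping one function."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, *inputs):
        return self.fn(*inputs)

def compose(first, second):
    """Wire first's output into second, yielding a new module."""
    return Module(f"{first.name}->{second.name}",
                  lambda *xs: second(first(*xs)))

# Two toy blocks: a difference filter and a thresholder, composed into
# a pipeline that emits a 1 wherever the signal increases.
diff = Module("diff", lambda xs: [b - a for a, b in zip(xs, xs[1:])])
spikes = Module("spikes", lambda xs: [1 if x > 0 else 0 for x in xs])
pipeline = compose(diff, spikes)
print(pipeline([0, 1, 1, 3, 2]))  # [1, 0, 1, 0]
```

The point of the paradigm is that users assemble verified blocks rather than configuring individual neurons and synapses by hand.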
Software simulators have also been key in developing usable neuromorphic systems [2430, 2431, 2065, 2342, 2432, 2433, 2407, 2434, 2435, 2436, 2417, 2437, 2438, 2439, 1587, 1515, 2440]. Software-based simulators are vital for verifying hardware performance, testing potential hardware changes, and developing and using training algorithms. If the hardware has not been widely deployed or distributed, software simulators can also be central to building a user base, even when the hardware has not been fabricated beyond simple prototypes. Visualization tools that show what is happening inside neuromorphic systems help users understand how these systems solve problems and can inspire further development within the field [2342, 2441, 2341, 2416, 2419]. These visualization tools are often used in combination with software simulations, and they can provide detailed information about what might be occurring at a low level in the hardware. Figure 14 provides two examples of visualizations for neuromorphic systems.
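At its simplest, such a simulator steps a neuron model forward in time. The sketch below simulates a single leaky integrate-and-fire (LIF) neuron with illustrative parameters (not taken from any particular chip), of the kind these software tools use to prototype hardware behavior.

```python
# A toy leaky integrate-and-fire (LIF) simulator of the kind used to
# prototype spiking hardware in software. The threshold, reset, and
# leak values are illustrative, not from any cited system.

def simulate_lif(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in   # decay the membrane, then integrate input
        if v >= v_thresh:     # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset       # reset membrane potential after spiking
    return spikes

# A constant drive of 0.3 charges the membrane until it crosses
# threshold, producing a regular spike train.
print(simulate_lif([0.3] * 20))  # [3, 7, 11, 15, 19]
```

Even a simulator this small lets one verify spike timing against an analytical expectation before committing a design to hardware, which is the role the cited simulators play at much larger scale.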
VI-C Summary
When building a neuromorphic system, it is extremely important to consider how the system will actually be used in real computing environments and by real users. Supporting systems, including on-chip and inter-chip communication as well as supporting software, will be necessary to enable real utilization of neuromorphic systems. Compared to the number of hardware implementations of neuromorphic systems, there are very few works that focus on developing the supporting software that will enable ease of use for these systems. There is significantly more work to be done, especially on the supporting software side, within the field of neuromorphic computing. It is absolutely necessary that the community develop software tools alongside hardware moving forward.
VII Applications
The “killer” applications for neuromorphic computing, or the applications that best showcase the capabilities of neuromorphic computers and devices, have yet to be determined. Obviously, various neural network types have been applied to a wide variety of applications, including image [2442], speech [2443], and data classification [2444], control [2445]
, and anomaly detection [2446]. Implementing neural networks for these types of applications directly in hardware can potentially deliver lower power consumption, faster computation, and a smaller footprint than a von Neumann architecture. However, many of the application spaces that we discuss in this section do not actually require any of those characteristics. In addition, spiking neural networks have not been studied to their full potential in the way that artificial neural networks have, and it may be that physical neuromorphic hardware is required in order to determine what the killer applications for spiking neuromorphic systems will be moving forward. This goes hand-in-hand with developing appropriate algorithms for neuromorphic systems, as discussed in Section IV. Here, we discuss a variety of applications of neuromorphic systems. We omit one of the major application areas, namely utilizing neuromorphic systems to study neuroscience via faster, more efficient simulation than is possible on traditional computing platforms; instead, we focus on other real-world applications to which neuromorphic systems have been applied. The goal of this section is to convey the scope of problems neuromorphic systems have successfully tackled, and to inspire the reader to apply neuromorphic systems to their own applications. Figure 15 gives an overview of the types of applications of neuromorphic systems and their relative popularity. There is a broad set of neuromorphic systems that have been developed entirely based on particular sensory systems and applied to those application areas.
The most popular of these "sensing" style implementations are vision-based systems, which often have architectures that are entirely based on replicating various characteristics of biological visual systems [2447, 1521, 1522, 1161, 1597, 1598, 1710, 2448, 1711, 2449, 324, 2450, 2451, 2292, 2452, 2453, 1054, 2454, 416, 2351, 1762, 2455, 2456, 1057, 2457, 2458, 2459, 2460, 2461, 510, 1523, 2371, 2462, 1767, 2463, 2464, 584, 2465, 1186, 2466, 2467, 2468, 2469, 2470, 2160, 1179, 1723, 2471, 2472, 2473, 2474, 2475, 2476, 2477, 1715, 2478, 1607, 2479, 1717, 1608, 344, 345, 2480, 346, 1459, 2481, 2482, 1456, 2483, 2484, 1524, 2485, 2486, 2487, 1718, 2488, 2489, 2490, 1612, 2491, 1613, 2165, 2492, 2493, 2494, 1725, 2495, 1615, 2496, 2497, 2498, 2499, 2500, 1727, 1525, 2501, 2502, 1643, 2503, 1432, 1644, 1774, 2504, 2505, 2506, 2507, 299, 1439, 2508, 469, 1738, 2509, 2510, 1881, 2511, 2512, 2513]. Though vision-based systems are by far the most popular sensory-based systems, there are also neuromorphic auditory systems [1269, 2441, 2514, 2463, 335, 1460, 1461, 2515, 1283, 2516, 1418, 2393, 127, 2517, 1434, 2518, 374, 2519, 299, 1439, 1462, 2520, 1649, 2521, 1744], olfactory systems [2522, 2523, 2524, 2525, 2526, 2527, 720, 2528, 2529, 2530, 2531, 1514, 1515], and somatosensory or touch-based systems [1520, 2532, 2533, 2534, 2535, 2536, 2537].
Another class of applications for neuromorphic systems consists of those that either interface directly with biological systems or are implanted in or worn as part of other objects, usually for medical treatment or monitoring [2538]. A key feature of all of these applications is that they require devices that can be made very small and that consume very low power. Neuromorphic systems have become more popular in recent years in brain-machine interfaces [412, 2539, 2540, 2541, 237, 2538, 2542, 1625, 2543, 590, 2544, 2545]. By their very nature, spike-based neuromorphic models communicate using the same type of signaling as biological systems, so they are a natural choice for brain-machine or brain-computer interfaces. Other wearable or implantable neuromorphic systems have been developed for pacemakers or defibrillators [651, 652, 653, 18, 602], retina implants [2546], wearable fall detectors for elderly users [1019], and prosthetics [2547].
Robotics applications are also very common for neuromorphic systems. Autonomous robots often require very small and power-efficient systems, and many of the requirements of robotics, including motor control, are applications that have been successfully demonstrated in neural networks. Common applications of neuromorphic systems in robotics include learning a particular behavior [2548, 2549], locomotion control or control of particular joints to achieve a certain motion [1885, 1261, 2550, 1279, 663, 1082, 1083, 1527, 67, 68, 69, 1784, 2551, 361], social learning [2552, 2553], and target or wall following [606, 1576, 2554, 1682, 498]. Thus far, the most common robotics use of neuromorphic implementations is for autonomous navigation tasks [195, 2555, 1563, 1503, 2556, 1077, 1671, 1339, 2557, 1329, 1330, 1331, 1332, 1333, 393, 1716, 532, 2379, 2558, 2559, 1539, 486, 547, 2560, 16, 735]. In the same application space as robotics is the generation of motion through central pattern generators (CPGs). CPGs generate oscillatory motions, such as the swimming motions of lampreys or walking gaits. There are a variety of implementations of CPGs in neuromorphic systems [617, 618, 1622, 615, 616, 1084, 619, 1609, 1313, 1561, 1335, 1729, 1637, 1647, 1731, 1732, 1733, 1734, 1739, 1775].
Control tasks have been popular for neuromorphic systems because they typically require real-time performance, are often deployed in real systems that demand small volume and low power, and have a temporal processing component, so they benefit from models that utilize recurrent connections or synaptic delays. A large variety of control applications have utilized neuromorphic systems [863, 1085, 895, 920, 1017, 932, 687, 688, 872, 466, 962, 2561, 2562, 902, 903, 1051, 1660], but by far the most common control test case is the cart-pole problem, also known as the inverted pendulum task [1340, 512, 792, 2563, 902, 903, 1008, 487, 1337, 1045]. Neuromorphic systems have also been applied to video games, such as Pong [1563], Pac-Man [2405], and Flappy Bird [1337].
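The cart-pole benchmark itself can be sketched compactly. Below is a simplified, frictionless version of the standard dynamics with Euler integration; a neuromorphic controller would read the state each step and choose the push force. The constants follow the widely used formulation of this benchmark, but this is an illustrative sketch, not the setup of any cited work.

```python
import math

# Simplified frictionless cart-pole (inverted pendulum) dynamics,
# stepped with Euler integration. Constants are the commonly used
# benchmark values; the state is (x, x_dot, theta, theta_dot).

G, M_CART, M_POLE, L, DT = 9.8, 1.0, 0.1, 0.5, 0.02
M_TOTAL = M_CART + M_POLE

def step(state, force):
    """Advance the cart-pole state by one Euler step under `force`."""
    x, x_dot, theta, theta_dot = state
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + M_POLE * L * theta_dot ** 2 * sin_t) / M_TOTAL
    theta_acc = (G * sin_t - cos_t * temp) / (
        L * (4.0 / 3.0 - M_POLE * cos_t ** 2 / M_TOTAL))
    x_acc = temp - M_POLE * L * theta_acc * cos_t / M_TOTAL
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

# With no control force, a slightly tilted pole falls over within a
# couple of simulated seconds.
s, peak = (0.0, 0.0, 0.05, 0.0), 0.0
for _ in range(100):
    s = step(s, 0.0)
    peak = max(peak, abs(s[2]))
print(peak > 0.5)  # True: the uncontrolled pole falls well past 0.5 rad
```

The controller's job, whether a spiking network or anything else, is to keep `theta` near zero by choosing `force` at each step; the task's tight timing loop is what makes it a natural neuromorphic benchmark.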
An extremely common use of both neural networks and neuromorphic implementations has been various image-based applications, including edge detection [520, 339, 829, 2287, 783, 2564, 220, 922, 2107, 1260, 2118, 2141, 1873], image compression [875, 641, 1263, 1243, 721, 960], image filtering [338, 1255, 1551, 1886, 1887, 1516, 1779, 1112, 15, 1267, 2565, 2566, 1492], image segmentation [2567, 490, 1712, 1713, 141, 2568, 1679, 1273, 1274, 541, 921, 1278, 2569, 1277, 1272]
, and feature extraction [388, 1502, 2065, 1269, 1270, 827, 392, 1658, 1606, 2570, 1165, 2571, 608, 1611, 1719, 854, 2572, 2132]. Image classification, detection, or recognition is an extremely popular application for neural networks and neuromorphic systems. The MNIST data set, along with subsets and variations of it, has been used to evaluate many neuromorphic implementations [264, 967, 941, 1213, 1906, 1907, 2080, 267, 1563, 2389, 1307, 1371, 862, 2100, 2101, 2102, 1378, 1055, 1379, 2573, 1827, 790, 2574, 1212, 2575, 2342, 2576, 584, 2577, 1305, 1306, 1390, 2224, 1063, 1064, 1065, 1922, 2568, 585, 1829, 2429, 1831, 2433, 2407, 2578, 1061, 1833, 395, 2300, 2579, 1835, 1301, 1840, 1842, 1845, 1846, 1848, 1567, 2241, 2580, 1818, 290, 2581, 1183, 1184, 1297, 491, 1298, 2418, 2215, 2582, 1205, 2583, 2584, 1800, 1427, 1801, 2130, 1852, 1214, 2585, 1853, 2115, 2116, 2122, 2137, 1433, 557, 2086, 1207, 2586, 2103, 2587, 297, 1855, 1856, 2588, 1593, 593, 1208, 2589, 1796, 2590, 211, 2591, 316, 1797, 936, 2592, 2209, 1903, 1904, 1870, 1873, 1877, 2593, 2594, 1296, 2126, 2143, 2413]. Other digit recognition tasks [2395, 2396, 788, 813, 1798, 1380, 2283, 1149, 2595, 1819, 706, 2596, 448, 1415, 2597, 1852, 852, 853, 798, 731, 1130, 1446, 1878, 2593, 1895, 2598, 2599] and general character recognition tasks [2600, 197, 2601, 1381, 2144, 946, 888, 670, 2602, 2603, 899, 989, 1356, 1830, 1312, 831, 832, 2070, 2604, 991, 757, 758, 2605, 1401, 438, 2606, 1402, 1304, 1157, 714, 1349, 2607, 2411, 1923, 2608, 1316, 1317, 2609, 545, 2585, 2135, 2128, 2140, 2141, 1435, 559, 1902, 1158, 2610, 730, 768, 1858, 1891, 1892, 1860, 1543, 2611, 478, 1558, 1022, 2350, 665, 2612, 1172, 1714, 992, 1552, 231] have also been very popular.
Recognition of other patterns, such as simple shapes or pixel patterns, has also been used as an application for neuromorphic systems [308, 2202, 1287, 821, 2093, 598, 599, 580, 953, 805, 806, 217, 1289, 603, 528, 2613, 586, 1455, 1398, 1399, 807, 808, 2085, 1257, 2096, 2097, 1180, 533, 1113, 1152, 1839, 1153, 2112, 689, 1167, 446, 955, 2277, 1420, 765, 1851, 1423, 1426, 2614, 549, 1318, 1702, 1795, 375, 1857, 785, 119, 1441, 2392, 1957, 1445, 1867, 1868, 1879, 1880, 191]. Other image classification tasks that have been demonstrated on neuromorphic systems include classifying real-world images such as traffic signs
[1305, 1065, 1018], face recognition or detection [1885, 582, 1067, 1284, 1204, 2615, 2130, 2585, 2591], car recognition or detection [1883], detecting air pollution in images [2616], detection of manufacturing defects or faults [1692], hand gesture recognition [1038, 558], human recognition [1422], object texture analysis [2617], and other real-world image recognition tasks [1037, 825, 1058, 2127]. Several common real-world image data sets have been evaluated on neuromorphic systems, including the Caltech-101 data set [1068, 867], the Google Street View House Numbers (SVHN) data set [2580, 2585, 2590, 316, 2591], the CIFAR-10 data set [587, 1066, 592, 1069, 2130, 2590, 2591], and ImageNet
[1827]. Evaluations of how well neuromorphic systems implement AlexNet, a popular convolutional neural network architecture for image classification, have also been conducted [1056, 1074]. Examples of images from the MNIST, CIFAR-10, and SVHN data sets are given in Figure 16 to demonstrate the variety of images that neuromorphic systems have successfully classified or recognized.
A variety of sound-based recognition tasks have also been considered with neuromorphic systems. Speech recognition, for example, has been a common application for neuromorphic systems [2448, 628, 812, 976, 978, 502, 1665, 2621, 1546, 984, 1030, 2622, 754, 755, 2623, 1004, 1005, 1006, 1206, 1104, 868, 1048, 1463, 1859, 1861, 591, 1031, 2624, 1105]. Neuromorphic systems have also been applied to music recognition tasks [2425, 588, 584]. Both speech and music recognition may require processing temporal components under real-time constraints; as such, neuromorphic systems based on recurrent or spiking neural networks, which have an inherent temporal processing component, are natural fits for these applications. Neuromorphic systems have also been used for other sound-based tasks, including speaker recognition [584], distinguishing voice activity from noise [592], and analyzing sound for identification purposes [336, 337]. Neuromorphic systems have also been applied to noise filtering, translating noisy speech or other signals into clean versions [624, 644, 648, 649].
Applications that utilize video have also been common uses of neuromorphic systems. The most common example for video is object recognition within video frames [2625, 1562, 1565, 2626, 2082, 2416, 2627, 2628, 2629, 794, 1568, 2630, 1242, 1073, 2088, 2631, 1439, 2089, 2090]. This application does not necessarily require a temporal component, as it can analyze video frames as images. However, some of the other applications in video do require a temporal component, including motion detection [2632, 572, 1768, 1769, 383], motion estimation [2633, 1704, 1031], motion tracking [2634, 905, 2635, 917], and activity recognition [2636, 916].
Neuromorphic systems have also been applied to natural language processing (NLP) tasks, many of which require recurrent networks. Example applications in NLP that have been demonstrated using neuromorphic systems include sentence construction
[388], sentence completion [2637, 1097, 2411], question subject classification [581, 583], and document similarity [1239]. Tasks that require real-time performance, the ability to deploy into an environment with a small footprint, and/or low power are common use cases for neuromorphic systems. Smart sensors are one area of interest, including humidity sensors [756], light intensity sensors [1009], and sensors on mobile devices that can be used to classify and authenticate users [2638]. Similarly, anomaly detectors are also applications for neuromorphic systems, including detecting anomalies in traffic [1825], biological data [2639], and industrial data [2639], applications in cyber security [2640, 1807], and fault detection in diesel engines [931] and analog chips [1686, 1687].
General data classification using neuromorphic systems has also been popular. There are a variety of diverse application areas in this space to which neuromorphic systems have been applied, including accident diagnosis [1015], cereal grain identification [657, 659], computer user analysis [2641, 2642], driver drowsiness detection [2434], gas recognition or detection [622, 943, 972], product classification [781], hyperspectral data classification [750], stock price prediction [1537], wind velocity estimation [939], solder joint classification [1050], solar radiation detection [929], climate prediction [966], and applications within high energy physics [982, 1018]. Applications within the medical domain have also been popular, including coronary disease diagnosis [954], pulmonary disease diagnosis [887], deep brain sensor monitoring [404], DNA analysis [1146], heart arrhythmia detection [2434], analysis of electrocardiogram (ECG) [900, 1007, 922, 965], electroencephalogram (EEG) [1103, 1850, 1100, 403], and electromyogram (EMG) [1039, 1103] results, and pharmacology applications [2643]. Benchmarks from the UCI machine learning repository [2644] and the Proben1 data set [2444, 974, 908, 710] have been popular in both neural network and neuromorphic systems, including the following data sets: building [1840, 998], connect4 [1842, 1845], gene [1840, 1842, 1845, 998], glass [1338, 1055, 951], heart [2645, 998], ionosphere [1055, 1029, 2645], iris [573, 1504, 2646, 1055, 983, 305, 952, 2647, 1050, 612, 530, 1342, 1345, 1346, 575, 959, 1303, 215, 1336, 1295], lymphography [1842, 1845], mushroom [1840, 1842, 1845], phoneme [952], Pima Indian diabetes [1504, 1338, 2648, 998, 1336, 1027], semeion [983], thyroid [1840, 1842, 1845, 998], wine [2646, 1055, 1303, 1336], and Wisconsin breast cancer [573, 1338, 1340, 1029, 305, 1288, 612, 2645, 1842, 1845, 998, 959, 1303, 1336, 1295].
These data sets are especially useful because they have been widely used in the literature and can serve as points of comparison across neuromorphic systems and models.
There are a set of relatively simple testing tasks that have been utilized in evaluating neuromorphic system behaviors, especially in the early development of new devices or architectures. In the early years of development, the two spirals problem was a common benchmark [2649, 2650, 2641, 2642, 1315]. Bit parity tasks have also been commonly used [1883, 1884, 1885, 625, 2650, 2651, 657, 658, 659, 661, 217, 680, 1193, 696, 697, 1346, 1349, 2640, 209, 1092, 855, 737]. Basic function approximation has also been a common task for neuromorphic systems, where a mathematical function is specified and the neuromorphic system is trained to replicate its output behavior [614, 196, 2652, 971, 1024, 628, 873, 874, 2397, 1078, 1079, 2653, 643, 419, 420, 421, 2654, 819, 1538, 514, 2655, 983, 984, 2656, 518, 519, 2214, 2657, 1354, 1098, 1304, 715, 1053, 1148]. Temporal pattern recall or classification [605, 407, 411, 390, 1622, 595, 1377, 584, 2658, 341, 351, 355, 2401, 2104, 2659] and spatiotemporal pattern classification [2294, 1370, 1899, 1661, 104, 2416, 2660, 2661, 447, 1771, 1416, 1101, 2662, 2663, 2664, 2419, 2665, 2666, 382, 561, 473] have also been popular with neuromorphic systems, because they demonstrate the temporal processing characteristics of networks with recurrent connections and/or delays on the synapses. Finally, implementing various simple logic gates has been a common test for neural networks and neuromorphic systems, including AND [1224, 2255, 2295, 2667, 896, 1289, 1400, 834, 835, 2668, 1902, 1805, 746], OR [2255, 2295, 1289, 2668, 910, 1902, 1805, 1862], NAND [181, 2669, 2255, 1909, 2211, 864, 2295, 2670, 2301, 910, 1902, 1912], NOR [181, 2669, 2255, 1909, 2211, 864, 2295, 2670, 1400, 2301, 910, 1902, 1912], NOT [2667, 2670]
, and XNOR [1400, 712, 2671, 1902, 1912]. XOR has been especially popular because its results are not linearly separable, which makes it a good test case for more complex neural network topologies [620, 2395, 2396, 628, 629, 633, 1504, 2672, 2673, 1340, 1310, 642, 647, 655, 946, 2641, 2642, 276, 1387, 865, 670, 949, 2674, 304, 2675, 952, 1289, 679, 1291, 954, 682, 1400, 2434, 1052, 2676, 834, 835, 2163, 1507, 2677, 2678, 2668, 234, 223, 693, 1517, 2679, 698, 699, 841, 1304, 1334, 702, 703, 704, 2680, 536, 97, 2681, 1343, 1026, 844, 845, 846, 847, 712, 713, 1347, 1089, 1614, 848, 956, 227, 1348, 957, 2278, 1271, 2379, 959, 2640, 1012, 723, 928, 2671, 726, 1362, 1902, 799, 737, 2624, 741, 1805, 1912, 746, 1128].
In all of the applications discussed above, neuromorphic architectures are utilized primarily as neural networks. However, some works propose utilizing neuromorphic systems for non-neural network applications. Graph algorithms have been one common area of application, as most neuromorphic architectures can be represented as graphs. Example graph algorithms include maximum cut [1200], minimum graph coloring [1156, 1698], traveling salesman [2682, 1147], and shortest path [1131]. Neuromorphic systems have also been used to simulate motion in flocks of birds [1570] and for optimization problems [1697]. Moving forward, we expect to see many more use cases of neuromorphic architectures in non-neural network applications, as neuromorphic computers become more widely available and are considered simply as a new type of computer with characteristics radically different from those of the traditional von Neumann architecture.
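The XOR test mentioned above can be made concrete with a tiny hand-wired network: a single threshold unit cannot compute XOR, since no line separates its positive and negative cases, but one hidden layer of two units suffices. The weights below are hand-chosen for the standard OR/AND construction, not taken from any cited system.

```python
# XOR with hand-set weights: one hidden layer of two threshold units
# solves the non-linearly-separable XOR problem that defeats any
# single threshold unit.

def unit(inputs, weights, bias):
    """McCulloch-Pitts threshold unit: fires (1) if the weighted sum
    plus bias is positive, else stays silent (0)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor(x1, x2):
    h1 = unit((x1, x2), (1, 1), -0.5)     # fires on x1 OR x2
    h2 = unit((x1, x2), (1, 1), -1.5)     # fires on x1 AND x2
    return unit((h1, h2), (1, -2), -0.5)  # OR and NOT-AND -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # prints 0, 1, 1, 0 in the last column
```

This is exactly why XOR serves as a minimal sanity check for neuromorphic hardware: passing it demonstrates that the device can realize a multi-layer, nonlinear decision boundary.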
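One way the graph algorithms above map onto spiking hardware can be sketched in plain Python: treat each vertex as a neuron, each edge as a synapse whose delay equals the edge weight, and let each neuron fire once, upon the first spike to reach it. The first-spike time at each vertex is then its shortest-path distance. This event-driven sketch (effectively Dijkstra's algorithm) illustrates the general idea, not the specific method of the cited works.

```python
import heapq

# Spike-timing view of shortest path: vertices are neurons, edge
# weights are synaptic delays, and each neuron fires exactly once,
# when the first spike arrives. First-spike time = shortest distance.

def first_spike_times(graph, source):
    """graph: {vertex: [(neighbor, delay), ...]}. Returns a dict
    mapping each reachable vertex to its first-spike time."""
    events = [(0.0, source)]  # pending spike arrivals: (time, neuron)
    spiked = {}
    while events:
        t, v = heapq.heappop(events)
        if v in spiked:       # refractory: each neuron fires only once
            continue
        spiked[v] = t         # first arrival -> this neuron fires at t
        for u, delay in graph.get(v, []):
            heapq.heappush(events, (t + delay, u))
    return spiked

g = {'a': [('b', 1.0), ('c', 4.0)], 'b': [('c', 2.0)], 'c': []}
print(first_spike_times(g, 'a'))  # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```

Note that the direct edge a-to-c (delay 4.0) loses the race to the a-b-c route (delay 3.0), which is precisely the shortest-path computation emerging from spike timing.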
VIII Discussion: Neuromorphic Computing Moving Forward
In this work, we have discussed a variety of components of neuromorphic systems: models, algorithms, full hardware systems, device-level components, new materials, supporting systems such as communication infrastructure and supporting software, and applications of neuromorphic systems. There has clearly been a tremendous amount of work up until this point in the field of neuromorphic computing and neural networks in hardware. Moving forward, there are several exciting research directions in a variety of fields that can help revolutionize how and why we will use neuromorphic computers in the future (Figure 17).
From the machine learning perspective, the most intriguing question is what the appropriate training and/or learning algorithms are for neuromorphic systems. Neuromorphic computing systems provide a platform for exploring different training and learning mechanisms at an accelerated scale. If utilized properly, we expect that neuromorphic computing devices could have a similar effect on spiking neural network performance as GPUs had on deep learning networks: when the algorithm developer is not reliant on a slow simulation of the network and/or training method, there is much faster turnaround in developing effective methods. However, for this innovation to take place, algorithm developers will need to be willing to look beyond traditional algorithms such as backpropagation and to think outside the von Neumann box. This will not be an easy task, but if successful, it will potentially revolutionize the way we think about machine learning and the types of applications to which neuromorphic computers can be applied.
From the device level perspective, the use of new and emerging technologies and materials for neuromorphic devices is one of the most exciting components of current neuromorphic computing research. With today's capabilities in fabrication of nanoscale materials, many novel device components are certain to be developed. Neuromorphic computing researchers at all levels, including those working on models and algorithms, should be collaborating with materials scientists as these new materials are developed in order to customize them for use in a variety of neuromorphic use cases. Not only is there potential for extremely small, ultrafast neuromorphic computers with new technologies, but we may also be able to build new composite materials that elicit behaviors specifically tailored for neuromorphic computation.
From the software engineering perspective, neuromorphic computers represent a new challenge: how to develop the supporting software systems that will be required for neuromorphic computers to be usable by non-experts in the future. The neuromorphic computing community would greatly benefit from the inclusion of more software engineers as new neuromorphic devices are developed, both to build supporting software for those systems and to inform the designs themselves. Once again, neuromorphic computers require a totally different way of thinking than traditional von Neumann architectures. Building programming languages specifically for neuromorphic devices, wherein the device is utilized not as a neural network simulator but as a special type of computer with certain characteristics (e.g., massive parallelism and collocated memory and computation), is one way to begin to attract new users, and we believe such languages will be extremely beneficial moving forward.
From the end-user and applications perspective, there is much work the neuromorphic community needs to do to develop and communicate use cases for neuromorphic systems. Some of those use cases include serving as a neuromorphic co-processor in a future heterogeneous computer, as smart sensors or anomaly detectors in Internet of Things applications, as extremely low-power and small-footprint intelligent controllers in autonomous vehicles, and as in situ data analysis platforms on deployed systems such as satellites, among many other application areas. The potential to utilize neuromorphic systems for real-time spatiotemporal data analysis or real-time control in a very efficient way needs to be communicated to the community at large, so that those who have these types of applications will think of neuromorphic computers as one solution to their computing needs.
There are clearly many exciting areas of development for neuromorphic computing. It is also clear that neuromorphic computers could play a major role in the future computing landscape if we continue to ramp up research at all levels of neuromorphic computing, from materials all the way up to algorithms and models. Neuromorphic computing research would benefit from coordination across all levels, and as a community, we should encourage that coordination to drive innovation in the field moving forward.
IX Conclusion
In this work, we have given an overview of past work in neuromorphic computing. The motivations for building neuromorphic computers have changed over the years, but the need for a non-von Neumann architecture that is low-power and massively parallel, can perform in real time, and has the potential to train or learn in an online fashion is clear. We discussed the variety of neuron, synapse, and network models that have been used in neuromorphic and neural network hardware, emphasizing the wide variety of choices that can be made in determining how a neuromorphic system will function at an abstract level. It is not clear that this variety of models will ever be narrowed down to one all-encompassing model, as each model has its own strengths and weaknesses. As such, the neuromorphic computing landscape will likely continue to encompass everything from feedforward neural networks to highly detailed biological neural network emulators.
We discussed the variety of training and learning algorithms that have been implemented on and for neuromorphic systems. Moving forward, we will need to address building training and learning algorithms specifically for neuromorphic systems, rather than adapting existing algorithms that were designed with an entirely different architecture in mind. There is great potential for innovation in this particular area of neuromorphic computing, and we believe it is one of the areas in which innovation will have the most impact. We discussed high-level hardware views of neuromorphic systems, as well as the novel device-level components and materials that are being used to implement them; there is also significant room for continued development in this area. We briefly discussed some of the supporting systems for neuromorphic computers, such as supporting software, of which there is relatively little and from which the community would greatly benefit. Finally, we discussed some of the applications to which neuromorphic computing systems have been successfully applied.
The goal of this paper was to give the reader a full view of the types of research that have been done in neuromorphic computing across a variety of fields. As such, we have included all of the references in this version. We hope that this work will inspire others to develop new and innovative systems to fill in the gaps with their own research in the field and to consider neuromorphic computers for their own applications.
Acknowledgment
This material is based upon work supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC05-00OR22725. Research sponsored in part by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy.
References
 [1] C. Mead, “Neuromorphic electronic systems,” Proceedings of the IEEE, vol. 78, no. 10, pp. 1629–1636, Oct 1990.
 [2] D. Monroe, “Neuromorphic computing gets ready for the (really) big time,” Communications of the ACM, vol. 57, no. 6, pp. 13–15, 2014.
 [3] J. Von Neumann and R. Kurzweil, The computer and the brain. Yale University Press, 2012.
 [4] A. M. Turing, “Computing machinery and intelligence,” Mind, vol. 59, no. 236, pp. 433–460, 1950.
 [5] A. F. Murray and A. V. Smith, “Asynchronous vlsi neural networks using pulse-stream arithmetic,” Solid-State Circuits, IEEE Journal of, vol. 23, no. 3, pp. 688–697, 1988.
 [6] F. Blayo and P. Hurat, “A vlsi systolic array dedicated to hopfield neural network,” in VLSI for Artificial Intelligence. Springer, 1989, pp. 255–264.
 [7] F. Salam, “A model of neural circuits for programmable vlsi implementation,” in Circuits and Systems, 1989., IEEE International Symposium on. IEEE, 1989, pp. 849–851.
 [8] S. Bibyk, M. Ismail, T. Borgstrom, K. Adkins, R. Kaul, N. Khachab, and S. Dupuie, “Current-mode neural network building blocks for analog mos vlsi,” in Circuits and Systems, 1990., IEEE International Symposium on. IEEE, 1990, pp. 3283–3285.
 [9] F. Distante, M. Sami, and G. S. Gajani, “A general configurable architecture for wsi implementation for neural nets,” in Wafer Scale Integration, 1990. Proceedings., [2nd] International Conference on. IEEE, 1990, pp. 116–123.
 [10] J. B. Burr, “Digital neural network implementations,” Neural networks, concepts, applications, and implementations, vol. 3, pp. 237–285, 1991.
 [11] M. Chiang, T. Lu, and J. Kuo, “Analogue adaptive neural network circuit,” IEE Proceedings G (Circuits, Devices and Systems), vol. 138, no. 6, pp. 717–723, 1991.
 [12] K. Madani, P. Garda, E. Belhaire, and F. Devos, “Two analog counters for neural network implementation,” Solid-State Circuits, IEEE Journal of, vol. 26, no. 7, pp. 966–974, 1991.
 [13] A. F. Murray, D. Del Corso, and L. Tarassenko, “Pulse-stream vlsi neural networks mixing analog and digital techniques,” Neural Networks, IEEE Transactions on, vol. 2, no. 2, pp. 193–204, 1991.
 [14] P. Hasler and L. Akers, “Vlsi neural systems and circuits,” in Computers and Communications, 1990. Conference Proceedings., Ninth Annual International Phoenix Conference on. IEEE, 1990, pp. 31–37.
 [15] J.-C. Lee and B. J. Sheu, “Parallel digital image restoration using adaptive vlsi neural chips,” in Computer Design: VLSI in Computers and Processors, 1990. ICCD’90. Proceedings, 1990 IEEE International Conference on. IEEE, 1990, pp. 126–129.
 [16] L. Tarassenko, M. Brownlow, G. Marshall, J. Tombs, and A. Murray, “Real-time autonomous robot navigation using vlsi neural networks,” in Advances in neural information processing systems, 1991, pp. 422–428.
 [17] L. Akers, M. Walker, D. Ferry, and R. Grondin, “A limited-interconnect, highly layered synthetic neural architecture,” in VLSI for artificial intelligence. Springer, 1989, pp. 218–226.
 [18] P. H. Leong and M. A. Jabri, “A vlsi neural network for morphology classification,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 678–683.
 [19] G. Cairns and L. Tarassenko, “Learning with analogue vlsi mlps,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994., Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 67–76.
 [20] H. Markram, “The human brain project,” Scientific American, vol. 306, no. 6, pp. 50–55, 2012.
 [21] J. Schemmel, D. Bruderle, A. Grubl, M. Hock, K. Meier, and S. Millner, “A wafer-scale neuromorphic hardware system for large-scale neural modeling,” in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on. IEEE, 2010, pp. 1947–1950.
 [22] J. Backus, “Can programming be liberated from the von neumann style?: a functional style and its algebra of programs,” Communications of the ACM, vol. 21, no. 8, pp. 613–641, 1978.
 [23] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” The bulletin of mathematical biophysics, vol. 5, no. 4, pp. 115–133, 1943.
 [24] E. M. Izhikevich, “Which model to use for cortical spiking neurons?” IEEE transactions on neural networks, vol. 15, no. 5, pp. 1063–1070, 2004.
 [25] A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of physiology, vol. 117, no. 4, p. 500, 1952.
 [26] A. Basu, C. Petre, and P. Hasler, “Bifurcations in a silicon neuron,” in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on. IEEE, 2008, pp. 428–431.
 [27] A. Basu, “Small-signal neural models and their applications,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 6, no. 1, pp. 64–75, 2012.
 [28] F. Castanos and A. Franci, “The transition between tonic spiking and bursting in a six-transistor neuromorphic device,” in Electrical Engineering, Computing Science and Automatic Control (CCE), 2015 12th International Conference on. IEEE, 2015, pp. 1–6.
 [29] ——, “Implementing robust neuromodulation in neuromorphic circuits,” Neurocomputing, 2016.
 [30] S. P. DeWeerth, M. S. Reid, E. A. Brown, and R. J. Butera Jr, “A comparative analysis of multiconductance neuronal models in silico,” Biological cybernetics, vol. 96, no. 2, pp. 181–194, 2007.
 [31] D. Dupeyron, S. Le Masson, Y. Deval, G. Le Masson, and J.-P. Dom, “A bicmos implementation of the hodgkin-huxley formalism,” in Microelectronics for Neural Networks, 1996., Proceedings of Fifth International Conference on. IEEE, 1996, pp. 311–316.
 [32] F. Grassia, T. Lévi, S. Saïghi, and T. Kohno, “Bifurcation analysis in a silicon neuron,” Artificial Life and Robotics, vol. 17, no. 1, pp. 53–58, 2012.
 [33] M. Grattarola, M. Bove, S. Martinoia, and G. Massobrio, “Silicon neuron simulation with spice: tool for neurobiology and neural networks,” Medical and Biological Engineering and Computing, vol. 33, no. 4, pp. 533–536, 1995.
 [34] A. M. Hegab, N. M. Salem, A. G. Radwan, and L. Chua, “Neuron model with simplified memristive ionic channels,” International Journal of Bifurcation and Chaos, vol. 25, no. 06, p. 1530017, 2015.
 [35] K. M. Hynna and K. Boahen, “Thermodynamically equivalent silicon models of voltagedependent ion channels,” Neural computation, vol. 19, no. 2, pp. 327–350, 2007.
 [36] S. Kanoh, M. Imai, and N. Hoshimiya, “Analog lsi neuron model inspired by biological excitable membrane,” Systems and Computers in Japan, vol. 36, no. 6, pp. 84–91, 2005.
 [37] T. Kohno and K. Aihara, “Mathematical-model-based design of silicon burst neurons,” Neurocomputing, vol. 71, no. 7, pp. 1619–1628, 2008.
 [38] ——, “A design method for analog and digital silicon neurons: mathematical-model-based method,” in Collective Dynamics: Topics on Competition and Cooperation in the Biosciences: A Selection of Papers in the Proceedings of the BIOCOMP2007 International Conference, vol. 1028, no. 1. AIP Publishing, 2008, pp. 113–128.
 [39] C. H. Lam, “Neuromorphic semiconductor memory,” in 3D Systems Integration Conference (3DIC), 2015 International. IEEE, 2015, pp. KN3–1.
 [40] S. Le Masson, A. Laflaquiere, T. Bal, and G. Le Masson, “Analog circuits for modeling biological neural networks: design and applications,” Biomedical Engineering, IEEE Transactions on, vol. 46, no. 6, pp. 638–645, 1999.
 [41] Q. Ma, M. R. Haider, V. L. Shrestha, and Y. Massoud, “Bursting hodgkin–huxley model-based ultra-low-power neuromimetic silicon neuron,” Analog Integrated Circuits and Signal Processing, vol. 73, no. 1, pp. 329–337, 2012.
 [42] ——, “Low-power spike-mode silicon neuron for capacitive sensing of a biosensor,” in Wireless and Microwave Technology Conference (WAMICON), 2012 IEEE 13th Annual. IEEE, 2012, pp. 1–4.
 [43] M. Mahowald and R. Douglas, “A silicon neuron,” Nature, vol. 354, pp. 515–518, 1991.
 [44] M. Parodi and M. Storace, “On a circuit representation of the hodgkin and huxley nerve axon membrane equations,” International journal of circuit theory and applications, vol. 25, no. 2, pp. 115–124, 1997.
 [45] F. Pelayo, E. Ros, X. Arreguit, and A. Prieto, “Vlsi implementation of a neural model using spikes,” Analog Integrated Circuits and Signal Processing, vol. 13, no. 1–2, pp. 111–121, 1997.
 [46] C. Rasche, R. Douglas, and M. Mahowald, “Characterization of a silicon pyramidal neuron,” Progress in Neural Processing, pp. 169–177, 1998.
 [47] S. Saïghi, L. Buhry, Y. Bornat, G. N’Kaoua, J. Tomas, and S. Renaud, “Adjusting the neurons models in neuromimetic ics using the voltageclamp technique,” in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on. IEEE, 2008, pp. 1564–1567.
 [48] M. Sekikawa, T. Kohno, and K. Aihara, “An integrated circuit design of a silicon neuron and its measurement results,” Artificial Life and Robotics, vol. 13, no. 1, pp. 116–119, 2008.
 [49] J. Shin and C. Koch, “Dynamic range and sensitivity adaptation in a silicon spiking neuron,” Neural Networks, IEEE Transactions on, vol. 10, no. 5, pp. 1232–1238, 1999.
 [50] M. F. Simoni and S. P. DeWeerth, “Adaptation in an avlsi model of a neuron,” in Circuits and Systems, 1998. ISCAS’98. Proceedings of the 1998 IEEE International Symposium on, vol. 3. IEEE, 1998, pp. 111–114.
 [51] M. F. Simoni, G. S. Cymbalyuk, M. Q. Sorensen, R. L. Calabrese, and S. P. DeWeerth, “Development of hybrid systems: Interfacing a silicon neuron to a leech heart interneuron,” Advances in neural information processing systems, pp. 173–179, 2001.
 [52] M. F. Simoni, G. S. Cymbalyuk, M. E. Sorensen, R. L. Calabrese, and S. P. DeWeerth, “A multiconductance silicon neuron with biologically matched dynamics,” Biomedical Engineering, IEEE Transactions on, vol. 51, no. 2, pp. 342–354, 2004.
 [53] J. Tomas, S. Saïghi, S. Renaud, J. Silver, and H. Barnaby, “A conductance-based silicon neuron with membrane-voltage dependent temporal dynamics,” in NEWCAS Conference (NEWCAS), 2010 8th IEEE International. IEEE, 2010, pp. 377–380.
 [54] C. Toumazou, J. Georgiou, and E. Drakakis, “Current-mode analogue circuit representation of hodgkin and huxley neuron equations,” Electronics Letters, vol. 34, no. 14, pp. 1376–1377, 1998.
 [55] A. Borisyuk, “Morris–lecar model,” in Encyclopedia of Computational Neuroscience. Springer, 2015, pp. 1758–1764.
 [56] M. Gholami and S. Saeedi, “Digital cellular implementation of morris-lecar neuron model,” in Electrical Engineering (ICEE), 2015 23rd Iranian Conference on. IEEE, 2015, pp. 1235–1239.
 [57] M. Hayati, M. Nouri, S. Haghiri, and D. Abbott, “Digital multiplierless realization of two coupled biological morris-lecar neuron model,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 62, no. 7, pp. 1805–1814, July 2015.
 [58] K. Nakada, K. Miura, and T. Asai, “Silicon neuron design based on phase reduction analysis,” in Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), 2012 Joint 6th International Conference on. IEEE, 2012, pp. 1059–1062.
 [59] G. Patel and S. DeWeerth, “Analogue vlsi morris-lecar neuron,” Electronics letters, vol. 33, no. 12, pp. 997–998, 1997.
 [60] G. N. Patel, G. S. Cymbalyuk, R. L. Calabrese, and S. P. DeWeerth, “Bifurcation analysis of a silicon neuron,” in Advances in Neural Information Processing Systems, 2000, pp. 731–737.
 [61] M. Sekerli and R. Butera, “An implementation of a simple neuron model in field programmable analog arrays,” in Engineering in Medicine and Biology Society, 2004. IEMBS’04. 26th Annual International Conference of the IEEE, vol. 2. IEEE, 2004, pp. 4564–4567.
 [62] S. Binczak, S. Jacquir, J.-M. Bilbault, V. B. Kazantsev, and V. I. Nekorkin, “Experimental study of electrical fitzhugh–nagumo neurons with modified excitability,” Neural Networks, vol. 19, no. 5, pp. 684–693, 2006.
 [63] J. Cosp, S. Binczak, J. Madrenas, and D. Fernández, “Implementation of compact vlsi fitzhugh-nagumo neurons,” in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on. IEEE, 2008, pp. 2370–2373.
 [64] B. Linares-Barranco, E. Sánchez-Sinencio, A. Rodríguez-Vázquez, and J. L. Huertas, “A cmos implementation of fitzhugh-nagumo neuron model,” Solid-State Circuits, IEEE Journal of, vol. 26, no. 7, pp. 956–965, 1991.
 [65] M. Hayati, M. Nouri, D. Abbott, and S. Haghiri, “Digital multiplierless realization of two-coupled biological hindmarsh–rose neuron model,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 63, no. 5, pp. 463–467, 2016.
 [66] Y. J. Lee, J. Lee, Y.-B. Kim, J. Ayers, A. Volkovskii, A. Selverston, H. Abarbanel, and M. Rabinovich, “Low power real time electronic neuron vlsi design using subthreshold technique,” in Circuits and Systems, 2004. ISCAS’04. Proceedings of the 2004 International Symposium on, vol. 4. IEEE, 2004, pp. IV–744.
 [67] J. Lu, Y.-B. Kim, and J. Ayers, “A low power 65nm cmos electronic neuron and synapse design for a biomimetic microrobot,” in Circuits and Systems (MWSCAS), 2011 IEEE 54th International Midwest Symposium on. IEEE, 2011, pp. 1–4.
 [68] J. Lu, J. Yang, Y.-B. Kim, and K. K. Kim, “Implementation of cmos neuron for robot motion control unit,” in SoC Design Conference (ISOCC), 2013 International. IEEE, 2013, pp. 9–12.
 [69] J. Lu, J. Yang, Y.-B. Kim, J. Ayers, and K. K. Kim, “Implementation of excitatory cmos neuron oscillator for robot motion control unit,” Journal of Semiconductor Technology and Science, vol. 14, no. 4, pp. 383–390, 2014.
 [70] E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Transactions on neural networks, vol. 14, no. 6, pp. 1569–1572, 2003.
 [71] O. O. Dutra, G. D. Colleta, L. H. Ferreira, and T. C. Pimenta, “A subthreshold halo implanted mos implementation of izhikevich neuron model,” in SOI-3D-Subthreshold Microelectronics Technology Unified Conference (S3S), 2013 IEEE. IEEE, 2013, pp. 1–2.
 [72] N. Mizoguchi, Y. Nagamatsu, K. Aihara, and T. Kohno, “A two-variable silicon neuron circuit based on the izhikevich model,” Artificial Life and Robotics, vol. 16, no. 3, pp. 383–388, 2011.
 [73] N. Ning, G. Li, W. He, K. Huang, L. Pan, K. Ramanathan, R. Zhao, L. Shi et al., “Modeling neuromorphic persistent firing networks,” International Journal of Intelligence Science, vol. 5, no. 02, p. 89, 2015.
 [74] S. Ozoguz et al., “A low power vlsi implementation of the izhikevich neuron model,” in New Circuits and Systems Conference (NEWCAS), 2011 IEEE 9th International. IEEE, 2011, pp. 169–172.
 [75] E. Radhika, S. Kumar, and A. Kumari, “Low power analog vlsi implementation of cortical neuron with threshold modulation,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on. IEEE, 2015, pp. 561–566.
 [76] V. Rangan, A. Ghosh, V. Aparin, and G. Cauwenberghs, “A subthreshold avlsi implementation of the izhikevich simple neuron model,” in Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE. IEEE, 2010, pp. 4164–4167.
 [77] O. Sharifipoor and A. Ahmadi, “An analog implementation of biologically plausible neurons using ccii building blocks,” Neural Networks, vol. 36, pp. 129–135, 2012.
 [78] H. Soleimani, A. Ahmadi, M. Bavandpour, and O. Sharifipoor, “A generalized analog implementation of piecewise linear neuron models using ccii building blocks,” Neural Networks, vol. 51, pp. 26–38, 2014.
 [79] J. H. Wijekoon and P. Dudek, “Simple analogue vlsi circuit of a cortical neuron,” in Electronics, Circuits and Systems, 2006. ICECS’06. 13th IEEE International Conference on. IEEE, 2006, pp. 1344–1347.
 [80] ——, “Compact silicon neuron circuit with spiking and bursting behaviour,” Neural Networks, vol. 21, no. 2, pp. 524–534, 2008.
 [81] ——, “Integrated circuit implementation of a cortical neuron,” in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on. IEEE, 2008, pp. 1784–1787.
 [82] ——, “A cmos circuit implementation of a spiking neuron with bursting and adaptation on a biological timescale,” in Biomedical Circuits and Systems Conference, 2009. BioCAS 2009. IEEE. IEEE, 2009, pp. 193–196.
 [83] Ş. Mihalaş and E. Niebur, “A generalized linear integrateandfire neural model produces diverse spiking behaviors,” Neural computation, vol. 21, no. 3, pp. 704–718, 2009.
 [84] F. Folowosele, R. Etienne-Cummings, and T. J. Hamilton, “A cmos switched capacitor implementation of the mihalas-niebur neuron,” in Biomedical Circuits and Systems Conference, 2009. BioCAS 2009. IEEE. IEEE, 2009, pp. 105–108.
 [85] F. Folowosele, T. J. Hamilton, and R. Etienne-Cummings, “Silicon modeling of the mihalaş–niebur neuron,” Neural Networks, IEEE Transactions on, vol. 22, no. 12, pp. 1915–1927, 2011.
 [86] F. Grassia, T. Levi, T. Kohno, and S. Saïghi, “Silicon neuron: digital hardware implementation of the quartic model,” Artificial Life and Robotics, vol. 19, no. 3, pp. 215–219, 2014.
 [87] J. V. Arthur and K. Boahen, “Silicon neurons that inhibit to synchronize,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [88] ——, “Silicon-neuron design: A dynamical systems approach,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 58, no. 5, pp. 1034–1043, 2011.
 [89] R. Wang, T. J. Hamilton, J. Tapson, and A. van Schaik, “A generalised conductance-based silicon neuron for large-scale spiking neural networks,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 1564–1567.
 [90] R. Wang, C. S. Thakur, T. J. Hamilton, J. Tapson, and A. van Schaik, “A compact avlsi conductance-based silicon neuron,” in Biomedical Circuits and Systems Conference (BioCAS), 2015 IEEE. IEEE, 2015, pp. 1–4.
 [91] J. H. Wittig and K. Boahen, “Silicon neurons that phase-lock,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [92] A. Basu and P. E. Hasler, “Nullcline-based design of a silicon neuron,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 57, no. 11, pp. 2938–2947, 2010.
 [93] A. Basu, C. Petre, and P. E. Hasler, “Dynamics and bifurcations in a silicon neuron,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 4, no. 5, pp. 320–328, 2010.
 [94] K. Hynna and K. Boahen, “Space-rate coding in an adaptive silicon neuron,” Neural Networks, vol. 14, no. 6, pp. 645–656, 2001.
 [95] K. M. Hynna and K. Boahen, “Neuronal ion-channel dynamics in silicon,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [96] ——, “Silicon neurons that burst when primed,” in Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on. IEEE, 2007, pp. 3363–3366.
 [97] J. L. Meador and C. S. Cole, “A low-power cmos circuit which emulates temporal electrical properties of neurons,” in Advances in neural information processing systems, 1989, pp. 678–686.
 [98] C. Rasche and R. Douglas, “An improved silicon neuron,” Analog integrated circuits and signal processing, vol. 23, no. 3, pp. 227–236, 2000.
 [99] J. G. Elias, H.-H. Chu, and S. M. Meshreki, “Silicon implementation of an artificial dendritic tree,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 1. IEEE, 1992, pp. 154–159.
 [100] J. G. Elias, H.-H. Chu, and S. Meshreki, “A neuromorphic impulsive circuit for processing dynamic signals,” in Circuits and Systems, 1992. ISCAS’92. Proceedings., 1992 IEEE International Symposium on, vol. 5. IEEE, 1992, pp. 2208–2211.
 [101] J. G. Elias and D. P. Northmore, “Programmable dynamics in an analog vlsi neuromorph,” in Neural Networks, 1994. IEEE World Congress on Computational Intelligence., 1994 IEEE International Conference on, vol. 4. IEEE, 1994, pp. 2028–2033.
 [102] V. Gorelik, “Silicon approximation to biological neuron,” in Neural Networks, 2003. Proceedings of the International Joint Conference on, vol. 2. IEEE, 2003, pp. 965–970.
 [103] P. Hasler, S. Kozoil, E. Farquhar, and A. Basu, “Transistor channel dendrites implementing hmm classifiers,” in Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on. IEEE, 2007, pp. 3359–3362.
 [104] S. Hussain and A. Basu, “Morphological learning in multicompartment neuron model with binary synapses,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 2527–2530.
 [105] U. Koch and M. Brunner, “A modular analog neuron-model for research and teaching,” Biological cybernetics, vol. 59, no. 4–5, pp. 303–312, 1988.
 [106] B. A. Minch, P. Hasler, C. Diorio, and C. Mead, “A silicon axon,” Advances in neural information processing systems, pp. 739–746, 1995.
 [107] C. Rasche and R. J. Douglas, “Forward- and backpropagation in a silicon dendrite,” Neural Networks, IEEE Transactions on, vol. 12, no. 2, pp. 386–393, 2001.
 [108] C. Rasche, “An avlsi basis for dendritic adaptation,” Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, vol. 48, no. 6, pp. 600–605, 2001.
 [109] R. Wang, C. T. Jin, A. L. McEwan, and A. Van Schaik, “A programmable axonal propagation delay circuit for time-delay spiking neural networks,” in ISCAS, 2011, pp. 869–872.
 [110] R. Wang, G. Cohen, T. J. Hamilton, J. Tapson, and A. van Schaik, “An improved avlsi axon with programmable delay using spike timing dependent delay plasticity,” in Circuits and Systems (ISCAS), 2013 IEEE International Symposium on. IEEE, 2013, pp. 1592–1595.
 [111] M. Hayati, M. Nouri, S. Haghiri, and D. Abbott, “A digital realization of astrocyte and neural glial interactions,” Biomedical Circuits and Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, 2015.
 [112] Y. Irizarry-Valle, A. C. Parker, and J. Joshi, “A cmos neuromorphic approach to emulate neuro-astrocyte interactions,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–7.
 [113] Y. Irizarry-Valle and A. C. Parker, “An astrocyte neuromorphic circuit that influences neuronal phase synchrony,” IEEE transactions on biomedical circuits and systems, vol. 9, no. 2, pp. 175–187, 2015.
 [114] M. Ranjbar and M. Amiri, “An analog astrocyte–neuron interaction circuit for neuromorphic applications,” Journal of Computational Electronics, vol. 14, no. 3, pp. 694–706, 2015.
 [115] ——, “Analog implementation of neuron–astrocyte interaction in tripartite synapse,” Journal of Computational Electronics, pp. 1–13, 2015.
 [116] H. Soleimani, M. Bavandpour, A. Ahmadi, and D. Abbott, “Digital implementation of a biological astrocyte model and its application,” Neural Networks and Learning Systems, IEEE Transactions on, vol. 26, no. 1, pp. 127–139, 2015.
 [117] O. Erdener and S. Ozoguz, “A new neuron model suitable for low power vlsi implementation,” in 2015 9th International Conference on Electrical and Electronics Engineering (ELECO). IEEE, 2015, pp. 15–19.
 [118] Ö. Erdener and S. Ozoguz, “A new neuron and synapse model suitable for low power vlsi implementation,” Analog Integrated Circuits and Signal Processing, vol. 89, no. 3, pp. 749–770, 2016.
 [119] A. Upegui, C. A. Peña-Reyes, and E. Sanchez, “A functional spiking neuron hardware oriented model,” in Computational Methods in Neural Modeling. Springer, 2003, pp. 136–143.
 [120] A. Upegui, C. A. Peña-Reyes, and E. Sánchez, “A hardware implementation of a network of functional spiking neurons with hebbian learning,” in Biologically Inspired Approaches to Advanced Information Technology. Springer, 2004, pp. 233–243.
 [121] T. Kohno, J. Li, and K. Aihara, “Silicon neuronal networks towards brainmorphic computers,” Nonlinear Theory and Its Applications, IEICE, vol. 5, no. 3, pp. 379–390, 2014.
 [122] T. Kohno and K. Aihara, “A qualitative-modeling-based low-power silicon nerve membrane,” in Electronics, Circuits and Systems (ICECS), 2014 21st IEEE International Conference on. IEEE, 2014, pp. 199–202.
 [123] T. Kohno, M. Sekikawa, and K. Aihara, “A configurable qualitative-modeling-based silicon neuron circuit,” Nonlinear Theory and Its Applications, IEICE, vol. 8, no. 1, pp. 25–37, 2017.
 [124] E. Farquhar and P. Hasler, “A biophysically inspired silicon neuron,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 52, no. 3, pp. 477–488, 2005.
 [125] T. Kohno and K. Aihara, “A mathematical-structure-based avlsi silicon neuron model,” in Proceedings of the 2010 International Symposium on Nonlinear Theory and its Applications, 2010, pp. 261–264.
 [126] G. Massobrio, P. Massobrio, and S. Martinoia, “Modeling and simulation of silicon neuron-to-isfet junction,” Journal of Computational Electronics, vol. 6, no. 4, pp. 431–437, 2007.
 [127] K. Saeki, R. Iidaka, Y. Sekine, and K. Aihara, “Hardware neuron models with cmos for auditory neural networks,” in Neural Information Processing, 2002. ICONIP’02. Proceedings of the 9th International Conference on, vol. 3. IEEE, 2002, pp. 1325–1329.
 [128] K. Saeki, Y. Hayashi, and Y. Sekine, “Extraction of phase information buried in fluctuation of a pulse-type hardware neuron model using stdp,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 1505–1510.
 [129] A. Tete, A. Deshmukh, P. Bajaj, and A. Keskar, “Design of cortical neuron circuits with vlsi design approach,” J. Soft Computing, vol. 2, no. 4, 2011.
 [130] J. H. Wijekoon and P. Dudek, “Spiking and bursting firing patterns of a compact vlsi cortical neuron circuit,” in Neural Networks, 2007. IJCNN 2007. International Joint Conference on. IEEE, 2007, pp. 1332–1337.
 [131] L. Wenpeng, C. Xu, and L. Huaxiang, “A new hardware-oriented spiking neuron model based on set and its properties,” Physics Procedia, vol. 22, pp. 170–176, 2011.
 [132] W. Gerstner and W. M. Kistler, Spiking neuron models: Single neurons, populations, plasticity. Cambridge university press, 2002.
 [133] S. A. Aamir, P. Müller, A. Hartel, J. Schemmel, and K. Meier, “A highly tunable 65nm cmos lif neuron for a large scale neuromorphic system,” in European SolidState Circuits Conference, ESSCIRC Conference 2016: 42nd. IEEE, 2016, pp. 71–74.
 [134] T. Asai, Y. Kanazawa, and Y. Amemiya, “A subthreshold mos neuron circuit based on the volterra system,” Neural Networks, IEEE Transactions on, vol. 14, no. 5, pp. 1308–1312, 2003.
 [135] A. Bindal and S. HamediHagh, “The design of a new spiking neuron using dual work function silicon nanowire transistors,” Nanotechnology, vol. 18, no. 9, p. 095201, 2007.
 [136] ——, “An integrate and fire spiking neuron using silicon nanowire technology,” Proc. of Nano Sci. and Technol. Inst.(NSTI), San Jose, California, 2007.
 [137] J. A. Bragg, E. A. Brown, P. Hasler, and S. P. DeWeerth, “A silicon model of an adapting motoneuron,” in Circuits and Systems, 2002. ISCAS 2002. IEEE International Symposium on, vol. 4. IEEE, 2002, pp. IV–261.
 [138] Y. Chen, S. Hall, L. McDaid, O. Buiu, and P. M. Kelly, “Analog spiking neuron with charge-coupled synapses,” in World Congress on Engineering. Citeseer, 2007, pp. 440–444.
 [139] C. Chen, K. Kim, Q. Truong, A. Shen, Z. Li, and Y. Chen, “A spiking neuron circuit based on a carbon nanotube transistor,” Nanotechnology, vol. 23, no. 27, p. 275202, 2012.
 [140] P.-Y. Chen, J.-S. Seo, Y. Cao, and S. Yu, “Compact oscillation neuron exploiting metal-insulator-transition for neuromorphic computing,” in Proceedings of the 35th International Conference on Computer-Aided Design. ACM, 2016, p. 15.
 [141] G. Crebbin, “Image segmentation using neuromorphic integrate-and-fire cells,” in Information, Communications and Signal Processing, 2005 Fifth International Conference on. IEEE, 2005, pp. 305–309.
 [142] L. Deng, D. Wang, G. Li, Z. Zhang, and J. Pei, “A new computing rule for neuromorphic engineering,” in 2015 15th Non-Volatile Memory Technology Symposium (NVMTS). IEEE, 2015, pp. 1–3.
 [143] F. Folowosele, A. Harrison, A. Cassidy, A. G. Andreou, R. Etienne-Cummings, S. Mihalas, E. Niebur, and T. J. Hamilton, “A switched capacitor implementation of the generalized linear integrate-and-fire neuron,” in Circuits and Systems, 2009. ISCAS 2009. IEEE International Symposium on. IEEE, 2009, pp. 2149–2152.
 [144] T. J. Hamilton and A. Van Schaik, “Silicon implementation of the generalized integrate-and-fire neuron model,” in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2011 Seventh International Conference on. IEEE, 2011, pp. 108–112.
 [145] T. Iguchi, A. Hirata, and H. Torikai, “Integrate-and-fire-type digital spiking neuron and its learning for spike-pattern-division multiplex communication,” in Neural Networks (IJCNN), The 2010 International Joint Conference on. IEEE, 2010, pp. 1–8.
 [146] G. Indiveri, “A low-power adaptive integrate-and-fire neuron circuit,” in ISCAS (4), 2003, pp. 820–823.
 [147] G. Indiveri, F. Stefanini, and E. Chicca, “Spike-based learning with a generalized integrate and fire silicon neuron,” in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on. IEEE, 2010, pp. 1951–1954.
 [148] G. Lecerf, J. Tomas, S. Boyn, S. Girod, A. Mangalore, J. Grollier, and S. Saighi, “Silicon neuron dedicated to memristive spiking neural networks,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 1568–1571.
 [149] V. Kornijcuk, H. Lim, J. Y. Seok, G. Kim, S. K. Kim, I. Kim, B. J. Choi, and D. S. Jeong, “Leaky integrate-and-fire neuron circuit based on floating-gate integrator,” Frontiers in neuroscience, vol. 10, 2016.
 [150] K. Kravtsov, M. P. Fok, D. Rosenbluth, and P. R. Prucnal, “Ultrafast all-optical implementation of a leaky integrate-and-fire neuron,” Optics express, vol. 19, no. 3, pp. 2133–2147, 2011.
 [151] M.-Z. Li, P. Ping-Wang, K.-T. Tang, and W.-C. Fang, “Multi-input silicon neuron with weighting adaptation,” in Life Science Systems and Applications Workshop, 2009. LiSSA 2009. IEEE/NIH. IEEE, 2009, pp. 194–197.
 [152] H. Lim, V. Kornijcuk, J. Y. Seok, S. K. Kim, I. Kim, C. S. Hwang, and D. S. Jeong, “Reliability of neuronal information conveyed by unreliable neuristor-based leaky integrate-and-fire neurons: a model study,” Scientific reports, vol. 5, 2015.
 [153] S.-C. Liu and B. A. Minch, “Homeostasis in a silicon integrate and fire neuron,” Advances in Neural Information Processing Systems, pp. 727–733, 2001.
 [154] S.-C. Liu, “A wide-field direction-selective avlsi spiking neuron,” in Circuits and Systems, 2003. ISCAS’03. Proceedings of the 2003 International Symposium on, vol. 5. IEEE, 2003, pp. V–829.
 [155] P. Livi and G. Indiveri, “A current-mode conductance-based silicon neuron for address-event neuromorphic systems,” in Circuits and systems, 2009. ISCAS 2009. IEEE international symposium on. IEEE, 2009, pp. 2898–2901.
 [156] M. Mattia and S. Fusi, “Modeling networks with vlsi (linear) integrate-and-fire neurons,” in Proceedings of the 7th international conference on artificial neural networks, Lausanne, Switzerland. Citeseer, 1997.
 [157] T. Morie, “Cmos circuits and nanodevices for spike based neural computing,” in Future of Electron Devices, Kansai (IMFEDK), 2015 IEEE International Meeting for. IEEE, 2015, pp. 112–113.
 [158] D. Ben Dayan Rubin, E. Chicca, and G. Indiveri, “Characterizing the firing properties of an adaptive analog vlsi neuron,” in Biologically Inspired Approaches to Advanced Information Technology: First International Workshop, BioADIT 2004, A. J. Ijspeert and N. Wakamiya, Eds. Citeseer, 2004.
 [159] A. Russell and R. Etienne-Cummings, “Maximum likelihood parameter estimation of a spiking silicon neuron,” in Circuits and Systems (ISCAS), 2011 IEEE International Symposium on. IEEE, 2011, pp. 669–672.
 [160] A. Russell, K. Mazurek, S. Mihalas, E. Niebur, and R. Etienne-Cummings, “Parameter estimation of a spiking silicon neuron,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 6, no. 2, pp. 133–141, 2012.
 [161] S. Srivastava and S. Rathod, “Silicon neuron: analog cmos vlsi implementation and analysis at 180 nm,” in Devices, Circuits and Systems (ICDCS), 2016 3rd International Conference on. IEEE, 2016, pp. 28–32.
 [162] O. Torres, J. Eriksson, J. M. Moreno, and A. Villa, “Hardware optimization of a novel spiking neuron model for the poetic tissue.” in Artificial Neural Nets Problem Solving Methods. Springer, 2003, pp. 113–120.
 [163] R. Wang, J. Tapson, T. J. Hamilton, and A. Van Schaik, “An avlsi programmable axonal delay circuit with spike timing dependent delay adaptation,” in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on. IEEE, 2012, pp. 2413–2416.
 [164] S. Wolpert and E. MicheliTzanakou, “A neuromime in vlsi,” Neural Networks, IEEE Transactions on, vol. 7, no. 2, pp. 300–306, 1996.
 [165] E. J. Basham and D. W. Parent, “An analog circuit implementation of a quadratic integrate and fire neuron,” in Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE. IEEE, 2009, pp. 741–744.
 [166] ——, “Compact digital implementation of a quadratic integrate-and-fire neuron,” in Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE. IEEE, 2012, pp. 3543–3548.
 [167] S. Abbas and C. Muthulakshmi, “Neuromorphic implementation of adaptive exponential integrate and fire neuron,” in Communication and Network Technologies (ICCNT), 2014 International Conference on. IEEE, 2014, pp. 233–237.
 [168] S. Millner, A. Grübl, K. Meier, J. Schemmel, and M.-O. Schwartz, “A vlsi implementation of the adaptive exponential integrate-and-fire neuron model,” in Advances in Neural Information Processing Systems, 2010, pp. 1642–1650.
 [169] S. Hashimoto and H. Torikai, “A novel hybrid spiking neuron: Bifurcations, responses, and on-chip learning,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 57, no. 8, pp. 2168–2181, 2010.
 [170] T. Hishiki and H. Torikai, “Bifurcation analysis of a resonate-and-fire-type digital spiking neuron,” in Neural Information Processing. Springer, 2009, pp. 392–400.
 [171] ——, “Neural behaviors and nonlinear dynamics of a rotate-and-fire digital spiking neuron,” in Neural Networks (IJCNN), The 2010 International Joint Conference on. IEEE, 2010, pp. 1–8.
 [172] ——, “A novel rotate-and-fire digital spiking neuron and its neuron-like bifurcations and responses,” Neural Networks, IEEE Transactions on, vol. 22, no. 5, pp. 752–767, 2011.
 [173] T. Matsubara and H. Torikai, “Dynamic response behaviors of a generalized asynchronous digital spiking neuron model,” in Neural Information Processing. Springer, 2011, pp. 395–404.
 [174] ——, “A novel asynchronous digital spiking neuron model and its various neuron-like bifurcations and responses,” in Neural Networks (IJCNN), The 2011 International Joint Conference on. IEEE, 2011, pp. 741–748.
 [175] H. Torikai, H. Hamanaka, and T. Saito, “Reconfigurable digital spiking neuron and its pulse-coupled network: Basic characteristics and potential applications,” Circuits and Systems II: Express Briefs, IEEE Transactions on, vol. 53, no. 8, pp. 734–738, 2006.
 [176] H. Torikai, Y. Shimizu, and T. Saito, “Various spike-trains from a digital spiking neuron: Analysis of inter-spike intervals and their modulation,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 3860–3867.
 [177] H. Torikai, A. Funew, and T. Saito, “Approximation of spike-trains by digital spiking neuron,” in Neural Networks, 2007. IJCNN 2007. International Joint Conference on. IEEE, 2007, pp. 2677–2682.
 [178] ——, “Digital spiking neuron and its learning for approximation of various spike-trains,” Neural Networks, vol. 21, no. 2, pp. 140–149, 2008.
 [179] H. Torikai and S. Hashimoto, “A hardware-oriented learning algorithm for a digital spiking neuron,” in Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on. IEEE, 2008, pp. 2472–2479.
 [180] C. Cerkez, I. Aybay, and U. Halici, “A digital neuron realization for the random neural network model,” in Neural Networks, 1997., International Conference on, vol. 2. IEEE, 1997, pp. 1000–1004.
 [181] S. Aunet, B. Oelmann, S. Abdalla, and Y. Berg, “Reconfigurable subthreshold cmos perceptron,” in Neural Networks, 2004. Proceedings. 2004 IEEE International Joint Conference on, vol. 3. IEEE, 2004, pp. 1983–1988.
 [182] S. Aunet, B. Oelmann, P. A. Norseng, and Y. Berg, “Real-time reconfigurable subthreshold cmos perceptron,” Neural Networks, IEEE Transactions on, vol. 19, no. 4, pp. 645–657, 2008.
 [183] V. Bohossian, P. Hasler, and J. Bruck, “Programmable neural logic,” Components, Packaging, and Manufacturing Technology, Part B: Advanced Packaging, IEEE Transactions on, vol. 21, no. 4, pp. 346–351, 1998.
 [184] S. Draghici, D. Miller et al., “A vlsi neural network classifier based on integer-valued weights,” in Neural Networks, 1999. IJCNN’99. International Joint Conference on, vol. 4. IEEE, 1999, pp. 2419–2424.
 [185] B. Taheri et al., “Proposed cmos vlsi implementation of an electronic neuron using multivalued signal processing,” in Multiple-Valued Logic, 1991. Proceedings of the Twenty-First International Symposium on. IEEE, 1991, pp. 203–209.
 [186] V. Varshavsky, “Cmos artificial neuron on the base of driven threshold element,” in Systems, Man, and Cybernetics, 1998. 1998 IEEE International Conference on, vol. 2. IEEE, 1998, pp. 1857–1861.
 [187] V. Varshavsky and V. Marakhovsky, “Learning experiments with cmos artificial neuron,” in Computational Intelligence. Springer, 1999, pp. 706–707.
 [188] ——, “Beta-cmos artificial neuron and implementability limits,” in Engineering Applications of Bio-Inspired Artificial Neural Networks. Springer, 1999, pp. 117–128.
 [189] ——, “Implementability restrictions of the beta-cmos artificial neuron,” in Electronics, Circuits and Systems, 1999. Proceedings of ICECS’99. The 6th IEEE International Conference on, vol. 1. IEEE, 1999, pp. 401–405.
 [190] V. Varshavsky, V. Marakhovsky, and H. Saito, “Cmos implementation of an artificial neuron training on logical threshold functions,” WSEAS Transaction on Circuits and Systems, no. 4, pp. 370–390, 2009.
 [191] B. Zamanlooy and M. Mirhassani, “Efficient hardware implementation of threshold neural networks,” in New Circuits and Systems Conference (NEWCAS), 2012 IEEE 10th International. IEEE, 2012, pp. 1–4.
 [192] S. A. Al-Kazzaz and R. A. Khalil, “Fpga implementation of artificial neurons: Comparison study,” in Information and Communication Technologies: From Theory to Applications, 2008. ICTTA 2008. 3rd International Conference on. IEEE, 2008, pp. 1–6.
 [193] S. Azizian, K. Fathi, B. Mashoufi, and F. Derogarian, “Implementation of a programmable neuron in 0.35 μm cmos process for multilayer ann applications,” in EUROCON - International Conference on Computer as a Tool (EUROCON), 2011 IEEE. IEEE, 2011, pp. 1–4.
 [194] M. Bañuelos-Saucedo, J. Castillo-Hernández, S. Quintana-Thierry, R. Damián-Zamacona, J. Valeriano-Assem, R. Cervantes, R. Fuentes-González, G. Calva-Olmos, and J. Pérez-Silva, “Implementation of a neuron model using fpgas,” Journal of Applied Research and Technology, vol. 1, no. 03, 2003.
 [195] M. Acconcia Dias, D. Oliva Sales, and F. S. Osorio, “Automatic generation of luts for hardware neural networks,” in Intelligent Systems (BRACIS), 2014 Brazilian Conference on. IEEE, 2014, pp. 115–120.
 [196] M. Al-Nsour and H. S. Abdel-Aty-Zohdy, “Implementation of programmable digital sigmoid function circuit for neurocomputing,” in Circuits and Systems, 1998. Proceedings. 1998 Midwest Symposium on. IEEE, 1998, pp. 571–574.
 [197] A. Basaglia, W. Fornaciari, and F. Salice, “Correct implementation of digital neural networks,” in Circuits and Systems, 1995. Proceedings of the 38th Midwest Symposium on, vol. 1. IEEE, 1995, pp. 81–84.
 [198] X. Chen, G. Wang, W. Zhou, S. Chang, and S. Sun, “Efficient sigmoid function for neural networks based fpga design,” in Intelligent Computing. Springer, 2006, pp. 672–677.
 [199] I. del Campo, R. Finker, J. Echanobe, and K. Basterretxea, “Controlled accuracy approximation of sigmoid function for efficient fpga-based implementation of artificial neurons,” Electronics Letters, vol. 49, no. 25, pp. 1598–1600, 2013.
 [200] S. Jeyanthi and M. Subadra, “Implementation of single neuron using various activation functions with fpga,” in Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on. IEEE, 2014, pp. 1126–1131.
 [201] G. Khodabandehloo, M. Mirhassani, and M. Ahmadi, “Analog implementation of a novel resistivetype sigmoidal neuron,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 20, no. 4, pp. 750–754, 2012.
 [202] D. E. Khodja, A. Kheldoun, and L. Refoufi, “Sigmoid function approximation for ann implementation in fpga devices,” in Proc. of the 9th WSEAS Int. Conf. on Circuits, Systems, Electronics, Control, and Signal Processing, Stevens Point, WI, 2010.
 [203] D. Larkin, A. Kinane, V. Muresan, and N. O'Connor, “An efficient hardware architecture for a neural network activation function generator,” in Advances in Neural Networks - ISNN 2006. Springer, 2006, pp. 1319–1327.
 [204] A. Mishra, K. Raj et al., “Implementation of a digital neuron with nonlinear activation function using piecewise linear approximation technique,” in Microelectronics, 2007. ICM 2007. Internatonal Conference on. IEEE, 2007, pp. 69–72.
 [205] D. Myers and R. Hutchinson, “Efficient implementation of piecewise linear activation function for digital vlsi neural networks,” Electronics Letters, vol. 25, p. 1662, 1989.
 [206] F. OrtegaZamorano, J. M. Jerez, G. Juarez, J. O. Perez, and L. Franco, “High precision fpga implementation of neural network activation functions,” in Intelligent Embedded Systems (IES), 2014 IEEE Symposium on. IEEE, 2014, pp. 55–60.
 [207] M. Panicker and C. Babu, “Efficient fpga implementation of sigmoid and bipolar sigmoid activation functions for multilayer perceptrons,” IOSR Journal of Engineering (IOSRJEN), pp. 1352–1356, 2012.
 [208] I. Sahin and I. Koyuncu, “Design and implementation of neural networks neurons with radbas, logsig, and tansig activation functions on fpga,” Elektronika ir Elektrotechnika, vol. 120, no. 4, pp. 51–54, 2012.
 [209] V. Saichand, D. Nirmala, S. Arumugam, and N. Mohankumar, “Fpga realization of activation function for artificial neural networks,” in Intelligent Systems Design and Applications, 2008. ISDA’08. Eighth International Conference on, vol. 3. IEEE, 2008, pp. 159–164.
 [210] T. Szabo, G. Horváth et al., “An efficient hardware implementation of feedforward neural networks,” Applied Intelligence, vol. 21, no. 2, pp. 143–158, 2004.
 [211] C.-H. Tsai, Y.-T. Chih, W. Wong, and C.-Y. Lee, “A hardware-efficient sigmoid function with adjustable precision for neural network system,” 2015.
 [212] B. M. Wilamowski, J. Binfet, and M. Kaynak, “Vlsi implementation of neural networks,” International journal of neural systems, vol. 10, no. 03, pp. 191–197, 2000.
 [213] D. Baptista and F. Morgado-Dias, “Low-resource hardware implementation of the hyperbolic tangent for artificial neural networks,” Neural Computing and Applications, vol. 23, no. 3-4, pp. 601–607, 2013.
 [214] C. W. Lin and J. S. Wang, “A digital circuit design of hyperbolic tangent sigmoid function for neural networks,” in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on. IEEE, 2008, pp. 856–859.
 [215] P. Santos, D. Ouellet-Poulin, D. Shapiro, and M. Bolic, “Artificial neural network acceleration on fpga using custom instruction,” in Electrical and Computer Engineering (CCECE), 2011 24th Canadian Conference on. IEEE, 2011, pp. 000 450–000 455.
 [216] B. Zamanlooy and M. Mirhassani, “Efficient vlsi implementation of neural networks with hyperbolic tangent activation function,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 22, no. 1, pp. 39–48, 2014.
 [217] H. Hikawa, “A digital hardware pulse-mode neuron with piecewise linear activation function,” Neural Networks, IEEE Transactions on, vol. 14, no. 5, pp. 1028–1037, 2003.
 [218] J. Faridi, M. S. Ansari, and S. A. Rahman, “A neuromorphic majority function circuit with o(n) area complexity in 180 nm cmos,” in Proceedings of the International Conference on Data Engineering and Communication Technology. Springer, 2017, pp. 473–480.
 [219] X. Zhu, J. Shen, B. Chi, and Z. Wang, “Circuit implementation of multi-thresholded neuron (mtn) using bicmos technology,” in Neural Networks, 2005. IJCNN’05. Proceedings. 2005 IEEE International Joint Conference on, vol. 1. IEEE, 2005, pp. 627–632.
 [220] C. Merkel, D. Kudithipudi, and N. Sereni, “Periodic activation functions in memristorbased analog neural networks,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–7.
 [221] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by backpropagating errors,” Cognitive modeling, vol. 5, no. 3, p. 1, 1988.
 [222] C. Lu and B. Shi, “Circuit design of an adjustable neuron activation function and its derivative,” Electronics Letters, vol. 36, no. 6, pp. 553–555, 2000.
 [223] ——, “Circuit realization of a programmable neuron transfer function and its derivative,” in ijcnn. IEEE, 2000, p. 4047.
 [224] A. Armato, L. Fanucci, E. P. Scilingo, and D. De Rossi, “Lowerror digital hardware implementation of artificial neuron activation functions and their derivative,” Microprocessors and Microsystems, vol. 35, no. 6, pp. 557–567, 2011.
 [225] K. Basterretxea, J. Tarela, and I. Del Campo, “Approximation of sigmoid function and the derivative for hardware implementation of artificial neurons,” IEE Proceedings - Circuits, Devices and Systems, vol. 151, no. 1, pp. 18–24, 2004.
 [226] V. Beiu, J. Peperstraete, J. Vandewalle, and R. Lauwereins, “Close approximations of sigmoid functions by sums of steps for vlsi implementation of neural networks,” Sci. Ann. Cuza Univ., vol. 3, pp. 5–34, 1994.
 [227] P. Murtagh and A. Tsoi, “Implementation issues of sigmoid function and its derivative for vlsi digital neural networks,” IEE Proceedings E (Computers and Digital Techniques), vol. 139, no. 3, pp. 207–214, 1992.
 [228] K. Al-Ruwaihi, “Cmos analogue neurone circuit with programmable activation functions utilising mos transistors with optimised process/device parameters,” IEE Proceedings - Circuits, Devices and Systems, vol. 144, no. 6, pp. 318–322, 1997.
 [229] S. Lee and K. Lau, “Low power building block for artificial neural networks,” Electronics Letters, vol. 31, no. 19, pp. 1618–1619, 1995.
 [230] A. Deshmukh, J. Morghade, A. Khera, and P. Bajaj, “Binary neural networks–a cmos design approach,” in Knowledge-Based Intelligent Information and Engineering Systems. Springer, 2005, pp. 1291–1296.
 [231] T. Yamakawa, “Silicon implementation of a fuzzy neuron,” Fuzzy Systems, IEEE Transactions on, vol. 4, no. 4, pp. 488–501, 1996.
 [232] M. Abo-Elsoud, “Analog circuits for electronic neural network,” in Circuits and Systems, 1992. Proceedings of the 35th Midwest Symposium on. IEEE, 1992, pp. 5–8.
 [233] P. W. Hollis and J. J. Paulos, “An analog bicmos hopfield neuron,” in Analog VLSI Neural Networks. Springer, 1993, pp. 11–17.
 [234] B. Liu, S. Konduri, R. Minnich, and J. Frenzel, “Implementation of pulsed neural networks in cmos vlsi technology,” in Proceedings of the 4th WSEAS International Conference on Signal Processing, Robotics and Automation. World Scientific and Engineering Academy and Society (WSEAS), 2005, p. 20.
 [235] A. K. Friesz, A. C. Parker, C. Zhou, K. Ryu, J. M. Sanders, H.-S. P. Wong, and J. Deng, “A biomimetic carbon nanotube synapse circuit,” in Biomedical Engineering Society (BMES) Annual Fall Meeting, vol. 2, no. 8, 2007, p. 29.
 [236] C. Gordon, E. Farquhar, and P. Hasler, “A family of floating-gate adapting synapses based upon transistor channel models,” in Circuits and Systems, 2004. ISCAS’04. Proceedings of the 2004 International Symposium on, vol. 1. IEEE, 2004, pp. I–317.
 [237] C. Gordon, A. Preyer, K. Babalola, R. J. Butera, and P. Hasler, “An artificial synapse for interfacing to biological neurons,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [238] A. Kazemi, A. Ahmadi, S. Alirezaee, and M. Ahmadi, “A modified synapse model for neuromorphic circuits,” in 2016 IEEE 7th Latin American Symposium on Circuits & Systems (LASCAS). IEEE, 2016, pp. 67–70.
 [239] H. You and D. Wang, “Neuromorphic implementation of attractor dynamics in decision circuit with nmdars,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 369–372.
 [240] E. Lazaridis, E. M. Drakakis, and M. Barahona, “A biomimetic cmos synapse,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [241] S. Thanapitak and C. Toumazou, “A bionics chemical synapse,” IEEE transactions on biomedical circuits and systems, vol. 7, no. 3, pp. 296–306, 2013.
 [242] M. Noack, C. Mayr, J. Partzsch, and R. Schuffny, “Synapse dynamics in cmos derived from a model of neurotransmitter release,” in Circuit Theory and Design (ECCTD), 2011 20th European Conference on. IEEE, 2011, pp. 198–201.
 [243] S. Pradyumna and S. Rathod, “Analysis of cmos inhibitory synapse with varying neurotransmitter concentration, reuptake time and spread delay,” in VLSI Design and Test (VDAT), 2015 19th International Symposium on. IEEE, 2015, pp. 1–5.
 [244] ——, “Analysis of cmos synapse generating excitatory postsynaptic potential using dc control voltages,” in Communication Technologies (GCCT), 2015 Global Conference on. IEEE, 2015, pp. 433–436.
 [245] J. H. Wijekoon and P. Dudek, “Analogue cmos circuit implementation of a dopamine modulated synapse,” in Circuits and Systems (ISCAS), 2011 IEEE International Symposium on. IEEE, 2011, pp. 877–880.
 [246] C. Bartolozzi and G. Indiveri, “Synaptic dynamics in analog vlsi,” Neural computation, vol. 19, no. 10, pp. 2581–2603, 2007.
 [247] B. V. Benjamin, J. V. Arthur, P. Gao, P. Merolla, and K. Boahen, “A superposable silicon synapse with programmable reversal potential,” in Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE. IEEE, 2012, pp. 771–774.
 [248] M. Noack, M. Krause, C. Mayr, J. Partzsch, and R. Schuffny, “Vlsi implementation of a conductance-based multi-synapse using switched-capacitor circuits,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 850–853.
 [249] C. Rasche and R. J. Douglas, “Silicon synaptic conductances,” Journal of computational neuroscience, vol. 7, no. 1, pp. 33–39, 1999.
 [250] R. Z. Shi and T. K. Horiuchi, “A summating, exponentially-decaying cmos synapse for spiking neural systems,” in Advances in Neural Information Processing Systems, 2003.
 [251] T. Yu, S. Joshi, V. Rangan, and G. Cauwenberghs, “Subthreshold mos dynamic translinear neural and synaptic conductance,” in Neural Engineering (NER), 2011 5th International IEEE/EMBS Conference on. IEEE, 2011, pp. 68–71.
 [252] S.-J. Choi, G.-B. Kim, K. Lee, K.-H. Kim, W.-Y. Yang, S. Cho, H.-J. Bae, D.-S. Seo, S.-I. Kim, and K.-J. Lee, “Synaptic behaviors of a single metal–oxide–metal resistive device,” Applied Physics A, vol. 102, no. 4, pp. 1019–1025, 2011.
 [253] T. Chou, J.-C. Liu, L.-W. Chiu, I. Wang, C.-M. Tsai, T.-H. Hou et al., “Neuromorphic pattern learning using hbm electronic synapse with excitatory and inhibitory plasticity,” in VLSI Technology, Systems and Application (VLSI-TSA), 2015 International Symposium on. IEEE, 2015, pp. 1–2.
 [254] E. Covi, S. Brivio, M. Fanciulli, and S. Spiga, “Synaptic potentiation and depression in Al:HfO2-based memristor,” Microelectronic Engineering, vol. 147, pp. 41–44, 2015.
 [255] S. Desbief, A. Kyndiah, D. Guerin, D. Gentili, M. Murgia, S. Lenfant, F. Alibart, T. Cramer, F. Biscarini, and D. Vuillaume, “Low voltage and time constant organic synapse-transistor,” Organic Electronics, vol. 21, pp. 47–53, 2015.
 [256] R. Gopalakrishnan and A. Basu, “Robust doublet stdp in a floating-gate synapse,” in Neural Networks (IJCNN), 2014 International Joint Conference on. IEEE, 2014, pp. 4296–4301.
 [257] S.-C. Liu, “Analog vlsi circuits for short-term dynamic synapses,” EURASIP Journal on Applied Signal Processing, vol. 2003, pp. 620–628, 2003.
 [258] M. Noack, C. Mayr, J. Partzsch, M. Schultz, and R. Schüffny, “A switched-capacitor implementation of short-term synaptic dynamics,” in Mixed Design of Integrated Circuits and Systems (MIXDES), 2012 Proceedings of the 19th International Conference. IEEE, 2012, pp. 214–218.
 [259] S. Ramakrishnan, P. E. Hasler, and C. Gordon, “Floating gate synapses with spike-time-dependent plasticity,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 5, no. 3, pp. 244–252, 2011.
 [260] M. Suri, V. Sousa, L. Perniola, D. Vuillaume, and B. DeSalvo, “Phase change memory for synaptic plasticity application in neuromorphic systems,” in Neural Networks (IJCNN), The 2011 International Joint Conference on. IEEE, 2011, pp. 619–624.
 [261] M. Suri, O. Bichler, Q. Hubert, L. Perniola, V. Sousa, C. Jahan, D. Vuillaume, C. Gamrat, and B. DeSalvo, “Interface engineering of pcm for improved synaptic performance in neuromorphic systems,” in Memory Workshop (IMW), 2012 4th IEEE International. IEEE, 2012, pp. 1–4.
 [262] A. D. Tete, A. Deshmukh, P. Bajaj, and A. G. Keskar, “Design of dynamic synapse circuits with vlsi design approach,” in Emerging Trends in Engineering and Technology (ICETET), 2010 3rd International Conference on. IEEE, 2010, pp. 707–711.
 [263] Y. Dan and M.-m. Poo, “Spike timing-dependent plasticity of neural circuits,” Neuron, vol. 44, no. 1, pp. 23–30, 2004.
 [264] S. Afshar, L. George, C. S. Thakur, J. Tapson, A. van Schaik, P. de Chazal, and T. J. Hamilton, “Turn down that noise: Synaptic encoding of afferent snr in a single spiking neuron,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 9, no. 2, pp. 188–196, 2015.
 [265] S. Ambrogio, S. Balatti, F. Nardi, S. Facchinetti, and D. Ielmini, “Spike-timing dependent plasticity in a transistor-selected resistive switching memory,” Nanotechnology, vol. 24, no. 38, p. 384012, 2013.
 [266] S. Ambrogio, S. Balatti, V. Milo, R. Carboni, Z.-Q. Wang, A. Calderoni, N. Ramaswamy, and D. Ielmini, “Neuromorphic learning and recognition with one-transistor-one-resistor synapses and bistable metal oxide rram,” IEEE Transactions on Electron Devices, vol. 63, no. 4, pp. 1508–1515, 2016.
 [267] S. Ambrogio, S. Balatti, V. Milo, R. Carboni, Z. Wang, A. Calderoni, N. Ramaswamy, and D. Ielmini, “Novel rram-enabled 1t1r synapse capable of low-power stdp via burst-mode communication and real-time unsupervised machine learning,” in VLSI Technology, 2016 IEEE Symposium on. IEEE, 2016, pp. 1–2.
 [268] M. R. Azghadi, O. Kavehei, S. Al-Sarawi, N. Iannella, and D. Abbott, “Novel vlsi implementation for triplet-based spike-timing dependent plasticity,” in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2011 Seventh International Conference on. IEEE, 2011, pp. 158–162.
 [269] M. R. Azghadi, S. Al-Sarawi, D. Abbott, and N. Iannella, “A neuromorphic vlsi design for spike timing and rate based synaptic plasticity,” Neural Networks, vol. 45, pp. 70–82, 2013.
 [270] M. R. Azghadi, S. Al-Sarawi, N. Iannella, and D. Abbott, “A new compact analog vlsi model for spike timing dependent plasticity,” in Very Large Scale Integration (VLSI-SoC), 2013 IFIP/IEEE 21st International Conference on. IEEE, 2013, pp. 7–12.
 [271] M. R. Azghadi, N. Iannella, S. Al-Sarawi, and D. Abbott, “Tunable low energy, compact and high performance neuromorphic circuit for spike-based synaptic plasticity,” PLoS ONE, vol. 9, no. 2, p. e88326, 2014.
 [272] S. A. Bamford, A. F. Murray, and D. J. Willshaw, “Spike-timing-dependent plasticity with weight dependence evoked from physical constraints,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 6, no. 4, pp. 385–398, 2012.
 [273] C. Bartolozzi and G. Indiveri, “Global scaling of synaptic efficacy: Homeostasis in silicon synapses,” Neurocomputing, vol. 72, no. 4, pp. 726–731, 2009.
 [274] M. Boegerhausen, P. Suter, and S.-C. Liu, “Modeling short-term synaptic depression in silicon,” Neural Computation, vol. 15, no. 2, pp. 331–348, 2003.
 [275] A. Cassidy, A. G. Andreou, and J. Georgiou, “A combinational digital logic approach to stdp,” in Circuits and Systems (ISCAS), 2011 IEEE International Symposium on. IEEE, 2011, pp. 673–676.
 [276] S. Dytckov, M. Daneshtalab, M. Ebrahimi, H. Anwar, J. Plosila, and H. Tenhunen, “Efficient stdp microarchitecture for silicon spiking neural networks,” in Digital System Design (DSD), 2014 17th Euromicro Conference on. IEEE, 2014, pp. 496–503.
 [277] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D. J. Amit, “Spikedriven synaptic plasticity: theory, simulation, vlsi implementation,” Neural Computation, vol. 12, no. 10, pp. 2227–2258, 2000.
 [278] R. Gopalakrishnan and A. Basu, “Triplet spike time dependent plasticity in a floating-gate synapse,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on. IEEE, 2015, pp. 710–713.
 [279] ——, “On the non-stdp behavior and its remedy in a floating-gate synapse,” Neural Networks and Learning Systems, IEEE Transactions on, vol. 26, no. 10, pp. 2596–2601, 2015.
 [280] Y. Hayashi, K. Saeki, and Y. Sekine, “A synaptic circuit of a pulse-type hardware neuron model with stdp,” in International Congress Series, vol. 1301. Elsevier, 2007, pp. 132–135.
 [281] T. Hindo, “Weight updating floating-gate synapse,” Electronics Letters, vol. 50, no. 17, pp. 1190–1191, 2014.
 [282] G. Indiveri, “Circuits for bistable spike-timing-dependent plasticity neuromorphic vlsi synapses,” Advances in Neural Information Processing Systems, vol. 15, 2002.
 [283] Y. Irizarry-Valle, A. C. Parker, and N. M. Grzywacz, “An adaptable cmos depressing synapse with detection of changes in input spike rate,” in Circuits and Systems (LASCAS), 2014 IEEE 5th Latin American Symposium on. IEEE, 2014, pp. 1–4.
 [284] V. Kornijcuk, O. Kavehei, H. Lim, J. Y. Seok, S. K. Kim, I. Kim, W.-S. Lee, B. J. Choi, and D. S. Jeong, “Multiprotocol-induced plasticity in artificial synapses,” Nanoscale, vol. 6, no. 24, pp. 15 151–15 160, 2014.
 [285] S.-C. Liu and R. Mockel, “Temporally learning floating-gate vlsi synapses,” in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on. IEEE, 2008, pp. 2154–2157.
 [286] C. Mayr, M. Noack, J. Partzsch, and R. Schuffny, “Replicating experimental spike and rate based neural learning in cmos,” in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on. IEEE, 2010, pp. 105–108.
 [287] C. Mayr, J. Partzsch, M. Noack, and R. Schüffny, “Live demonstration: Multiple-timescale plasticity in a neuromorphic system,” in ISCAS, 2013, pp. 666–670.
 [288] J. Meador, D. Watola, and N. Nintunze, “Vlsi implementation of a pulse hebbian learning law,” in Circuits and Systems, 1991., IEEE International Sympoisum on. IEEE, 1991, pp. 1287–1290.
 [289] S. Mitra, G. Indiveri, and R. Etienne-Cummings, “Synthesis of log-domain integrators for silicon synapses with global parametric control,” in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on. IEEE, 2010, pp. 97–100.
 [290] G. Narasimman, S. Roy, X. Fong, K. Roy, C.-H. Chang, and A. Basu, “A low-voltage, low power stdp synapse implementation using domain-wall magnets for spiking neural networks,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 914–917.
 [291] M. Noack, J. Partzsch, C. Mayr, S. Henker, and R. Schuffny, “Biology-derived synaptic dynamics and optimized system architecture for neuromorphic hardware,” in Mixed Design of Integrated Circuits and Systems (MIXDES), 2010 Proceedings of the 17th International Conference. IEEE, 2010, pp. 219–224.
 [292] G. Rachmuth, H. Z. Shouval, M. F. Bear, and C.-S. Poon, “A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity,” Proceedings of the National Academy of Sciences, vol. 108, no. 49, pp. E1266–E1274, 2011.
 [293] H. Ramachandran, S. Weber, S. A. Aamir, and E. Chicca, “Neuromorphic circuits for shortterm plasticity with recovery control,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 858–861.
 [294] K. Saeki, R. Shimizu, and Y. Sekine, “Pulse-type hardware neural network with two time windows in stdp,” in Advances in Neuro-Information Processing. Springer, 2009, pp. 877–884.
 [295] A. Shahim-Aeen and G. Karimi, “Triplet-based spike timing dependent plasticity (tstdp) modeling using vhdl-ams,” Neurocomputing, vol. 149, pp. 1440–1444, 2015.
 [296] A. W. Smith, L. McDaid, and S. Hall, “A compact spike-timing-dependent-plasticity circuit for floating gate weight implementation,” Neurocomputing, vol. 124, pp. 210–217, 2014.
 [297] G. Srinivasan, A. Sengupta, and K. Roy, “Magnetic tunnel junction based long-term short-term stochastic synapse for a spiking neural network with on-chip stdp learning,” Scientific Reports, vol. 6, 2016.
 [298] D. Sumislawska, N. Qiao, M. Pfeiffer, and G. Indiveri, “Wide dynamic range weights and biologically realistic synaptic dynamics for spike-based learning circuits,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 2491–2494.
 [299] M. Suri, O. Bichler, D. Querlioz, G. Palma, E. Vianello, D. Vuillaume, C. Gamrat, and B. DeSalvo, “Cbram devices as binary synapses for low-power stochastic neuromorphic systems: Auditory (cochlea) and visual (retina) cognitive processing applications,” in Electron Devices Meeting (IEDM), 2012 IEEE International. IEEE, 2012, pp. 10–3.
 [300] H. Wu, Z. Xu, S. Hu, Q. Yu, and Y. Liu, “Circuit implementation of spike time dependent plasticity (stdp) for artificial synapse,” in Electron Devices and Solid State Circuit (EDSSC), 2012 IEEE International Conference on. IEEE, 2012, pp. 1–2.
 [301] Y. Chen, S. Hall, L. McDaid, O. Buiu, and P. Kelly, “A silicon synapse based on a charge transfer device for spiking neural network application,” in Advances in Neural Networks - ISNN 2006. Springer, 2006, pp. 1366–1373.
 [302] ——, “On the design of a low power compact spiking neuron cell based on charge-coupled synapses,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 1511–1517.
 [303] Y. Chen, L. McDaid, S. Hall, and P. Kelly, “A programmable facilitating synapse device,” in Neural Networks, 2008. IJCNN 2008.(IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on. IEEE, 2008, pp. 1615–1620.
 [304] A. Ghani, L. J. McDaid, A. Belatreche, P. Kelly, S. Hall, T. Dowrick, S. Huang, J. Marsland, and A. Smith, “Evaluating the training dynamics of a cmos based synapse,” in Neural Networks (IJCNN), The 2011 International Joint Conference on. IEEE, 2011, pp. 1162–1168.
 [305] A. Ghani, L. McDaid, A. Belatreche, S. Hall, S. Huang, J. Marsland, T. Dowrick, and A. Smith, “Evaluating the generalisation capability of a cmos based synapse,” Neurocomputing, vol. 83, pp. 188–197, 2012.
 [306] C. Bartolozzi and G. Indiveri, “Silicon synaptic homeostasis,” Brain Inspired Cognitive Systems 2006, pp. 1–4, 2006.
 [307] S.C. Liu and B. A. Minch, “Silicon synaptic adaptation mechanisms for homeostasis and contrast gain control,” Neural Networks, IEEE Transactions on, vol. 13, no. 6, pp. 1497–1503, 2002.
 [308] U. Çilingiroğlu, “Capacitive synapses for microelectronic neural networks,” in Circuits and Systems, 1990., IEEE International Symposium on. IEEE, 1990, pp. 2982–2985.
 [309] H. Chible, “Analog circuit for synapse neural networks vlsi implementation,” in Electronics, Circuits and Systems, 2000. ICECS 2000. The 7th IEEE International Conference on, vol. 2. IEEE, 2000, pp. 1004–1007.
 [310] C. Diorio, P. Hasler, B. A. Minch, and C. A. Mead, “A single-transistor silicon synapse,” Electron Devices, IEEE Transactions on, vol. 43, no. 11, pp. 1972–1980, 1996.
 [311] E. J. Fuller, F. E. Gabaly, F. Léonard, S. Agarwal, S. J. Plimpton, R. B. Jacobs-Gedrim, C. D. James, M. J. Marinella, and A. A. Talin, “Li-ion synaptic transistor for low power analog computing,” Advanced Materials, 2016.
 [312] P. Hasler, C. Diorio, B. A. Minch, and C. Mead, “Single transistor learning synapses,” Advances in neural information processing systems, pp. 817–826, 1995.
 [313] S. Kim, Y.-C. Shin, N. C. Bogineni, and R. Sridhar, “A programmable analog cmos synapse for neural networks,” Analog Integrated Circuits and Signal Processing, vol. 2, no. 4, pp. 345–352, 1992.
 [314] T. McGinnity, B. Roche, L. Maguire, and L. McDaid, “Novel architecture and synapse design for hardware implementations of neural networks,” Computers & electrical engineering, vol. 24, no. 1, pp. 75–87, 1998.
 [315] S. Yu, “Orientation classification by a winner-take-all network with oxide rram based synaptic devices,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 1058–1061.
 [316] E. Vianello, D. Garbin, O. Bichler, G. Piccolboni, G. Molas, B. De Salvo, and L. Perniola, “Multiple binary oxrams as synapses for convolutional neural networks,” in Advances in Neuromorphic Hardware Exploiting Emerging Nanoscale Devices. Springer, 2017, pp. 109–127.
 [317] H. Card and W. Moore, “Implementation of plasticity in mos synapses,” in Artificial Neural Networks, 1989., First IEE International Conference on (Conf. Publ. No. 313). IET, 1989, pp. 33–36.
 [318] H. Card, C. Schneider, and W. Moore, “Hebbian plasticity in mos synapses,” in IEE Proceedings F (Radar and Signal Processing), vol. 138, no. 1. IET, 1991, pp. 13–16.
 [319] V. Srinivasan, J. Dugger, and P. Hasler, “An adaptive analog synapse circuit that implements the least-mean-square learning rule,” in Circuits and Systems, 2005. ISCAS 2005. IEEE International Symposium on. IEEE, 2005, pp. 4441–4444.
 [320] J. Choi, B. J. Sheu, and J.-F. Chang, “A gaussian synapse circuit for analog vlsi neural networks,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 2, no. 1, pp. 129–133, 1994.
 [321] K. Lau and S. Lee, “A programmable cmos gaussian synapse for analogue vlsi neural networks,” International journal of electronics, vol. 83, no. 1, pp. 91–98, 1997.
 [322] S. Lee and K. Lau, “An analog gaussian synapse for artificial neural networks,” in Circuits and Systems, 1995., Proceedings., Proceedings of the 38th Midwest Symposium on, vol. 1. IEEE, 1995, pp. 77–80.
 [323] M. Annunziato, D. Badoni, S. Fusi, and A. Salamon, “Analog vlsi implementation of a spike driven stochastic dynamical synapse,” in ICANN 98. Springer, 1998, pp. 475–480.
 [324] C. Bartolozzi and G. Indiveri, “A neuromorphic selective attention architecture with dynamic synapses and integrate-and-fire neurons,” Proceedings of Brain Inspired Cognitive Systems (BICS 2004), pp. 1–6, 2004.
 [325] C. Bartolozzi, O. Nikolayeva, and G. Indiveri, “Implementing homeostatic plasticity in vlsi networks of spiking neurons,” in Electronics, Circuits and Systems, 2008. ICECS 2008. 15th IEEE International Conference on. IEEE, 2008, pp. 682–685.
 [326] J. Binas, G. Indiveri, and M. Pfeiffer, “Spiking analog vlsi neuron assemblies as constraint satisfaction problem solvers,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 2094–2097.
 [327] L. Carota, “Dynamics of vlsi analog decoupled neurons,” Neurocomputing, vol. 82, pp. 234–237, 2012.
 [328] E. Chicca and S. Fusi, “Stochastic synaptic plasticity in deterministic avlsi networks of spiking neurons,” ser. Proceedings of the World Congress on Neuroinformatics, F. Rattay, Ed. ARGESIM/ASIM Verlag, 2001, pp. 468–477. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.12.2064
 [329] E. Chicca, G. Indiveri, and R. Douglas, “An adaptive silicon synapse,” vol. 1, 2003, pp. I–81–I–84. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1205505
 [330] C.-H. Chien, S.-C. Liu, and A. Steimer, “A neuromorphic vlsi circuit for spike-based random sampling,” IEEE Transactions on Emerging Topics in Computing, 2015.
 [331] J. Fieres, J. Schemmel, and K. Meier, “Realizing biological spiking network models in a configurable wafer-scale hardware system,” in Neural Networks, 2008. IJCNN 2008.(IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on. IEEE, 2008, pp. 969–976.
 [332] S. Fusi and M. Mattia, “Collective behavior of networks with linear (vlsi) integrate-and-fire neurons,” Neural Computation, vol. 11, no. 3, pp. 633–652, 1999.
 [333] V. S. Ghaderi, D. Song, J. Choma, and T. W. Berger, “Nonlinear cognitive signal processing in ultra-low-power programmable analog hardware,” Circuits and Systems II: Express Briefs, IEEE Transactions on, vol. 62, no. 2, pp. 124–128, 2015.
 [334] M. Giulioni, P. Camilleri, M. Mattia, V. Dante, J. Braun, and P. Del Giudice, “Robust working memory in an asynchronously spiking neural network realized with neuromorphic vlsi,” Frontiers in neuroscience, vol. 5, 2011.
 [335] M. A. Glover, A. Hamilton, and L. S. Smith, “An analog vlsi integrate-and-fire neural network for sound segmentation,” in NC, 1998, pp. 86–92.
 [336] M. Glover, A. Hamilton, and L. S. Smith, “Using analogue vlsi leaky integrate-and-fire neurons in a sound analysis system,” in Microelectronics for Neural, Fuzzy and Bio-Inspired Systems, 1999. MicroNeuro’99. Proceedings of the Seventh International Conference on. IEEE, 1999, pp. 90–95.
 [337] ——, “Analogue vlsi leaky integrate-and-fire neurons and their use in a sound analysis system,” Analog Integrated Circuits and Signal Processing, vol. 30, no. 2, pp. 91–100, 2002.
 [338] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, “Analog vlsi spiking neural network with address domain probabilistic synapses,” in Circuits and Systems, 2001. ISCAS 2001. The 2001 IEEE International Symposium on, vol. 3. IEEE, 2001, pp. 241–244.
 [339] ——, “Probabilistic synaptic weighting in a reconfigurable network of vlsi integrate-and-fire neurons,” Neural Networks, vol. 14, no. 6, pp. 781–793, 2001.
 [340] F. Grassia, T. Lévi, J. Tomas, S. Renaud, and S. Saïghi, “A neuromimetic spiking neural network for simulating cortical circuits,” in Information Sciences and Systems (CISS), 2011 45th Annual Conference on. IEEE, 2011, pp. 1–6.
 [341] P. Häfliger, M. Mahowald, and L. Watts, “A spike based learning neuron in analog vlsi,” in Advances in neural information processing systems, 1997, pp. 692–698.
 [342] D. Hajtáš, D. Ďuračková, and G. Benyon-Tinker, “Switched capacitor-based integrate-and-fire neural network,” in The State of the Art in Computational Intelligence. Springer, 2000, pp. 50–55.
 [343] J. Huo and A. Murray, “The role of membrane threshold and rate in stdp silicon neuron circuit simulation,” in Artificial Neural Networks: Formal Models and Their Applications–ICANN 2005. Springer, 2005, pp. 1009–1014.
 [344] G. Indiveri, “Modeling selective attention using a neuromorphic analog vlsi device,” Neural computation, vol. 12, no. 12, pp. 2857–2880, 2000.
 [345] ——, “A 2d neuromorphic vlsi architecture for modeling selective attention,” in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, vol. 4. IEEE, 2000, pp. 208–213.
 [346] ——, “A neuromorphic vlsi device for implementing 2d selective attention systems,” Neural Networks, IEEE Transactions on, vol. 12, no. 6, pp. 1455–1463, 2001.
 [347] ——, “Neuromorphic bistable vlsi synapses with spike-timing-dependent plasticity,” in NIPS, 2002, pp. 1091–1098.
 [348] ——, “Synaptic plasticity and spike-based computation in vlsi networks of integrate-and-fire neurons,” Neural Information Processing–Letters and Reviews, vol. 11, no. 461, pp. 135–146, 2007.
 [349] A. Joubert, B. Belhadj, and R. Héliot, “A robust and compact 65 nm lif analog neuron for computational purposes,” in New Circuits and Systems Conference (NEWCAS), 2011 IEEE 9th International. IEEE, 2011, pp. 9–12.
 [350] A. Joubert, B. Belhadj, O. Temam, and R. Heliot, “Hardware spiking neurons design: Analog or digital?” in Neural Networks (IJCNN), The 2012 International Joint Conference on. IEEE, 2012, pp. 1–5.
 [351] Y. Kanazawa, T. Asai, M. Ikebe, and Y. Amemiya, “A novel cmos circuit for depressing synapse and its application to contrast-invariant pattern classification and synchrony detection,” International Journal of Robotics and Automation, vol. 19, no. 4, pp. 206–212, 2004.
 [352] T. J. Koickal, L. C. Gouveia, and A. Hamilton, “A programmable time event coded circuit block for reconfigurable neuromorphic computing,” in Computational and Ambient Intelligence. Springer, 2007, pp. 430–437.
 [353] ——, “A programmable spike-timing based circuit block for reconfigurable neuromorphic computing,” Neurocomputing, vol. 72, no. 16, pp. 3609–3616, 2009.
 [354] J. Lazzaro, “Low-power silicon spiking neurons and axons,” in Circuits and Systems, 1992. ISCAS’92. Proceedings., 1992 IEEE International Symposium on, vol. 5. IEEE, 1992, pp. 2220–2223.
 [355] P. H. M. Mahowald and L. Watts, “A spike based learning neuron in analog vlsi,” Advances in Neural Information Processing Systems, vol. 9, p. 692, 1997.
 [356] R. Mill, S. Sheik, G. Indiveri, and S. L. Denham, “A model of stimulus-specific adaptation in neuromorphic analog vlsi,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 5, no. 5, pp. 413–419, 2011.
 [357] S. Mitra, S. Fusi, and G. Indiveri, “A vlsi spike-driven dynamic synapse which learns only when necessary,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [358] S. Nease, S. Brink, and P. Hasler, “Stdp-enabled learning on a reconfigurable neuromorphic platform,” in Circuit Theory and Design (ECCTD), 2013 European Conference on. IEEE, 2013, pp. 1–4.
 [359] E. Neftci, J. Binas, U. Rutishauser, E. Chicca, G. Indiveri, and R. J. Douglas, “Synthesizing cognition in neuromorphic electronic systems,” Proceedings of the National Academy of Sciences, vol. 110, no. 37, pp. E3468–E3476, 2013.
 [360] M. Oster, A. M. Whatley, S.-C. Liu, and R. J. Douglas, “A hardware/software framework for real-time spiking systems,” in Artificial Neural Networks: Biological Inspirations–ICANN 2005. Springer, 2005, pp. 161–166.
 [361] F. Perez-Peña, A. Linares-Barranco, and E. Chicca, “An approach to motor control for spike-based neuromorphic robotics,” in Biomedical Circuits and Systems Conference (BioCAS), 2014 IEEE. IEEE, 2014, pp. 528–531.
 [362] T. Pfeil, A.-C. Scherzer, J. Schemmel, and K. Meier, “Neuromorphic learning towards nano second precision,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–5.
 [363] T. Pfeil, J. Jordan, T. Tetzlaff, A. Grübl, J. Schemmel, M. Diesmann, and K. Meier, “Effect of heterogeneity on decorrelation mechanisms in spiking neural networks: A neuromorphic-hardware study,” Physical Review X, vol. 6, no. 2, p. 021023, 2016.
 [364] D. Querlioz and V. Trauchessec, “Stochastic resonance in an analog current-mode neuromorphic circuit,” in Circuits and Systems (ISCAS), 2013 IEEE International Symposium on. IEEE, 2013, pp. 1596–1599.
 [365] S. Renaud, J. Tomas, Y. Bornat, A. Daouzli, and S. Saïghi, “Neuromimetic ics with analog cores: an alternative for simulating spiking neural networks,” in Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on. IEEE, 2007, pp. 3355–3358.
 [366] H. K. Riis and P. Hafliger, “Spike based learning with weak multilevel static memory,” in Circuits and Systems, 2004. ISCAS’04. Proceedings of the 2004 International Symposium on, vol. 5. IEEE, 2004, pp. V–393.
 [367] J. Rodrigues de Oliveira-Neto, F. Duque Belfort, R. Cavalcanti-Neto, and J. Ranhel, “Magnitude comparison in analog spiking neural assemblies,” in Neural Networks (IJCNN), 2014 International Joint Conference on. IEEE, 2014, pp. 3186–3191.
 [368] S. Saïghi, T. Levi, B. Belhadj, O. Malot, and J. Tomas, “Hardware system for biologically realistic, plastic, and real-time spiking neural network simulations,” in Neural Networks (IJCNN), The 2010 International Joint Conference on. IEEE, 2010, pp. 1–7.
 [369] J. Schemmel, K. Meier, and E. Mueller, “A new vlsi model of neural microcircuits including spike time dependent plasticity,” in Neural Networks, 2004. Proceedings. 2004 IEEE International Joint Conference on, vol. 3. IEEE, 2004, pp. 1711–1716.
 [370] J. Schemmel, D. Bruderle, K. Meier, and B. Ostendorf, “Modeling synaptic plasticity within networks of highly accelerated i&f neurons,” in Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on. IEEE, 2007, pp. 3367–3370.
 [371] J. Schemmel, J. Fieres, and K. Meier, “Wafer-scale integration of analog neural networks,” in Neural Networks, 2008. IJCNN 2008.(IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on. IEEE, 2008, pp. 431–438.
 [372] J. Schemmel, A. Grubl, S. Hartmann, A. Kononov, C. Mayr, K. Meier, S. Millner, J. Partzsch, S. Schiefer, S. Scholze et al., “Live demonstration: A scaled-down version of the brainscales wafer-scale neuromorphic system,” in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on. IEEE, 2012, pp. 702–702.
 [373] J. Schreiter, U. Ramacher, A. Heittmann, D. Matolin, and R. Schüffny, “Analog implementation for networks of integrate-and-fire neurons with adaptive local connectivity,” in Proceedings of the 2002 12th IEEE Workshop on Neural Networks for Signal Processing. Citeseer, 2002, pp. 657–666.
 [374] L. S. Smith and A. Hamilton, Neuromorphic systems: engineering silicon from neurobiology. World Scientific, 1998, vol. 10.
 [375] H. Tanaka, T. Morie, and K. Aihara, “A cmos spiking neural network circuit with symmetric/asymmetric stdp function,” IEICE transactions on fundamentals of electronics, communications and computer sciences, vol. 92, no. 7, pp. 1690–1698, 2009.
 [376] G. M. Tovar, T. Hirose, T. Asai, and Y. Amemiya, “Precisely-timed synchronization among spiking neural circuits on analog vlsis,” in Proc. the 2006 RISP International Workshop on Nonlinear Circuits and Signal Processing. Citeseer, 2006, pp. 62–65.
 [377] G. M. Tovar, E. S. Fukuda, T. Asai, T. Hirose, and Y. Amemiya, “Analog cmos circuits implementing neural segmentation model based on symmetric stdp learning,” in Neural Information Processing. Springer, 2008, pp. 117–126.
 [378] A. Utagawa, T. Hirose, and Y. Amemiya, “An inhibitory neural-network circuit exhibiting noise shaping with subthreshold mos neuron circuits,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. 90, no. 10, pp. 2108–2115, 2007.
 [379] A. van Schaik, “Building blocks for electronic spiking neural networks,” Neural networks, vol. 14, no. 6, pp. 617–628, 2001.
 [380] R. J. Vogelstein, F. Tenore, R. Philipp, M. S. Adlerstein, D. H. Goldberg, and G. Cauwenberghs, “Spike timing-dependent plasticity in the address domain,” in Advances in Neural Information Processing Systems, 2002, pp. 1147–1154.
 [381] Y. Wang and S.-C. Liu, “Programmable synaptic weights for an avlsi network of spiking neurons,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [382] ——, “Input evoked nonlinearities in silicon dendritic circuits,” in Circuits and Systems, 2009. ISCAS 2009. IEEE International Symposium on. IEEE, 2009, pp. 2894–2897.
 [383] ——, “Motion detection using an avlsi network of spiking neurons.” in ISCAS, 2010, pp. 93–96.
 [384] J. H. Wijekoon and P. Dudek, “Heterogeneous neurons and plastic synapses in a reconfigurable cortical neural network ic,” in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on. IEEE, 2012, pp. 2417–2420.
 [385] Z. Yang and A. F. Murray, “A biologically plausible neuromorphic system for object recognition and depth analysis.” in ESANN, 2004, pp. 157–162.
 [386] L. Zhang, Q. Lai, and Y. Chen, “Configurable neural phase shifter with spike-timing-dependent plasticity,” Electron Device Letters, IEEE, vol. 31, no. 7, pp. 716–718, 2010.
 [387] C. Zhao, J. Li, L. Liu, L. S. Koutha, J. Liu, and Y. Yi, “Novel spike based reservoir node design with high performance spike delay loop,” in Proceedings of the 3rd ACM International Conference on Nanoscale Computing and Communication. ACM, 2016, p. 14.
 [388] K. Ahmed, A. Shrestha, Y. Wang, and Q. Qiu, “System design for in-hardware stdp learning and spiking based probabilistic inference,” in VLSI (ISVLSI), 2016 IEEE Computer Society Annual Symposium on. IEEE, 2016, pp. 272–277.
 [389] M. Ambroise, T. Levi, Y. Bornat, and S. Saighi, “Biorealistic spiking neural network on fpga,” in Information Sciences and Systems (CISS), 2013 47th Annual Conference on. IEEE, 2013, pp. 1–6.
 [390] A. Banerjee, S. Kar, S. Roy, A. Bhaduri, and A. Basu, “A current-mode spiking neural classifier with lumped dendritic nonlinearity,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on. IEEE, 2015, pp. 714–717.
 [391] M. E. Dean and C. Daffron, “A vlsi design for neuromorphic computing,” in VLSI (ISVLSI), 2016 IEEE Computer Society Annual Symposium on. IEEE, 2016, pp. 87–92.
 [392] J. Georgiou, A. G. Andreou, and P. O. Pouliquen, “A mixed analog/digital asynchronous processor for cortical computations in 3d soi-cmos,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [393] D. Hu, X. Zhang, Z. Xu, S. Ferrari, and P. Mazumder, “Digital implementation of a spiking neural network (snn) capable of spike-timing-dependent plasticity (stdp) learning,” in Nanotechnology (IEEE-NANO), 2014 IEEE 14th International Conference on. IEEE, 2014, pp. 873–876.
 [394] N. Imam, K. Wecker, J. Tse, R. Karmazin, and R. Manohar, “Neural spiking dynamics in asynchronous digital circuits,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–8.
 [395] J. K. Kim, P. Knag, T. Chen, and Z. Zhang, “A 640m pixel/s 3.65 mw sparse event-driven neuromorphic object recognition processor with on-chip learning,” in VLSI Circuits (VLSI Circuits), 2015 Symposium on. IEEE, 2015, pp. C50–C51.
 [396] S. R. Kulkarni and B. Rajendran, “Scalable digital cmos architecture for spike based supervised learning,” in Engineering Applications of Neural Networks. Springer, 2015, pp. 149–158.
 [397] A. Nere, U. Olcese, D. Balduzzi, and G. Tononi, “A neuromorphic architecture for object recognition and motion anticipation using burst-stdp,” PloS one, vol. 7, no. 5, p. e36958, 2012.
 [398] A. Nere, A. Hashmi, M. Lipasti, and G. Tononi, “Bridging the semantic gap: Emulating biological neuronal behaviors with simple digital neurons,” in High Performance Computer Architecture (HPCA2013), 2013 IEEE 19th International Symposium on. IEEE, 2013, pp. 472–483.
 [399] D. Roclin, O. Bichler, C. Gamrat, S. J. Thorpe, and J.-O. Klein, “Design study of efficient digital order-based stdp neuron implementations for extracting temporal features,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–7.
 [400] T. Schoenauer, S. Atasoy, N. Mehrtash, and H. Klar, “Simulation of a digital neurochip for spiking neural networks,” in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, vol. 4. IEEE, 2000, pp. 490–495.
 [401] ——, “Neuropipe-chip: A digital neuro-processor for spiking neural networks,” Neural Networks, IEEE Transactions on, vol. 13, no. 1, pp. 205–213, 2002.
 [402] J.-s. Seo and M. Seok, “Digital cmos neuromorphic processor design featuring unsupervised online learning,” in Very Large Scale Integration (VLSI-SoC), 2015 IFIP/IEEE International Conference on. IEEE, 2015, pp. 49–51.
 [403] J. Shen, D. Ma, Z. Gu, M. Zhang, X. Zhu, X. Xu, Q. Xu, Y. Shen, and G. Pan, “Darwin: a neuromorphic hardware coprocessor based on spiking neural networks,” Science China Information Sciences, pp. 1–5, 2016.
 [404] B. Zhang, Z. Jiang, Q. Wang, J.-s. Seo, and M. Seok, “A neuromorphic neural spike clustering processor for deep-brain sensing and stimulation systems,” in Low Power Electronics and Design (ISLPED), 2015 IEEE/ACM International Symposium on. IEEE, 2015, pp. 91–97.
 [405] B. Abinaya and S. Sophia, “An event based cmos quad bilateral combination with asynchronous sram architecture based neural network using low power,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on. IEEE, 2015, pp. 995–999.
 [406] A. Afifi, A. Ayatollahi, and F. Raissi, “Cmol implementation of spiking neurons and spike-timing dependent plasticity,” International Journal of Circuit Theory and Applications, vol. 39, no. 4, pp. 357–372, 2011.
 [407] J. Arthur and K. Boahen, “Learning in silicon: timing is everything,” Advances in neural information processing systems, vol. 18, p. 75, 2006.
 [408] J. V. Arthur and K. A. Boahen, “Synchrony in silicon: The gamma rhythm,” Neural Networks, IEEE Transactions on, vol. 18, no. 6, pp. 1815–1825, 2007.
 [409] T. Asai, “A neuromorphic cmos family and its application,” in International Congress Series, vol. 1269. Elsevier, 2004, pp. 173–176.
 [410] M. R. Azghadi, S. Moradi, and G. Indiveri, “Programmable neuromorphic circuits for spike-based neural dynamics,” in New Circuits and Systems Conference (NEWCAS), 2013 IEEE 11th International. IEEE, 2013, pp. 1–4.
 [411] M. R. Azghadi, S. Moradi, D. B. Fasnacht, M. S. Ozdas, and G. Indiveri, “Programmable spike-timing-dependent plasticity learning circuits in neuromorphic vlsi architectures,” ACM Journal on Emerging Technologies in Computing Systems (JETC), vol. 12, no. 2, p. 17, 2015.
 [412] F. Boi, T. Moraitis, V. De Feo, F. Diotalevi, C. Bartolozzi, G. Indiveri, and A. Vato, “A bidirectional brain-machine interface featuring a neuromorphic hardware decoder,” Frontiers in Neuroscience, vol. 10, 2016.
 [413] D. Bruderle, J. Bill, B. Kaplan, J. Kremkow, K. Meier, E. Muller, and J. Schemmel, “Simulator-like exploration of cortical network architectures with a mixed-signal vlsi system,” in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on. IEEE, 2010, pp. 2784–2787.
 [414] D. Brüderle, M. A. Petrovici, B. Vogginger, M. Ehrlich, T. Pfeil, S. Millner, A. Grübl, K. Wendt, E. Müller, M.-O. Schwartz et al., “A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems,” Biological cybernetics, vol. 104, no. 4–5, pp. 263–296, 2011.
 [415] E. Chicca, G. Indiveri, and R. J. Douglas, “An event-based vlsi network of integrate-and-fire neurons,” in Circuits and Systems, 2004. ISCAS’04. Proceedings of the 2004 International Symposium on, vol. 5. IEEE, 2004, pp. V–357.
 [416] E. Chicca, P. Lichtsteiner, T. Delbruck, G. Indiveri, and R. J. Douglas, “Modeling orientation selectivity using a neuromorphic multichip system,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, 4 pp.
 [417] E. Chicca, G. Indiveri, and R. J. Douglas, “Context dependent amplification of both rate and event-correlation in a vlsi network of spiking neurons,” in Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference, vol. 19. Mit Press, 2007, p. 257.
 [418] S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao, T. Stewart, C. Eliasmith, and K. Boahen, “Silicon neurons that compute,” in Artificial neural networks and machine learning–ICANN 2012. Springer, 2012, pp. 121–128.
 [419] D. Corneil, D. Sonnleithner, E. Neftci, E. Chicca, M. Cook, G. Indiveri, and R. Douglas, “Real-time inference in a vlsi spiking neural network,” in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on. IEEE, 2012, pp. 2425–2428.
 [420] ——, “Function approximation with uncertainty propagation in a vlsi spiking neural network,” in Neural Networks (IJCNN), The 2012 International Joint Conference on. IEEE, 2012, pp. 1–7.
 [421] F. Corradi, C. Eliasmith, and G. Indiveri, “Mapping arbitrary mathematical functions and dynamical systems to neuromorphic vlsi circuits for spike-based neural computation,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 269–272.
 [422] F. Corradi, H. You, M. Giulioni, and G. Indiveri, “Decision making and perceptual bistability in spike-based neuromorphic vlsi systems,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on. IEEE, 2015, pp. 2708–2711.
 [423] J. Cruz-Albrecht, M. Yung, and N. Srinivasa, “Energy-efficient neuron, synapse and stdp integrated circuits,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 6, no. 3, pp. 246–256, 2012.
 [424] F. Folowosele, F. Tenore, A. Russell, G. Orchard, M. Vismer, J. Tapson, and R. E. Cummings, “Implementing a neuromorphic cross-correlation engine with silicon neurons,” in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on. IEEE, 2008, pp. 2162–2165.
 [425] S. Friedmann, N. Frémaux, J. Schemmel, W. Gerstner, and K. Meier, “Reward-based learning under hardware constraints–using a risc processor embedded in a neuromorphic substrate,” Frontiers in neuroscience, vol. 7, 2013.
 [426] C. Gao and D. Hammerstrom, “Cortical models onto cmol and cmos–architectures and performance/price,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 54, no. 11, pp. 2502–2515, 2007.
 [427] C. Gao, M. S. Zaveri, and D. W. Hammerstrom, “Cmos/cmol architectures for spiking cortical column,” in IJCNN, 2008, pp. 2441–2448.
 [428] D. Hajtas and D. Durackova, “The library of building blocks for an ‘integrate & fire’ neural network on a chip,” in Neural Networks, 2004. Proceedings. 2004 IEEE International Joint Conference on, vol. 4. IEEE, 2004, pp. 2631–2636.
 [429] D. Hammerstrom and M. S. Zaveri, “Prospects for building cortex-scale cmol/cmos circuits: a design space exploration,” in NORCHIP, 2009. IEEE, 2009, pp. 1–8.
 [430] I. S. Han, “Mixed-signal neuron-synapse implementation for large-scale neural network,” Neurocomputing, vol. 69, no. 16, pp. 1860–1867, 2006.
 [431] ——, “A pulse-based neural hardware implementation based on the controlled conductance by mosfet circuit,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 2793–2799.
 [432] J. Huo, “A dynamic excitatory-inhibitory network in a vlsi chip for spiking information re-registration,” in NIPS, 2012.
 [433] S. Hussain, A. Basu, M. Wang, and T. J. Hamilton, “Deltron: Neuromorphic architectures for delay based learning,” in Circuits and Systems (APCCAS), 2012 IEEE Asia Pacific Conference on. IEEE, 2012, pp. 304–307.
 [434] G. Indiveri, T. Horiuchi, E. Niebur, and R. Douglas, “A competitive network of spiking vlsi neurons,” in World Congress on Neuroinformatics. Vienna, Austria: ARGESIM/ASIM Verlag, 2001, pp. 443–455.
 [435] G. Indiveri, E. Chicca, and R. Douglas, “A vlsi reconfigurable network of integrate-and-fire neurons with spike-based learning synapses,” 2004, pp. 405–410. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.7558
 [436] ——, “A vlsi array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity,” Neural Networks, IEEE Transactions on, vol. 17, no. 1, pp. 211–221, 2006.
 [437] G. Indiveri and E. Chicca, “A vlsi neuromorphic device for implementing spike-based neural networks,” 2011.
 [438] Y. Kim, Y. Zhang, and P. Li, “An energy efficient approximate adder with carry skip for error resilient neuromorphic vlsi systems,” in Proceedings of the International Conference on ComputerAided Design. IEEE Press, 2013, pp. 130–137.
 [439] B. Linares-Barranco, T. Serrano-Gotarredona, and R. Serrano-Gotarredona, “Compact low-power calibration mini-dacs for neural arrays with programmable weights,” Neural Networks, IEEE Transactions on, vol. 14, no. 5, pp. 1207–1216, 2003.
 [440] S.-C. Liu and R. Douglas, “Temporal coding in a silicon network of integrate-and-fire neurons,” Neural Networks, IEEE Transactions on, vol. 15, no. 5, pp. 1305–1314, 2004.
 [441] C. G. Mayr, J. Partzsch, M. Noack, and R. Schüffny, “Configurable analogdigital conversion using the neural engineering framework,” Frontiers in neuroscience, vol. 8, 2014.
 [442] C. Mayr, J. Partzsch, M. Noack, S. Hänzsche, S. Scholze, S. Höppner, G. Ellguth, and R. Schüffny, “A biological-realtime neuromorphic system in 28 nm cmos using low-leakage switched capacitor circuits,” IEEE transactions on biomedical circuits and systems, vol. 10, no. 1, pp. 243–254, 2016.
 [443] L. McDaid, J. Harkin, S. Hall, T. Dowrick, Y. Chen, and J. Marsland, “Embrace: emulating biologically-inspired architectures on hardware,” in NN’08: Proceedings of the 9th WSEAS International Conference on Neural Networks, 2008, pp. 167–172.
 [444] K. Meier, “A mixed-signal universal neuromorphic computing system,” in 2015 IEEE International Electron Devices Meeting (IEDM). IEEE, 2015, pp. 4–6.
 [445] K. Minkovich, N. Srinivasa, J. M. Cruz-Albrecht, Y. Cho, and A. Nogin, “Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler,” Neural Networks and Learning Systems, IEEE Transactions on, vol. 23, no. 6, pp. 889–901, 2012.
 [446] S. Mitra, G. Indiveri, and S. Fusi, “Learning to classify complex patterns using a vlsi network of spiking neurons,” in Advances in Neural Information Processing Systems, 2007, pp. 1009–1016.
 [447] ——, “Robust classification of correlated patterns with a neuromorphic vlsi network of spiking neurons,” in Biomedical Circuits and Systems Conference, 2007. BIOCAS 2007. IEEE. IEEE, 2007, pp. 87–90.
 [448] S. Mitra, S. Fusi, and G. Indiveri, “Real-time classification of complex patterns using spike-based learning in neuromorphic vlsi,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 3, no. 1, pp. 32–42, 2009.
 [449] S. Mitra and G. Indiveri, “Spike-based synaptic plasticity and classification on vlsi,” The Neuromorphic Engineer, a publication of ine-web.org, vol. 10, no. 1200904.1636, 2009.
 [450] S. Moradi and G. Indiveri, “A vlsi network of spiking neurons with an asynchronous static random access memory,” in Biomedical Circuits and Systems Conference (BioCAS), 2011 IEEE. IEEE, 2011, pp. 277–280.
 [451] ——, “An event-based neural network architecture with an asynchronous programmable synaptic memory,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 8, no. 1, pp. 98–107, 2014.
 [452] H. Mostafa, F. Corradi, F. Stefanini, and G. Indiveri, “A hybrid analog/digital spike-timing dependent plasticity learning circuit for neuromorphic vlsi multi-neuron architectures,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 854–857.
 [453] E. Neftci, E. Chicca, G. Indiveri, J.-J. E. Slotine, and R. J. Douglas, “Contraction properties of vlsi cooperative competitive neural networks of spiking neurons,” in NIPS, 2007.
 [454] E. Neftci, E. Chicca, G. Indiveri, and R. Douglas, “A systematic method for configuring vlsi networks of spiking neurons,” Neural computation, vol. 23, no. 10, pp. 2457–2497, 2011.
 [455] E. Neftci, J. Binas, E. Chicca, G. Indiveri, and R. Douglas, “Systematic construction of finite state automata using vlsi spiking neurons,” Biomimetic and Biohybrid Systems, pp. 382–383, 2012.
 [456] M. Noack, J. Partzsch, C. G. Mayr, S. Hänzsche, S. Scholze, S. Höppner, G. Ellguth, and R. Schüffny, “Switched-capacitor realization of presynaptic short-term-plasticity and stop-learning synapses in 28 nm cmos,” Frontiers in neuroscience, vol. 9, 2015.
 [457] G. Palma, M. Suri, D. Querlioz, E. Vianello, and B. De Salvo, “Stochastic neuron design using conductive bridge ram,” in Nanoscale Architectures (NANOARCH), 2013 IEEE/ACM International Symposium on. IEEE, 2013, pp. 95–100.
 [458] M. A. Petrovici, B. Vogginger, P. Müller, O. Breitwieser, M. Lundqvist, L. Muller, M. Ehrlich, A. Destexhe, A. Lansner, R. Schüffny et al., “Characterization and compensation of networklevel anomalies in mixedsignal neuromorphic modeling platforms,” 2014.
 [459] T. Pfeil, A. Grübl, S. Jeltsch, E. Müller, P. Müller, M. A. Petrovici, M. Schmuker, D. Brüderle, J. Schemmel, and K. Meier, “Six networks on a universal neuromorphic computing substrate,” Frontiers in neuroscience, vol. 7, 2013.
 [460] N. Qiao, H. Mostafa, F. Corradi, M. Osswald, F. Stefanini, D. Sumislawska, and G. Indiveri, “A reconfigurable online learning spiking neuromorphic processor comprising 256 neurons and 128k synapses,” Frontiers in neuroscience, vol. 9, 2015.
 [461] S. Renaud, J. Tomas, N. Lewis, Y. Bornat, A. Daouzli, M. Rudolph, A. Destexhe, and S. Saïghi, “Pax: A mixed hardware/software simulation platform for spiking neural networks,” Neural Networks, vol. 23, no. 7, pp. 905–916, 2010.
 [462] J. Schemmel, A. Grubl, K. Meier, and E. Mueller, “Implementing synaptic plasticity in a vlsi spiking neural network model,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 1–6.
 [463] M. Schmuker, T. Pfeil, and M. P. Nawrot, “A neuromorphic network for generic multivariate data classification,” Proceedings of the National Academy of Sciences, vol. 111, no. 6, pp. 2081–2086, 2014.
 [464] L. Shi, J. Pei, N. Deng, D. Wang, L. Deng, Y. Wang, Y. Zhang, F. Chen, M. Zhao, S. Song et al., “Development of a neuromorphic computing system,” in 2015 IEEE International Electron Devices Meeting (IEDM). IEEE, 2015, pp. 4–3.
 [465] S. Sinha, J. Suh, B. Bakkaloglu, and Y. Cao, “Workloadaware neuromorphic design of lowpower supply voltage controller,” in Proceedings of the 16th ACM/IEEE international symposium on Low power electronics and design. ACM, 2010, pp. 241–246.
 [466] ——, “A workloadaware neuromorphic controller for dynamic power and thermal management,” in Adaptive Hardware and Systems (AHS), 2011 NASA/ESA Conference on. IEEE, 2011, pp. 200–207.
 [467] J. Tapson and A. Van Schaik, “An asynchronous parallel neuromorphic adc architecture,” in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on. IEEE, 2012, pp. 2409–2412.
 [468] R. J. Vogelstein, U. Mallik, and G. Cauwenberghs, “Silicon spikebased synaptic array and addressevent transceiver,” in Circuits and Systems, 2004. ISCAS’04. Proceedings of the 2004 International Symposium on, vol. 5. IEEE, 2004, pp. V–385.
 [469] R. J. Vogelstein, U. Mallik, E. Culurciello, G. Cauwenberghs, and R. Etienne-Cummings, “Saliency-driven image acuity modulation on a reconfigurable array of spiking silicon neurons,” in Advances in neural information processing systems, 2004, pp. 1457–1464.
 [470] R. J. Vogelstein, U. Mallik, J. T. Vogelstein, and G. Cauwenberghs, “Dynamically reconfigurable silicon array of spiking neurons with conductancebased synapses,” Neural Networks, IEEE Transactions on, vol. 18, no. 1, pp. 253–265, 2007.
 [471] Y. Wang, R. J. Douglas, and S.-C. Liu, “Attentional processing on a spike-based vlsi neural network,” in Advances in Neural Information Processing Systems, 2006, pp. 1473–1480.
 [472] H.-P. Wang, E. Chicca, G. Indiveri, and T. J. Sejnowski, “Reliable computation in noisy backgrounds using realtime neuromorphic hardware,” in Biomedical Circuits and Systems Conference, 2007. BIOCAS 2007. IEEE. IEEE, 2007, pp. 71–74.
 [473] R. M. Wang, T. J. Hamilton, J. C. Tapson, and A. van Schaik, “A mixedsignal implementation of a polychronous spiking neural network with delay adaptation,” Frontiers in neuroscience, vol. 8, 2014.
 [474] R. Wang, T. J. Hamilton, J. Tapson, and A. van Schaik, “A compact reconfigurable mixedsignal implementation of synaptic plasticity in spiking neurons,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 862–865.
 [475] Q. Wang, Y. Kim, and P. Li, “Architectural design exploration for neuromorphic processors with memristive synapses,” in Nanotechnology (IEEENANO), 2014 IEEE 14th International Conference on. IEEE, 2014, pp. 962–966.
 [476] W. Wang, Z. You, P. Liu, and J. Kuang, “An adaptive neural network a/d converter based on cmos/memristor hybrid design,” IEICE Electronics Express, 2014.
 [477] R. M. Wang, T. J. Hamilton, J. C. Tapson, and A. van Schaik, “A neuromorphic implementation of multiple spiketiming synaptic plasticity rules for largescale neural networks,” Frontiers in Neuroscience, vol. 9, p. 180, 2015.
 [478] Q. Wang, Y. Kim, and P. Li, “Neuromorphic processors with memristive synapses: Synaptic interface and architectural exploration,” ACM Journal on Emerging Technologies in Computing Systems (JETC), vol. 12, no. 4, p. 35, 2016.
 [479] Y. Xu, C. S. Thakur, T. J. Hamilton, J. Tapson, R. Wang, and A. van Schaik, “A reconfigurable mixedsignal implementation of a neuromorphic adc,” in Biomedical Circuits and Systems Conference (BioCAS), 2015 IEEE. IEEE, 2015, pp. 1–4.
 [480] T. Yu, J. Park, S. Joshi, C. Maier, and G. Cauwenberghs, “65kneuron integrateandfire array transceiver with addressevent reconfigurable synaptic routing,” in Biomedical Circuits and Systems Conference (BioCAS), 2012 IEEE. IEEE, 2012, pp. 21–24.
 [481] N. Dahasert, İ. Öztürk, and R. Kiliç, “Implementation of izhikevich neuron model with field programmable devices,” in Signal Processing and Communications Applications Conference (SIU), 2012 20th. IEEE, 2012, pp. 1–4.
 [482] E. Farquhar, C. Gordon, and P. Hasler, “A field programmable neural array,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, pp. 4–pp.
 [483] B. Marr and J. Hasler, “Compiling probabilistic, bioinspired circuits on a field programmable analog array,” Neuromorphic Engineering Systems and Applications, p. 88, 2015.

 [484] B. McGinley, P. Rocke, F. Morgan, and J. Maher, “Reconfigurable analogue hardware evolution of adaptive spiking neural network controllers,” in Proceedings of the 10th annual conference on Genetic and evolutionary computation. ACM, 2008, pp. 289–290.  [485] P. Rocke, J. Maher, and F. Morgan, “Platform for intrinsic evolution of analogue neural networks,” in Reconfigurable Computing and FPGAs, 2005. ReConFig 2005. International Conference on. IEEE, 2005, pp. 8–pp.
 [486] P. Rocke, B. McGinley, F. Morgan, and J. Maher, “Reconfigurable hardware evolution platform for a spiking neural network robotics controller,” in Reconfigurable computing: Architectures, tools and applications. Springer, 2007, pp. 373–378.
 [487] P. Rocke, B. McGinley, J. Maher, F. Morgan, and J. Harkin, “Investigating the suitability of fpaas for evolved hardware spiking neural networks,” in Evolvable Systems: From Biology to Hardware. Springer, 2008, pp. 118–129.
 [488] J. Zhao and Y.-B. Kim, “Circuit implementation of fitzhugh–nagumo neuron model using field programmable analog arrays,” in Circuits and Systems (MWSCAS 2007), 50th Midwest Symposium on, 2007, pp. 772–775.
 [489] R. Agis, E. Ros, J. Diaz, R. Carrillo, and E. Ortigosa, “Hardware eventdriven simulation engine for spiking neural networks,” International journal of electronics, vol. 94, no. 5, pp. 469–480, 2007.
 [490] L.-C. Caron, M. D’Haene, F. Mailhot, B. Schrauwen, and J. Rouat, “Event management for large scale event-driven digital hardware spiking neural networks,” Neural Networks, vol. 45, pp. 83–93, 2013.
 [491] D. Neil and S.-C. Liu, “Minitaur, an event-driven fpga-based spiking network accelerator,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 22, no. 12, pp. 2621–2628, 2014.
 [492] J. B. Ahn, “Extension of neuron machine neurocomputing architecture for spiking neural networks,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–8.
 [493] B. Ahn, “Neuronlike digital hardware architecture for largescale neuromorphic computing,” in Neural Networks (IJCNN), 2015 International Joint Conference on. IEEE, 2015, pp. 1–8.
 [494] ——, “Specialpurpose hardware architecture for neuromorphic computing,” in 2015 International SoC Design Conference (ISOCC). IEEE, 2015, pp. 209–210.
 [495] L. Bako, “Realtime clustering of datasets with hardware embedded neuromorphic neural networks,” in High Performance Computational Systems Biology, 2009. HIBI’09. International Workshop on. IEEE, 2009, pp. 13–22.
 [496] L. Bakó, S. T. Brassai, L. F. Márton, and L. Losonczi, “Evolving advanced neural networks on runtime reconfigurable digital hardware platform,” in Proceedings of the 3rd International Workshop on Adaptive SelfTuning Computing Systems. ACM, 2013, p. 3.
 [497] B. Belhadj, J. Tomas, Y. Bornat, A. Daouzli, O. Malot, and S. Renaud, “Digital mapping of a realistic spike timing plasticity model for realtime neural simulations,” in Proceedings of the XXIV conference on design of circuits and integrated systems, 2009, pp. 1–6.
 [498] S. Bellis, K. M. Razeeb, C. Saha, K. Delaney, C. O’Mathuna, A. Pounds-Cornish, G. de Souza, M. Colley, H. Hagras, G. Clarke et al., “Fpga implementation of spiking neural networks: an initial step towards building tangible collaborative autonomous agents,” in Field-Programmable Technology, 2004. Proceedings. 2004 IEEE International Conference on. IEEE, 2004, pp. 449–452.
 [499] M. Bhuiyan, A. Nallamuthu, M. C. Smith, V. K. Pallipuram et al., “Optimization and performance study of largescale biological networks for reconfigurable computing,” in HighPerformance Reconfigurable Computing Technology and Applications (HPRCTA), 2010 Fourth International Workshop on. IEEE, 2010, pp. 1–9.
 [500] H. T. Blair, J. Cong, and D. Wu, “Fpga simulation engine for customized construction of neural microcircuits,” in ComputerAided Design (ICCAD), 2013 IEEE/ACM International Conference on. IEEE, 2013, pp. 607–614.

 [501] L.-C. Caron, F. Mailhot, and J. Rouat, “Fpga implementation of a spiking neural network for pattern matching,” in Circuits and Systems (ISCAS), 2011 IEEE International Symposium on. IEEE, 2011, pp. 649–652.  [502] A. Cassidy, S. Denham, P. Kanold, and A. Andreou, “Fpga based silicon spiking neural array,” in Biomedical Circuits and Systems Conference, 2007. BIOCAS 2007. IEEE. IEEE, 2007, pp. 75–78.
 [503] A. Cassidy and A. G. Andreou, “Dynamical digital silicon neurons,” in Biomedical Circuits and Systems Conference, 2008. BioCAS 2008. IEEE. IEEE, 2008, pp. 289–292.
 [504] A. Cassidy, A. G. Andreou, and J. Georgiou, “Design of a one million neuron single fpga neuromorphic system for realtime multimodal scene analysis,” in Information Sciences and Systems (CISS), 2011 45th Annual Conference on. IEEE, 2011, pp. 1–6.
 [505] B. Chappet de Vangel, C. TorresHuitzil, and B. Girau, “Spiking dynamic neural fields architectures on fpga,” in ReConFigurable Computing and FPGAs (ReConFig), 2014 International Conference on. IEEE, 2014, pp. 1–6.
 [506] K. Cheung, S. R. Schultz, and W. Luk, “A largescale spiking neural network accelerator for fpga systems,” in Artificial Neural Networks and Machine Learning–ICANN 2012. Springer, 2012, pp. 113–120.
 [507] ——, “Neuroflow: A general purpose spiking neural network simulation platform using customizable processors,” Frontiers in Neuroscience, vol. 9, p. 516, 2015.
 [508] J. Cong, H. T. Blair, and D. Wu, “Fpga simulation engine for customized construction of neural microcircuit,” in FieldProgrammable Custom Computing Machines (FCCM), 2013 IEEE 21st Annual International Symposium on. IEEE, 2013, pp. 229–229.
 [509] C. Daffron, J. Chan, A. Disney, L. Bechtel, R. Wagner, M. E. Dean, G. S. Rose, J. S. Plank, J. D. Birdwell, and C. D. Schuman, “Extensions and enhancements for the danna neuromorphic architecture,” in SoutheastCon, 2016. IEEE, 2016, pp. 1–4.
 [510] B. C. de Vangel, C. TorresHuitzil, and B. Girau, “Event based visual attention with dynamic neural field on fpga,” in Proceedings of the 10th International Conference on Distributed Smart Camera. ACM, 2016, pp. 142–147.
 [511] M. E. Dean, C. D. Schuman, and J. D. Birdwell, “Dynamic adaptive neural network array,” in International Conference on Unconventional Computation and Natural Computation. Springer, 2014, pp. 129–141.
 [512] M. E. Dean, J. Chan, C. Daffron, A. Disney, J. Reynolds, G. Rose, J. S. Plank, J. D. Birdwell, and C. D. Schuman, “An application development platform for neuromorphic computing,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 1347–1354.
 [513] C. Diaz, G. Sanchez, G. Duchen, M. Nakano, and H. Perez, “An efficient hardware implementation of a novel unary spiking neural network multiplier with variable dendritic delays,” Neurocomputing, 2016.
 [514] E. Z. Farsa, S. Nazari, and M. Gholami, “Function approximation by hardware spiking neural network,” Journal of Computational Electronics, vol. 14, no. 3, pp. 707–716, 2015.
 [515] P. J. Fox and S. W. Moore, “Efficient handling of synaptic updates in fpgabased largescale neural network simulations,” in Workshop on Neural Engineering using Reconfigurable Hardware, vol. 2012, 2012.
 [516] V. Garg, R. Shekhar, and J. G. Harris, “The time machine: A novel spikebased computation architecture,” in Circuits and Systems (ISCAS), 2011 IEEE International Symposium on. IEEE, 2011, pp. 685–688.
 [517] A. Ghani, T. M. McGinnity, L. P. Maguire, and J. Harkin, “Area efficient architecture for large scale implementation of biologically plausible spiking neural networks on reconfigurable hardware,” in Field Programmable Logic and Applications, 2006. FPL’06. International Conference on. IEEE, 2006, pp. 1–2.
 [518] B. Glackin, T. M. McGinnity, L. P. Maguire, Q. Wu, and A. Belatreche, “A novel approach for the implementation of large scale spiking neural networks on fpga hardware,” in Computational Intelligence and Bioinspired Systems. Springer, 2005, pp. 552–563.
 [519] B. Glackin, J. Harkin, T. M. McGinnity, and L. P. Maguire, “A hardware accelerated simulation environment for spiking neural networks,” in Reconfigurable Computing: Architectures, Tools and Applications. Springer, 2009, pp. 336–341.
 [520] B. Glackin, J. Harkin, T. M. McGinnity, L. P. Maguire, and Q. Wu, “Emulating spiking neural networks for edge detection on fpga hardware,” in Field Programmable Logic and Applications, 2009. FPL 2009. International Conference on. IEEE, 2009, pp. 670–673.
 [521] S. Gomar and A. Ahmadi, “Digital multiplierless implementation of biological adaptiveexponential neuron model,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 61, no. 4, pp. 1206–1219, 2014.
 [522] S. Gomar, M. Mirhassani, M. Ahmadi, and M. Seif, “A digital neuromorphic circuit for neuralglial interaction,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 213–218.
 [523] P. Hafliger, “Asynchronous event redirecting in bioinspired communication,” in Electronics, Circuits and Systems, 2001. ICECS 2001. The 8th IEEE International Conference on, vol. 1. IEEE, 2001, pp. 87–90.
 [524] J. Harkin, F. Morgan, S. Hall, P. Dudek, T. Dowrick, and L. McDaid, “Reconfigurable platforms and the challenges for largescale implementations of spiking neural networks,” in Field Programmable Logic and Applications, 2008. FPL 2008. International Conference on. IEEE, 2008, pp. 483–486.
 [525] J. Harkin, F. Morgan, L. McDaid, S. Hall, B. McGinley, and S. Cawley, “A reconfigurable and biologically inspired paradigm for computation using networkonchip and spiking neural networks,” International Journal of Reconfigurable Computing, vol. 2009, p. 2, 2009.
 [526] H. H. Hellmich and H. Klar, “An fpga based simulation acceleration platform for spiking neural networks,” in Circuits and Systems, 2004. MWSCAS’04. The 2004 47th Midwest Symposium on, vol. 2. IEEE, 2004, pp. II–389.
 [527] ——, “See: a concept for an fpga based emulation engine for spiking neurons with adaptive weights,” in 5th WSEAS Int. Conf. Neural Networks Applications (NNA’04), 2004, pp. 930–935.
 [528] T. Iakymchuk, A. Rosado, J. V. Frances, and M. Bataller, “Fast spiking neural network architecture for low-cost fpga devices,” in Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC), 2012 7th International Workshop on. IEEE, 2012, pp. 1–6.
 [529] T. Iakymchuk, A. RosadoMunoz, M. BatallerMompean, J. GuerreroMartinez, J. FrancesVillora, M. Wegrzyn, and M. Adamski, “Hardwareaccelerated spike train generation for neuromorphic image and video processing,” in Programmable Logic (SPL), 2014 IX Southern Conference on. IEEE, 2014, pp. 1–6.
 [530] S. Johnston, G. Prasad, L. Maguire, and M. McGinnity, “Comparative investigation into classical and spiking neuron implementations on fpgas,” in Artificial Neural Networks: Biological Inspirations–ICANN 2005. Springer, 2005, pp. 269–274.
 [531] D. Just, J. F. Chaves, R. M. Gomes, and H. E. Borges, “An efficient implementation of a realistic spiking neuron model on an fpga.” in IJCCI (ICFCICNC), 2010, pp. 344–349.
 [532] S. Koziol, S. Brink, and J. Hasler, “A neuromorphic approach to path planning using a reconfigurable neuron array ic,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 22, no. 12, pp. 2724–2737, 2014.
 [533] J. Li, Y. Katori, and T. Kohno, “An fpgabased silicon neuronal network with selectable excitability silicon neurons,” Frontiers in neuroscience, vol. 6, 2012.
 [534] W. X. Li, R. C. Cheung, R. H. Chan, D. Song, and T. W. Berger, “Realtime prediction of neuronal population spiking activity using fpga,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 7, no. 4, pp. 489–498, 2013.
 [535] A. Makhlooghpour, H. Soleimani, A. Ahmadi, M. Zwolinski, and M. Saif, “High accuracy implementation of adaptive exponential integrated and fire neuron model,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 192–197.
 [536] S. Maya, R. Reynoso, C. Torres, and M. AriasEstrada, “Compact spiking neural network implementation in fpga,” in FieldProgrammable Logic and Applications: The Roadmap to Reconfigurable Computing. Springer, 2000, pp. 270–276.
 [537] J. L. Molin, T. Figliolia, K. Sanni, I. Doxas, A. Andreou, and R. Etienne-Cummings, “Fpga emulation of a spike-based, stochastic system for real-time image dewarping,” in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on. IEEE, 2015, pp. 1–4.
 [538] S. W. Moore, P. J. Fox, S. J. Marsh, A. Mujumdar et al., “Bluehive: a field-programable custom computing machine for extreme-scale real-time neural network simulation,” in Field-Programmable Custom Computing Machines (FCCM), 2012 IEEE 20th Annual International Symposium on. IEEE, 2012, pp. 133–140.
 [539] C. M. Niu, S. Nandyala, W. J. Sohn, and T. Sanger, “Multiscale hypertime hardware emulation of human motor nervous system based on spiking neurons using fpga,” in Advances in Neural Information Processing Systems, 2012, pp. 37–45.
 [540] M. NuñoMaganda and C. TorresHuitzil, “A temporal coding hardware implementation for spiking neural networks,” ACM SIGARCH Computer Architecture News, vol. 38, no. 4, pp. 2–7, 2011.
 [541] M. A. NunoMaganda, M. AriasEstrada, C. TorresHuitzil, H. H. AvilesArriaga, Y. HernándezMier, and M. MoralesSandoval, “A hardware architecture for image clustering using spiking neural networks,” in VLSI (ISVLSI), 2012 IEEE Computer Society Annual Symposium on. IEEE, 2012, pp. 261–266.
 [542] M. Pearson, I. Gilhespy, K. Gurney, C. Melhuish, B. Mitchinson, M. Nibouche, and A. Pipe, “A realtime, fpga based, biologically plausible neural network processor,” in Artificial Neural Networks: Formal Models and Their Applications–ICANN 2005. Springer, 2005, pp. 1021–1026.
 [543] M. J. Pearson, C. Melhuish, A. G. Pipe, M. Nibouche, K. Gurney, B. Mitchinson et al., “Design and fpga implementation of an embedded realtime biologically plausible spiking neural network processor,” in Field programmable logic and applications, 2005. international conference on. IEEE, 2005, pp. 582–585.
 [544] M. J. Pearson, A. G. Pipe, B. Mitchinson, K. Gurney, C. Melhuish, I. Gilhespy, and M. Nibouche, “Implementing spiking neural networks for realtime signalprocessing and control applications: a modelvalidated fpga approach,” Neural Networks, IEEE Transactions on, vol. 18, no. 5, pp. 1472–1487, 2007.
 [545] K. L. Rice, M. A. Bhuiyan, T. M. Taha, C. N. Vutsinas, and M. C. Smith, “Fpga implementation of izhikevich spiking neural networks for character recognition,” in Reconfigurable Computing and FPGAs, 2009. ReConFig’09. International Conference on. IEEE, 2009, pp. 451–456.
 [546] A. RiosNavarro, J. DominguezMorales, R. TapiadorMorales, D. GutierrezGalan, A. JimenezFernandez, and A. LinaresBarranco, “A 20mevps/32mev eventbased usb framework for neuromorphic systems debugging,” in Eventbased Control, Communication, and Signal Processing (EBCCSP), 2016 Second International Conference on. IEEE, 2016, pp. 1–6.
 [547] D. Roggen, S. Hofmann, Y. Thoma, and D. Floreano, “Hardware spiking neural network with runtime reconfigurable connectivity in an autonomous robot,” in Evolvable hardware, 2003. proceedings. nasa/dod conference on. IEEE, 2003, pp. 189–198.
 [548] E. Ros, R. Agis, R. R. Carrillo, and E. M. Ortigosa, “Postsynaptic timedependent conductances in spiking neurons: Fpga implementation of a flexible cell model,” in Artificial Neural Nets Problem Solving Methods. Springer, 2003, pp. 145–152.
 [549] A. RosadoMuñoz, A. Fijałkowski, M. BatallerMompeán, and J. GuerreroMartínez, “Fpga implementation of spiking neural networks supported by a software design environment,” in Proceedings of 18th IFAC World Congress, 2011.
 [550] H. RostroGonzalez, B. Cessac, B. Girau, and C. TorresHuitzil, “The role of the asymptotic dynamics in the design of fpgabased hardware implementations of giftype neural networks,” Journal of PhysiologyParis, vol. 105, no. 1, pp. 91–97, 2011.
 [551] H. RostroGonzalez, G. Garreau, A. Andreou, J. Georgiou, J. H. BarronZambrano, and C. TorresHuitzil, “An fpgabased approach for parameter estimation in spiking neural networks,” in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on. IEEE, 2012, pp. 2897–2900.
 [552] G. Sánchez, J. Madrenas, and J. M. Moreno, “Performance evaluation and scaling of a multiprocessor architecture emulating complex snn algorithms,” in Evolvable Systems: From Biology to Hardware. Springer, 2010, pp. 145–156.
 [553] B. Schrauwen and J. M. Van Campenhout, “Parallel hardware implementation of a broad class of spiking neurons using serial arithmetic.” in ESANN, vol. 2, no. 006, 2006.
 [554] H. Shayani, P. J. Bentley, and A. M. Tyrrell, “An fpgabased model suitable for evolution and development of spiking neural networks.” in ESANN, 2008, pp. 197–202.
 [555] H. Shayani, P. Bentley, and A. M. Tyrrell, “A cellular structure for online routing of digital spiking neuron axons and dendrites on fpgas,” in Evolvable Systems: From Biology to Hardware. Springer, 2008, pp. 273–284.
 [556] H. Shayani, P. J. Bentley, and A. M. Tyrrell, “A multicellular developmental representation for evolution of adaptive spiking neural microcircuits in an fpga,” in Adaptive Hardware and Systems, 2009. AHS 2009. NASA/ESA Conference on. IEEE, 2009, pp. 3–10.
 [557] S. Sheik, S. Paul, C. Augustine, C. Kothapalli, M. M. Khellah, G. Cauwenberghs, and E. Neftci, “Synaptic sampling in hardware spiking neural networks,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 2090–2093.
 [558] R. J. Sofatzis, S. Afshar, and T. J. Hamilton, “Rotationally invariant vision recognition with neuromorphic transformation and learning networks,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 469–472.
 [559] H. Soleimani, A. Ahmadi, and M. Bavandpour, “Biologically inspired spiking neurons: Piecewise linear models and digital implementation,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 59, no. 12, pp. 2991–3004, 2012.
 [560] A. Upegui, C. A. PeñaReyes, and E. Sanchez, “An fpga platform for online topology exploration of spiking neural networks,” Microprocessors and microsystems, vol. 29, no. 5, pp. 211–223, 2005.
 [561] R. Wang, G. Cohen, K. M. Stiefel, T. J. Hamilton, J. Tapson, and A. van Schaik, “An fpga implementation of a polychronous spiking neural network with delay adaptation,” Frontiers in neuroscience, vol. 7, 2013.
 [562] R. Wang, T. J. Hamilton, J. Tapson, and A. van Schaik, “An fpga design framework for largescale spiking neural networks,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 457–460.
 [563] ——, “A compact neural core for digital implementation of the neural engineering framework,” in Biomedical Circuits and Systems Conference (BioCAS), 2014 IEEE. IEEE, 2014, pp. 548–551.
 [564] ——, “An fpga design framework for largescale spiking neural networks,” in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 457–460.
 [565] S. Wang, C. Ma, D. Wang, and J. Pei, “Extensible neuromorphic computing simulator based on a programmable hardware,” in 2015 15th NonVolatile Memory Technology Symposium (NVMTS). IEEE, 2015, pp. 1–3.
 [566] Q. X. Wu, X. Liao, X. Huang, R. Cai, J. Cai, and J. Liu, “Development of fpga toolbox for implementation of spiking neural networks,” in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on. IEEE, 2015, pp. 806–810.
 [567] J. M. Xicotencatl and M. AriasEstrada, “Fpga based high density spiking neural network array,” in Field Programmable Logic and Application. Springer, 2003, pp. 1053–1056.
 [568] S. Yang, Q. Wu, and R. Li, “A case for spiking neural network simulation based on configurable multiplefpga systems,” Cognitive neurodynamics, vol. 5, no. 3, pp. 301–309, 2011.
 [569] S. Yang and T. M. McGinnity, “A biologically plausible realtime spiking neuron simulation environment based on a multiplefpga platform,” ACM SIGARCH Computer Architecture News, vol. 39, no. 4, pp. 78–81, 2011.
 [570] A. Zuppicich and S. Soltic, “Fpga implementation of an evolving spiking neural network,” in Advances in NeuroInformation Processing. Springer, 2009, pp. 1129–1136.
 [571] A. Bofill-i-Petit and A. F. Murray, “Synchrony detection and amplification by silicon neurons with stdp synapses,” Neural Networks, IEEE Transactions on, vol. 15, no. 5, pp. 1296–1304, 2004.
 [572] M. Giulioni, X. Lagorce, F. Galluppi, and R. B. Benosman, “Eventbased computation of motion flow on a neuromorphic analog neural platform,” Frontiers in Neuroscience, vol. 10, p. 35, 2016.
 [573] L. Bako, “Realtime classification of datasets with hardware embedded neuromorphic neural networks,” Briefings in bioinformatics, p. bbp066, 2010.
 [574] D. Martí, M. Rigotti, M. Seok, and S. Fusi, “Energyefficient neuromorphic classifiers,” Neural Computation, 2016.
 [575] M. A. NunoMaganda, M. AriasEstrada, C. TorresHuitzil, and B. Girau, “Hardware implementation of spiking neural network classifiers based on backpropagationbased learning algorithms,” in Neural Networks, 2009. IJCNN 2009. International Joint Conference on. IEEE, 2009, pp. 2294–2301.
 [576] D. Badoni, M. Giulioni, V. Dante, and P. Del Giudice, “An avlsi recurrent network of spiking neurons with reconfigurable and plastic synapses,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, pp. 4–pp.
 [577] P. Camilleri, M. Giulioni, M. Mattia, J. Braun, and P. Del Giudice, “Selfsustained activity in attractor networks using neuromorphic vlsi,” in IJCNN, 2010, pp. 1–6.
 [578] E. Chicca, D. Badoni, V. Dante, M. D’Andreagiovanni, G. Salina, L. Carota, S. Fusi, and P. Del Giudice, “A vlsi recurrent network of integrateandfire neurons connected by plastic synapses with longterm memory,” Neural Networks, IEEE Transactions on, vol. 14, no. 5, pp. 1297–1307, 2003.
 [579] S. Fusi, P. Del Giudice, and D. J. Amit, “Neurophysiology of a vlsi spiking neural network: Lann21,” in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEEINNSENNS International Joint Conference on, vol. 3. IEEE, 2000, pp. 121–126.
 [580] M. Giulioni, F. Corradi, V. Dante, and P. Del Giudice, “Real time unsupervised learning of visual stimuli in neuromorphic vlsi systems,” Scientific reports, vol. 5, 2015.
 [581] P. U. Diehl, G. Zarrella, A. Cassidy, B. U. Pedroni, and E. Neftci, “Conversion of artificial recurrent neural networks to spiking neural networks for lowpower neuromorphic hardware,” in Rebooting Computing (ICRC), IEEE International Conference on. IEEE, 2016, pp. 1–8.
 [582] D. S. Chevitarese and M. N. Dos Santos, “Realtime face tracking and recognition on ibm neuromorphic chip,” in 2016 IEEE International Symposium on Multimedia (ISM). IEEE, 2016, pp. 667–672.
 [583] P. U. Diehl, B. U. Pedroni, A. Cassidy, P. Merolla, E. Neftci, and G. Zarrella, “Truehappiness: Neuromorphic emotion recognition on truenorth,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 4278–4285.
 [584] S. K. Esser, A. Andreopoulos, R. Appuswamy, P. Datta, D. Barch, A. Amir, J. Arthur, A. Cassidy, M. Flickner, P. Merolla et al., “Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–10.
 [585] B. Han, A. Sengupta, and K. Roy, “On the energy benefits of spiking deep neural networks: A case study,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 971–976.
 [586] G. Indiveri, F. Corradi, and N. Qiao, “Neuromorphic architectures for spiking deep neural networks,” in Electron Devices Meeting (IEDM), 2015 IEEE International. IEEE, 2015, pp. 4–2.

 [587] Y. Cao, Y. Chen, and D. Khosla, “Spiking deep convolutional neural networks for energy-efficient object recognition,” International Journal of Computer Vision, vol. 113, no. 1, pp. 54–66, 2015.  [588] E. Cerezuela-Escudero, A. Jimenez-Fernandez, R. Paz-Vicente, M. Dominguez-Morales, A. Linares-Barranco, and G. Jimenez-Moreno, “Musical notes classification with neuromorphic auditory system using fpga and a convolutional spiking network,” in Neural Networks (IJCNN), 2015 International Joint Conference on. IEEE, 2015, pp. 1–7.
 [589] W. Murphy, M. Renz, and Q. Wu, “Binary image classification using a neurosynaptic processor: A tradeoff analysis,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 1342–1345.
 [590] E. Nurse, B. S. Mashford, A. J. Yepes, I. KiralKornek, S. Harrer, and D. R. Freestone, “Decoding eeg and lfp signals using deep learning: heading truenorth,” in Proceedings of the ACM International Conference on Computing Frontiers. ACM, 2016, pp. 259–266.
 [591] W.-Y. Tsai, D. Barch, A. Cassidy, M. Debole, A. Andreopoulos, B. Jackson, M. Flickner, J. Arthur, D. Modha, J. Sampson et al., “Always-on speech recognition using truenorth, a reconfigurable, neurosynaptic processor,” IEEE Transactions on Computers, 2016.
 [592] S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch et al., “Convolutional networks for fast, energyefficient neuromorphic computing,” Proceedings of the National Academy of Sciences, p. 201604850, 2016.
 [593] E. Stromatias, D. Neil, M. Pfeiffer, F. Galluppi, S. B. Furber, and S.-C. Liu, “Robustness of spiking deep belief networks to noise and reduced bit precision of neuro-inspired hardware platforms,” Frontiers in neuroscience, vol. 9, 2015.
 [594] A. Bofill, D. Thompson, and A. F. Murray, “Circuits for vlsi implementation of temporally asymmetric hebbian learning,” in Advances in Neural Information processing systems, 2001, pp. 1091–1098.
 [595] A. Bofill-i-Petit and A. F. Murray, “Learning temporal correlations in biologically-inspired avlsi,” in Circuits and Systems, 2003. ISCAS’03. Proceedings of the 2003 International Symposium on, vol. 5. IEEE, 2003, pp. V–817.
 [596] P. Camilleri, M. Giulioni, V. Dante, D. Badoni, G. Indiveri, B. Michaelis, J. Braun, and P. Del Giudice, “A neuromorphic avlsi network chip with configurable plastic synapses,” in Hybrid Intelligent Systems, 2007. HIS 2007. 7th International Conference on. IEEE, 2007, pp. 296–301.
 [597] M. Giulioni, P. Camilleri, V. Dante, D. Badoni, G. Indiveri, J. Braun, and P. Del Giudice, “A vlsi network of spiking neurons with plastic fully configurable ‘stop-learning’ synapses,” in Electronics, Circuits and Systems, 2008. ICECS 2008. 15th IEEE International Conference on. IEEE, 2008, pp. 678–681.
 [598] M. Giulioni, M. Pannunzi, D. Badoni, V. Dante, and P. D. Giudice, “A configurable analog vlsi neural network with spiking neurons and selfregulating plastic synapses,” in Advances in Neural Information Processing Systems, 2008, pp. 545–552.
 [599] M. Giulioni, M. Pannunzi, D. Badoni, V. Dante, and P. Del Giudice, “Classification of correlated patterns with a configurable analog vlsi neural network of spiking neurons and selfregulating plastic synapses,” Neural computation, vol. 21, no. 11, pp. 3106–3129, 2009.
 [600] C. Gordon and P. Hasler, “Biological learning modeled in an adaptive floatinggate system,” in Circuits and Systems, 2002. ISCAS 2002. IEEE International Symposium on, vol. 5. IEEE, 2002, pp. V–609.
 [601] P. Häfliger and M. Mahowald, “Spike based normalizing hebbian learning in an analog vlsi artificial neuron,” Analog Integrated Circuits and Signal Processing, vol. 18, no. 2–3, pp. 133–139, 1999.
 [602] Q. Sun, F. Schwartz, J. Michel, Y. Herve, and R. Dalmolin, “Implementation study of an analog spiking neural network for assisting cardiac delay prediction in a cardiac resynchronization therapy device,” Neural Networks, IEEE Transactions on, vol. 22, no. 6, pp. 858–869, 2011.
 [603] F. L. M. Huayaney, H. Tanaka, T. Matsuo, T. Morie, and K. Aihara, “A vlsi spiking neural network with symmetric stdp and associative memory operation,” in Neural Information Processing. Springer, 2011, pp. 381–388.
 [604] S. Shapero, C. Rozell, and P. Hasler, “Configurable hardware integrate and fire neurons for sparse approximation,” Neural Networks, vol. 45, pp. 134–143, 2013.
 [605] C. H. Ang, C. Jin, P. H. Leong, and A. Van Schaik, “Spiking neural networkbased autoassociative memory using fpga interconnect delays,” in FieldProgrammable Technology (FPT), 2011 International Conference on. IEEE, 2011, pp. 1–4.
 [606] J. Dungen and J. Brault, “Simulated control of a tracking mobile robot by four avlsi integrateandfire neurons paired into maps,” in Neural Networks, 2005. IJCNN ’05. Proceedings. 2005 IEEE International Joint Conference on, vol. 2, July 2005, pp. 695–699 vol. 2.
 [607] P. Hafliger, “Adaptive wta with an analog vlsi neuromorphic learning chip,” Neural Networks, IEEE Transactions on, vol. 18, no. 2, pp. 551–572, 2007.
 [608] S.C. Liu and M. Oster, “Feature competition in a spikebased winnertakeall vlsi network,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, pp. 4–pp.
 [609] M. Oster and S.C. Liu, “A winnertakeall spiking network with spiking inputs,” in Electronics, Circuits and Systems, 2004. ICECS 2004. Proceedings of the 2004 11th IEEE International Conference on. IEEE, 2004, pp. 203–206.
 [610] M. Oster, Y. Wang, R. Douglas, and S.C. Liu, “Quantification of a spikebased winnertakeall vlsi network,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 55, no. 10, pp. 3160–3169, 2008.
 [611] J. P. Abrahamsen, P. Hafliger, and T. S. Lande, “A time domain winnertakeall network of integrateandfire neurons,” in Circuits and Systems, 2004. ISCAS’04. Proceedings of the 2004 International Symposium on, vol. 5. IEEE, 2004, pp. V–361.
 [612] H.Y. Hsieh and K.T. Tang, “Hardware friendly probabilistic spiking neural network with longterm and shortterm plasticity,” Neural Networks and Learning Systems, IEEE Transactions on, vol. 24, no. 12, pp. 2063–2074, 2013.
 [613] ——, “An onchip learning, lowpower probabilistic spiking neural network with longterm memory,” in Biomedical Circuits and Systems Conference (BioCAS), 2013 IEEE. IEEE, 2013, pp. 5–8.
 [614] H. Abdelbaki, E. Gelenbe, and S. E. El-Khamy, “Analog hardware implementation of the random neural network model,” in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, vol. 4. IEEE, 2000, pp. 197–201.
 [615] E. Donati, F. Corradi, C. Stefanini, and G. Indiveri, “A spiking implementation of the lamprey’s central pattern generator in neuromorphic vlsi,” in Biomedical Circuits and Systems Conference (BioCAS), 2014 IEEE. IEEE, 2014, pp. 512–515.
 [616] E. Donati, G. Indiveri, and C. Stefanini, “A novel spiking cpgbased implementation system to control a lamprey robot,” in Biomedical Robotics and Biomechatronics (BioRob), 2016 6th IEEE International Conference on. IEEE, 2016, pp. 1364–1364.
 [617] M. Ambroise, T. Levi, S. Joucla, B. Yvert, and S. Saïghi, “Realtime biomimetic central pattern generators in an fpga for hybrid experiments,” Neuromorphic Engineering Systems and Applications, p. 134, 2015.
 [618] J. H. Barron-Zambrano and C. Torres-Huitzil, “Fpga implementation of a configurable neuromorphic cpgbased locomotion controller,” Neural Networks, vol. 45, pp. 50–61, 2013.
 [619] S. Joucla, M. Ambroise, T. Levi, T. Lafon, P. Chauvet, S. Saïghi, Y. Bornat, N. Lewis, S. Renaud, and B. Yvert, “Generation of locomotorlike activity in the isolated rat spinal cord using intraspinal electrical microstimulation driven by a digital neuromorphic cpg,” Frontiers in Neuroscience, vol. 10, 2016.
 [620] A. Abutalebi and S. Fakhraie, “A submicron analog neural network with an adjustablelevel output unit,” in Microelectronics, 1998. ICM’98. Proceedings of the Tenth International Conference on. IEEE, 1998, pp. 294–297.
 [621] D. Y. Aksin, P. B. Basyurt, and H. U. Uyanik, “Singleended input fourquadrant multiplier for analog neural networks,” in Circuit Theory and Design, 2009. ECCTD 2009. European Conference on. IEEE, 2009, pp. 307–310.
 [622] M. Al-Nsour and H. S. Abdel-Aty-Zohdy, “Mos fully analog reinforcement neural network chip,” in Circuits and Systems, 2001. ISCAS 2001. The 2001 IEEE International Symposium on, vol. 3. IEEE, 2001, pp. 237–240.
 [623] B. A. Alhalabi and M. Bayoumi, “A scalable analog architecture for neural networks with onchip learning and refreshing,” in VLSI, 1995. Proceedings., Fifth Great Lakes Symposium on. IEEE, 1995, pp. 33–38.
 [624] A. Almeida and J. Franca, “Digitally programmable analog building blocks for the implementation of artificial neural networks,” Neural Networks, IEEE Transactions on, vol. 7, no. 2, pp. 506–514, 1996.
 [625] J. Alspector, R. Meir, B. Yuhas, A. Jayakumar, and D. Lippe, “A parallel gradient descent method for learning in analog vlsi neural networks,” in Advances in neural information processing systems, 1993, pp. 836–844.
 [626] J. Amaral, J. Amaral, C. Santini, R. Tanscheit, M. Vellasco, and M. Pacheco, “Towards evolvable analog artificial neural networks controllers,” in Evolvable Hardware, 2004. Proceedings. 2004 NASA/DoD Conference on. IEEE, 2004, pp. 46–52.
 [627] Y. Arima, M. Murasaki, T. Yamada, A. Maeda, and H. Shinohara, “A refreshable analog vlsi neural network chip with 400 neurons and 40 k synapses,” in SolidState Circuits Conference, 1992. Digest of Technical Papers. 39th ISSCC, 1992 IEEE International. IEEE, 1992, pp. 132–133.
 [628] I. Bayraktaroğlu, A. S. Öğrenci, G. Dündar, S. Balkır, and E. Alpaydın, “Annsys: an analog neural network synthesis system,” Neural networks, vol. 12, no. 2, pp. 325–338, 1999.
 [629] Y. Berg, R. L. Sigvartsen, T. S. Lande, and A. Abusland, “An analog feedforward neural network with onchip learning,” Analog Integrated Circuits and Signal Processing, vol. 9, no. 1, pp. 65–75, 1996.
 [630] S. Bibyk and M. Ismail, “Issues in analog vlsi and mos techniques for neural computing,” in Analog VLSI Implementation of Neural systems. Springer, 1989, pp. 103–133.
 [631] G. Bo, D. Caviglia, and M. Valle, “A current mode cmos multilayer perceptron chip,” in Microelectronics for Neural Networks, 1996., Proceedings of Fifth International Conference on. IEEE, 1996, pp. 103–106.
 [632] G. Bo, D. Caviglia, M. Valle, R. Stratta, and E. Trucco, “A reconfigurable analog vlsi neural network architecture with non linear synapses,” in Neural Nets WIRN VIETRI96. Springer, 1997, pp. 281–288.
 [633] G. Bo, D. Caviglia, H. Chiblé, and M. Valle, “A circuit architecture for analog onchip back propagation learning with local learning rate adaptation,” Analog Integrated Circuits and Signal Processing, vol. 18, no. 2–3, pp. 163–173, 1999.
 [634] G. Bollano, M. Costa, D. Palmisano, and E. Pasero, “Offchip training of analog hardware feedforward neural networks through hyperfloating resilient propagation,” in Neural Nets WIRN VIETRI96. Springer, 1997, pp. 289–297.
 [635] T. Borgstrom, M. Ismail, and S. Bibyk, “Programmable currentmode neural network for implementation in analogue mos vlsi,” IEE Proceedings G (Circuits, Devices and Systems), vol. 137, no. 2, pp. 175–184, 1990.
 [636] T.H. Botha, “An analog cmos programmable and configurable neural network,” in Pattern Recognition, 1992. Vol. IV. Conference D: Architectures for Vision and Pattern Recognition, Proceedings., 11th IAPR International Conference on. IEEE, 1992, pp. 222–224.
 [637] S. Bridges, M. Figueroa, D. Hsu, and C. Diorio, “A reconfigurable vlsi learning array,” in SolidState Circuits Conference, 2005. ESSCIRC 2005. Proceedings of the 31st European. IEEE, 2005, pp. 117–120.
 [638] V. Calayir, M. Darwish, J. Weldon, and L. Pileggi, “Analog neuromorphic computing enabled by multigate programmable resistive devices,” in Proceedings of the 2015 Design, Automation & Test in Europe Conference & Exhibition. EDA Consortium, 2015, pp. 928–931.
 [639] G. Carvajal, M. Figueroa, D. Sbarbaro, and W. Valenzuela, “Analysis and compensation of the effects of analog vlsi arithmetic on the lms algorithm,” Neural Networks, IEEE Transactions on, vol. 22, no. 7, pp. 1046–1060, 2011.
 [640] R. C. Chang, B. J. Sheu, J. Choi, and D. C.H. Chen, “Programmableweight building blocks for analog vlsi neural network processors,” Analog Integrated Circuits and Signal Processing, vol. 9, no. 3, pp. 215–230, 1996.
 [641] N. Chasta, S. Chouhan, and Y. Kumar, “Analog vlsi implementation of neural network architecture for signal processing,” International Journal of VLSI Design & Communication Systems, vol. 3, no. 2, 2012.
 [642] J.W. Cho, “Modular neurochip with onchip learning and adjustable learning parameters,” Neural Processing Letters, vol. 4, no. 1, pp. 45–52, 1996.
 [643] J.W. Cho and S.Y. Lee, “Analog neurochips with onchip learning capability for adaptive nonlinear equalizers,” in Neural Networks Proceedings, 1998. IEEE World Congress on Computational Intelligence. The 1998 IEEE International Joint Conference on, vol. 1. IEEE, 1998, pp. 581–586.
 [644] ——, “Active noise canceling using analog neurochip with onchip learning capability,” in Advances in Neural Information Processing Systems, 1999, pp. 664–670.
 [645] M.R. Choi and F. M. Salam, “Implementation of feedforward artificial neural nets with learning using standard cmos vlsi technology,” in Circuits and Systems, 1991., IEEE International Symposium on. IEEE, 1991, pp. 1509–1512.
 [646] J. Choi and B. J. Sheu, “Vlsi design of compact and highprecision analog neural network processors,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 637–641.
 [647] Y. K. Choi and S.Y. Lee, “Subthreshold mos implementation of neural networks with onchip error backpropagation learning,” in Neural Networks, 1993. IJCNN’93Nagoya. Proceedings of 1993 International Joint Conference on, vol. 1. IEEE, 1993, pp. 849–852.
 [648] J. Choi, S. H. Bang, and B. J. Sheu, “A programmable analog vlsi neural network processor for communication receivers,” Neural Networks, IEEE Transactions on, vol. 4, no. 3, pp. 484–495, 1993.
 [649] ——, “A programmable vlsi neural network processor for digital communications,” in Custom Integrated Circuits Conference, 1993., Proceedings of the IEEE 1993. IEEE, 1993, pp. 16–5.
 [650] Y. K. Choi, K.H. Ahn, and S.Y. Lee, “Effects of multiplier output offsets on onchip learning for analog neurochips,” Neural Processing Letters, vol. 4, no. 1, pp. 1–8, 1996.
 [651] R. Coggins and M. A. Jabri, “Wattle: A trainable gain analogue vlsi neural network,” in Advances in Neural Information Processing Systems, 1994, pp. 874–881.
 [652] R. Coggins, M. Jabri, B. Flower, and S. Pickard, “A hybrid analog and digital vlsi neural network for intracardiac morphology classification,” SolidState Circuits, IEEE Journal of, vol. 30, no. 5, pp. 542–550, 1995.
 [653] R. Coggins, M. A. Jabri, B. Flower, and S. Pickard, “Iceg morphology classification using an analogue vlsi neural network,” in Advances in Neural Information Processing Systems, 1995, pp. 731–738.
 [654] D. Coue and G. Wilson, “A fourquadrant subthreshold mode multiplier for analog neuralnetwork applications,” Neural Networks, IEEE Transactions on, vol. 7, no. 5, pp. 1212–1219, 1996.
 [655] F. Diotalevi, M. Valle, G. Bo, E. Biglieri, and D. Caviglia, “An analog onchip learning circuit architecture of the weight perturbation algorithm,” in Circuits and Systems, 2000. Proceedings. ISCAS 2000 Geneva. The 2000 IEEE International Symposium on, vol. 1. IEEE, 2000, pp. 419–422.
 [656] L. Docheva, A. Bekiarski, and I. Dochev, “Analysis of analog neural network model with cmos multipliers,” Radioengineering, vol. 16, no. 3, p. 103, 2007.
 [657] B. K. Dolenko and H. C. Card, “The effects of analog hardware properties on backpropagation networks with onchip learning,” in Neural Networks, 1993., IEEE International Conference on. IEEE, 1993, pp. 110–115.
 [658] B. Dolenko and H. Card, “Neural learning in analogue hardware: Effects of component variation from fabrication and from noise,” Electronics letters, vol. 29, no. 8, pp. 693–694, 1993.
 [659] B. K. Dolenko and H. C. Card, “Tolerance to analog hardware of onchip learning in backpropagation networks,” Neural Networks, IEEE Transactions on, vol. 6, no. 5, pp. 1045–1052, 1995.
 [660] J. Donald and L. A. Akers, “An adaptive neural processing node,” Neural Networks, IEEE Transactions on, vol. 4, no. 3, pp. 413–426, 1993.
 [661] T. Duong, S. Eberhardt, M. Tran, T. Daud, and A. Thakoor, “Learning and optimization with cascaded vlsi neural network buildingblock chips,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 1. IEEE, 1992, pp. 184–189.
 [662] S. Eberhardt, T. Duong, and A. Thakoor, “Design of parallel hardware neural network systems from custom analog vlsi ‘building block’ chips,” in Neural Networks, 1989. IJCNN., International Joint Conference on. IEEE, 1989, pp. 183–190.
 [663] S. P. Eberhardt, R. Tawel, T. X. Brown, T. Daud, and A. Thakoor, “Analog vlsi neural networks: Implementation issues and examples in optimization and supervised learning,” Industrial Electronics, IEEE Transactions on, vol. 39, no. 6, pp. 552–564, 1992.
 [664] E. El-Masry, H.K. Yang, M. Yakout et al., “Implementations of artificial neural networks using currentmode pulse width modulation technique,” Neural Networks, IEEE Transactions on, vol. 8, no. 3, pp. 532–548, 1997.
 [665] M. Elsoud, R. Abdel-Rassoul, H. H. Soliman, L. M. Elghanam et al., “Lowpower cmos circuits for analog vlsi programmable neural networks,” in Microelectronics, 2003. ICM 2003. Proceedings of the 15th International Conference on. IEEE, 2003, pp. 14–17.
 [666] S. M. Fakhraie, J. Xu, and K. Smith, “Design of cmos quadratic neural networks,” in Proc. IEEE Pacific Rim Conf. Communications, Computers, and Signal Processing, 1995, pp. 493–496.
 [667] D. B. Feltham and W. Maly, “Physically realistic fault models for analog cmos neural networks,” SolidState Circuits, IEEE Journal of, vol. 26, no. 9, pp. 1223–1229, 1991.
 [668] W. Fisher, R. Fujimoto, and M. Okamura, “The lockheed programmable analog neural network processor,” in Neural Networks, 1990., 1990 IJCNN International Joint Conference on. IEEE, 1990, pp. 563–568.
 [669] B. Foruzandeh and S. F. Quigley, “An analogue multilayer perceptron circuit with onchip training,” in Circuits and Systems, 1999. ISCAS’99. Proceedings of the 1999 IEEE International Symposium on, vol. 5. IEEE, 1999, pp. 395–398.
 [670] B. Foruzandeh and S. Quigley, “An investigation of the effect of synapse transfer characteristic on the performance of analogue neural networks,” in Electronics, Circuits and Systems, 1999. Proceedings of ICECS’99. The 6th IEEE International Conference on, vol. 2. IEEE, 1999, pp. 1017–1020.
 [671] B. Furman and A. Abidi, “An analog cmos backward errorpropagation lsi,” in Signals, Systems and Computers, 1988. TwentySecond Asilomar Conference on, vol. 2. IEEE, 1988, pp. 645–648.
 [672] L. Gatet, H. Tap-Béteille, and M. Lescure, “Design and test of a cmos mlp analog neural network for fast onboard signal processing,” in Electronics, Circuits and Systems, 2006. ICECS’06. 13th IEEE International Conference on. IEEE, 2006, pp. 922–925.
 [673] ——, “Realtime surface discrimination using an analog neural network implemented in a phaseshift laser rangefinder,” Sensors Journal, IEEE, vol. 7, no. 10, pp. 1381–1387, 2007.
 [674] ——, “Analog neural network implementation for a realtime surface classification application,” Sensors Journal, IEEE, vol. 8, no. 8, pp. 1413–1421, 2008.
 [675] L. Gatet, H. Tap-Béteille, M. Lescure, D. Roviras, and A. Mallet, “Functional tests of a 0.6 μm cmos mlp analog neural network for fast onboard signal processing,” Analog Integrated Circuits and Signal Processing, vol. 54, no. 3, pp. 219–227, 2008.
 [676] L. Gatet, H. Tap-Béteille, D. Roviras, and F. Gizard, “Integrated cmos analog neural network ability to linearize the distorted characteristic of hpa embedded in satellites,” in Electronic Design, Test and Applications, 2008. DELTA 2008. 4th IEEE International Symposium on. IEEE, 2008, pp. 502–505.
 [677] L. Gatet, H. Tap-Béteille, and F. Bony, “Comparison between analog and digital neural network implementations for rangefinding applications,” Neural Networks, IEEE Transactions on, vol. 20, no. 3, pp. 460–470, 2009.
 [678] B. Heruseto, E. Prasetyo, H. Afandi, and M. Paindavoine, “Embedded analog cmos neural network inside high speed camera,” in Quality Electronic Design, 2009. ASQED 2009. 1st Asia Symposium on. IEEE, 2009, pp. 144–147.
 [679] K. Hirotsu and M. A. Brooke, “An analog neural network chip with random weight change learning algorithm,” in Neural Networks, 1993. IJCNN’93Nagoya. Proceedings of 1993 International Joint Conference on, vol. 3. IEEE, 1993, pp. 3031–3034.
 [680] S. G. Hohmann, J. Schemmel, F. Schürmann, and K. Meier, “Exploring the parameter space of a genetic algorithm for training an analog neural network.” in GECCO, 2002, pp. 375–382.
 [681] P. Houselander, J. Taylor, and D. Haigh, “A current mode analogue circuit for implementing artificial neural networks,” in Electronic Filters, IEE 1988 Saraga Colloquium on. IET, 1988, pp. 14–1.
 [682] M. Jabri and B. Flower, “Weight perturbation: An optimal architecture and learning technique for analog vlsi feedforward and recurrent multilayer networks,” Neural Networks, IEEE Transactions on, vol. 3, no. 1, pp. 154–157, 1992.
 [683] V. Kakkar, “Comparative study on analog and digital neural networks,” Int. J. Comput. Sci. Netw. Secur, vol. 9, no. 7, pp. 14–21, 2009.
 [684] J.F. Lan and C.Y. Wu, “Analog cmos currentmode implementation of the feedforward neural network with onchip learning and storage,” in Neural Networks, 1995. Proceedings., IEEE International Conference on, vol. 1. IEEE, 1995, pp. 645–650.
 [685] J. Lansner, T. Lehmann et al., “A neuron- and a synapse chip for artificial neural networks,” in SolidState Circuits Conference, 1992. ESSCIRC’92. Eighteenth European. IEEE, 1992, pp. 213–216.
 [686] T. Lindblad, C. Lindsey, F. Block, and A. Jayakumar, “Using software and hardware neural networks in a higgs search,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 356, no. 2, pp. 498–506, 1995.
 [687] J. Liu and M. Brooke, “A fully parallel learning neural network chip for realtime control,” in Neural Networks, 1999. IJCNN’99. International Joint Conference on, vol. 4. IEEE, 1999, pp. 2323–2328.
 [688] ——, “Fully parallel onchip learning hardware neural network for realtime control,” in Circuits and Systems, 1999. ISCAS’99. Proceedings of the 1999 IEEE International Symposium on, vol. 5. IEEE, 1999, pp. 371–374.
 [689] J. B. Lont and W. Guggenbühl, “Analog cmos implementation of a multilayer perceptron with nonlinear synapses,” Neural Networks, IEEE Transactions on, vol. 3, no. 3, pp. 457–465, 1992.
 [690] J. B. Lont, “Highdensity analogeeprom based neural network,” in ICANN’93. Springer, 1993, pp. 1062–1065.
 [691] C. Lu, B. Shi, and L. Chen, “Hardware implementation of an onchip bp learning neural network with programmable neuron characteristics and learning rate adaptation,” in Neural Networks, 2001. Proceedings. IJCNN’01. International Joint Conference on, vol. 1. IEEE, 2001, pp. 212–215.
 [692] ——, “A programmable onchip bp learning neural network with enhanced neuron characteristics,” in Circuits and Systems, 2001. ISCAS 2001. The 2001 IEEE International Symposium on, vol. 3. IEEE, 2001, pp. 573–576.
 [693] C. Lu, B.X. Shi, and L. Chen, “Implementation of an analog selflearning neural network,” in ASIC, 2001. Proceedings. 4th International Conference on. IEEE, 2001, pp. 262–265.
 [694] ——, “An onchip bp learning neural network with ideal neuron characteristics and learning rate adaptation,” Analog Integrated Circuits and Signal Processing, vol. 31, no. 1, pp. 55–62, 2002.
 [695] C. Lu, B. Shi, and L. Chen, “Hardware implementation of an expandable onchip learning neural network with 8neuron and 64synapse,” in TENCON’02. Proceedings. 2002 IEEE Region 10 Conference on Computers, Communications, Control and Power Engineering, vol. 3. IEEE, 2002, pp. 1451–1454.
 [696] ——, “A generalpurpose neural network with onchip bp learning,” in Circuits and Systems, 2002. ISCAS 2002. IEEE International Symposium on, vol. 2. IEEE, 2002, pp. II–520.
 [697] ——, “An expandable onchip bp learning neural network chip,” International journal of electronics, vol. 90, no. 5, pp. 331–340, 2003.
 [698] Y. Maeda, H. Hirano, and Y. Kanata, “An analog neural network circuit with a learning rule via simultaneous perturbation,” in Neural Networks, 1993. IJCNN’93Nagoya. Proceedings of 1993 International Joint Conference on, vol. 1. IEEE, 1993, pp. 853–856.
 [699] ——, “A learning rule of neural networks via simultaneous perturbation and its hardware implementation,” Neural Networks, vol. 8, no. 2, pp. 251–259, 1995.
 [700] D. Maliuk, H.G. Stratigopoulos, and Y. Makris, “An analog vlsi multilayer perceptron and its application towards builtin selftest in analog circuits,” in OnLine Testing Symposium (IOLTS), 2010 IEEE 16th International. IEEE, 2010, pp. 71–76.
 [701] D. Maliuk, H.G. Stratigopoulos, H. Huang, and Y. Makris, “Analog neural network design for rf builtin selftest,” in Test Conference (ITC), 2010 IEEE International. IEEE, 2010, pp. 1–10.
 [702] D. Maliuk and Y. Makris, “A dualmode weight storage analog neural network platform for onchip applications,” in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on. IEEE, 2012, pp. 2889–2892.
 [703] ——, “An analog nonvolatile neural network platform for prototyping rf bist solutions,” in Proceedings of the conference on Design, Automation & Test in Europe. European Design and Automation Association, 2014, p. 368.
 [704] ——, “An experimentation platform for onchip integration of analog neural networks: A pathway to trusted and robust analog/rf ics,” 2014.
 [705] P. Masa, K. Hoen, and H. Wallinga, “A highspeed analog neural processor,” Micro, IEEE, vol. 14, no. 3, pp. 40–50, 1994.
 [706] M. Masmoudi, M. Samet, F. Taktak, and A. M. Alimi, “A hardware implementation of neural network for the recognition of printed numerals,” in Microelectronics, 1999. ICM’99. The Eleventh International Conference on. IEEE, 1999, pp. 113–116.
 [707] M. Mestari, “An analog neural network implementation in fixed time of adjustableorder statistic filters and applications,” Neural Networks, IEEE Transactions on, vol. 15, no. 3, pp. 766–785, 2004.
 [708] J. Michel and Y. Herve, “Vhdl-ams behavioral model of an analog neural network based on a fully parallel weight perturbation algorithm using incremental onchip learning,” in Industrial Electronics, 2004 IEEE International Symposium on, vol. 1. IEEE, 2004, pp. 211–216.
 [709] M. Milev and M. Hristov, “Analog implementation of ann with inherent quadratic nonlinearity of the synapses,” Neural Networks, IEEE Transactions on, vol. 14, no. 5, pp. 1187–1200, 2003.
 [710] S. S. Modi, P. R. Wilson, and A. D. Brown, “Power aware learning for class ab analogue vlsi neural network,” in Circuits and Systems, 2006. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on. IEEE, 2006, pp. 4–pp.
 [711] A. J. Montalvo, R. S. Gyurcsik, and J. J. Paulos, “Building blocks for a temperaturecompensated analog vlsi neural network with onchip learning,” in Circuits and Systems, 1994. ISCAS’94., 1994 IEEE International Symposium on, vol. 6. IEEE, 1994, pp. 363–366.
 [712] ——, “An analog vlsi neural network with onchip perturbation learning,” SolidState Circuits, IEEE Journal of, vol. 32, no. 4, pp. 535–543, 1997.
 [713] ——, “Toward a generalpurpose analog vlsi neural network with onchip learning,” Neural Networks, IEEE Transactions on, vol. 8, no. 2, pp. 413–423, 1997.
 [714] I. P. Morns and S. S. Dlay, “Analog design of a new neural network for optical character recognition,” IEEE transactions on neural networks, vol. 10, no. 4, pp. 951–953, 1999.
 [715] D. B. Mundie and L. W. Massengill, “A simulation and training technique for analog neural network implementations,” in Neural Networks, 1994. IEEE World Congress on Computational Intelligence., 1994 IEEE International Conference on, vol. 3. IEEE, 1994, pp. 1975–1980.
 [716] A. F. Murray, “Analog vlsi and multilayer perceptrons: accuracy, noise and onchip learning,” Neurocomputing, vol. 4, no. 6, pp. 301–310, 1992.
 [717] A. Nosratinia, M. Ahmadi, and M. Shridhar, “Implementation issues in a multistage feedforward analog neural network,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 642–647.
 [718] H.J. Oh and F. M. Salam, “A modular analog chip for feedforward networks with onchip learning,” in Circuits and Systems, 1993., Proceedings of the 36th Midwest Symposium on. IEEE, 1993, pp. 766–769.
 [719] ——, “Analog cmos implementation of neural network for adaptive signal processing,” in Circuits and Systems, 1994. ISCAS’94., 1994 IEEE International Symposium on, vol. 6. IEEE, 1994, pp. 503–506.
 [720] C.H. Pan, H.Y. Hsieh, and K.T. Tang, “An analog multilayer perceptron neural network for a portable electronic nose,” Sensors, vol. 13, no. 1, pp. 193–207, 2012.
 [721] S. Pinjare, “Design and analog vlsi implementation of neural network architecture for signal processing,” European Journal of Scientific Research, vol. 27, no. 2, pp. 199–216, 2009.
 [722] O. Richter, R. F. Reinhart, S. Nease, J. Steil, and E. Chicca, “Device mismatch in a neuromorphic system implements random features for regression,” in Biomedical Circuits and Systems Conference (BioCAS), 2015 IEEE. IEEE, 2015, pp. 1–4.
 [723] F. M. Salam and M.R. Choi, “An allmos analog feedforward neural circuit with learning,” in Circuits and Systems, 1990., IEEE International Symposium on. IEEE, 1990, pp. 2508–2511.
 [724] S. Satyanarayana, Y. Tsividis, and H. Graf, “Analogue neural networks with distributed neurons,” Electronics Letters, vol. 25, no. 5, pp. 302–304, 1989.
 [725] R. Shimabukuro, P. Shoemaker, and M. Stewart, “Circuitry for artificial neural networks with nonvolatile analog memories,” in Circuits and Systems, 1989., IEEE International Symposium on. IEEE, 1989, pp. 1217–1220.
 [726] K. Soelberg, R. L. Sigvartsen, T. S. Lande, and Y. Berg, “An analog continuoustime neural network,” Analog Integrated Circuits and Signal Processing, vol. 5, no. 3, pp. 235–246, 1994.
 [727] L. Song, M. I. Elmasry, and A. Vannelli, “Analog neural network building blocks based on current mode subthreshold operation,” in Circuits and Systems, 1993., ISCAS’93, 1993 IEEE International Symposium on. IEEE, 1993, pp. 2462–2465.
 [728] L.Y. Song, A. Vannelli, and M. I. Elmasry, “A compact vlsi implementation of neural networks,” in VLSI Artificial Neural Networks Engineering. Springer, 1994, pp. 139–156.
 [729] X. Sun, M. H. Chow, F. H. Leung, D. Xu, Y. Wang, and Y.S. Lee, “Analogue implementation of a neural network controller for ups inverter applications,” Power Electronics, IEEE Transactions on, vol. 17, no. 3, pp. 305–313, 2002.
 [730] S. M. Tam, B. Gupta, H. A. Castro, and M. Holler, “Learning on an analog vlsi neural network chip,” in Systems, Man and Cybernetics, 1990. Conference Proceedings., IEEE International Conference on. IEEE, 1990, pp. 701–703.
 [731] S. Tam, M. Holler, J. Brauch, A. Pine, A. Peterson, S. Anderson, and S. Deiss, “A reconfigurable multichip analog neural network: recognition and backpropagation training,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 625–630.
 [732] R. Tawel, “Learning in analog neural network hardware,” Computers & electrical engineering, vol. 19, no. 6, pp. 453–467, 1993.
 [733] C. S. Thakur, T. J. Hamilton, R. Wang, J. Tapson, and A. van Schaik, “A neuromorphic hardware framework based on population coding,” in Neural Networks (IJCNN), 2015 International Joint Conference on. IEEE, 2015, pp. 1–8.
 [734] C. Thakur, R. Wang, T. Hamilton, J. Tapson, and A. van Schaik, “A low power trainable neuromorphic integrated circuit that is tolerant to device mismatch,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. PP, no. 99, pp. 1–11, 2016.
 [735] J. Tombs, L. Tarassenko, G. Cairns, and A. Murray, “Cascadability and insitu learning for vlsi multilayer networks,” in Artificial Neural Networks, 1993., Third International Conference on. IET, 1993, pp. 56–60.
 [736] Y. Tsividis and D. Anastassiou, “Switchedcapacitor neural networks,” Electronics Letters, vol. 23, no. 18, pp. 958–959, 1987.
 [737] M. Valle, D. D. Caviglia, and G. M. Bisio, “Backpropagation learning algorithms for analog vlsi implementation,” in VLSI for Neural Networks and Artificial Intelligence. Springer, 1994, pp. 35–44.
 [738] ——, “An experimental analog vlsi neural network with onchip backpropagation learning,” Analog Integrated Circuits and Signal Processing, vol. 9, no. 3, pp. 231–245, 1996.
 [739] J. Van der Spiegel, P. Mueller, D. Blackman, P. Chance, C. Donham, R. Etienne-Cummings, and P. Kinget, “An analog neural computer with modular architecture for realtime dynamic computations,” SolidState Circuits, IEEE Journal of, vol. 27, no. 1, pp. 82–92, 1992.
 [740] M. Walker, P. Hasler, and L. Akers, “A cmos neural network for pattern association,” IEEE Micro, no. 5, pp. 68–74, 1989.
 [741] Y. Wang, “Analog cmos implementation of backward error propagation,” in Neural Networks, 1993., IEEE International Conference on. IEEE, 1993, pp. 701–706.
 [742] K. Wawryn and B. Strzeszewski, “Current mode circuits for programmable wta neural network,” Analog Integrated Circuits and Signal Processing, vol. 27, no. 1, pp. 49–69, 2001.
 [743] K. Wawryn and A. Mazurek, “Low power, current mode circuits for programmable neural network,” in Circuits and Systems, 2001. ISCAS 2001. The 2001 IEEE International Symposium on, vol. 3. IEEE, 2001, pp. 628–631.
 [744] D. J. Weller and R. R. Spencer, “A process invariant analog neural network ic with dynamically refreshed weights,” in Circuits and Systems, 1990., Proceedings of the 33rd Midwest Symposium on. IEEE, 1990, pp. 273–276.
 [745] H. Withagen, “Implementing backpropagation with analog hardware,” in Neural Networks, 1994. IEEE World Congress on Computational Intelligence., 1994 IEEE International Conference on, vol. 4. IEEE, 1994, pp. 2015–2017.
 [746] S. Wolpert, L. Lee, J. F. Heisler et al., “Circuits for a vlsi-based standalone backpropagation neural network,” in Bioengineering Conference, 1992., Proceedings of the 1992 Eighteenth IEEE Annual Northeast. IEEE, 1992, pp. 47–48.
 [747] T. Yildirim and J. S. Marsland, “A conic section function network synapse and neuron implementation in vlsi hardware,” in Neural Networks, 1996., IEEE International Conference on, vol. 2. IEEE, 1996, pp. 974–979.
 [748] M. Yildiz, S. Minaei, and I. C. Göknar, “A cmos classifier circuit using neural networks with novel architecture,” Neural Networks, IEEE Transactions on, vol. 18, no. 6, pp. 1845–1850, 2007.
 [749] M. Lee, K. Hwang, and W. Sung, “Fault tolerance analysis of digital feedforward deep neural networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 5031–5035.
 [750] D. Christiani, C. Merkel, and D. Kudithipudi, “Invited: Towards a scalable neuromorphic hardware for classification and prediction with stochastic no-prop algorithms,” in Quality Electronic Design (ISQED), 2016 17th International Symposium on. IEEE, 2016, pp. 124–128.
 [751] F. Distante, M. Sami, R. Stefanelli, and G. S. Gajani, “A configurable array architecture for wsi implementation of neural nets,” in Computers and Communications, 1990. Conference Proceedings., Ninth Annual International Phoenix Conference on. IEEE, 1990, pp. 44–51.
 [752] ——, “Area compaction in silicon structures for neural net implementation,” Microprocessing and microprogramming, vol. 28, no. 1, pp. 139–143, 1990.
 [753] W. Fornaciari and F. Salice, “A low latency digital neural network architecture,” in VLSI for Neural Networks and Artificial Intelligence. Springer, 1994, pp. 81–91.
 [754] D. Hammerstrom, “A vlsi architecture for high-performance, low-cost, on-chip learning,” in Neural Networks, 1990., 1990 IJCNN International Joint Conference on. IEEE, 1990, pp. 537–544.
 [755] ——, “A highly parallel digital architecture for neural network emulation,” in VLSI for artificial intelligence and neural networks. Springer, 1991, pp. 357–366.
 [756] T. Islam, S. Ghosh, and H. Saha, “Ann-based signal conditioning and its hardware implementation of a nanostructured porous silicon relative humidity sensor,” Sensors and Actuators B: Chemical, vol. 120, no. 1, pp. 130–141, 2006.
 [757] Y.-C. Kim and M. Shanblatt, “An implementable digital multilayer neural network (dmnn),” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 594–600.
 [758] Y.-C. Kim, M. Shanblatt et al., “A vlsi-based digital multilayer neural network architecture,” in VLSI, 1993. ’Design Automation of High Performance VLSI Systems’, Proceedings., Third Great Lakes Symposium on. IEEE, 1993, pp. 27–31.
 [759] S. Kumar, K. Forward, and M. Palaniswami, “Performance evaluation of a risc neuroprocessor for neural networks,” in High Performance Computing, 1996. Proceedings. 3rd International Conference on. IEEE, 1996, pp. 351–356.
 [760] S. Kung and J. Hwang, “Digital vlsi architectures for neural networks,” in Circuits and Systems, 1989., IEEE International Symposium on. IEEE, 1989, pp. 445–448.
 [761] L. Larsson, S. Krol, and K. Lagemann, “Neneb, an application adjustable single chip neural network processor for mobile real time image processing,” in Neural Networks for Identification, Control, Robotics, and Signal/Image Processing, 1996. Proceedings., International Workshop on. IEEE, 1996, pp. 154–162.
 [762] D. Myers and G. Brebner, “The implementation of hardware neural net systems,” in Artificial Neural Networks, 1989., First IEE International Conference on (Conf. Publ. No. 313). IET, 1989, pp. 57–61.
 [763] S. Pakzad and P. Plaskonos, “Implementation of a digital modular chip for a reconfigurable artificial neural network,” in PARLE’93 Parallel Architectures and Languages Europe. Springer, 1993, pp. 700–703.
 [764] G. G. Pechanek, S. Vassiliadis, J. G. Delgado-Frias, and G. Triantafyllos, “Scalable completely connected digital neural networks,” in Neural Networks, 1994. IEEE World Congress on Computational Intelligence., 1994 IEEE International Conference on, vol. 4. IEEE, 1994, pp. 2078–2083.
 [765] P. Plaskonos, S. Pakzad, B. Jin, and A. Hurson, “Design of a modular chip for a reconfigurable artificial neural network,” in Developing and Managing Intelligent System Projects, 1993., IEEE International Conference on. IEEE, 1993, pp. 55–62.
 [766] S. Popescu, “Hardware implementation of fast neural networks using cpld,” in Neural Network Applications in Electrical Engineering, 2000. NEUREL 2000. Proceedings of the 5th Seminar on. IEEE, 2000, pp. 121–124.
 [767] T. Szabó, L. Antoni, G. Horváth, and B. Fehér, “A full-parallel digital implementation for pretrained nns,” in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, vol. 2. IEEE, 2000, pp. 49–54.
 [768] C. Tang and H. Kwan, “Digital implementation of neural networks with quantized neurons,” in Circuits and Systems, 1997. ISCAS’97., Proceedings of 1997 IEEE International Symposium on, vol. 1. IEEE, 1997, pp. 649–652.
 [769] M. S. Tomlinson Jr and D. J. Walker, “Dnna: A digital neural network architecture,” in International Neural Network Conference. Springer, 1990, pp. 589–592.
 [770] M. S. Tomlinson Jr, D. J. Walker, and M. A. Sivilotti, “A digital neural network architecture for vlsi,” in Neural Networks, 1990., 1990 IJCNN International Joint Conference on. IEEE, 1990, pp. 545–550.
 [771] E. Torbey and B. Haroun, “Architectural synthesis for digital neural networks,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 601–606.
 [772] D. Zhang, G. A. Jullien, W. C. Miller, and E. Swartzlander Jr, “Arithmetic for digital neural networks,” in Computer Arithmetic, 1991. Proceedings., 10th IEEE Symposium on. IEEE, 1991, pp. 58–63.
 [773] K. Aihara, O. Fujita, and K. Uchimura, “A digital neural network lsi using sparse memory access architecture,” in Microelectronics for Neural Networks, 1996., Proceedings of Fifth International Conference on. IEEE, 1996, pp. 139–148.
 [774] N. Avellana, A. Strey, R. Holgado, J. A. Fernandes, R. Capillas, and E. Valderrama, “Design of a low-cost and high-speed neurocomputer system,” in Microelectronics for Neural Networks, 1996., Proceedings of Fifth International Conference on. IEEE, 1996, pp. 221–226.
 [775] J. Ayala, M. López-Vallejo et al., “Design of a pipelined hardware architecture for real-time neural network computations,” in Circuits and Systems, 2002. MWSCAS-2002. The 2002 45th Midwest Symposium on, vol. 1. IEEE, 2002, pp. I–419.
 [776] A. Bermak and D. Martinez, “Digital vlsi implementation of a multiprecision neural network classifier,” in Neural Information Processing, 1999. Proceedings. ICONIP’99. 6th International Conference on, vol. 2. IEEE, 1999, pp. 560–565.
 [777] A. Bermak and A. Bouzerdoum, “Vlsi implementation of a neural network classifier based on the saturating linear activation function,” in Neural Information Processing, 2002. ICONIP’02. Proceedings of the 9th International Conference on, vol. 2. IEEE, 2002, pp. 981–985.
 [778] C.-F. Chang and B. Sheu, “Digital vlsi multiprocessor design for neurocomputers,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 1–6.
 [779] F. Distante, M. Sami, R. Stefanelli, and G. Storti-Gajani, “A compact and fast silicon implementation for layered neural nets,” in VLSI for Artificial Intelligence and Neural Networks. Springer, 1991, pp. 345–355.
 [780] M. Duranton and N. Mauduit, “A general purpose digital architecture for neural network simulations,” in Artificial Neural Networks, 1989., First IEE International Conference on (Conf. Publ. No. 313). IET, 1989, pp. 62–66.
 [781] H. Faiedh, Z. Gafsi, K. Torki, and K. Besbes, “Digital hardware implementation of a neural network used for classification,” in Microelectronics, 2004. ICM 2004 Proceedings. The 16th International Conference on. IEEE, 2004, pp. 551–554.
 [782] C. Joseph and A. Gupta, “A novel hardware efficient digital neural network architecture implemented in 130nm technology,” in Computer and Automation Engineering (ICCAE), 2010 The 2nd International Conference on, vol. 3. IEEE, 2010, pp. 82–87.
 [783] D. Y. Kim, J. M. Kim, H. Jang, J. Jeong, and J. W. Lee, “A neural network accelerator for mobile application processors,” IEEE Transactions on Consumer Electronics, vol. 61, no. 4, pp. 555–563, 2015.
 [784] D. Orrey, D. Myers, and J. Vincent, “A high performance digital processor for implementing large artificial neural networks,” in Custom Integrated Circuits Conference, 1991., Proceedings of the IEEE 1991. IEEE, 1991, pp. 16–3.
 [785] J. Tuazon, K. Hamidian, and L. Guyette, “A new digital neural network and its application,” in Electrical and Computer Engineering, 1993. Canadian Conference on. IEEE, 1993, pp. 481–485.
 [786] J. Vincent and D. Myers, “Parameter selection for digital realisations of neural networks,” in Neural Networks: Design Techniques and Tools, IEE Colloquium on. IET, 1991, pp. 7–1.
 [787] M. Walker and L. Akers, “A neuromorphic approach to adaptive digital circuitry,” in Computers and Communications, 1988. Conference Proceedings., Seventh Annual International Phoenix Conference on. IEEE, 1988, pp. 19–23.
 [788] C. Alippi and M. E. Nigri, “Hardware requirements to digital vlsi implementation of neural networks,” in Neural Networks, 1991. 1991 IEEE International Joint Conference on. IEEE, 1991, pp. 1873–1878.
 [789] J.-H. Chung, H. Yoon, and S. R. Maeng, “A systolic array exploiting the inherent parallelisms of artificial neural networks,” Microprocessing and Microprogramming, vol. 33, no. 3, pp. 145–159, 1992.
 [790] J. Cloutier and P. Y. Simard, “Hardware implementation of the backpropagation without multiplication,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994., Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 46–55.
 [791] M. Duranton and J. Sirat, “Learning on vlsi: A general purpose digital neurochip,” in Neural Networks, 1989. IJCNN., International Joint Conference on. IEEE, 1989, p. 613 vol. 2.
 [792] H. Eguchi, T. Furuta, H. Horiguchi, S. Oteki, and T. Kitaguchi, “Neural network lsi chip with on-chip learning,” in Neural Networks, 1991., IJCNN-91-Seattle International Joint Conference on, vol. 1. IEEE, 1991, pp. 453–456.
 [793] Y. Kondo, Y. Koshiba, Y. Arima, M. Murasaki, T. Yamada, H. Amishiro, H. Shinohara, and H. Mori, “A 1.2 gflops neural network chip exhibiting fast convergence,” in Solid-State Circuits Conference, 1994. Digest of Technical Papers. 41st ISSCC., 1994 IEEE International. IEEE, 1994, pp. 218–219.
 [794] H. Madokoro and K. Sato, “Hardware implementation of backpropagation neural networks for real-time video image learning and processing,” Journal of Computers, vol. 8, no. 3, pp. 559–566, 2013.
 [795] D. J. Myers, J. M. Vincent, and D. A. Orrey, “Hannibal: A vlsi building block for neural networks with on-chip backpropagation learning,” Neurocomputing, vol. 5, no. 1, pp. 25–37, 1993.
 [796] S. Oteki, A. Hashimoto, T. Furuta, S. Motomura, T. Watanabe, D. Stork, and H. Eguchi, “A digital neural network vlsi with on-chip learning using stochastic pulse encoding,” in Neural Networks, 1993. IJCNN’93-Nagoya. Proceedings of 1993 International Joint Conference on, vol. 3. IEEE, 1993, pp. 3039–3045.
 [797] O. Saito, K. Aihara, O. Fujita, and K. Uchimura, “A 1m synapse self-learning digital neural network chip,” in Solid-State Circuits Conference, 1998. Digest of Technical Papers. 1998 IEEE International. IEEE, 1998, pp. 94–95.
 [798] Y. Sato, K. Shibata, M. Asai, M. Ohki, M. Sugie, T. Sakaguchi, M. Hashimoto, and Y. Kuwabara, “Development of a high-performance general purpose neurocomputer composed of 512 digital neurons,” in Neural Networks, 1993. IJCNN’93-Nagoya. Proceedings of 1993 International Joint Conference on, vol. 2. IEEE, 1993, pp. 1967–1970.
 [799] Z. Tang, O. Ishizuka, and H. Matsumoto, “Backpropagation learning in analog t-model neural network hardware,” in Neural Networks, 1993. IJCNN’93-Nagoya. Proceedings of 1993 International Joint Conference on, vol. 1. IEEE, 1993, pp. 899–902.
 [800] J. Theeten, M. Duranton, N. Mauduit, and J. Sirat, “The lneuro chip: a digital vlsi with on-chip learning mechanism,” in International Neural Network Conference. Springer, 1990, pp. 593–596.
 [801] Q. Wang, A. Li, Z. Li, and Y. Wan, “A design and implementation of reconfigurable architecture for neural networks based on systolic arrays,” in Advances in Neural NetworksISNN 2006. Springer, 2006, pp. 1328–1333.
 [802] J. Wawrzynek, K. Asanovic, and N. Morgan, “The design of a neuromicroprocessor,” Neural Networks, IEEE Transactions on, vol. 4, no. 3, pp. 394–399, 1992.
 [803] S. B. Yun, Y. J. Kim, S. S. Dong, and C. H. Lee, “Hardware implementation of neural network with expansible and reconfigurable architecture,” in Neural Information Processing, 2002. ICONIP’02. Proceedings of the 9th International Conference on, vol. 2. IEEE, 2002, pp. 970–975.
 [804] R. Inigo, A. Bonde, and B. Holcombe, “Self adjusting weights for hardware neural networks,” Electronics Letters, vol. 26, pp. 1630–1632, 1990.
 [805] H. Hikawa, “Digital pulse mode neural network with simple synapse multiplier,” in Circuits and Systems, 2001. ISCAS 2001. The 2001 IEEE International Symposium on, vol. 3. IEEE, 2001, pp. 569–572.
 [806] ——, “Hardware pulse mode neural network with piecewise linear activation function neurons,” in Circuits and Systems, 2002. ISCAS 2002. IEEE International Symposium on, vol. 2. IEEE, 2002, pp. II–524.
 [807] Y.-C. Kim and M. A. Shanblatt, “Architecture and statistical model of a pulse-mode digital multilayer neural network,” Neural Networks, IEEE Transactions on, vol. 6, no. 5, pp. 1109–1118, 1995.
 [808] ——, “Random noise effects in pulse-mode digital multilayer neural networks,” Neural Networks, IEEE Transactions on, vol. 6, no. 1, pp. 220–229, 1995.
 [809] A. N. Al-Zeftawi, K. M. A. El-Fattah, H. N. Shanan, and T. S. Kamel, “Cmos mixed digital analog reconfigurable neural network with gaussian synapses,” in Electrotechnical Conference, 2000. MELECON 2000. 10th Mediterranean, vol. 3. IEEE, 2000, pp. 1198–1201.
 [810] O. Barkan, W. Smith, and G. Persky, “Design of coupling resistor networks for neural network hardware,” Circuits and Systems, IEEE Transactions on, vol. 37, no. 6, pp. 756–765, 1990.
 [811] J. Binfet and B. M. Wilamowski, “Microprocessor implementation of fuzzy systems and neural networks,” in Neural Networks, 2001. Proceedings. IJCNN’01. International Joint Conference on, vol. 1. IEEE, 2001, pp. 234–239.
 [812] J.-C. Bor and C.-Y. Wu, “Pulse-width-modulation feedforward neural network design with on-chip learning,” in Circuits and Systems, 1996., IEEE Asia Pacific Conference on. IEEE, 1996, pp. 369–372.
 [813] B. E. Boser, E. Sackinger, J. Bromley, Y. Le Cun, and L. D. Jackel, “An analog neural network processor with programmable topology,” Solid-State Circuits, IEEE Journal of, vol. 26, no. 12, pp. 2017–2025, 1991.
 [814] F. Camboni and M. Valle, “A mixed mode perceptron cell for vlsi neural networks,” in Electronics, Circuits and Systems, 2001. ICECS 2001. The 8th IEEE International Conference on, vol. 1. IEEE, 2001, pp. 377–380.
 [815] G. Cardarilli, C. D’Alessandro, P. Marinucci, and F. Bordoni, “Vlsi implementation of a modular and programmable neural architecture,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994., Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 218–225.
 [816] G. Cardarilli, G. Di Stefano, G. Fabrizi, and P. Marinucci, “Analysis and implementation of a vlsi neural network,” in Neural Networks, 1995. Proceedings., IEEE International Conference on, vol. 3. IEEE, 1995, pp. 1482–1486.
 [817] D. D. Corso, F. Gregoretti, L. Reyneri, and A. Allasia, “A pattern recognition demonstrator based on a silicon neural chip,” in Solid-State Circuits Conference, 1992. ESSCIRC’92. Eighteenth European. IEEE, 1992, pp. 207–212.
 [818] K. Current and J. Current, “Cmos currentmode circuits for neural networks,” in Circuits and Systems, 1990., IEEE International Symposium on. IEEE, 1990, pp. 2971–2974.
 [819] I. Del Campo, J. Echanobe, G. Bosque, and J. M. Tarela, “Efficient hardware/software implementation of an adaptive neurofuzzy system,” Fuzzy Systems, IEEE Transactions on, vol. 16, no. 3, pp. 761–778, 2008.
 [820] H. Djahanshahi, M. Ahmadi, G. Jullien, and W. Miller, “A unified synapse-neuron building block for hybrid vlsi neural networks,” in Circuits and Systems, 1996. ISCAS’96., Connecting the World., 1996 IEEE International Symposium on, vol. 3. IEEE, 1996, pp. 483–486.
 [821] ——, “A modular architecture for hybrid vlsi neural networks and its application in a smart photosensor,” in Neural Networks, 1996., IEEE International Conference on, vol. 2. IEEE, 1996, pp. 868–873.
 [822] H. Djahanshahi, M. Ahmadi, G. A. Jullien, and W. C. Miller, “Design and vlsi implementation of a unified synapse-neuron architecture,” in VLSI, 1996. Proceedings., Sixth Great Lakes Symposium on. IEEE, 1996, pp. 228–233.
 [823] H. Djahanshahi, M. Ahmadi, G. Jullien, and W. Miller, “A self-scaling neural hardware structure that reduces the effect of some implementation errors,” in Neural Networks for Signal Processing [1997] VII. Proceedings of the 1997 IEEE Workshop. IEEE, 1997, pp. 588–597.
 [824] H. Djahanshahi, M. Ahmadi, G. A. Jullien, and W. C. Miller, “Neural network integrated circuits with single-block mixed-signal arrays,” in Signals, Systems & Computers, 1997. Conference Record of the Thirty-First Asilomar Conference on, vol. 2. IEEE, 1997, pp. 1130–1135.
 [825] B. Erkmen, R. A. Vural, N. Kahraman, and T. Yildirim, “A mixed mode neural network circuitry for object recognition application,” Circuits, Systems, and Signal Processing, vol. 32, no. 1, pp. 29–46, 2013.
 [826] S. M. Fakhraie, H. Farshbaf, and K. C. Smith, “Scalable closed-boundary analog neural networks,” Neural Networks, IEEE Transactions on, vol. 15, no. 2, pp. 492–504, 2004.
 [827] J. Fieres, A. Grübl, S. Philipp, K. Meier, J. Schemmel, and F. Schürmann, “A platform for parallel operation of vlsi neural networks,” in Proc. of the 2004 Brain Inspired Cognitive Systems Conference (BICS2004), 2004.
 [828] J. Franca et al., “A mixed-mode architecture for implementation of analog neural networks with digital programmability,” in Neural Networks, 1993. IJCNN’93-Nagoya. Proceedings of 1993 International Joint Conference on, vol. 1. IEEE, 1993, pp. 887–890.
 [829] H. P. Graf, R. Janow, D. Henderson, and R. Lee, “Reconfigurable neural net chip with 32k connections.” in NIPS, 1990, pp. 1032–1038.
 [830] L. D. Jackel, H. Graf, and R. Howard, “Electronic neural network chips,” Applied optics, vol. 26, no. 23, pp. 5077–5080, 1987.
 [831] L. Jackel, B. Boser, H. Graf, J. Denker, Y. Le Cun, D. Henderson, O. Matan, R. Howard, and H. Baird, “Vlsi implementations of electronic neural networks: An example in character recognition,” in Systems, Man and Cybernetics, 1990. Conference Proceedings., IEEE International Conference on. IEEE, 1990, pp. 320–322.
 [832] L. Jackel, B. Boser, J. Denker, H. Graf, Y. Le Cun, I. Guyon, D. Henderson, R. Howard, W. Hubbard, and S. Solla, “Hardware requirements for neural-net optical character recognition,” in Neural Networks, 1990., 1990 IJCNN International Joint Conference on. IEEE, 1990, pp. 855–861.
 [833] D. Johnson, J. Marsland, and W. Eccleston, “Neural network implementation using a single most per synapse,” Neural Networks, IEEE Transactions on, vol. 6, no. 4, pp. 1008–1011, 1995.
 [834] V. E. Koosh and R. Goodman, “Vlsi neural network with digital weights and analog multipliers,” in Circuits and Systems, 2001. ISCAS 2001. The 2001 IEEE International Symposium on, vol. 3. IEEE, 2001, pp. 233–236.
 [835] V. F. Koosh and R. M. Goodman, “Analog vlsi neural network with digital perturbative learning,” Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, vol. 49, no. 5, pp. 359–368, 2002.
 [836] J. H. Lee, X. Ma, and K. Likharev, “Cmol crossnets: Possible neuromorphic nanoelectronic circuits,” Advances in Neural Information Processing Systems, vol. 18, p. 755, 2006.
 [837] T. Lehmann, E. Bruun, and C. Dietrich, “Mixed analog/digital matrix-vector multiplier for neural network synapses,” Analog Integrated Circuits and Signal Processing, vol. 9, no. 1, pp. 55–63, 1996.
 [838] K. K. Likharev, “Crossnets: Neuromorphic hybrid cmos/nanoelectronic networks,” Advanced Materials, vol. 3, pp. 322–331, 2011.
 [839] J. Liu, M. A. Brooke, and K. Hirotsu, “A cmos feedforward neural-network chip with on-chip parallel learning for oscillation cancellation,” Neural Networks, IEEE Transactions on, vol. 13, no. 5, pp. 1178–1186, 2002.
 [840] Y. Liu, L. Zhang, S. Shan, P. Sun, X. Yang, B. Wang, D. Shi, P. Hui, and X. Lin, “Research on compensation for pressure sensor thermal zero drift based on back propagation neural network implemented by hardware circuits,” in Natural Computation (ICNC), 2010 Sixth International Conference on, vol. 2. IEEE, 2010, pp. 796–800.
 [841] Y. Maeda, A. Nakazawa, and Y. Kanata, “Hardware implementation of a pulse density neural network using simultaneous perturbation learning rule,” Analog Integrated Circuits and Signal Processing, vol. 18, no. 2–3, pp. 153–162, 1999.
 [842] P. Masa, K. Hoen, and H. Wallinga, “High speed vlsi neural network for high-energy physics,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994., Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 422–428.
 [843] B. Maundy and E. El-Masry, “Switched-capacitor realisations of artificial neural network learning schemes,” Electronics Letters, vol. 27, no. 1, pp. 85–87, 1991.
 [844] M. Mirhassani, M. Ahmadi, and W. C. Miller, “A mixed-signal vlsi neural network with on-chip learning,” in Electrical and Computer Engineering, 2003. IEEE CCECE 2003. Canadian Conference on, vol. 1. IEEE, 2003, pp. 591–594.
 [845] M. Mirhassani, M. Ahmadi, and W. Miller, “A new mixed-signal feedforward neural network with on-chip learning,” in Neural Networks, 2004. Proceedings. 2004 IEEE International Joint Conference on, vol. 3. IEEE, 2004, pp. 1729–1734.
 [846] M. Mirhassani, M. Ahmadi, and W. C. Miller, “Design and implementation of novel multilayer mixed-signal on-chip neural networks,” in Circuits and Systems, 2005. 48th Midwest Symposium on. IEEE, 2005, pp. 413–416.
 [847] ——, “A feedforward time-multiplexed neural network with mixed-signal neuron–synapse arrays,” Microelectronic Engineering, vol. 84, no. 2, pp. 300–307, 2007.
 [848] T. Morie, J. Funakoshi, M. Nagata, and A. Iwata, “An analog-digital merged neural circuit using pulse width modulation technique,” Analog Integrated Circuits and Signal Processing, vol. 25, no. 3, pp. 319–328, 2000.
 [849] A. Nosratinia, M. Ahmadi, M. Shridhar, and G. Jullien, “A hybrid architecture for feedforward multilayer neural networks,” in Circuits and Systems, 1992. ISCAS’92. Proceedings., 1992 IEEE International Symposium on, vol. 3. IEEE, 1992, pp. 1541–1544.
 [850] H. Pan, M. Manic, X. Li, and B. Wilamowski, “Multilevel logic multiplier using vlsi neural network,” in Industrial Technology, 2003 IEEE International Conference on, vol. 1. IEEE, 2003, pp. 327–332.
 [851] A. Passos Almeida and J. Franca, “A mixed-mode architecture for implementation of analog neural networks with digital programmability,” in Neural Networks, 1993. IJCNN’93-Nagoya. Proceedings of 1993 International Joint Conference on, vol. 1. IEEE, 1993, pp. 887–890.
 [852] E. Sackinger, B. E. Boser, J. Bromley, Y. LeCun, and L. D. Jackel, “Application of the anna neural network chip to high-speed character recognition,” Neural Networks, IEEE Transactions on, vol. 3, no. 3, pp. 498–505, 1991.
 [853] E. Säckinger, B. E. Boser, and L. D. Jackel, “A neurocomputer board based on the anna neural network chip,” in Advances in Neural Information Processing Systems, 1992, pp. 773–780.
 [854] G. Sanchez, T. J. Koickal, T. Sripad, L. C. Gouveia, A. Hamilton, J. Madrenas et al., “Spike-based analog-digital neuromorphic information processing system for sensor applications,” in Circuits and Systems (ISCAS), 2013 IEEE International Symposium on. IEEE, 2013, pp. 1624–1627.
 [855] J. Schemmel, S. Hohmann, K. Meier, and F. Schürmann, “A mixed-mode analog neural network using current-steering synapses,” Analog Integrated Circuits and Signal Processing, vol. 38, no. 2–3, pp. 233–244, 2004.
 [856] A. Schmid, Y. Leblebici, and D. Mlynek, “A two-stage charge-based analog/digital neuron circuit with adjustable weights,” in Neural Networks, 1999. IJCNN’99. International Joint Conference on, vol. 4. IEEE, 1999, pp. 2357–2362.
 [857] D. Van den Bout, P. Franzon, J. Paulos, T. Miller, W. Snyder, T. Nagle, and W. Liu, “Scalable vlsi implementations for neural networks,” Journal of VLSI signal processing systems for signal, image and video technology, vol. 1, no. 4, pp. 367–385, 1990.
 [858] K. Watanabe, L. Wang, H.-W. Cha, and S. Ogawa, “A current-mode approach to cmos neural network implementation,” in Algorithms and Architectures for Parallel Processing, 1997. ICAPP 97., 1997 3rd International Conference on. IEEE, 1997, pp. 625–637.
 [859] J. Yang, M. Ahmadi, G. A. Jullien, and W. C. Miller, “An in-the-loop training method for vlsi neural networks,” in Circuits and Systems, 1999. ISCAS’99. Proceedings of the 1999 IEEE International Symposium on, vol. 5. IEEE, 1999, pp. 619–622.
 [860] N. Yazdi, M. Ahmadi, G. A. Jullien, and M. Shridhar, “Pipelined analog multilayer feedforward neural networks,” in Circuits and Systems, 1993., ISCAS’93, 1993 IEEE International Symposium on. IEEE, 1993, pp. 2768–2771.
 [861] G. Zatorre, N. Medrano, M. T. Sanz, P. Martínez, S. Celma et al., “Mixed-mode artificial neuron for cmos integration,” in Electrotechnical Conference, 2006. MELECON 2006. IEEE Mediterranean. IEEE, 2006, pp. 381–384.
 [862] J. Bürger and C. Teuscher, “Volatile memristive devices as short-term memory in a neuromorphic learning architecture,” in Proceedings of the 2014 IEEE/ACM International Symposium on Nanoscale Architectures. ACM, 2014, pp. 104–109.
 [863] Z. Dong, S. Duan, X. Hu, L. Wang, and H. Li, “A novel memristive multilayer feedforward small-world neural network with its applications in pid control,” The Scientific World Journal, vol. 2014, 2014.
 [864] A. Emelyanov, V. Demin, D. Lapkin, V. Erokhin, S. Battistoni, G. Baldi, S. Iannotta, P. Kashkarov, and M. Kovalchuk, “Pani-based neuromorphic networks: first results and close perspectives,” in Memristive Systems (MEMRISYS) 2015 International Conference on. IEEE, 2015, pp. 1–2.
 [865] A. Emelyanov, D. Lapkin, V. Demin, V. Erokhin, S. Battistoni, G. Baldi, A. Dimonte, A. Korovin, S. Iannotta, P. Kashkarov et al., “First steps towards the realization of a double layer perceptron based on organic memristive devices,” AIP Advances, vol. 6, no. 11, p. 111301, 2016.
 [866] H. Manem, K. Beckmann, M. Xu, R. Carroll, R. Geer, and N. C. Cady, “An extendable multipurpose 3d neuromorphic fabric using nanoscale memristors,” in Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on. IEEE, 2015, pp. 1–8.
 [867] C. Merkel, D. Kudithipudi, and R. Ptucha, “Heterogeneous cmos/memristor hardware neural networks for real-time target classification,” in SPIE Sensing Technology + Applications. International Society for Optics and Photonics, 2014, pp. 911908–911908.
 [868] O. Šuch, M. Klimo, and O. Škvarek, “Phoneme discrimination using a pair of neurons built from crs fuzzy logic gates,” in Proceedings of the International Conference on Numerical Analysis and Applied Mathematics 2014 (ICNAAM-2014), vol. 1648. AIP Publishing, 2015, p. 280010.
 [869] P. Dong, G. L. Bilbro, and M.Y. Chow, “Implementation of artificial neural network for real time applications using field programmable analog arrays,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 1518–1524.
 [870] M. Liu, H. Yu, and W. Wang, “Fpaa based on integration of cmos and nanojunction devices for neuromorphic applications,” in NanoNet. Springer, 2009, pp. 44–48.
 [871] F. D. Baptista and F. Morgado-Dias, “Automatic general-purpose neural hardware generator,” Neural Computing and Applications, pp. 1–12, 2015.
 [872] J. Bastos, H. Figueroa, and A. Monti, “Fpga implementation of neural network-based controllers for power electronics applications,” in Applied Power Electronics Conference and Exposition, 2006. APEC’06. Twenty-First Annual IEEE. IEEE, 2006, 6 pp.
 [873] J. Blake, L. Maguire, T. McGinnity, and L. McDaid, “Using xilinx fpgas to implement neural networks and fuzzy systems,” in Neural and Fuzzy Systems: Design, Hardware and Applications (Digest No: 1997/133), IEE Colloquium on. IET, 1997, pp. 1–1.
 [874] J. Blake, L. P. Maguire, T. McGinnity, B. Roche, and L. McDaid, “The implementation of fuzzy systems, neural networks and fuzzy neural networks using fpgas,” Information Sciences, vol. 112, no. 1, pp. 151–168, 1998.
 [875] M. Bohrn, L. Fujcik, and R. Vrba, “Field programmable neural array for feedforward neural networks,” in Telecommunications and Signal Processing (TSP), 2013 36th International Conference on. IEEE, 2013, pp. 727–731.
 [876] M. Bonnici, E. J. Gatt, J. Micallef, and I. Grech, “Artificial neural network optimization for fpga,” in Electronics, Circuits and Systems, 2006. ICECS’06. 13th IEEE International Conference on. IEEE, 2006, pp. 1340–1343.
 [877] N. M. Botros and M. AbdulAziz, “Hardware implementation of an artificial neural network,” in Neural Networks, 1993., IEEE International Conference on. IEEE, 1993, pp. 1252–1257.
 [878] A. L. Braga, J. Arias-Garcia, C. Llanos, M. Dorn, A. Foltran, and L. S. Coelho, “Hardware implementation of gmdh-type artificial neural networks and its use to predict approximate three-dimensional structures of proteins,” in Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC), 2012 7th International Workshop on. IEEE, 2012, pp. 1–8.
 [879] L. Brunelli, E. U. Melcher, A. V. De Brito, and R. Freire, “A novel approach to reduce interconnect complexity in ann hardware implementation,” in Neural Networks, 2005. IJCNN’05. Proceedings. 2005 IEEE International Joint Conference on, vol. 5. IEEE, 2005, pp. 2861–2866.
 [880] N. Chujo, S. Kuroyanagi, S. Doki, and S. Okuma, “An iterative calculation method of the neuron model for hardware implementation,” in Industrial Electronics Society, 2000. IECON 2000. 26th Annual Conference of the IEEE, vol. 1. IEEE, 2000, pp. 664–671.
 [881] B. Deng, M. Zhang, F. Su, J. Wang, X. Wei, and B. Shan, “The implementation of feedforward network on field programmable gate array,” in Biomedical Engineering and Informatics (BMEI), 2014 7th International Conference on. IEEE, 2014, pp. 483–487.
 [882] P. D. Deotale and L. Dole, “Design of fpga based general purpose neural network,” in Information Communication and Embedded Systems (ICICES), 2014 International Conference on. IEEE, 2014, pp. 1–5.
 [883] A. Dinu and M. Cirstea, “A digital neural network fpga direct hardware implementation algorithm,” in Industrial Electronics, 2007. ISIE 2007. IEEE International Symposium on. IEEE, 2007, pp. 2307–2312.
 [884] A. Dinu, M. N. Cirstea, and S. E. Cirstea, “Direct neural-network hardware-implementation algorithm,” Industrial Electronics, IEEE Transactions on, vol. 57, no. 5, pp. 1845–1848, 2010.
 [885] P. Dondon, J. Carvalho, R. Gardere, P. Lahalle, G. Tsenov, and V. Mladenov, “Implementation of a feedforward artificial neural network in vhdl on fpga,” in Neural Network Applications in Electrical Engineering (NEUREL), 2014 12th Symposium on. IEEE, 2014, pp. 37–40.
 [886] Y. Dong, C. Li, Z. Lin, and T. Watanabe, “A hybrid layer-multiplexing and pipeline architecture for efficient fpga-based multilayer neural network,” Nonlinear Theory and Its Applications, IEICE, vol. 2, no. 4, pp. 522–532, 2011.
 [887] G.-P. Economou, E. Mariatos, N. Economopoulos, D. Lymberopoulos, and C. Goutis, “Fpga implementation of artificial neural networks: an application on medical expert systems,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994. Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 287–293.
 [888] P. Ehkan, L. Y. Ann, F. F. Zakaria, and M. N. M. Warip, “Artificial neural network for character recognition on embedded-based fpga,” in Future Information Technology. Springer, 2014, pp. 281–287.
 [889] S. Erdogan and A. Wahab, “Design of rmnc: A reconfigurable neurocomputer for massively parallel-pipelined computations,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 2. IEEE, 1992, pp. 33–38.
 [890] P. Ferreira, P. Ribeiro, A. Antunes, and F. Dias, “Artificial neural networks processor–a hardware implementation using a fpga,” Field Programmable Logic and Application, pp. 1084–1086, 2004.
 [891] P. Ferreira, P. Ribeiro, A. Antunes, and F. M. Dias, “A high bit resolution fpga implementation of a fnn with a new algorithm for the activation function,” Neurocomputing, vol. 71, no. 1, pp. 71–77, 2007.
 [892] J. Granado, M. Vega, R. Pérez, J. Sànchez, J. Gómez et al., “Using fpgas to implement artificial neural networks,” in Electronics, Circuits and Systems, 2006. ICECS’06. 13th IEEE International Conference on. IEEE, 2006, pp. 934–937.
 [893] S. A. Guccione and M. J. Gonzalez, “A neural network implementation using reconfigurable architectures,” in Selected papers from the Oxford 1993 international workshop on field programmable logic and applications on More FPGAs, Oxford: Abingdon EE&CS Books, 1994, pp. 443–451.
 [894] S. Hariprasath and T. Prabakar, “Fpga implementation of multilayer feed forward neural network architecture using vhdl,” in Computing, Communication and Applications (ICCCA), 2012 International Conference on. IEEE, 2012, pp. 1–6.
 [895] H. M. Hasanien, “Fpga implementation of adaptive ann controller for speed regulation of permanent magnet stepper motor drives,” Energy Conversion and Management, vol. 52, no. 2, pp. 1252–1257, 2011.
 [896] H. Hikawa, “Implementation of simplified multilayer neural networks with on-chip learning,” in Neural Networks, 1995. Proceedings., IEEE International Conference on, vol. 4. IEEE, 1995, pp. 1633–1637.
 [897] S. Himavathi, D. Anitha, and A. Muthuramalingam, “Feedforward neural network implementation in fpga using layer multiplexing for effective resource utilization,” Neural Networks, IEEE Transactions on, vol. 18, no. 3, pp. 880–888, 2007.
 [898] G. R. Hoelzle and F. M. Dias, “Hardware implementation of an artificial neural network with an embedded microprocessor in a fpga,” in 8th International Conference and Workshop on Ambient Intelligence and Embedded Systems, Funchal, 2009.
 [899] M. Hoffman, P. Bauer, B. Hemmelman, and A. Hasan, “Hardware synthesis of artificial neural networks using field programmable gate arrays and fixed-point numbers,” in Region 5 Conference, 2006 IEEE. IEEE, 2006, pp. 324–328.
 [900] N. Izeboudjen, A. Farah, S. Titri, and H. Boumeridja, “Digital implementation of artificial neural networks: from vhdl description to fpga implementation,” in Engineering Applications of Bio-Inspired Artificial Neural Networks. Springer, 1999, pp. 139–148.
 [901] R. Joost and R. Salomon, “Time coding output neurons in digital artificial neural networks,” in Neural Networks (IJCNN), The 2012 International Joint Conference on. IEEE, 2012, pp. 1–8.
 [902] S. Jung and S. S. Kim, “Hardware implementation of a real-time neural network controller with a dsp and an fpga for nonlinear systems,” Industrial Electronics, IEEE Transactions on, vol. 54, no. 1, pp. 265–271, 2007.
 [903] S. S. Kim and S. Jung, “Hardware implementation of a real-time neural network controller with a dsp and an fpga,” in Robotics and Automation, 2004. Proceedings. ICRA’04. 2004 IEEE International Conference on, vol. 5. IEEE, 2004, pp. 4639–4644.
 [904] M. Krcma, J. Kastil, and Z. Kotasek, “Mapping trained neural networks to fpnns,” in Design and Diagnostics of Electronic Circuits & Systems (DDECS), 2015 IEEE 18th International Symposium on. IEEE, 2015, pp. 157–160.
 [905] M. Krips, T. Lammert, and A. Kummert, “Fpga implementation of a neural network for a real-time hand tracking system,” in Electronic Design, Test and Applications, 2002. Proceedings. The First IEEE International Workshop on. IEEE, 2002, pp. 313–317.
 [906] C.-H. Kung, M. J. Devaney, C.-M. Kung, C.-M. Huang, Y.-J. Wang, and C.-T. Kuo, “The vlsi implementation of an artificial neural network scheme embedded in an automated inspection quality management system,” in Instrumentation and Measurement Technology Conference, 2002. IMTC/2002. Proceedings of the 19th IEEE, vol. 1. IEEE, 2002, pp. 239–244.
 [907] A. Laudani, G. M. Lozito, F. R. Fulginei, and A. Salvini, “An efficient architecture for floating point based miso neural networks on fpga,” in Computer Modelling and Simulation (UKSim), 2014 UKSim-AMSS 16th International Conference on. IEEE, 2014, pp. 12–17.
 [908] U. Lotrič and P. Bulić, “Applicability of approximate multipliers in hardware neural networks,” Neurocomputing, vol. 96, pp. 57–65, 2012.
 [909] G. M. Lozito, A. Laudani, F. R. Fulginei, and A. Salvini, “Fpga implementations of feed forward neural network by using floating point hardware accelerators,” Advances in Electrical and Electronic Engineering, vol. 12, no. 1, p. 30, 2014.
 [910] H. H. Makwana, D. J. Shah, and P. P. Gandhi, “Fpga implementation of artificial neural network,” International Journal of Emerging Technology and Advanced Engineering, vol. 3, no. 1, 2013.
 [911] N. P. Mand, F. Robino, and J. Oberg, “Artificial neural network emulation on noc based multicore fpga platform,” in NORCHIP, 2012. IEEE, 2012, pp. 1–4.
 [912] K. Mohamad, M. F. O. Mahmud, F. H. Adnan, and W. F. H. Abdullah, “Design of single neuron on fpga,” in Humanities, Science and Engineering Research (SHUSER), 2012 IEEE Symposium on. IEEE, 2012, pp. 133–136.
 [913] E. Z. Mohammed and H. K. Ali, “Hardware implementation of artificial neural network using field programmable gate array,” International Journal of Computer Theory and Engineering, vol. 5, no. 5, 2013.
 [914] A. Muthuramalingam, S. Himavathi, and E. Srinivasan, “Neural network implementation using fpga: issues and application,” International journal of information technology, vol. 4, no. 2, pp. 86–92, 2008.
 [915] B. Noory and V. Groza, “A reconfigurable approach to hardware implementation of neural networks,” in Electrical and Computer Engineering, 2003. IEEE CCECE 2003. Canadian Conference on, vol. 3. IEEE, 2003, pp. 1861–1864.
 [916] Ş. Oniga and A. Buchman, “A new method for hardware implementation of artificial neural network used in smart sensors,” in The 10th International Symposium for Design and Technology of Electronic Packages Conference Proceedings, Bucureşti. Citeseer, 2004, pp. 23–26.
 [917] S. Oniga, A. Tisan, D. Mic, A. Buchman, and A. VidaRatiu, “Hand postures recognition system using artificial neural networks implemented in fpga,” in Electronics Technology, 30th International Spring Seminar on. IEEE, 2007, pp. 507–512.
 [918] ——, “Optimizing fpga implementation of feedforward neural networks,” in Proceedings of the 11th International Conference on Optimization of Electrical and Electronic Equipment OPTIM, vol. 2008, 2008, pp. 22–23.
 [919] S. Oniga, A. Tisan, D. Mic, C. Lung, I. Orha, A. Buchman, and A. VidaRatiu, “Fpga implementation of feedforward neural networks for smart devices development,” in Signals, Circuits and Systems, 2009. ISSCS 2009. International Symposium on. IEEE, 2009, pp. 1–4.
 [920] T. Orlowska-Kowalska and M. Kaminski, “Fpga implementation of the multilayer neural network for the speed estimation of the two-mass drive system,” Industrial Informatics, IEEE Transactions on, vol. 7, no. 3, pp. 436–445, 2011.
 [921] A. Pérez-Uribe and E. Sanchez, “Fpga implementation of an adaptable-size neural network,” in Artificial Neural Networks–ICANN 96. Springer, 1996, pp. 383–388.
 [922] Y. Qi, B. Zhang, T. M. Taha, H. Chen, and R. Hasan, “Fpga design of a multicore neuromorphic processing system,” in Aerospace and Electronics Conference, NAECON 2014 – IEEE National. IEEE, 2014, pp. 255–258.
 [923] R. Raeisi and A. Kabir, “Implementation of artificial neural network on fpga,” in American Society for Engineering Education. Citeseer, 2006.
 [924] H. F. Restrepo, R. Hoffmann, A. Perez-Uribe, C. Teuscher, and E. Sanchez, “A networked fpga-based hardware implementation of a neural network application,” in Field-Programmable Custom Computing Machines, 2000 IEEE Symposium on. IEEE, 2000, pp. 337–338.
 [925] S. Sahin, Y. Becerikli, and S. Yazici, “Neural network implementation in hardware using fpgas,” in Neural Information Processing. Springer, 2006, pp. 1105–1112.
 [926] V. Salapura, M. Gschwind, and O. Maischberger, “A fast fpga implementation of a general purpose neuron,” in Field-Programmable Logic Architectures, Synthesis and Applications. Springer, 1994, pp. 175–182.
 [927] B. Salem, A. Karim, S. B. Othman, and S. B. Saoud, “Design and implementation of a neural command rule on a fpga circuit,” in Electronics, Circuits and Systems, 2005. ICECS 2005. 12th IEEE International Conference on. IEEE, 2005, pp. 1–4.
 [928] A. Savran and S. Ünsal, “Hardware implementation of a feed forward neural network using fpgas,” in The third International Conference on Electrical and Electronics Engineering (ELECO 2003), 2003, pp. 3–7.
 [929] S. Shaari, H. Mekki, N. Khorissi et al., “Fpga-based artificial neural network for prediction of solar radiation data from sunshine duration and air temperature,” in Computational Technologies in Electrical and Electronics Engineering, 2008. SIBIRCON 2008. IEEE Region 8 International Conference on. IEEE, 2008, pp. 118–123.
 [930] S. K. Shah and D. D. Vishwakarma, “Fpga implementation of ann for reactive routing protocols in manet,” in Communication, Networks and Satellite (ComNetSat), 2012 IEEE International Conference on. IEEE, 2012, pp. 11–14.
 [931] S. Shreejith, B. Anshuman, and S. A. Fahmy, “Accelerated artificial neural networks on fpga for fault detection in automotive systems,” in 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2016, pp. 37–42.
 [932] A. M. Soares, J. O. Pinto, B. K. Bose, L. C. Leite, L. E. da Silva, and M. E. Romero, “Field programmable gate array (fpga) based neural network implementation of stator flux oriented vector control of induction motor drive,” in Industrial Technology, 2006. ICIT 2006. IEEE International Conference on. IEEE, 2006, pp. 31–34.
 [933] D. Sonowal and M. Bhuyan, “Fpga implementation of neural network for linearization of thermistor characteristics,” in 2012 International Conference on Devices, Circuits and Systems (ICDCS), 2012.
 [934] T. Wang and L. Wang, “A modularization hardware implementation approach for artificial neural network,” in 2nd International Conference on Electrical, Computer Engineering and Electronics, 2015.
 [935] J. Wang, S. Yang, B. Deng, X. Wei, and H. Yu, “Multifpga implementation of feedforward network and its performance analysis,” in Control Conference (CCC), 2015 34th Chinese. IEEE, 2015, pp. 3457–3461.
 [936] D. Wang, L. Deng, P. Tang, C. Ma, and J. Pei, “Fpga-based neuromorphic computing system with a scalable routing network,” in 2015 15th Non-Volatile Memory Technology Symposium (NVMTS). IEEE, 2015, pp. 1–4.
 [937] E. Won, “A hardware implementation of artificial neural networks using field programmable gate arrays,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 581, no. 3, pp. 816–820, 2007.
 [938] A. Youssef, K. Mohammed, and A. Nasar, “A reconfigurable, generic and programmable feed forward neural network implementation in fpga,” in Computer Modelling and Simulation (UKSim), 2012 UKSim 14th International Conference on. IEEE, 2012, pp. 9–13.
 [939] D. Zhang, H. Li, and S. Y. Foo, “A simplified fpga implementation of neural network algorithms integrated with stochastic theory for power electronics applications,” in Industrial Electronics Society, 2005. IECON 2005. 31st Annual Conference of IEEE. IEEE, 2005, 6 pp.
 [940] D. Zhang and H. Li, “A low cost digital implementation of feedforward neural networks applied to a variable-speed wind turbine system,” in Power Electronics Specialists Conference, 2006. PESC’06. 37th IEEE. IEEE, 2006, pp. 1–6.
 [941] J. B. Ahn, “Computation of backpropagation learning algorithm using neuron machine architecture,” in Computational Intelligence, Modelling and Simulation (CIMSim), 2013 Fifth International Conference on. IEEE, 2013, pp. 23–28.
 [942] R. J. Aliaga, R. Gadea, R. J. Colom, J. Cerdá, N. Ferrando, and V. Herrero, “A mixed hardwaresoftware approach to flexible artificial neural network training on fpga,” in Systems, Architectures, Modeling, and Simulation, 2009. SAMOS’09. International Symposium on. IEEE, 2009, pp. 1–8.
 [943] G. Alizadeh, J. Frounchi, M. Baradaran Nia, M. Zarifi, and S. Asgarifar, “An fpga implementation of an artificial neural network for prediction of cetane number,” in Computer and Communication Engineering, 2008. ICCCE 2008. International Conference on. IEEE, 2008, pp. 605–608.
 [944] R. G. Biradar, A. Chatterjee, P. Mishra, and K. George, “Fpga implementation of a multilayer artificial neural network using system-on-chip design methodology,” in Cognitive Computing and Information Processing (CCIP), 2015 International Conference on. IEEE, 2015, pp. 1–6.
 [945] M. A. Çavuşlu, C. Karakuzu, S. Şahin, and M. Yakut, “Neural network training based on fpga with floating point number format and its performance,” Neural Computing and Applications, vol. 20, no. 2, pp. 195–202, 2011.
 [946] P. O. Domingos, F. M. Silva, and H. C. Neto, “An efficient and scalable architecture for neural networks with backpropagation learning,” in Field Programmable Logic and Applications, 2005. International Conference on. IEEE, 2005, pp. 89–94.
 [947] J. G. Eldredge and B. L. Hutchings, “Density enhancement of a neural network using fpgas and runtime reconfiguration,” in FPGAs for Custom Computing Machines, 1994. Proceedings. IEEE Workshop on. IEEE, 1994, pp. 180–188.
 [948] ——, “Rrann: a hardware implementation of the backpropagation algorithm using reconfigurable fpgas,” in Neural Networks, 1994. IEEE World Congress on Computational Intelligence., 1994 IEEE International Conference on, vol. 4. IEEE, 1994, pp. 2097–2102.
 [949] R. Gadea, J. Cerdá, F. Ballester, and A. Mocholí, “Artificial neural network implementation on a single fpga of a pipelined online backpropagation,” in Proceedings of the 13th international symposium on System synthesis. IEEE Computer Society, 2000, pp. 225–230.
 [950] B. Girau and A. Tisserand, “Online arithmetic-based reprogrammable hardware implementation of multilayer perceptron backpropagation,” in MicroNeuro. IEEE, 1996, p. 168.
 [951] B. Girau, “On-chip learning of fpga-inspired neural nets,” in Neural Networks, 2001. Proceedings. IJCNN’01. International Joint Conference on, vol. 1. IEEE, 2001, pp. 222–227.
 [952] R. G. Gironés, R. C. Palero, J. C. Boluda, and A. S. Cortés, “Fpga implementation of a pipelined online backpropagation,” Journal of VLSI signal processing systems for signal, image and video technology, vol. 40, no. 2, pp. 189–213, 2005.
 [953] H. Hikawa, “Frequency-based multilayer neural network with on-chip learning and enhanced neuron characteristics,” Neural Networks, IEEE Transactions on, vol. 10, no. 3, pp. 545–553, 1999.
 [954] N. Izeboudjen, A. Farah, H. Bessalah, A. Bouridene, and N. Chikhi, “Towards a platform for fpga implementation of the mlp based back propagation algorithm,” in Computational and Ambient Intelligence. Springer, 2007, pp. 497–505.
 [955] F. Moreno, J. Alarcón, R. Salvador, and T. Riesgo, “Reconfigurable hardware architecture of a shape recognition system based on specialized tiny neural networks with online training,” Industrial Electronics, IEEE Transactions on, vol. 56, no. 8, pp. 3253–3263, 2009.
 [956] M. Moussa, S. Areibi, and K. Nichols, “On the arithmetic precision for implementing backpropagation networks on fpga: a case study,” in FPGA Implementations of Neural Networks. Springer, 2006, pp. 37–61.

 [957] K. R. Nichols, M. A. Moussa, and S. M. Areibi, “Feasibility of floating-point arithmetic in fpga based artificial neural networks,” in CAINE. Citeseer, 2002.  [958] F. Ortega-Zamorano, J. M. Jerez, D. Urda Munoz, R. M. Luque-Baena, and L. Franco, “Efficient implementation of the backpropagation algorithm in fpgas and microcontrollers,” 2015.
 [959] V. Pandya, S. Areibi, and M. Moussa, “A handel-c implementation of the backpropagation algorithm on field programmable gate arrays,” in Reconfigurable Computing and FPGAs, 2005. ReConFig 2005. International Conference on. IEEE, 2005, 8 pp.
 [960] S. Pinjare and A. Kumar, “Implementation of neural network back propagation training algorithm on fpga,” International Journal of Computer Applications, vol. 52, no. 6, pp. 1–7, 2012.
 [961] Z. Ruan, J. Han, and Y. Han, “Bp neural network implementation on real-time reconfigurable fpga system for a soft-sensing process,” in Neural Networks and Brain, 2005. ICNN&B’05. International Conference on, vol. 2. IEEE, 2005, pp. 959–963.
 [962] T. Sangeetha and C. Meenal, “Digital implementation of artificial neural network for function approximation and pressure control applications,” IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), vol. 5, no. 5, pp. 34–3, 2013.
 [963] A. W. Savich, M. Moussa, and S. Areibi, “The impact of arithmetic representation on implementing mlpbp on fpgas: A study,” Neural Networks, IEEE Transactions on, vol. 18, no. 1, pp. 240–252, 2007.
 [964] L. Shoushan, C. Yan, X. Wenshang, and Z. Tongjun, “A single layer architecture to fpga implementation of bp artificial neural network,” in Informatics in Control, Automation and Robotics (CAR), 2010 2nd International Asia Conference on, vol. 2. IEEE, 2010, pp. 258–264.
 [965] Y. Sun and A. C. Cheng, “Machine learning on-a-chip: A high-performance low-power reusable neuron architecture for artificial neural networks in ecg classifications,” Computers in biology and medicine, vol. 42, no. 7, pp. 751–757, 2012.
 [966] G. Acosta and M. Tosini, “A firmware digital neural network for climate prediction applications,” in Intelligent Control, 2001 (ISIC’01). Proceedings of the 2001 IEEE International Symposium on. IEEE, 2001, pp. 127–131.
 [967] Y. Ago, Y. Ito, and K. Nakano, “An fpga implementation for neural networks with the fdfm processor core approach,” International Journal of Parallel, Emergent and Distributed Systems, vol. 28, no. 4, pp. 308–320, 2013.
 [968] R. Aliaga, R. Gadea, R. Colom, J. M. Monzo, C. Lerche, J. D. Martinez, A. Sebastiá, and F. Mateo, “Multiprocessor soc implementation of neural network training on fpga,” in Advances in Electronics and Microelectronics, 2008. ENICS’08. International Conference on. IEEE, 2008, pp. 149–154.
 [969] L. Y. Ann, P. Ehkan, and M. Mashor, “Possibility of hybrid multilayered perceptron neural network realisation on fpga and its challenges,” in Advanced Computer and Communication Engineering Technology. Springer, 2016, pp. 1051–1061.
 [970] M. Bahoura and C.-W. Park, “Fpga-implementation of an adaptive neural network for rf power amplifier modeling,” in New Circuits and Systems Conference (NEWCAS), 2011 IEEE 9th International. IEEE, 2011, pp. 29–32.
 [971] ——, “Fpga-implementation of high-speed mlp neural network.” in ICECS, 2011, pp. 426–429.
 [972] F. Benrekia, M. Attari, A. Bermak, and K. Belhout, “Fpga implementation of a neural network classifier for gas sensor array applications,” in Systems, Signals and Devices, 2009. SSD’09. 6th International MultiConference on. IEEE, 2009, pp. 1–6.
 [973] J.-L. Beuchat, J.-O. Haenni, and E. Sanchez, “Hardware reconfigurable neural networks,” in Parallel and Distributed Processing. Springer, 1998, pp. 91–98.
 [974] J.-L. Beuchat and E. Sanchez, “A reconfigurable neuroprocessor with on-chip pruning,” in ICANN 98. Springer, 1998, pp. 1159–1164.
 [975] ——, “Using online arithmetic and reconfiguration for neuroprocessor implementation,” in International Work-Conference on Artificial Neural Networks. Springer, 1999, pp. 129–138.
 [976] N. M. Botros and M. Abdul-Aziz, “Hardware implementation of an artificial neural network using field programmable gate arrays (fpga’s),” IEEE Transactions on Industrial Electronics, vol. 41, no. 6, pp. 665–667, 1994.
 [977] A. L. Braga, C. H. Llanos, D. Göhringer, J. Obie, J. Becker, and M. Hübner, “Performance, accuracy, power consumption and resource utilization analysis for hardware/software realized artificial neural networks,” in Bio-Inspired Computing: Theories and Applications (BIC-TA), 2010 IEEE Fifth International Conference on. IEEE, 2010, pp. 1629–1636.
 [978] A. Canas, E. M. Ortigosa, E. Ros, and P. M. Ortigosa, “Fpga implementation of a fully and partially connected mlp,” in FPGA Implementations of Neural Networks. Springer, 2006, pp. 271–296.
 [979] M. B. Carvalho, A. M. Amaral, L. E. da Silva Ramos, C. A. P. da Silva Martins, and P. Ekel, “Artificial neural network engine: Parallel and parameterized architecture implemented in fpga,” in Pattern Recognition and Machine Intelligence. Springer, 2005, pp. 294–299.
 [980] N. Chalhoub, F. Muller, and M. Auguin, “Fpgabased generic neural network architecture,” in Industrial Embedded Systems, 2006. IES’06. International Symposium on. IEEE, 2006, pp. 1–4.
 [981] R. M. da Silva, N. Nedjah, and L. de Macedo Mourelle, “Reconfigurable mac-based architecture for parallel hardware implementation on fpgas of artificial neural networks using fractional fixed point representation,” in Artificial Neural Networks–ICANN 2009. Springer, 2009, pp. 475–484.
 [982] B. Denby, P. Garda, B. Granado, C. Kiesling, J.-C. Prévotet, and A. Wassatsch, “Fast triggering in high-energy physics experiments using hardware neural networks,” Neural Networks, IEEE Transactions on, vol. 14, no. 5, pp. 1010–1027, 2003.
 [983] A. P. d. A. Ferreira and E. N. d. S. Barros, “A high performance full pipelined architecture of mlp neural networks in fpga,” in Electronics, Circuits, and Systems (ICECS), 2010 17th IEEE International Conference on. IEEE, 2010, pp. 742–745.
 [984] D. Ferrer, R. González, R. Fleitas, J. P. Acle, and R. Canetti, “Neurofpga–implementing artificial neural networks on programmable logic devices,” in Design, Automation and Test in Europe Conference and Exhibition, 2004. Proceedings, vol. 3. IEEE, 2004, pp. 218–223.
 [985] L. Gatet, F. Bony, and H. Tap-Beteille, “Digital nn implementations in a fpga for distance measurement and surface classification,” in Instrumentation and Measurement Technology Conference, 2009. I2MTC’09. IEEE. IEEE, 2009, pp. 842–845.
 [986] A. Gomperts, A. Ukil, and F. Zurfluh, “Implementation of neural network on parameterized fpga.” in AAAI Spring Symposium: Embedded Reasoning, 2010.
 [987] ——, “Development and implementation of parameterized fpgabased general purpose neural networks for online applications,” Industrial Informatics, IEEE Transactions on, vol. 7, no. 1, pp. 78–89, 2011.
 [988] M. Gorgoń and M. Wrzesiński, “Neural network implementation in reprogrammable fpga devices–an example for mlp,” in Artificial Intelligence and Soft Computing–ICAISC 2006. Springer, 2006, pp. 19–28.
 [989] T. Horita, I. Takanami, M. Akiba, M. Terauchi, and T. Kanno, “An fpga-based multiple-weight-and-neuron-fault tolerant digital multilayer perceptron (full version),” in Transactions on Computational Science XXV. Springer, 2015, pp. 148–171.
 [990] Z. Jin and A. C. Cheng, “A self-healing autonomous neural network hardware for trustworthy biomedical systems,” in Field-Programmable Technology (FPT), 2011 International Conference on. IEEE, 2011, pp. 1–8.
 [991] F. A. Khan, M. Uppal, W.-C. Song, M.-J. Kang, and A. M. Mirza, “Fpga implementation of a neural network for character recognition,” in Advances in Neural Networks–ISNN 2006. Springer, 2006, pp. 1357–1365.
 [992] Y.-C. Kim, D.-K. Kang, and T.-W. Lee, “Risc-based coprocessor with a dedicated vlsi neural network,” in Electronics, Circuits and Systems, 1998 IEEE International Conference on, vol. 3. IEEE, 1998, pp. 281–283.
 [993] D. Kyoung and K. Jung, “Fully-pipelining hardware implementation of neural network for text-based images retrieval,” in Advances in Neural Networks–ISNN 2006. Springer, 2006, pp. 1350–1356.
 [994] C. Latino, M. Moreno-Armendáriz, M. Hagan et al., “Realizing general mlp networks with minimal fpga resources,” in Neural Networks, 2009. IJCNN 2009. International Joint Conference on. IEEE, 2009, pp. 1722–1729.
 [995] Y. Lee and S.-B. Ko, “An fpga-based face detector using neural network and a scalable floating point unit,” in Proceedings of the 5th WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing. World Scientific and Engineering Academy and Society (WSEAS), 2006, pp. 315–320.
 [996] M. A. A. León, A. R. Castro, and R. R. L. Ascencio, “An artificial neural network on a field programmable gate array as a virtual sensor,” in Design of Mixed-Mode Integrated Circuits and Applications, 1999. Third International Workshop on. IEEE, 1999, pp. 114–117.
 [997] Z. Lin, Y. Dong, Y. Li, and T. Watanabe, “A hybrid architecture for efficient fpga-based implementation of multilayer neural network,” in Circuits and Systems (APCCAS), 2010 IEEE Asia Pacific Conference on. IEEE, 2010, pp. 616–619.
 [998] U. Lotrič and P. Bulić, “Logarithmic multiplier in hardware implementation of neural networks,” in Adaptive and Natural Computing Algorithms. Springer, 2011, pp. 158–168.
 [999] N. Nedjah, R. M. da Silva, L. de Macedo Mourelle, and M. V. C. da Silva, “Reconfigurable mac-based architecture for parallel hardware implementation on fpgas of artificial neural networks,” in Artificial Neural Networks–ICANN 2008. Springer, 2008, pp. 169–178.
 [1000] N. Nedjah, R. M. da Silva, L. Mourelle, and M. V. C. da Silva, “Dynamic mac-based architecture of artificial neural networks suitable for hardware implementation on fpgas,” Neurocomputing, vol. 72, no. 10, pp. 2171–2179, 2009.
 [1001] N. Nedjah, R. M. da Silva, and L. de Macedo Mourelle, “Compact yet efficient hardware implementation of artificial neural networks with customized topology,” Expert Systems with Applications, vol. 39, no. 10, pp. 9191–9206, 2012.
 [1002] N. Nedjah and L. de Macedo Mourelle, “A reconfigurable hardware for artificial neural networks,” in Hardware for Soft Computing and Soft Computing for Hardware. Springer, 2014, pp. 59–69.
 [1003] E. M. Ortigosa, A. Cañas, E. Ros, and R. R. Carrillo, “Fpga implementation of a perceptronlike neural network for embedded applications,” in Artificial Neural Nets Problem Solving Methods. Springer, 2003, pp. 1–8.
 [1004] E. M. Ortigosa, P. M. Ortigosa, A. Cañas, E. Ros, R. Agís, and J. Ortega, “Fpga implementation of multilayer perceptrons for speech recognition,” in Field Programmable Logic and Application. Springer, 2003, pp. 1048–1052.
 [1005] E. M. Ortigosa, A. Cañas, E. Ros, P. M. Ortigosa, S. Mota, and J. Díaz, “Hardware description of multilayer perceptrons with different abstraction levels,” Microprocessors and Microsystems, vol. 30, no. 7, pp. 435–444, 2006.
 [1006] E. M. Ortigosa, A. Cañas, R. Rodríguez, J. Díaz, and S. Mota, “Towards an optimal implementation of mlp in fpga,” in Reconfigurable Computing: Architectures and Applications. Springer, 2006, pp. 46–51.
 [1007] A. T. Özdemir and K. Danışman, “Fully parallel ann-based arrhythmia classifier on a single-chip fpga: Fpaac,” Turkish Journal of Electrical Engineering and Computer Science, vol. 19, no. 4, pp. 667–687, 2011.
 [1008] E. Pasero and M. Perri, “Hw-sw codesign of a flexible neural controller through a fpga-based neural network programmed in vhdl,” in Neural Networks, 2004. Proceedings. 2004 IEEE International Joint Conference on, vol. 4. IEEE, 2004, pp. 3161–3165.
 [1009] J. C. Patra, H. Y. Lee, P. K. Meher, and E. L. Ang, “Field programmable gate array implementation of a neural networkbased intelligent sensor system,” in Control, Automation, Robotics and Vision, 2006. ICARCV’06. 9th International Conference on. IEEE, 2006, pp. 1–5.
 [1010] A. Perez-Garcia, G. Tornez-Xavier, L. Flores-Nava, F. Gomez-Castaneda, and J. Moreno-Cadenas, “Multilayer perceptron network with integrated training algorithm in fpga,” in Electrical Engineering, Computing Science and Automatic Control (CCE), 2014 11th International Conference on. IEEE, 2014, pp. 1–6.
 [1011] W. Qinruo, Y. Bo, X. Yun, and L. Bingru, “The hardware structure design of perceptron with fpga implementation,” in Systems, Man and Cybernetics, 2003. IEEE International Conference on, vol. 1. IEEE, 2003, pp. 762–767.
 [1012] S. Rani and P. Kanagasabapathy, “Multilayer perceptron neural network architecture using vhdl with combinational logic sigmoid function,” in Signal Processing, Communications and Networking, 2007. ICSCN’07. International Conference on. IEEE, 2007, pp. 404–409.
 [1013] R. Rezvani, M. Katiraee, A. H. Jamalian, S. Mehrabi, and A. Vezvaei, “A new method for hardware design of multilayer perceptron neural networks with online training,” in Cognitive Informatics & Cognitive Computing (ICCI* CC), 2012 IEEE 11th International Conference on. IEEE, 2012, pp. 527–534.
 [1014] J. Skodzik, V. Altmann, B. Wagner, P. Danielis, and D. Timmermann, “A highly integrable fpga-based runtime-configurable multilayer perceptron,” in Advanced Information Networking and Applications (AINA), 2013 IEEE 27th International Conference on. IEEE, 2013, pp. 429–436.
 [1015] M. M. Syiam, H. Klash, I. Mahmoud, and S. Haggag, “Hardware implementation of neural network on fpga for accidents diagnosis of the multipurpose research reactor of egypt,” in Microelectronics, 2003. ICM 2003. Proceedings of the 15th International Conference on. IEEE, 2003, pp. 326–329.
 [1016] Y. Taright and M. Hubin, “Fpga implementation of a multilayer perceptron neural network using vhdl,” in Signal Processing Proceedings, 1998. ICSP’98. 1998 Fourth International Conference on, vol. 2. IEEE, 1998, pp. 1311–1314.
 [1017] S. Tatikonda and P. Agarwal, “Field programmable gate array (fpga) based neural network implementation of motion control and fault diagnosis of induction motor drive,” in Industrial Technology, 2008. ICIT 2008. IEEE International Conference on. IEEE, 2008, pp. 1–6.
 [1018] S. Vitabile, V. Conti, F. Gennaro, and F. Sorbello, “Efficient mlp digital implementation on fpga,” in Digital System Design, 2005. Proceedings. 8th Euromicro Conference on. IEEE, 2005, pp. 218–222.
 [1019] P. Škoda, T. Lipić, Á. Srp, B. M. Rogina, K. Skala, and F. Vajda, “Implementation framework for artificial neural networks on fpga,” in MIPRO, 2011 Proceedings of the 34th International Convention. IEEE, 2011, pp. 274–278.
 [1020] D. F. Wolf, R. A. Romero, and E. Marques, “Using embedded processors in hardware models of artificial neural networks,” in V Simposio Brasileiro de automação inteligente, Brasil, 2001.
 [1021] D. F. Wolf, G. Faria, R. A. Romero, E. Marques, M. A. Teixeira, A. A. Ribeiro, L. C. Fernandes, J. M. Scatena, and R. Mezencio, “A pipeline hardware implementation for an artificial neural network,” in Congresso da Sociedade Brasileira de Computação–SBC, Encontro Nacional de Inteligência Artificial–ENIA, 2001, pp. 1528–1536.
 [1022] I. G. Yu, Y. M. Lee, S. W. Yeo, and C. H. Lee, “Design on supervised/unsupervised learning reconfigurable digital neural network structure,” in PRICAI 2006: Trends in Artificial Intelligence. Springer, 2006, pp. 1201–1205.
 [1023] J. Zhu, G. J. Milne, and B. Gunther, “Towards an fpga based reconfigurable computing environment for neural network implementations,” 1999.
 [1024] A. Basu, S. Shuo, H. Zhou, M. H. Lim, and G.-B. Huang, “Silicon spiking neurons for hardware implementation of extreme learning machines,” Neurocomputing, vol. 102, pp. 125–134, 2013.
 [1025] C. Merkel and D. Kudithipudi, “Neuromemristive extreme learning machines for pattern classification,” in VLSI (ISVLSI), 2014 IEEE Computer Society Annual Symposium on. IEEE, 2014, pp. 77–82.
 [1026] ——, “A current-mode cmos/memristor hybrid implementation of an extreme learning machine,” in Proceedings of the 24th edition of the Great Lakes Symposium on VLSI. ACM, 2014, pp. 241–242.
 [1027] M. Suri and V. Parmar, “Exploiting intrinsic variability of filamentary resistive memory for extreme learning machine architectures,” Nanotechnology, IEEE Transactions on, vol. 14, no. 6, pp. 963–968, 2015.
 [1028] E. Yao, S. Hussain, A. Basu, and G.-B. Huang, “Computation using mismatch: Neuromorphic extreme learning machines,” in Biomedical Circuits and Systems Conference (BioCAS), 2013 IEEE. IEEE, 2013, pp. 294–297.
 [1029] S. Decherchi, P. Gastaldo, A. Leoncini, and R. Zunino, “Efficient digital implementation of extreme learning machines for classification,” Circuits and Systems II: Express Briefs, IEEE Transactions on, vol. 59, no. 8, pp. 496–500, 2012.
 [1030] E. Gatt, J. Micallef, and E. Chilton, “An analog vlsi time-delay neural network implementation for phoneme recognition,” Cellular Neural Networks and their Applications (CNNA 2000), p. 315, 2000.
 [1031] J. Van der Spiegel, C. Donham, R. Etienne-Cummings, S. Fernando, P. Mueller, and D. Blackman, “Large scale analog neural computer with programmable architecture and programmable time constants for temporal pattern analysis,” in Neural Networks, 1994. IEEE World Congress on Computational Intelligence., 1994 IEEE International Conference on, vol. 3. IEEE, 1994, pp. 1830–1835.
 [1032] M. Bahoura and C.-W. Park, “Fpga-implementation of dynamic time delay neural network for power amplifier behavioral modeling,” Analog Integrated Circuits and Signal Processing, vol. 73, no. 3, pp. 819–828, 2012.
 [1033] M. Bahoura, “Fpga implementation of high-speed neural network for power amplifier behavioral modeling,” Analog Integrated Circuits and Signal Processing, vol. 79, no. 3, pp. 507–527, 2014.
 [1034] R. S. N. Ntouné, M. Bahoura, and C.-W. Park, “Fpga-implementation of pipelined neural network for power amplifier modeling,” in New Circuits and Systems Conference (NEWCAS), 2012 IEEE 10th International. IEEE, 2012, pp. 109–112.
 [1035] R. Woodburn, A. Astaras, R. Dalzell, A. F. Murray, and D. K. McNeill, “Computing with uncertainty in probabilistic neural networks on silicon,” in Proceedings of Symposium on neural computation, 2000.
 [1036] A. Serb, J. Bill, A. Khiat, R. Berdan, R. Legenstein, and T. Prodromakis, “Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses,” Nature communications, vol. 7, p. 12611, 2016.
 [1037] N. Aibe, M. Yasunaga, I. Yoshihara, and J. H. Kim, “A probabilistic neural network hardware system using a learning-parameter parallel architecture,” in Neural Networks, 2002. IJCNN’02. Proceedings of the 2002 International Joint Conference on, vol. 3. IEEE, 2002, pp. 2270–2275.
 [1038] N. Aibe, R. Mizuno, M. Nakamura, M. Yasunaga, and I. Yoshihara, “Performance evaluation system for probabilistic neural network hardware,” Artificial Life and Robotics, vol. 8, no. 2, pp. 208–213, 2004.
 [1039] N. Bu, T. Hamamoto, T. Tsuji, and O. Fukuda, “Fpga implementation of a probabilistic neural network for a bioelectric human interface,” in Circuits and Systems, 2004. MWSCAS’04. The 2004 47th Midwest Symposium on, vol. 3. IEEE, 2004, pp. iii–29.
 [1040] M. Figueiredo, C. Gloster et al., “Implementation of a probabilistic neural network for multispectral image classification on an fpga based custom computing machine,” in Neural Networks, 1998. Proceedings. Vth Brazilian Symposium on. IEEE, 1998, pp. 174–179.
 [1041] G. Minchin and A. Zaknich, “A design for fpga implementation of the probabilistic neural network,” in Neural Information Processing, 1999. Proceedings. ICONIP’99. 6th International Conference on, vol. 2. IEEE, 1999, pp. 556–559.
 [1042] F. Zhou, J. Liu, Y. Yu, X. Tian, H. Liu, Y. Hao, S. Zhang, W. Chen, J. Dai, and X. Zheng, “Field-programmable gate array implementation of a probabilistic neural network for motor cortical decoding in rats,” Journal of neuroscience methods, vol. 185, no. 2, pp. 299–306, 2010.
 [1043] X. Zhu and Y. Chen, “Improved fpga implementation of probabilistic neural network for neural decoding,” in Apperceiving Computing and Intelligence Analysis (ICACIA), 2010 International Conference on. IEEE, 2010, pp. 198–202.
 [1044] R. Dogaru, A. Murgan, S. Ortmann, and M. Glesner, “A modified rbf neural network for efficient current-mode vlsi implementation,” in Microelectronics for Neural Networks, 1996., Proceedings of Fifth International Conference on. IEEE, 1996, pp. 265–270.
 [1045] J. Sitte, L. Zhang, and U. Rueckert, “Characterization of analog local cluster neural network hardware for control,” Neural Networks, IEEE Transactions on, vol. 18, no. 4, pp. 1242–1253, 2007.
 [1046] M. Verleysen, P. Thissen, J.-L. Voz, and J. Madrenas, “An analog processor architecture for a neural network classifier,” Micro, IEEE, vol. 14, no. 3, pp. 16–28, 1994.
 [1047] P. Maffezzoni and P. Gubian, “Vlsi design of radial functions hardware generator for neural computations,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994., Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 252–259.
 [1048] M. Suri, V. Parmar, A. Singla, R. Malviya, and S. Nair, “Neuromorphic hardware accelerated adaptive authentication system,” in Computational Intelligence, 2015 IEEE Symposium Series on. IEEE, 2015, pp. 1206–1213.
 [1049] H. Zhuang, K.-S. Low, and W.-Y. Yau, “A pulsed neural network with on-chip learning and its practical applications,” Industrial Electronics, IEEE Transactions on, vol. 54, no. 1, pp. 34–42, 2007.
 [1050] S. Halgamuge, W. Poechmueller, C. Grimm, and M. Glesner, “Fuzzy interpretable dynamically developing neural networks with fpga based implementation,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994., Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 226–234.
 [1051] J.-S. Kim and S. Jung, “Hardware implementation of a neural network controller on fpga for a humanoid robot arm,” in Advanced Intelligent Mechatronics, 2008. AIM 2008. IEEE/ASME International Conference on. IEEE, 2008, pp. 1164–1169.
 [1052] J. Kim and S. Jung, “Implementation of the rbf neural chip with the backpropagation algorithm for online learning,” Applied Soft Computing, vol. 29, pp. 233–244, 2015.
 [1053] M. Porrmann, U. Witkowski, H. Kalte, and U. Rückert, “Implementation of artificial neural networks on a reconfigurable hardware accelerator,” in Euromicro Workshop on Parallel, Distributed and Network-based Processing (Euromicro-PDP). IEEE, 2002, p. 243.
 [1054] S. Chakradhar, M. Sankaradas, V. Jakkula, and S. Cadambi, “A dynamically configurable coprocessor for convolutional neural networks,” in ACM SIGARCH Computer Architecture News, vol. 38, no. 3. ACM, 2010, pp. 247–257.
 [1055] T. Chen, S. Zhang, S. Liu, Z. Du, T. Luo, Y. Gao, J. Liu, D. Wang, C. Wu, N. Sun et al., “A small-footprint accelerator for large-scale neural networks,” ACM Transactions on Computer Systems (TOCS), vol. 33, no. 2, p. 6, 2015.
 [1056] Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, “Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks,” IEEE Journal of Solid-State Circuits, 2016.
 [1057] F. Conti and L. Benini, “A ultra-low-energy convolution engine for fast brain-inspired vision in multicore clusters,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015. IEEE, 2015, pp. 683–688.
 [1058] D. Kim, J. Kung, S. Chai, S. Yalamanchili, and S. Mukhopadhyay, “Neurocube: A programmable digital neuromorphic architecture with high-density 3d memory,” in Computer Architecture (ISCA), 2016 ACM/IEEE 43rd Annual International Symposium on. IEEE, 2016, pp. 380–392.
 [1059] O. Nomura, T. Morie, K. Korekado, M. Matsugu, and A. Iwata, “A convolutional neural network vlsi architecture using thresholding and weight decomposition,” in Knowledge-Based Intelligent Information and Engineering Systems. Springer, 2004, pp. 995–1001.
 [1060] O. Nomura, T. Morie, M. Matsugu, and A. Iwata, “A convolutional neural network vlsi architecture using sorting model for reducing multiply-and-accumulation operations,” in Advances in Natural Computation. Springer, 2005, pp. 1006–1014.
 [1061] M. Kang, S. K. Gonugondla, M.-S. Keel, and N. R. Shanbhag, “An energy-efficient memory-based high-throughput vlsi architecture for convolutional networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015, pp. 1037–1041.
 [1062] K. Korekado, T. Morie, O. Nomura, H. Ando, T. Nakano, M. Matsugu, and A. Iwata, “A convolutional neural network vlsi for image recognition using merged/mixed analog-digital architecture,” in Knowledge-Based Intelligent Information and Engineering Systems. Springer, 2003, pp. 169–176.
 [1063] D. Garbin, O. Bichler, E. Vianello, Q. Rafhay, C. Gamrat, L. Perniola, G. Ghibaudo, and B. DeSalvo, “Variability-tolerant convolutional neural network for pattern recognition applications based on oxram synapses,” in Electron Devices Meeting (IEDM), 2014 IEEE International. IEEE, 2014, pp. 28–4.
 [1064] D. Garbin, E. Vianello, O. Bichler, Q. Rafhay, C. Gamrat, G. Ghibaudo, B. DeSalvo, and L. Perniola, “HfO2-based oxram devices as synapses for convolutional neural networks,” 2015.
 [1065] D. Garbin, E. Vianello, O. Bichler, M. Azzaz, Q. Rafhay, P. Candelier, C. Gamrat, G. Ghibaudo, B. DeSalvo, and L. Perniola, “On the impact of oxram-based synapses variability on convolutional neural networks performance,” in Nanoscale Architectures (NANOARCH), 2015 IEEE/ACM International Symposium on. IEEE, 2015, pp. 193–198.
 [1066] J. Chung and T. Shin, “Simplifying deep neural networks for neuromorphic architectures,” in Design Automation Conference (DAC), 2016 53rd ACM/EDAC/IEEE. IEEE, 2016, pp. 1–6.
 [1067] C. Farabet, C. Poulet, J. Y. Han, and Y. LeCun, “Cnp: An fpga-based processor for convolutional networks,” in Field Programmable Logic and Applications, 2009. FPL 2009. International Conference on. IEEE, 2009, pp. 32–37.
 [1068] C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale fpga-based convolutional networks,” Machine Learning on Very Large Data Sets, vol. 1, 2011.
 [1069] N. Li, S. Takaki, Y. Tomioka, and H. Kitazawa, “A multistage dataflow implementation of a deep convolutional neural network based on fpga for high-speed object recognition,” in 2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI). IEEE, 2016, pp. 165–168.
 [1070] M. Motamedi, P. Gysel, V. Akella, and S. Ghiasi, “Design space exploration of fpga-based deep convolutional neural networks,” in 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2016, pp. 575–580.
 [1071] Y. Qiao, J. Shen, T. Xiao, Q. Yang, M. Wen, and C. Zhang, “Fpga-accelerated deep convolutional neural networks for high throughput and energy efficiency,” 2016.
 [1072] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song et al., “Going deeper with embedded fpga platform for convolutional neural network,” in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2016, pp. 26–35.
 [1073] T. Shin, Y. Kang, S. Yang, S. Kim, and J. Chung, “Live demonstration: Real-time image classification on a neuromorphic computing system with zero off-chip memory access,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 449–449.
 [1074] N. Suda, V. Chandra, G. Dasika, A. Mohanty, Y. Ma, S. Vrudhula, J.-s. Seo, and Y. Cao, “Throughput-optimized opencl-based fpga accelerator for large-scale convolutional neural networks,” in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2016, pp. 16–25.
 [1075] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2015, pp. 161–170.
 [1076] S. K. Boddhu, J. C. Gallagher, and S. Vigraham, “A reconfigurable analog neural network for evolvable hardware applications: Intrinsic evolution and extrinsic verification,” in Evolutionary Computation, 2006. CEC 2006. IEEE Congress on. IEEE, 2006, pp. 3145–3152.
 [1077] M. Brownlow, L. Tarassenko, and A. Murray, “Results from pulse-stream vlsi neural network devices,” in VLSI for Artificial Intelligence and Neural Networks. Springer, 1991, pp. 215–224.
 [1078] G. Cauwenberghs, “A learning analog neural network chip with continuous-time recurrent dynamics,” Advances in Neural Information Processing Systems, pp. 858–858, 1994.
 [1079] ——, “An analog vlsi recurrent neural network learning a continuous-time trajectory,” Neural Networks, IEEE Transactions on, vol. 7, no. 2, pp. 346–361, 1996.
 [1080] ——, “Adaptation, learning and storage in analog vlsi,” in ASIC Conference and Exhibit, 1996. Proceedings., Ninth Annual IEEE International. IEEE, 1996, pp. 273–278.
 [1081] W. A. Fisher, R. J. Fujimoto, and R. C. Smithson, “A programmable analog neural network processor,” Neural Networks, IEEE Transactions on, vol. 2, no. 2, pp. 222–229, 1991.
 [1082] J. C. Gallagher and J. M. Fiore, “Continuous time recurrent neural networks: a paradigm for evolvable analog controller circuits,” in The Proceedings of the National Aerospace and Electronics Conference. Citeseer, 2000, pp. 299–304.
 [1083] J. C. Gallagher, “A neuromorphic paradigm for extrinsically evolved hybrid analog/digital device controllers: initial explorations,” in Evolvable Hardware, 2001. Proceedings. The Third NASA/DoD Workshop on. IEEE, 2001, pp. 48–55.
 [1084] J. C. Gallagher, S. K. Boddhu, and S. Vigraham, “A reconfigurable continuous time recurrent neural network for evolvable hardware applications,” in Evolutionary Computation, 2005. The 2005 IEEE Congress on, vol. 3. IEEE, 2005, pp. 2461–2468.
 [1085] J. C. Gallagher, K. S. Deshpande, and M. Wolff, “An adaptive neuromorphic chip for augmentative control of air breathing jet turbine engines,” in Evolutionary Computation, 2008. CEC 2008.(IEEE World Congress on Computational Intelligence). IEEE Congress on. IEEE, 2008, pp. 2644–2650.
 [1086] P. Hasler and L. Akers, “Implementation of analog neural networks,” in Computers and Communications, 1991. Conference Proceedings., Tenth Annual International Phoenix Conference on. IEEE, 1991, pp. 32–38.
 [1087] G. Kothapalli, “An analogue recurrent neural network for trajectory learning and other industrial applications,” in Industrial Informatics, 2005. INDIN’05. 2005 3rd IEEE International Conference on. IEEE, 2005, pp. 462–467.
 [1088] J. A. Lansner and T. Lehmann, “An analog cmos chip set for neural networks with arbitrary topologies,” Neural Networks, IEEE Transactions on, vol. 4, no. 3, pp. 441–444, 1993.
 [1089] T. Morie and Y. Amemiya, “An all-analog expandable neural network lsi with on-chip backpropagation learning,” Solid-State Circuits, IEEE Journal of, vol. 29, no. 9, pp. 1086–1093, 1994.
 [1090] F. Salam, N. Khachab, M. Ismail, and Y. Wang, “An analog mos implementation of the synaptic weights for feedback neural nets,” in Circuits and Systems, 1989., IEEE International Symposium on. IEEE, 1989, pp. 1223–1226.
 [1091] F. Salam and Y. Wang, “A learning algorithm for feedback neural network chips,” in Circuits and Systems, 1991., IEEE International Symposium on. IEEE, 1991, pp. 1377–1379.
 [1092] J. Schemmel, K. Meier, and F. Schürmann, “A vlsi implementation of an analog neural network suited for genetic algorithms,” in Evolvable Systems: From Biology to Hardware. Springer, 2001, pp. 50–61.
 [1093] A. Thakoor, S. Eberhardt, and T. Daud, “Electronic neural network for dynamic resource allocation,” in AIAA Computing in Aerospace… Conference: A Collection of Technical Papers, vol. 8. American Institute of Aeronautics and Astronautics, 1991, p. 339.
 [1094] Y. Ota and B. M. Wilamowski, “Cmos implementation of a pulse-coded neural network with a current controlled oscillator,” in IEEE International Symposium on Circuits and Systems, 1996, pp. III–410.
 [1095] B. Girau, “Digital hardware implementation of 2d compatible neural networks,” in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, vol. 3. IEEE, 2000, pp. 506–511.
 [1096] V. Gupta, K. Khare, and R. Singh, “Fpga design and implementation issues of artificial neural network based pid controllers,” in Advances in Recent Technologies in Communication and Computing, 2009. ARTCom’09. International Conference on. IEEE, 2009, pp. 860–862.
 [1097] S. Li, C. Wu, H. Li, B. Li, Y. Wang, and Q. Qiu, “Fpga acceleration of recurrent neural network based language model,” in Field-Programmable Custom Computing Machines (FCCM), 2015 IEEE 23rd Annual International Symposium on. IEEE, 2015, pp. 111–118.
 [1098] C.-J. Lin and C.-Y. Lee, “Fpga implementation of a recurrent neural fuzzy network with on-chip learning for prediction and identification applications,” Journal of Information Science and Engineering, vol. 25, no. 2, pp. 575–589, 2009.
 [1099] Y. Maeda and M. Wakamura, “Simultaneous perturbation learning rule for recurrent neural networks and its fpga implementation,” Neural Networks, IEEE Transactions on, vol. 16, no. 6, pp. 1664–1672, 2005.
 [1100] A. Polepalli, N. Soures, and D. Kudithipudi, “Digital neuromorphic design of a liquid state machine for real-time processing,” in Rebooting Computing (ICRC), IEEE International Conference on. IEEE, 2016, pp. 1–8.
 [1101] A. Polepalli and D. Kudithipudi, “Reconfigurable digital design of a liquid state machine for spatiotemporal data,” in Proceedings of the 3rd ACM International Conference on Nanoscale Computing and Communication. ACM, 2016, p. 15.
 [1102] P. Petre and J. Cruz-Albrecht, “Neuromorphic mixed-signal circuitry for asynchronous pulse processing,” in Rebooting Computing (ICRC), IEEE International Conference on. IEEE, 2016, pp. 1–4.
 [1103] D. Kudithipudi, Q. Saleh, C. Merkel, J. Thesing, and B. Wysocki, “Design and analysis of a neuromemristive reservoir computing architecture for biosignal processing,” Frontiers in Neuroscience, vol. 9, p. 502, 2015.
 [1104] B. Schrauwen, M. D’Haene, D. Verstraeten, and J. Van Campenhout, “Compact hardware liquid state machines on fpga for real-time speech recognition,” Neural networks, vol. 21, no. 2, pp. 511–523, 2008.
 [1105] Q. Wang, Y. Li, and P. Li, “Liquid state machine based pattern recognition on fpga with firing-activity dependent power gating and approximate computing,” in Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. IEEE, 2016, pp. 361–364.
 [1106] Y. Yi, Y. Liao, B. Wang, X. Fu, F. Shen, H. Hou, and L. Liu, “Fpga based spike-time dependent encoder and reservoir design in neuromorphic computing processors,” Microprocessors and Microsystems, 2016.
 [1107] P. Hylander, J. Meader, and E. Frie, “Vlsi implementation of pulse coded winner take all networks,” in Circuits and Systems, 1993., Proceedings of the 36th Midwest Symposium on. IEEE, 1993, pp. 758–761.
 [1108] E. Neftci, E. Chicca, M. Cook, G. Indiveri, and R. Douglas, “State-dependent sensory processing in networks of vlsi spiking neurons,” in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on. IEEE, 2010, pp. 2789–2792.
 [1109] E. Neftci and G. Indiveri, “A device mismatch compensation method for vlsi neural networks,” in Biomedical Circuits and Systems Conference (BioCAS), 2010 IEEE. IEEE, 2010, pp. 262–265.
 [1110] T. Kamio, H. Adachi, H. Ninomiya, and H. Asai, “A design method of dwt analog neuro chip for vlsi implementation,” in Instrumentation and Measurement Technology Conference, 1997. IMTC/97. Proceedings. Sensing, Processing, Networking., IEEE, vol. 2. IEEE, 1997, pp. 1210–1214.
 [1111] N. I. Khachab and M. Ismail, “A new continuous-time mos implementation of feedback neural networks,” in Circuits and Systems, 1989., Proceedings of the 32nd Midwest Symposium on. IEEE, 1989, pp. 221–224.
 [1112] B. W. Lee, J.-C. Lee, and B. J. Sheu, “Vlsi image processor using analog programmable synapses and neurons,” in Neural Networks, 1990., 1990 IJCNN International Joint Conference on. IEEE, 1990, pp. 575–580.
 [1113] B. Linares-Barranco, E. Sanchez-Sinencio, A. Rodriguez-Vazquez, and J. L. Huertas, “A modular t-mode design approach for analog neural network hardware implementations,” Solid-State Circuits, IEEE Journal of, vol. 27, no. 5, pp. 701–713, 1992.
 [1114] B. Linares-Barranco, E. Sanchez-Sinencio, A. Rodriguez-Vazquez, and J. Huertas, “Modular analog continuous-time vlsi neural networks with on-chip hebbian learning and analog storage,” in Circuits and Systems, 1992. ISCAS’92. Proceedings., 1992 IEEE International Symposium on, vol. 3. IEEE, 1992, pp. 1533–1536.
 [1115] J. J. Paulos and P. W. Hollis, “Neural networks using analog multipliers,” in Circuits and Systems, 1988., IEEE International Symposium on. IEEE, 1988, pp. 499–502.
 [1116] M. Verleysen, B. Sirletti, A. M. Vandemeulebroecke, and P. G. Jespers, “Neural networks for high-storage content-addressable memory: Vlsi circuit and learning algorithm,” Solid-State Circuits, IEEE Journal of, vol. 24, no. 3, pp. 562–569, 1989.
 [1117] H.-K. Yang, E. El-Masry et al., “A cmos current-mode pwm technique for analog neural network implementations,” in Circuits and Systems, 1994. ISCAS’94., 1994 IEEE International Symposium on, vol. 6. IEEE, 1994, pp. 355–358.
 [1118] P. Alla, G. Dreyfus, J. Gascuel, A. Johannet, L. Personnaz, J. Roman, and M. Weinfeld, Silicon integration of learning algorithms and other autoadaptive properties in a digital feedback neural network. Springer, 1991.
 [1119] K. V. Asari and C. Eswaran, “Systolic array implementation of artificial neural networks,” Microprocessors and Microsystems, vol. 18, no. 8, pp. 481–488, 1994.
 [1120] F. Blayo and P. Hurat, “A reconfigurable wsi neural network,” in Wafer Scale Integration, 1989. Proceedings.,[1st] International Conference on. IEEE, 1989, pp. 141–150.
 [1121] W. Fornaciari and F. Salice, “An automatic vlsi implementation of hopfield anns,” in Circuits and Systems, 1994., Proceedings of the 37th Midwest Symposium on, vol. 1. IEEE, 1994, pp. 499–502.
 [1122] A. Johannet, L. Personnaz, G. Dreyfus, J.-D. Gascuel, and M. Weinfeld, “Specification and implementation of a digital hopfield-type associative memory with on-chip training,” Neural Networks, IEEE Transactions on, vol. 3, no. 4, pp. 529–539, 1992.
 [1123] C. Lehmann, M. Viredaz, and F. Blayo, “A generic systolic array building block for neural networks with on-chip learning,” Neural Networks, IEEE Transactions on, vol. 4, no. 3, pp. 400–407, 1993.
 [1124] A. Masaki, Y. Hirai, and M. Yamada, “Neural networks in cmos: a case study,” Circuits and Devices Magazine, IEEE, vol. 6, no. 4, pp. 12–17, 1990.
 [1125] D. E. Van Den Bout and T. K. Miller III, “A digital architecture employing stochasticism for the simulation of hopfield neural nets,” Circuits and Systems, IEEE Transactions on, vol. 36, no. 5, pp. 732–738, 1989.
 [1126] M. Weinfeld, “A fully digital integrated cmos hopfield network including the learning algorithm,” in VLSI for Artificial Intelligence. Springer, 1989, pp. 169–178.
 [1127] W. Wike, D. Van den Bout, and T. Miller III, “The vlsi implementation of stonn,” in Neural Networks, 1990., 1990 IJCNN International Joint Conference on. IEEE, 1990, pp. 593–598.
 [1128] M. Yasunaga, N. Masuda, M. Yagyu, M. Asai, K. Shibata, M. Ooyama, M. Yamada, T. Sakaguchi, and M. Hashimoto, “A self-learning digital neural network using wafer-scale lsi,” Solid-State Circuits, IEEE Journal of, vol. 28, no. 2, pp. 106–114, 1993.
 [1129] J. Tomberg, T. Ritoniemi, K. Kaski, and H. Tenhunen, “Fully digital neural network implementation based on pulse density modulation,” in Custom Integrated Circuits Conference, 1989., Proceedings of the IEEE 1989. IEEE, 1989, pp. 12–7.
 [1130] J. E. Tomberg and K. K. Kaski, “Pulse-density modulation technique in vlsi implementations of neural network algorithms,” Solid-State Circuits, IEEE Journal of, vol. 25, no. 5, pp. 1277–1286, 1990.
 [1131] R. Aibara, Y. Mitsui, and T. Ae, “A cmos chip design of binary neural network with delayed synapses,” in Circuits and Systems, 1991., IEEE International Symposium on. IEEE, 1991, pp. 1307–1310.
 [1132] L. Chen, M. Wedlake, G. Deliyannides, and H. Kwok, “Hybrid architecture for analogue neural network and its circuit implementation,” in Circuits, Devices and Systems, IEE Proceedings, vol. 143, no. 2. IET, 1996, pp. 123–128.
 [1133] J. E. Hansen, J. Skelton, and D. Allstot, “A time-multiplexed switched-capacitor circuit for neural network applications,” in Circuits and Systems, 1989., IEEE International Symposium on. IEEE, 1989, pp. 2177–2180.
 [1134] P. W. Hollis and J. J. Paulos, “Artificial neural networks using mos analog multipliers,” Solid-State Circuits, IEEE Journal of, vol. 25, no. 3, pp. 849–855, 1990.
 [1135] B. W. Lee and B. J. Sheu, “Hardware annealing in electronic neural networks,” Circuits and Systems, IEEE Transactions on, vol. 38, no. 1, pp. 134–137, 1991.
 [1136] B. Maundy and E. El-Masry, “Pulse arithmetic in switched capacitor neural networks,” in Circuits and Systems, 1990., Proceedings of the 33rd Midwest Symposium on. IEEE, 1990, pp. 285–288.
 [1137] A. Moopenn, T. Duong, and A. Thakoor, “Digital-analog hybrid synapse chips for electronic neural networks,” in Advances in Neural Information Processing Systems, 1990, pp. 769–776.

 [1138] D. Tank and J. J. Hopfield, “Simple ‘neural’ optimization networks: An a/d converter, signal decision circuit, and a linear programming circuit,” Circuits and Systems, IEEE Transactions on, vol. 33, no. 5, pp. 533–541, 1986.
 [1139] D. Abramson, K. Smith, P. Logothetis, and D. Duke, “Fpga based implementation of a hopfield neural network for solving constraint satisfaction problems,” in Euromicro Conference, 1998. Proceedings. 24th, vol. 2. IEEE, 1998, pp. 688–693.
 [1140] K. K. Likharev, “Neuromorphic cmol circuits,” in Nanotechnology, 2003. IEEE-NANO 2003. 2003 Third IEEE Conference on, vol. 1. IEEE, 2003, pp. 339–342.
 [1141] M. Atencia, H. Boumeridja, G. Joya, F. García-Lagos, and F. Sandoval, “Fpga implementation of a systems identification module based upon hopfield networks,” Neurocomputing, vol. 70, no. 16, pp. 2828–2835, 2007.
 [1142] H. Boumeridja, M. Atencia, G. Joya, and F. Sandoval, “Fpga implementation of hopfield networks for systems identification,” in International Work-Conference on Artificial Neural Networks. Springer, 2005, pp. 582–589.
 [1143] V. De Florio, G. Deconinck, and R. Belmans, “A massively parallel architecture for hopfield-type neural network computers,” in International Conference on Massively Parallel Computing Systems (MPCS’02), 2002.
 [1144] M. Gschwind, V. Salapura, and O. Maischberger, “A generic building block for hopfield neural networks with on-chip learning,” in IEEE International Symposium on Circuits and Systems, Atlanta, GA. Citeseer, 1996.
 [1145] S. M. Saif, H. M. Abbas, and S. M. Nassar, “An fpga implementation of a competitive hopfield neural network for use in histogram equalization,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 2815–2822.
 [1146] M. Stepanova, F. Lin, and V. C.L. Lin, “A hopfield neural classifier and its fpga implementation for identification of symmetrically structured dna motifs,” The Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, vol. 48, no. 3, pp. 239–254, 2007.
 [1147] A. Varma et al., “A novel digital neural network for the travelling salesman problem,” in Neural Information Processing, 2002. ICONIP’02. Proceedings of the 9th International Conference on, vol. 3. IEEE, 2002, pp. 1320–1324.
 [1148] M. Wakamura and Y. Maeda, “Fpga implementation of hopfield neural network via simultaneous perturbation rule,” in SICE 2003 Annual Conference (IEEE Cat. No. 03TH8734), 2003.
 [1149] S. Duan, Z. Dong, X. Hu, L. Wang, and H. Li, “Small-world hopfield neural networks with weight salience priority and memristor synapses for digit recognition,” Neural Computing and Applications, pp. 1–8, 2015.
 [1150] X. Guo, F. Merrikh-Bayat, L. Gao, B. D. Hoskins, F. Alibart, B. Linares-Barranco, L. Theogarajan, C. Teuscher, and D. B. Strukov, “Modeling and experimental demonstration of a hopfield network analog-to-digital converter with hybrid cmos/memristor circuits,” Frontiers in neuroscience, vol. 9, 2015.
 [1151] S. Hu, Y. Liu, Z. Liu, T. Chen, J. Wang, Q. Yu, L. Deng, Y. Yin, and S. Hosaka, “Associative memory realized by a reconfigurable memristive hopfield neural network,” Nature communications, vol. 6, 2015.
 [1152] B. Liu, Y. Chen, B. Wysocki, and T. Huang, “The circuit realization of a neuromorphic computing system with memristor-based synapse design,” in Neural Information Processing. Springer, 2012, pp. 357–365.
 [1153] ——, “Reconfigurable neuromorphic computing system with memristor-based synapse design,” Neural Processing Letters, vol. 41, no. 2, pp. 159–167, 2013.
 [1154] ——, “Reconfigurable neuromorphic computing system with memristor-based synapse design,” Neural Processing Letters, vol. 41, no. 2, pp. 159–167, 2015.
 [1155] J. A. Clemente, W. Mansour, R. Ayoubi, F. Serrano, H. Mecha, H. Ziade, W. El Falou, and R. Velazco, “Hardware implementation of a fault-tolerant hopfield neural network on fpgas,” Neurocomputing, vol. 171, pp. 1606–1609, 2016.
 [1156] H. Harmanani, J. Hannouche, and N. Khoury, “A neural networks algorithm for the minimum colouring problem using fpgas,” International Journal of Modelling and Simulation, vol. 30, no. 4, pp. 506–513, 2010.
 [1157] W. Mansour, R. Ayoubi, H. Ziade, R. Velazco, and W. El Falou, “An optimal implementation on fpga of a hopfield neural network,” Advances in Artificial Neural Systems, vol. 2011, p. 7, 2011.
 [1158] M. A. d. A. d. Sousa, E. L. Horta, S. T. Kofuji, and E. Del-Moral-Hernandez, “Architecture analysis of an fpga-based hopfield neural network,” Advances in Artificial Neural Systems, vol. 2014, p. 15, 2014.
 [1159] A. Srinivasulu, “Digital very-large-scale integration (vlsi) hopfield neural network implementation on field programmable gate arrays (fpga) for solving constraint satisfaction problems,” Journal of Engineering and Technology Research, vol. 4, no. 1, pp. 11–21, 2012.
 [1160] A. G. Andreou and K. A. Boahen, “Synthetic neural circuits using current-domain signal representations,” Neural Computation, vol. 1, no. 4, pp. 489–501, 1989.
 [1161] A. G. Andreou, K. A. Boahen, P. O. Pouliquen, A. Pavasovic, R. E. Jenkins, and K. Strohbehn, “Current-mode subthreshold mos circuits for analog vlsi neural systems,” IEEE transactions on neural networks/a publication of the IEEE Neural Networks Council, vol. 2, no. 2, pp. 205–213, 1990.
 [1162] K. A. Boahen, A. G. Andreou, P. O. Pouliquen, and A. Pavasovic, “Architectures for associative memories using current-mode analog mos circuits,” in Proceedings of the decennial Caltech conference on VLSI on Advanced research in VLSI. MIT Press, 1989, pp. 175–193.
 [1163] K. A. Boahen, P. O. Pouliquen, A. G. Andreou, and R. E. Jenkins, “A heteroassociative memory using current-mode mos analog vlsi circuits,” Circuits and Systems, IEEE Transactions on, vol. 36, no. 5, pp. 747–755, 1989.
 [1164] R. E. Howard, D. B. Schwartz, J. S. Denker, R. W. Epworth, H. P. Graf, W. E. Hubbard, L. D. Jackel, B. L. Straughn, and D. Tennant, “An associative memory based on an electronic neural network architecture,” Electron Devices, IEEE Transactions on, vol. 34, no. 7, pp. 1553–1556, 1987.
 [1165] T. Kaulmann, M. Ferber, U. Witkowski, and U. Rückert, “Analog vlsi implementation of adaptive synapses in pulsed neural networks,” in Computational Intelligence and Bioinspired Systems. Springer, 2005, pp. 455–462.
 [1166] B. Linares-Barranco, E. Sánchez-Sinencio, A. Rodriguez-Vazquez, and J. L. Huertas, “A cmos analog adaptive bam with on-chip learning and weight refreshing,” Neural Networks, IEEE Transactions on, vol. 4, no. 3, pp. 445–455, 1993.
 [1167] B. J. Maundy and E. I. El-Masry, “Feedforward associative memory switched-capacitor artificial neural networks,” Analog Integrated Circuits and Signal Processing, vol. 1, no. 4, pp. 321–338, 1991.
 [1168] C. McCarley and P. Szabo, “Analog vlsi for implementation of a ‘hyperassociative memory’ neural network,” in Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century., IEEE International Conference on, vol. 3. IEEE, 1995, pp. 2076–2080.
 [1169] K. Saeki, T. Morita, and Y. Sekine, “Associative memory using pulse-type hardware neural network with stdp synapses,” in Intelligent Systems Design and Applications (ISDA), 2011 11th International Conference on. IEEE, 2011, pp. 947–951.
 [1170] S. R. Hasan and N. K. Siong, “A vlsi bam neural network chip for pattern recognition applications,” in Neural Networks, 1995. Proceedings., IEEE International Conference on, vol. 1. IEEE, 1995, pp. 164–168.
 [1171] H. Graf and P. De Vegvar, “A cmos associative memory chip based on neural networks,” in Solid-State Circuits Conference. Digest of Technical Papers. 1987 IEEE International, vol. 30. IEEE, 1987, pp. 304–305.
 [1172] H. P. Graf, L. D. Jackel, and W. E. Hubbard, “Vlsi implementation of a neural network model,” Computer, vol. 21, no. 3, pp. 41–49, 1988.
 [1173] A. Heittmann and U. Rückert, “Mixed mode vlsi implementation of a neural associative memory,” Analog Integrated Circuits and Signal Processing, vol. 30, no. 2, pp. 159–172, 2002.
 [1174] U. Ruckert and K. Goser, “Vlsi architectures for associative networks,” in Circuits and Systems, 1988., IEEE International Symposium on. IEEE, 1988, pp. 755–758.
 [1175] U. Ruckert, “An associative memory with neural architecture and its vlsi implementation,” in System Sciences, 1991. Proceedings of the Twenty-Fourth Annual Hawaii International Conference on, vol. 1. IEEE, 1991, pp. 212–218.
 [1176] C.-Y. Wu and J.-F. Lan, “Cmos current-mode neural associative memory design with on-chip learning,” Neural Networks, IEEE Transactions on, vol. 7, no. 1, pp. 167–181, 1996.
 [1177] Y. V. Pershin and M. Di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Networks, vol. 23, no. 7, pp. 881–886, 2010.
 [1178] L. Wang, H. Li, S. Duan, T. Huang, and H. Wang, “Pavlov associative memory in a memristive neural network and its circuit implementation,” Neurocomputing, 2015.
 [1179] D. Hammerstrom, C. Gao, S. Zhu, and M. Butts, “Fpga implementation of very large associative memories–scaling issues,” 2003.
 [1180] B. J. Leiner, V. Q. Lorena, T. M. Cesar, and M. V. Lorenzo, “Hardware architecture for fpga implementation of a neural network and its application in images processing,” in Electronics, Robotics and Automotive Mechanics Conference, 2008. CERMA’08. IEEE, 2008, pp. 405–410.
 [1181] J. Li, Y. Katori, and T. Kohno, “Hebbian learning in fpga silicon neuronal network,” in The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (ICISIP2013), 2013.
 [1182] D. Reay, T. Green, and B. Williams, “Field programmable gate array implementation of a neural network accelerator,” in Hardware Implementation of Neural Networks and Fuzzy Logic, IEE Colloquium on. IET, 1994, pp. 2–1.
 [1183] E. Neftci, “Stochastic neuromorphic learning machines for weakly labeled data,” in Computer Design (ICCD), 2016 IEEE 34th International Conference on. IEEE, 2016, pp. 670–673.
 [1184] E. O. Neftci, B. U. Pedroni, S. Joshi, M. Al-Shedivat, and G. Cauwenberghs, “Stochastic synapses enable efficient brain-inspired learning machines,” Frontiers in Neuroscience, vol. 10, 2016.
 [1185] A. Torralba, F. Colodro, E. Ibanez, and L. G. Franquelo, “Two digital circuits for a fully parallel stochastic neural network,” Neural Networks, IEEE Transactions on, vol. 6, no. 5, pp. 1264–1268, 1995.
 [1186] W.-C. Fang, B. J. Sheu, and J.-C. Lee, “Real-time computing of optical flow using adaptive vlsi neuroprocessors,” in Computer Design: VLSI in Computers and Processors, 1990. ICCD’90. Proceedings, 1990 IEEE International Conference on. IEEE, 1990, pp. 122–125.
 [1187] S. L. Bade and B. L. Hutchings, “Fpga-based stochastic neural networks – implementation,” in FPGAs for Custom Computing Machines, 1994. Proceedings. IEEE Workshop on. IEEE, 1994, pp. 189–198.
 [1188] H. Li, Y. Hayakawa, S. Sato, and K. Nakajima, “A new digital architecture of inverse function delayed neuron with the stochastic logic,” in Circuits and Systems, 2004. MWSCAS’04. The 2004 47th Midwest Symposium on, vol. 2. IEEE, 2004, pp. II–393.
 [1189] N. Nedjah and L. de Macedo Mourelle, “Stochastic reconfigurable hardware for neural networks,” in Digital System Design, 2003. Proceedings. Euromicro Symposium on. IEEE, 2003, pp. 438–442.
 [1190] ——, “Fpga-based hardware architecture for neural networks: binary radix vs. stochastic.” IEEE, 2003, p. 111.
 [1191] ——, “Reconfigurable hardware for neural networks: binary versus stochastic,” Neural Computing and Applications, vol. 16, no. 3, pp. 249–255, 2007.
 [1192] M. van Daalen, P. Jeavons, and J. Shawe-Taylor, “A stochastic neural architecture that exploits dynamically reconfigurable fpgas,” in FPGAs for Custom Computing Machines, 1993. Proceedings. IEEE Workshop on. IEEE, 1993, pp. 202–211.
 [1193] A. Jayakumar and J. Alspector, “A cascadable neural network chip set with on-chip learning using noise and gain annealing,” in Custom Integrated Circuits Conference, 1992., Proceedings of the IEEE 1992. IEEE, 1992, pp. 19–5.
 [1194] H. Pujol, J. Klein, E. Belhaire, and P. Garda, “Ra: An analog neurocomputer for the synchronous boltzmann machine,” in Microelectronics for Neural Networks and Fuzzy Systems, 1994., Proceedings of the Fourth International Conference on. IEEE, 1994, pp. 449–455.
 [1195] C. Schneider and H. Card, “Analog vlsi models of mean field networks,” in VLSI for Artificial Intelligence and Neural Networks. Springer, 1991, pp. 185–194.
 [1196] C. R. Schneider and H. C. Card, “Analog cmos deterministic boltzmann circuits,” Solid-State Circuits, IEEE Journal of, vol. 28, no. 8, pp. 907–914, 1993.
 [1197] Y. Arima, K. Mashiko, K. Okada, T. Yamada, A. Maeda, H. Kondoh, and S. Kayano, “A self-learning neural network chip with 125 neurons and 10 k self-organization synapses,” Solid-State Circuits, IEEE Journal of, vol. 26, no. 4, pp. 607–611, 1991.
 [1198] Y. Arima, K. Mashiko, K. Okada, T. Yamada, A. Maeda, H. Notani, H. Kondoh, and S. Kayano, “A 336-neuron, 28 k-synapse, self-learning neural network chip with branch-neuron-unit architecture,” Solid-State Circuits, IEEE Journal of, vol. 26, no. 11, pp. 1637–1644, 1991.
 [1199] S. Sato, M. Yumine, T. Yama, J. Murota, K. Nakajima, and Y. Sawada, “Lsi implementation of pulse-output neural network with programmable synapse,” in Neural Networks, 1992. IJCNN., International Joint Conference on, vol. 1. IEEE, 1992, pp. 172–177.
 [1200] M. N. Bojnordi and E. Ipek, “Memristive boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning,” in 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2016, pp. 1–13.
 [1201] H. Chen, P. Fleury, and A. F. Murray, “Minimising contrastive divergence in noisy, mixed-mode vlsi neurons,” in Advances in Neural Information Processing Systems, 2003.
 [1202] C. Lu, C. Hong, and H. Chen, “A scalable and programmable architecture for the continuous restricted boltzmann machine in vlsi,” in Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on. IEEE, 2007, pp. 1297–1300.
 [1203] C.-C. Lu and H. Chen, “Current-mode computation with noise in a scalable and programmable probabilistic neural vlsi system,” in Artificial Neural Networks–ICANN 2009. Springer, 2009, pp. 401–409.
 [1204] P. Knag, C. Liu, and Z. Zhang, “A 1.40 mm2 141 mw 898 gops sparse neuromorphic processor in 40 nm cmos,” in VLSI Circuits (VLSI-Circuits), 2016 IEEE Symposium on. IEEE, 2016, pp. 1–2.
 [1205] B. U. Pedroni, S. Das, J. V. Arthur, P. A. Merolla, B. L. Jackson, D. S. Modha, K. Kreutz-Delgado, and G. Cauwenberghs, “Mapping generative models onto a network of digital spiking neurons,” IEEE Transactions on Biomedical Circuits and Systems, vol. 10, no. 4, pp. 837–854, 2016.
 [1206] M. Rafique, B. Lee, and M. Jeon, “Hybrid neuromorphic system for automatic speech recognition,” Electronics Letters, vol. 52, no. 17, pp. 1428–1430, 2016.
 [1207] A. M. Sheri, A. Rafique, W. Pedrycz, and M. Jeon, “Contrastive divergence for memristor-based restricted boltzmann machine,” Engineering Applications of Artificial Intelligence, vol. 37, pp. 336–342, 2015.
 [1208] M. Suri, V. Parmar, A. Kumar, D. Querlioz, and F. Alibart, “Neuromorphic hybrid rram-cmos rbm architecture,” in 2015 15th Non-Volatile Memory Technology Symposium (NVMTS). IEEE, 2015, pp. 1–6.
 [1209] S. K. Kim, L. C. McAfee, P. L. McMahon, and K. Olukotun, “A highly scalable restricted boltzmann machine fpga implementation,” in Field Programmable Logic and Applications, 2009. FPL 2009. International Conference on. IEEE, 2009, pp. 367–372.
 [1210] L.-W. Kim, S. Asaad, and R. Linsker, “A fully pipelined fpga architecture of a factored restricted boltzmann machine artificial neural network,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 7, no. 1, p. 5, 2014.
 [1211] D. Le Ly and P. Chow, “High-performance reconfigurable hardware architecture for restricted boltzmann machines,” Neural Networks, IEEE Transactions on, vol. 21, no. 11, pp. 1780–1792, 2010.
 [1212] S. Das, B. U. Pedroni, P. Merolla, J. Arthur, A. S. Cassidy, B. L. Jackson, D. Modha, G. Cauwenberghs, and K. Kreutz-Delgado, “Gibbs sampling with low-power spiking digital neurons,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on. IEEE, 2015, pp. 2704–2707.
 [1213] B. Ahn, “Computation of deep belief networks using special-purpose hardware architecture,” in Neural Networks (IJCNN), 2014 International Joint Conference on. IEEE, 2014, pp. 141–148.
 [1214] K. Sanni, G. Garreau, J. L. Molin, and A. G. Andreou, “Fpga implementation of a deep belief network architecture for character recognition using stochastic computation,” in Information Sciences and Systems (CISS), 2015 49th Annual Conference on. IEEE, 2015, pp. 1–5.
 [1215] H. C. Card, C. R. Schneider, and R. S. Schneider, “Learning capacitive weights in analog cmos neural networks,” Journal of VLSI signal processing systems for signal, image and video technology, vol. 8, no. 3, pp. 209–225, 1994.
 [1216] G. Cauwenberghs, C. F. Neugebauer, and A. Yariv, “An adaptive cmos matrix-vector multiplier for large scale analog hardware neural network applications,” in Neural Networks, 1991., IJCNN-91-Seattle International Joint Conference on, vol. 1. IEEE, 1991, pp. 507–511.
 [1217] C. Schneider and H. Card, “Cmos implementation of analog hebbian synaptic learning circuits,” in Neural Networks,