Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

by Steven K. Esser, et al.

Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that i) approach state-of-the-art classification accuracy across 8 standard datasets, encompassing vision and speech, ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1200 and 2600 frames per second and using between 25 and 275 mW (effectively > 6000 frames / sec / W) and iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. For the first time, the algorithmic power of deep learning can be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.









2 Approach

Here, we provide a description of the relevant elements of deep convolutional networks and the TrueNorth neuromorphic chip, and describe how the essence of the former can be realized on the latter.

2.1 Deep Convolutional Networks

A deep convolutional network is a multilayer feedforward neural network, whose input is typically image-like and whose layers are neurons that collectively perform a convolutional filtering of the input or a prior layer (Figure 1). Neurons within a layer are arranged in two spatial dimensions, corresponding to shifts in the convolution filter, and one feature dimension, corresponding to different filters. Each neuron computes a summed weighted input, $s$, as

$$s = \sum_{i,j,k} x_{i,j,k} \, w_{i,j,k},$$

where $x_{i,j,k}$ are the neuron's input pixels or neurons, $w_{i,j,k}$ are the filter weights, $i$, $j$ are over the topographic dimensions, and $k$ is over the feature dimension or input channels. Batch normalization [26] can be used to zero center $s$ and normalize its standard deviation to $1$, following

$$r = \frac{s - \mu}{\sigma + \epsilon} + b, \tag{1}$$

where $r$ is the filter response, $b$ is a bias term, $\epsilon$ provides numerical stability, and $\mu$ and $\sigma$ are the mean and standard deviation of $s$ computed per filter using all topographic locations and examples in a data batch during training, or using the entire training set during inference. Final neuron output is computed by applying a non-linear activation function to the filter response, typically a rectified linear unit that sets negative values to $0$ [27]. In a common scheme, features in the last layer are each assigned a label – such as prediction class – and vote to formulate network output [9].
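The filter-response computation above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up values; the names `x`, `w`, `mu`, `sigma`, and `b` are illustrative and not taken from the paper's code.

```python
import numpy as np

def filter_response(x, w, mu, sigma, b, eps=1e-4):
    """Summed weighted input followed by batch normalization (Equation 1 sketch)."""
    s = np.sum(x * w)                 # s = sum over i, j, k of x * w
    return (s - mu) / (sigma + eps) + b

x = np.ones((3, 3, 4))               # a 3x3 patch with 4 input features
w = np.full((3, 3, 4), 0.5)          # filter weights
r = filter_response(x, w, mu=9.0, sigma=3.0, b=1.0)
# s = 18.0, so r = 9 / 3.0001 + 1, which is approximately 4.0
```

A rectified linear unit would then simply apply `max(0, r)` to this response.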

Deep networks are trained using the backpropagation learning rule [1]. This procedure involves iteratively i) computing the network's response to a batch of training examples in a forward pass, ii) computing the error between the network's output and the desired output, iii) using the chain rule to compute the error gradient at each synapse in a backward pass, and iv) making a small change to each weight along this gradient so as to reduce error.

Figure 1: Two layers of a convolutional network, where each layer is a rows × columns × features collection of outputs from filters applied to the prior layer. Each output neuron has a topographically aligned filter support region in its source layer. Adjacent features have their receptive field shifted by the stride in the source layer. A layer can be divided into multiple groups along the feature dimension, where each group has a filter support region that covers a different set of features in the source layer. Two groups are highlighted (green, blue).

2.2 TrueNorth

A TrueNorth chip consists of a network of neurosynaptic cores with programmable connectivity, synapses, and neuron parameters (Figure 2). Connectivity between neurons follows a block-wise scheme: each neuron can connect to an input line of any one core in the system, and from there to any neuron on that core through its local synapses. All communication to-, from-, and within- chip is performed using spikes.

TrueNorth neurons use a variant of an integrate-and-fire model with configurable parameters [28] where a neuron's state variable, $V(t)$, updates each tick, $t$ – typically at 1000 ticks per second, though higher rates are possible – according to

$$V(t+1) = V(t) + \sum_i \hat{x}_i(t)\, \hat{w}_i + \lambda, \tag{2}$$

where $\hat{x}_i(t)$ are the neuron's spiking inputs, $\hat{w}_i$ are its corresponding weights, $\lambda$ is its leak chosen from $\{-255, -254, \ldots, 255\}$, and $i$ is over its inputs. If $V(t)$ is greater than or equal to a threshold $\theta$, the neuron emits a spike and resets using one of several reset modes, including resetting to $0$. If $V(t)$ is below a lower bound, it can be configured to snap to that bound.
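A single tick of this update rule can be sketched as follows; the function and parameter names are illustrative, and only the reset-to-zero mode and a zero lower bound are modeled here.

```python
def tick(V, spikes, weights, leak, theta=1, lower_bound=0):
    """One tick of an integrate-and-fire neuron (sketch of Equation 2)."""
    V = V + sum(s * w for s, w in zip(spikes, weights)) + leak
    if V >= theta:
        return 0, True            # emit a spike, then reset to 0
    if V < lower_bound:
        V = lower_bound           # snap to the configured lower bound
    return V, False

V, fired = 0, False
V, fired = tick(V, [1, 1, 0], [1, -1, 1], leak=0, theta=2)   # net input 0, no spike
V, fired = tick(V, [1, 1, 1], [1, 1, 1], leak=0, theta=2)    # V reaches 3 >= 2, spike
```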

Synapses have individually configurable on/off states and have a strength assigned by look-up table. Specifically, each neuron has a 4-entry table parameterized with values in the range $[-255, 255]$, each input line to a core is assigned an input type of 1, 2, 3, or 4, and each synapse then determines its strength by using the input type on its source side to index into the table of the neuron on its target side. (It should be noted that our approach can easily be adapted to hardware with other synaptic representation schemes.) In this work, we only use 2 input types, corresponding to synapse strengths of -1 and 1, described in the next section.
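The indexing scheme can be sketched directly: each input line carries a type, each neuron holds a small strength table, and an "on" synapse takes the strength found by indexing the target neuron's table with the source line's type. The two-type configuration below (strengths -1 and 1) mirrors the one used in this work; the variable names are illustrative.

```python
neuron_table = {1: -1, 2: 1}        # per-neuron strength table (illustrative)

def synapse_strength(on, input_type, table):
    """An off synapse contributes 0; an on synapse looks up its strength."""
    return table[input_type] if on else 0

assert synapse_strength(True, 1, neuron_table) == -1
assert synapse_strength(True, 2, neuron_table) == 1
assert synapse_strength(False, 2, neuron_table) == 0
```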

Figure 2: A) The TrueNorth architecture, a multicore array where each core consists of 256 input lines, 256 neurons, and a 256×256 synaptic crossbar array. Each neuron can connect to an input line of any core in the system through a spike router, and from there to any neuron on the target core through the crossbar, thereby resulting in block-wise connectivity. B) IBM's NS1e board, with 1 TrueNorth chip, comprising 4096 cores, with a total of 1 Million neurons and 256 Million synapses.

2.3 Mapping Deep Convolutional Networks to TrueNorth

By appropriately designing the structure, neurons, network input, and weights of convolutional networks during training, it is possible to efficiently map those networks to neuromorphic hardware.

2.3.1 Structure

Network structure is mapped by partitioning each layer into 1 or more equally sized groups along the feature dimension, where each group applies its filters to a different, non-overlapping, equally sized subset of layer input features. (Feature groups were originally used by AlexNet [27], which split the network to run on parallel GPUs during training. The use of grouping is expanded upon considerably in this work.) Layers are designed such that the total filter size (rows × columns × features) of each group is less than or equal to the number of input lines available per core, and the number of output features is less than or equal to the number of neurons per core. This arrangement allows a group's features, filters, and filter support region to be implemented using a core's neurons, synapses, and input lines, respectively (Figure 3A). Total filter size was further limited to 128 here, to support trinary synapses, described below. For efficiency, multiple topographic locations for the same group can be implemented on the same core. For example, by delivering a suitably larger region of the input space to a single core, that core can be used to implement overlapping filters for several topographic locations.
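The fitting constraint can be stated as a small predicate. This is a sketch assuming the 256-input-line, 256-neuron TrueNorth core and the 128-input trinary-synapse limit described in the text; the function and parameter names are mine.

```python
def group_fits_core(filter_rows, filter_cols, group_in_features, group_out_features,
                    input_lines=256, neurons=256, trinary_limit=128):
    """True if one group's filters and features fit on a single core."""
    filter_size = filter_rows * filter_cols * group_in_features
    return (filter_size <= min(input_lines, trinary_limit)
            and group_out_features <= neurons)

assert group_fits_core(3, 3, 14, 256)        # 3*3*14 = 126 <= 128, fits
assert not group_fits_core(3, 3, 16, 256)    # 3*3*16 = 144 > 128, does not fit
```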

Where filters implemented on different cores are applied to overlapping regions of the input space, the corresponding input neurons must target multiple cores, which is not explicitly supported by TrueNorth. In such instances, multiple neurons on the same core are configured with identical synapses and parameters (and thus will have matching output), allowing distribution of the same data to multiple targets. If insufficient neurons are available on the same core, a feature can be “split” by connecting it to a core with multiple neurons configured to spike whenever they receive an input spike from that feature. Neurons used in either duplication scheme are referred to here as copies (Figure 3B).

Figure 3: Mapping of a convolutional network to TrueNorth. A) Convolutional network features for a group at a given topographic location are implemented using neurons on the same TrueNorth core, with their corresponding filter support region implemented using the core's input lines, and filter weights implemented using the core's synaptic array. B) For a neuron to target multiple core inputs, its output must be replicated by neuron copies, recruited from other neurons on the same core, or on extra cores if needed. C) In each tick, the internal state variable, V(t), of a TrueNorth neuron increases in response to positive weighted inputs (light orange circles) and decreases in response to negative weighted inputs (dark blue circles). Following integration of all inputs for a tick, if V(t) is greater than or equal to the threshold θ, a spike is emitted and V(t) is reset to 0, while if V(t) is below 0, V(t) is reset to 0 (without producing a spike). D) Convolutional network filter weights (numbers in black diamonds) implemented using TrueNorth. The TrueNorth architecture supports weights with individually configured on/off state and strength assigned using a lookup table. In our scheme, each feature is represented with pairs of neuron copies. Each pair connects to inputs on the same target core, with the inputs assigned types 1 and 2, which via the lookup table assign strengths of -1 or 1 to synapses on the corresponding input lines. By turning on the appropriate synapses, each synapse pair can be used to represent -1, 0, or 1.

2.3.2 Neurons

To match the use of spikes in hardware, we employ a binary representation scheme for data throughout the network. (Schemes that use higher precision are possible, such as using the number of spikes generated in a given time window to represent data – a rate code. However, we observed the best accuracy for a given energy budget by using the binary scheme described here.) Neurons in the convolutional network use the activation function

$$y = \begin{cases} 1 & \text{if } r \geq 0, \\ 0 & \text{otherwise,} \end{cases} \tag{3}$$

where $y$ is neuron output and $r$ is the neuron filter response (Equation 1). By configuring TrueNorth neurons such that i) $\lambda = b(\sigma + \epsilon) - \mu$, where $\lambda$ is the leak from Equation 2 and the remaining variables are the normalization terms from Equation 1, which are computed from training data offline, ii) threshold ($\theta$ in Equation 2) is $0$, iii) reset is to $0$ after spiking, and iv) the lower bound on the membrane potential is $0$, their behavior exactly matches that in Equation 3 (Figure 3C). Conditions iii and iv ensure that $V(t)$ is $0$ at the beginning of each image presentation, allowing for 1 classification per tick using pipelining.
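The equivalence between the configured hardware neuron and the binary activation can be checked numerically on a few sample inputs. This sketch uses illustrative names and glosses over the integer rounding of hardware parameters.

```python
def binary_activation(r):
    """Binary activation of Equation 3."""
    return 1 if r >= 0 else 0

def truenorth_spikes(s, mu, sigma, b, eps=1e-4):
    """One tick of a TrueNorth neuron configured per conditions i-iv."""
    leak = b * (sigma + eps) - mu    # condition i: leak absorbs bias and mean
    V = s + leak                     # integration from V = 0
    return 1 if V >= 0 else 0        # condition ii: threshold is 0

mu, sigma, b, eps = 2.0, 4.0, 0.5, 1e-4
for s in [-10.0, -0.1, 0.0, 2.0, 10.0]:
    r = (s - mu) / (sigma + eps) + b                 # filter response (Equation 1)
    assert binary_activation(r) == truenorth_spikes(s, mu, sigma, b, eps)
```

The agreement holds because $r \geq 0$ if and only if $s + b(\sigma + \epsilon) - \mu \geq 0$.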

2.3.3 Network input

Network inputs are typically represented with multi-bit channels (for example, 8-bit RGB channels). Directly converting the state of each bit into a spike would result in an unnatural neural encoding since each bit represents a different value (for example, the most-significant-bit spike would carry a weight of 128 in an 8-bit scheme). Here, we avoid this awkward encoding altogether by converting the high precision input into a spiking representation using convolution filters with the binary output activation function described in Equation 3. This process is akin to the transduction that takes place in biological sensory organs, such as the conversion of brightness levels into single spikes representing spatial luminance gradients in the retina.
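A toy transduction step can be sketched as follows: a high-precision pixel patch is reduced to a single spike by a convolution filter followed by the binary activation, rather than by encoding each bit as a spike. The filter values and names here are illustrative, and bias/normalization terms are omitted.

```python
import numpy as np

patch = np.array([[10, 200],
                  [15, 220]], dtype=float) / 255.0   # normalized 8-bit pixels
edge_filter = np.array([[-1, 1],
                        [-1, 1]])                    # a luminance-gradient filter

s = np.sum(patch * edge_filter)    # filter response (bias and normalization omitted)
spike = 1 if s >= 0 else 0         # binary output: one spike instead of 8 bits
```

Here the bright right-hand column produces a positive gradient response, so the transduction neuron emits a spike.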

2.3.4 Weights

While TrueNorth does not directly support trinary weights, they can be simulated by using neuron copies such that a feature's output is delivered in pairs to its target cores. One member of the pair is assigned input type 1, which corresponds to a -1 in every neuron's lookup table, and the second input type 2, which corresponds to a 1. By turning on neither, one, or the other of the corresponding synaptic connections, a weight of -1, 0, or 1 can be created (Figure 3D). To allow us to map into this representation, we restrict synaptic weights in the convolutional network to these same trinary values.
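The pairing scheme reduces to a simple rule: the pair's two synapses carry strengths -1 and +1, and at most one of them is turned on. A minimal sketch with illustrative names:

```python
def trinary_weight(on_minus, on_plus):
    """Effective weight of a synapse pair with strengths -1 and +1."""
    return on_minus * -1 + on_plus * 1    # at most one member of the pair is on

assert trinary_weight(False, False) == 0
assert trinary_weight(True, False) == -1
assert trinary_weight(False, True) == 1
```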

2.3.5 Training

Constraints on receptive field size and features per group, the use of binary neurons, and the use of trinary weights are all employed during training. As the binary-valued neuron used here has a derivative of $0$ everywhere except at $r = 0$, where it is undefined, which is not amenable to backpropagation, we instead approximate its derivative as being $1$ at $0$ and linearly decaying to $0$ in the positive and negative direction according to

$$\frac{\partial y}{\partial r} \approx \max(0,\, 1 - |r|),$$

where $r$ is the filter response and $y$ is the neuron output. Weight updates are applied to a high precision hidden value, $w_h$, which is bounded in the range $-1$ to $1$ by clipping, and mapped to the trinary value used for the forward and backward pass by rounding with hysteresis, whereby the trinary weight is updated to the rounded value of $w_h$ only when $w_h$ has moved away from the current trinary value by more than $0.5 + h$, where $h$ is a hysteresis parameter set to 0.1 here. (This rule is similar to the recent results from BinaryNet [18], but was developed independently here in this work. Our specific neuron derivative and use of hysteresis are unique.) The hidden weights allow each synapse to flip between discrete states based on subtle differences in the relative amplitude of error gradients measured across multiple training batches.
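Both training tricks can be sketched compactly. The triangular surrogate derivative follows directly from the text; the exact piecewise form of the hysteresis rule is an assumption on my part (update only when the hidden weight moves more than 0.5 + h from the current trinary value), with h = 0.1 as in the text.

```python
import numpy as np

def surrogate_derivative(r):
    """Approximate derivative of the binary neuron: 1 at r = 0, linear decay to 0."""
    return max(0.0, 1.0 - abs(r))

def round_with_hysteresis(w_hidden, w_prev, h=0.1):
    """Map a clipped hidden weight to a trinary value, flipping only decisively."""
    w_hidden = float(np.clip(w_hidden, -1.0, 1.0))
    w_new = float(np.round(w_hidden))
    # only change state once the hidden weight is clearly past the boundary
    return w_new if abs(w_hidden - w_prev) > 0.5 + h else w_prev

assert surrogate_derivative(0.0) == 1.0
assert surrogate_derivative(2.0) == 0.0
assert round_with_hysteresis(0.55, 0.0) == 0.0   # inside the hysteresis band
assert round_with_hysteresis(0.65, 0.0) == 1.0   # past 0.5 + h, state flips
```

The hysteresis band prevents a weight from chattering between discrete states when its hidden value hovers near a rounding boundary.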

We employ standard heuristics for training, including momentum, weight decay, and a decreasing learning rate (dropped twice during training). We further employ a spike sparsity pressure by adding a penalty term, proportional to the average feature activation summed over all features in the network, to the cost function. This serves as both a regularizer and a means to reduce spike traffic during deployment (and therefore reduce energy consumption). Training was performed offline on conventional GPUs, using a library of custom training layers built upon functions from the MatConvNet toolbox [20]. Network specification and training complexity using these layers is on par with standard deep learning.

2.3.6 Deployment

The parameters learned through training are mapped to hardware using reusable, composable hardware description functions called corelets [21]. The corelets created for this work automatically compile the learned network parameters, which are independent of any neuromorphic platform, into a platform-specific hardware configuration file that can directly program TrueNorth chips.

Figure 4: Dataset samples. A) CIFAR10 examples of airplane and automobile. B) SVHN examples of the digits '4' and '7'. C) GTSRB examples of the German traffic signs for 'priority road' and 'ahead only'. D) Flickr-Logos32 examples of corporate logos for 'FedEx' and 'Texaco'. E) VAD example showing voice activity (red box) and no voice activity, in noise. F) TIMIT examples of the phonemes 'pcl', 'p', 'l', 'ah', 'z' (red box), 'en', 'l', 'ix'.

3 Results

We applied our approach to image and audio benchmarks by creating template networks using 1/2, 1, 2, 4, or 8 TrueNorth chips (Tables 1 and 2 and Figure 4). (Additional network sizes for the audio datasets – VAD, TIMIT classification, TIMIT frames – were created by adjusting features per layer or removing layers.) Testing was performed at 1 classification per hardware tick.

3.1 Networks

Three layer configurations were especially useful in this work, though our approach supports a variety of other parameterizations. First, spatial filter layers employ filters spanning several rows and columns with a stride of 1, allowing placement of multiple topographic locations per core. Second, network-in-network layers (see [9]) employ a 1×1 patch size and stride of 1, allowing each filter to span a large portion of the incoming feature space, thereby helping to maintain network integration. Finally, pooling layers employ standard convolution layers [29] with a 2×2 patch size and stride 2, thereby resulting in non-overlapping patches that reduce the need for neuron copies.

We found that using a small number of channels for the transduction layer (Figure 5) gave good performance at a low bandwidth. For multi-chip networks we used additional channels, presupposing additional bandwidth in larger systems. As smaller networks required less regularization, weight decay was not employed for the smaller networks, and spike sparsity pressure was not used for networks of half chip size or less.

1/2 Chip 1 Chip 2 Chip 4 Chip
S-12 S-16 S-32 S-64
P4-128(4) P4-252(2) S-128(4) S-256(8)
D N-256(2) N-128(1) N-256(2)
S-256(16) P-256(8) P-128(4) P-256(8)
N-256(2) S-512(32) S-256(16) S-512(32)
P-512(16) N-512(4) N-256(2) N-512(4)
S-1020(4) N-512(4) P-256(8) P-512(16)
(6528/class) N-512(4) S-512(32) S-1024(64)
P-512(16) N-512(4) N-1024(8)
S-1024(64) P-512(16) P-1024(32)
N-1024(8) S-2048(64) S-2048(128)
P-1024(32) N-2048(16) N-2048(16)
N-1024(8) N-2048(16) N-2048(16)
N-1024(8) N-2048(16) N-2048(16)
N-2040(8) N-4096(16) N-4096(16)
(816/class) (6553/class) (6553/class)
Table 1: Structure of convolution networks used in this work. Each layer is described as "type-features(groups)", where types are indicated with "S" for spatial filter layers, "N" for network-in-network layers with 1×1 filters and stride 1, "P" and "P4" for convolutional pooling layers (differing in filter size and stride), and "D" for dropout layers. The number of output features assigned to each of the 10 CIFAR10 classes is indicated below the final layer as "(features/class)"; this ratio varies for datasets with a different class count. The 8 chip network is the same as the 4 chip network with twice as many features per layer.
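The "type-features(groups)" notation in Table 1 can be parsed mechanically; a small sketch (the regular expression and helper function are mine, not from the paper):

```python
import re

def parse_layer(spec):
    """Parse a Table 1 layer spec such as 'S-256(16)' into (type, features, groups)."""
    m = re.fullmatch(r"([A-Z]\d?)-(\d+)(?:\((\d+)\))?", spec)
    assert m, f"unrecognized layer spec: {spec}"
    layer_type, features, groups = m.group(1), int(m.group(2)), m.group(3)
    return layer_type, features, int(groups) if groups else 1

assert parse_layer("S-256(16)") == ("S", 256, 16)
assert parse_layer("P4-128(4)") == ("P4", 128, 4)
assert parse_layer("S-12") == ("S", 12, 1)      # no group count means one group
```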

3.2 Hardware

To characterize performance, all networks that fit on a single chip were run in TrueNorth hardware. Multi-chip networks were run in simulation [30], pending forthcoming infrastructure for interconnecting chips. Single-chip classification accuracy and throughput were measured on the NS1e development board (Figure 2B), but power was measured on a separate NS1t test and characterization board (not shown) – using the same supply voltage on both boards – since the current NS1e board is not instrumented to measure power and the NS1t board is not designed for high throughput. Total TrueNorth power is the sum of i) leakage power, computed by measuring idle power on NS1t and scaling by the fraction of the chip's cores used by the network, and ii) active power, computed by measuring total power during classification on NS1t, subtracting idle power, and scaling by the classification throughput (FPS) measured on NS1e. (Active energy per classification does not change as the chip's tick runs faster or slower, as long as the voltage is the same, as in the experiments here, because the same number of transistors switch independent of the tick duration.) For hardware measurement, our focus was to characterize operation on the TrueNorth chip as a component in a future embedded system. Such a system will also need to consider capabilities and energy requirements of sensors, transduction, and off-chip communication, which requires hardware choices that are application specific and are not considered here.

Figure 5: Each row shows an example image from CIFAR10 (first column) and the corresponding output of typical transduction filters (remaining columns).

3.3 Performance

Table 3 and Figure 6 show our results for all datasets and a comparison with state-of-the-art approaches, with measured power and classifications per energy (Frames/Sec/Watt) reported for single-chip networks. It is known that augmenting training data through manipulations such as mirroring can improve scores on test data, but this adds complexity to the overall training process. To maintain focus on the algorithm presented here, we do not augment our training set (including no dropout), and so compare our results to other works that also do not use data augmentation. Our experiments show that for almost all of the benchmarks, a single-chip network is sufficient to come within a few percent of state-of-the-art accuracy. Increasing the number of chips improved accuracy by several percentage points, and in the case of the VAD dataset surpassed state-of-the-art performance.

Dataset Classes Input Description
CIFAR10 [31] 10 32 row × 32 column RGB Natural and manufactured objects in their environment.
CIFAR100 [31] 100 32 row × 32 column RGB Natural and manufactured objects in their environment.
SVHN [32] 10 32 row × 32 column RGB Single digits of house addresses from Google's Street View.
GTSRB [33] 43 32 row × 32 column RGB German traffic signs in multiple environments.
Flickr-Logos32 [34] 32 32 row × 32 column RGB Localized corporate logos in their environment.
VAD [35][36] 2 MFCC samples Voice activity present or absent, with noise (TIMIT + NOISEX).
TIMIT Class. [35] 39 MFCC + delta samples Phonemes from English speakers, with phoneme boundaries.
TIMIT Frame [35] 39 MFCC samples Phonemes from English speakers, without phoneme boundaries.
Table 2: Summary of datasets. GTSRB and Flickr-Logos32 are cropped and/or downsampled from larger images. VAD and TIMIT datasets have Mel-frequency cepstral coefficients (MFCC) computed from 16 kHz audio data.
Dataset State of the Art TrueNorth Best Accuracy TrueNorth Chip
Approach Accuracy Accuracy #cores Accuracy #cores FPS mW FPS/W
CIFAR100 CNN [37]
LOGO32 CNN (unpublished internal implementation)
TIMIT Class. HGMM [40]
TIMIT Frames BLSTM [41]
Table 3:

Summary of results. TrueNorth network sizes refer to chip count of network running CIFAR10. Individual networks may vary according to data size. Power is measured in milliWatts (mW). FPS = Frames/Sec. FPS/W = Frames/Sec/Watt. CNN = Convolutional Neural Network. MLP = Multilayer Perceptron. HGMM = Hierarchical Gaussian Mixture Model. BLSTM = Bidirectional Long Short-Term Memory.

Figure 6: Accuracy of different sized networks running on one or more TrueNorth chips to perform inference on the datasets. For comparison, the accuracy of state-of-the-art unconstrained approaches is shown as bold horizontal lines (hardware resources used for these networks are not indicated).

4 Discussion

Our work demonstrates that the structural and operational differences between neuromorphic computing and deep learning are not fundamental, and points to the richness of neural network constructs and the adaptability of backpropagation. This marks an important step towards a new generation of applications based on embedded neural networks.

These results help to validate the neuromorphic approach, which is to provide an efficient yet flexible substrate for spiking neural networks, instead of targeting a single application or network structure. Indeed, the specification for TrueNorth and a prototype chip [42] were developed in 2011, before the recent resurgence of convolutional networks in 2012 [27]. Not only is TrueNorth capable of implementing these convolutional networks, which it was not originally designed for, but it also supports a variety of connectivity patterns (feedback and lateral, as well as feedforward) and can simultaneously implement a wide range of other algorithms (see [8][15][17][43][44][45]). We envision running multiple networks on the same TrueNorth chip, enabling composition of end-to-end systems encompassing saliency, classification, and working memory.

We see several avenues of potentially fruitful exploration for future work. Several recent innovations in unconstrained deep learning that may be of value for the neuromorphic domain include deeply supervised networks [37], and modified gradient optimization rules. The approach used here applies hardware constraints from the beginning of training, that is, constrain-then-train, but innovation may also come from constrain-while-train approaches, where training initially begins in an unconstrained space, but constraints are gradually introduced during training [14]. Finally, co-design between algorithms and future neuromorphic architectures promises even better accuracy and efficiency.

This research was sponsored by the Defense Advanced Research Projects Agency under contract No. FA9453-15-C-0055. The views, opinions, and/or findings contained in this paper are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. We acknowledge Rodrigo Alvarez-Icaza for support with hardware infrastructure.


  • [1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, pp. 533–536, 1986.
  • [2] K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, no. 4, pp. 193–202, 1980.
  • [3] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, pp. 541–551, 1989.
  • [4] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
  • [5] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015, pp. 91–99.
  • [6] D. Cireşan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, “Deep neural networks segment neuronal membranes in electron microscopy images,” in Advances in neural information processing systems, 2012, pp. 2843–2851.
  • [7] C. Mead, “Neuromorphic electronic systems,” Proceedings of the IEEE, vol. 78, no. 10, pp. 1629–1636, 1990.
  • [8] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura et al., “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science, vol. 345, no. 6197, pp. 668–673, 2014.
  • [9] M. Lin, Q. Chen, and S. Yan, “Network in network,” in ICLR, 2014.
  • [10] T. M. Bartol, C. Bromer, J. Kinney, M. A. Chirillo, J. N. Bourne, K. M. Harris, and T. J. Sejnowski, “Nanoconnectomic upper bound on the variability of synaptic plasticity,” eLife, vol. 4, p. e10778, 2016.
  • [11] J. Jin, A. Dundar, and E. Culurciello, “Flattened convolutional neural networks for feedforward acceleration,” arXiv preprint arXiv:1412.5474, 2014.
  • [12] E. Stromatias, D. Neil, M. Pfeiffer, F. Galluppi, S. B. Furber, and S.-C. Liu, “Robustness of spiking deep belief networks to noise and reduced bit precision of neuro-inspired hardware platforms,” Frontiers in Neuroscience, vol. 9, 2015.
  • [13] M. Courbariaux, Y. Bengio, and J.-P. David, “Binaryconnect: Training deep neural networks with binary weights during propagations,” in Advances in Neural Information Processing Systems, 2015, pp. 3105–3113.
  • [14] Z. Wu, D. Lin, and X. Tang, “Adjustable bounded rectifiers: Towards deep binary representations,” arXiv preprint arXiv:1511.06201, 2015.
  • [15] P. U. Diehl, B. U. Pedroni, A. Cassidy, P. Merolla, E. Neftci, and G. Zarrella, “Truehappiness: Neuromorphic emotion recognition on truenorth,” arXiv preprint arXiv:1601.04183, 2016.
  • [16] S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in Neural Information Processing Systems, 2015, pp. 1135–1143.
  • [17] S. K. Esser, R. Appuswamy, P. Merolla, J. V. Arthur, and D. S. Modha, “Backpropagation for energy-efficient neuromorphic computing,” in Advances in Neural Information Processing Systems, 2015, pp. 1117–1125.
  • [18] M. Courbariaux and Y. Bengio, “Binarynet: Training deep neural networks with weights and activations constrained to +1 or −1,” arXiv preprint arXiv:1602.02830, 2016.
  • [19] A. J. Bell and T. J. Sejnowski, “The independent components of natural scenes are edge filters,” Vision research, vol. 37, no. 23, pp. 3327–3338, 1997.
  • [20] A. Vedaldi and K. Lenc, “MatConvNet – convolutional neural networks for MATLAB,” 2015.
  • [21] A. Amir, P. Datta, W. P. Risk, A. S. Cassidy, J. A. Kusnitz, S. K. Esser, A. Andreopoulos, T. M. Wong, M. Flickner, R. Alvarez-Icaza et al., “Cognitive computing programming paradigm: a corelet language for composing networks of neurosynaptic cores,” in Neural Networks (IJCNN), The 2013 International Joint Conference on.   IEEE, 2013, pp. 1–10.
  • [22] E. Painkras, L. Plana, J. Garside, S. Temple, F. Galluppi, C. Patterson, D. R. Lester, A. D. Brown, S. B. Furber et al., “SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural network simulation,” Solid-State Circuits, IEEE Journal of, vol. 48, no. 8, pp. 1943–1953, 2013.
  • [23] T. Pfeil, A. Grübl, S. Jeltsch, E. Müller, P. Müller, M. A. Petrovici, M. Schmuker, D. Brüderle, J. Schemmel, and K. Meier, “Six networks on a universal neuromorphic computing substrate,” Frontiers in Neuroscience, vol. 7, 2013.
  • [24] S. Moradi and G. Indiveri, “An event-based neural network architecture with an asynchronous programmable synaptic memory,” Biomedical Circuits and Systems, IEEE Transactions on, vol. 8, no. 1, pp. 98–107, 2014.
  • [25] J. Park, S. Ha, T. Yu, E. Neftci, and G. Cauwenberghs, “A 65k-neuron 73-mevents/s 22-pj/event asynchronous micro-pipelined integrate-and-fire array transceiver,” in Biomedical Circuits and Systems Conference (BioCAS), 2014 IEEE.   IEEE, 2014, pp. 675–678.
  • [26] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
  • [27] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • [28] A. S. Cassidy, P. Merolla, J. V. Arthur, S. K. Esser, B. Jackson, R. Alvarez-Icaza, P. Datta, J. Sawada, T. M. Wong, V. Feldman et al., “Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores,” in Neural Networks (IJCNN), The 2013 International Joint Conference on.   IEEE, 2013, pp. 1–10.
  • [29] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.
  • [30] R. Preissl, T. M. Wong, P. Datta, M. Flickner, R. Singh, S. K. Esser, W. P. Risk, H. D. Simon, and D. S. Modha, “Compass: A scalable simulator for an architecture for cognitive computing,” in Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis.   IEEE Computer Society Press, 2012, p. 54.
  • [31] A. Krizhevsky, “Learning multiple layers of features from tiny images,” University of Toronto, Tech. Rep., 2009.
  • [32] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
  • [33] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition,” Neural Networks, vol. 32, pp. 323–332, 2012.
  • [34] S. Romberg, L. G. Pueyo, R. Lienhart, and R. V. Zwol, “Scalable logo recognition in real-world images,” in Proceedings of the 1st ACM International Conference on Multimedia Retrieval.   ACM, 2011, p. 25.
  • [35] J. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallett, N. Dahlgren, and V. Zue, “TIMIT Acoustic-Phonetic Continuous Speech Corpus LDC93S1,” Philadelphia: Linguistic Data Consortium, 1993.
  • [36] A. Varga and H. J. M. Steeneken, “Assessment for Automatic Speech Recognition II: NOISEX-92: A Database and an Experiment to Study the Effect of Additive Noise on Speech Recognition Systems,” Speech Communications, vol. 12, no. 3, pp. 247–251, Jul. 1993.
  • [37] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, “Deeply-supervised nets,” arXiv preprint arXiv:1409.5185, 2014.
  • [38] D. Cireşan, U. Meier, J. Masci, and J. Schmidhuber, “Multi-column deep neural network for traffic sign classification,” Neural Networks, vol. 32, pp. 333–338, Aug. 2012.
  • [39] T. V. Pham, C. T. Tang, and M. Stadtschnitzer, “Using artificial neural network for robust voice activity detection under adverse conditions,” in International Conference on Computing and Communication Technologies.   IEEE, 2009, pp. 1–8.
  • [40] H.-A. Chang and J. R. Glass, “Hierarchical large-margin gaussian mixture models for phonetic classification,” in IEEE Workshop on Automatic Speech Recognition and Understanding, 2007.
  • [41] A. Graves, N. Jaitly, and A. Mohamed, “Hybrid speech recognition with deep bidirectional LSTM,” in IEEE Workshop on Automatic Speech Recognition and Understanding, 2013, pp. 273–278.
  • [42] P. Merolla, J. Arthur, F. Akopyan, N. Imam, R. Manohar, and D. S. Modha, “A digital neurosynaptic core using embedded crossbar memory with 45pJ per spike in 45nm,” in IEEE Custom Integrated Circuits Conference (CICC), Sept. 2011, pp. 1–4.
  • [43] S. K. Esser, A. Andreopoulos, R. Appuswamy, P. Datta, D. Barch, A. Amir, J. Arthur, A. Cassidy, M. Flickner, P. Merolla et al., “Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores,” in Neural Networks (IJCNN), The 2013 International Joint Conference on.   IEEE, 2013, pp. 1–10.
  • [44] P. U. Diehl, G. Zarrella, A. Cassidy, B. U. Pedroni, and E. Neftci, “Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware,” arXiv preprint arXiv:1601.04187, 2016.
  • [45] S. Das, B. U. Pedroni, P. Merolla, J. Arthur, A. S. Cassidy, B. L. Jackson, D. Modha, G. Cauwenberghs, and K. Kreutz-Delgado, “Gibbs sampling with low-power spiking digital neurons,” IEEE International Symposium on Circuits and Systems, 2015.