The brain was an inspiration even for the pioneers of computing, like John von Neumann. It is for historical and practical reasons that the von Neumann architecture of classical computers looks very different from brains. In a traditional computer, memory (DRAM) and computing (CPU) are physically separated, information is processed according to a sequential program specification mediated by a central clock, and information is represented in digital binary strings. By contrast, in brains, information is processed fundamentally in parallel, with memory and computation tightly intertwined and distributed over circuits of synapses and neurons. Although there are emergent rhythms in the brain that coordinate computation as needed, there is no central clock. Finally, communication and computation in neural circuits involve both analog and digital operations. Neurons integrate synaptic input in an analog manner, which is advantageous for efficient temporal computation, but their outputs are binary-valued spikes, which is advantageous for communication. Here we will demonstrate how these brain-inspired principles can be applied to perform efficient k-nearest neighbor search on Intel's Pohoiki Springs neuromorphic research platform.
2 Loihi and Pohoiki Springs
The Loihi neuromorphic research chip is a 128-core, fully digital and asynchronous design implementing an advanced spiking neural network feature set. Its highly specialized architecture is optimized for efficiently communicating and processing event-driven spike messages. Loihi is fabricated with Intel’s standard 14nm CMOS process technology. Each one of its 128 cores implements up to 1024 digital spiking neurons with features such as variable weight precision, hierarchical and compressed network routing tables, and microcode-programmed synaptic learning rules. Additionally each Loihi chip includes three embedded x86 processors responsible for interacting with the neuromorphic cores on short timescales and converting off-chip data between conventional encodings and spikes.
Loihi also includes inter-chip communication interfaces allowing it to scale to thousands of chips in a two-dimensional mesh. Loihi has been used in several mesh-based systems to date, ranging from Kapoho Bay, a 2-chip (1x2 mesh) USB form factor device, to Nahuku, a 32-chip (4x8 mesh) custom plug-in card, to Pohoiki Beach, an early version of the Pohoiki chassis instantiating two Nahuku cards.
Pohoiki Springs, shown in Fig. 1, is the latest evolution in Loihi systems. It expands on Pohoiki Beach to a capacity of 24 Nahuku cards in a standard 19” five rack-unit chassis. The fully configured Pohoiki Springs chassis contains the following components:
24 Nahuku cards organized into three columns of 8 cards each, for a total of 768 Loihi chips in a 12x64 mesh.
Three Arria 10 FPGAs, one per column of Nahuku cards, each combining Arria 10 SX FPGA fabric with an embedded ARM processor to interface with the mesh of Loihi chips. The FPGA fabric converts the ARM AXI bus to the Loihi proprietary communications protocol, and the processor implements the networking stack. Each ARM CPU serves as the host for its allocation of Nahuku cards, responsible for data I/O and CPU-coded algorithmic interaction with its mesh of Loihi chips. The hosts communicate with the remote super host CPU over an integrated Ethernet network.
One x86-based system, a Core i5 CPU on an ATX motherboard form factor located in the rear of the Pohoiki Springs chassis. This x86 system, referred to as the super host, is used for orchestration, configuration, and other command and control duties. It can also take part in neuromorphic computation by injecting data into and interpreting results from the 768-chip mesh via the Arria 10 ARM hosts.
One embedded Ethernet switch that consolidates all internal Ethernet traffic into a single interface at the rear of the chassis.
Loihi implements a barrier synchronization mechanism that allows its architecture to seamlessly scale up from one core to Pohoiki Springs’ heterogeneous mesh of 98,304 neuromorphic cores and 2,304 embedded x86 cores. Whether within or between chips, all cores exchange barrier messages with their neighbors that signal the completion of each algorithmic timestep. The asynchronous, blocking nature of the barrier handshakes allow timesteps to run in a variable amount of real time depending on the amount of computation and communication the mesh collectively requires on each timestep. For pure computational workloads such as nearest neighbor classification, this feature allows the system to complete computations in the minimum time possible, providing latency and power benefits.
3 Nearest neighbor search
As a first demonstration of a highly scalable neuromorphic algorithm on Pohoiki Springs, we apply the neuromorphic properties introduced above to nearest-neighbor search, a problem that appears in numerous applications, such as pattern recognition, computer vision, recommendation systems, and data compression. Given a database of a large number M of d-dimensional data points, the k-nearest neighbor (k-NN) algorithm maps a specified search key to the k closest matching entries.
The performance of different k-NN algorithms is measured by the time complexity of a search, as well as the time and space complexity required to prepare and store the data structures used to perform the search, referred to as the search index.
Exact approaches, however, suffer from the curse of dimensionality, and for large high-dimensional databases they are too computationally expensive to use in practice on conventional hardware. In recent years a variety of efficient approximate k-NN implementations have been developed and are in wide use today. These employ diverse approaches such as dimension reduction, locality-sensitive hashing, and compressed sensing [1, 8, 5]. Recent efforts to fairly benchmark these methods have shown that even these approximate methods must choose between minimizing either query time or index preparation time.
Here we focus on the case of nearest neighbor search on the unit sphere, where distance refers to angular distance or cosine similarity between (normalized) vectors. For this distance metric, exact nearest neighbor search can be performed by a matrix-vector product (MVP) between a large data matrix of high-dimensional pattern vectors and the search key. Given the data matrix P, whose rows hold the normalized data points, and search key x, the matrix-vector multiplication approach to k-NN yields a score vector of matches, the match vector:

m = P x. (1)

The entry of the match vector m with maximum amplitude identifies the nearest neighbor. For the k-NN problem, the set of k components with largest amplitudes represents the solution.
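As a CPU reference point, the exact computation of Eq. (1) followed by top-k selection can be sketched in a few lines of NumPy (the data values below are illustrative, not from the paper's datasets):

```python
import numpy as np

def knn_mvp(P, x, k):
    """Exact k-NN on the unit sphere via a matrix-vector product.

    P : (M, d) data matrix with unit-norm rows
    x : (d,) unit-norm search key
    Returns the indices of the k best matches, best first.
    """
    m = P @ x                              # match vector, Eq. (1)
    top = np.argpartition(-m, k - 1)[:k]   # unordered top-k in O(M)
    return top[np.argsort(-m[top])]        # order the k winners by score

# Tiny illustrative database of four normalized 3-d points
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.6, 0.8, 0.0]])
x = np.array([0.6, 0.8, 0.0])
print(knn_mvp(P, x, 2))  # index 3 (the identical point) comes first
```

The argpartition/argsort split keeps the selection linear in M, which matters since the brute-force cost is dominated by the M x d product itself.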
Our simple approximate algorithm computes and searches the matrix-vector product (1) on neuromorphic hardware. By encoding the data using spike-timing patterns, we can implement k-NN on Pohoiki Springs at large scale.
3.1 Data encoding
Coding conventional data in a manner that is amenable for sparse, spike-based processing is a key aspect of neuromorphic algorithm design. We explain our approach using the Tiny Images dataset as an example. Similar processing is applied to the GIST-960 and GloVe datasets, and can also be applied to other datasets. For the Tiny Images dataset, the data dimensionality is d = 3072 pixel values per image, and the number of data points will scale up to one million. To start, the data is mean-centered and normalized.
Image and other data can be reduced in dimensionality quite easily, often with minimal loss of information. In this work, we use PCA and ICA to transform input data patterns to lower-dimensional representations. For large datasets, a representative subset can be used to compute the transform matrix. A subset of 20,000 training data points from Tiny Images was used to compute the principal components, with the top D components kept for dimensionality reduction, down from 3072.
Following PCA reduction, the fast ICA algorithm is used to find the ICA mixing matrix M. The mixing matrix is a unitary matrix that rotates the PCA coefficients into a basis with sparse coefficients. The PCA-ICA combination provides an encoding matrix,

A = M^T U^T, (2)

where the columns of U are the top D principal components. The matrix A is computed offline once and stored for later online use to encode search keys. Specifically, an image x is represented as the sparse coefficients c of the ICA basis vectors (Fig. 2):

c = A x. (3)
The vector c is a sparse representation of the image in a reduced dimensionality, and k-NN can be performed in the lower-dimensional space. To do so, we encode the dataset in this reduced space, with

P' = P A^T, (4)

where the rows of the data matrix P hold the original data points. Dot products in this reduced space then remain very close to the true dot products, (A x_i) . (A x_j) ~ x_i . x_j. Without dimensionality reduction, where D = d, these dot products would be exact. By choosing D < d, we (1) lower the computational cost of the nearest neighbor search with minimal accuracy loss and (2) obtain a sparse lower-dimensional encoding of the search key that may be efficiently transferred to Pohoiki Springs as spikes.
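The encoding step can be sketched as follows; PCA is computed from the SVD of a mean-centered sample, and a random orthogonal matrix stands in for the ICA rotation (in practice FastICA supplies it). Since both factors are orthogonal on the retained subspace, dot products are preserved up to the truncation error; the synthetic data below is rank-D, so here they are preserved exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-8 data: 200 points in d = 32 dimensions, reduced to D = 8
X = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 32))
X -= X.mean(axis=0)                        # mean-center
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U = Vt[:8]                                 # top D principal directions, (D, d)

# Stand-in for the unitary ICA rotation: any D x D orthogonal matrix
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
A = Q @ U                                  # encoding matrix, (D, d)

x1, x2 = X[0], X[1]
c1, c2 = A @ x1, A @ x2                    # reduced-space coefficients
# Reduced-space dot product matches the original one (rank-D data)
print(abs(c1 @ c2 - x1 @ x2))              # ~ 0
```

On real data the top D components capture only most of the variance, so the equality becomes the approximation discussed above.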
3.2 k-NN with spiking neurons
Our neuromorphic algorithm computes k-NN classification with a single layer of integrate-and-fire neurons, where each neuron's membrane voltage represents the match of a particular data point with the search key. The synaptic weights feeding into a neuron encode the stored data point, and the timing of presynaptic spikes represents the search key. By pruning small components from the sparse search key representation, we reduce the amount of information that has to be communicated by spikes without significantly degrading the accuracy.
To represent search keys with spike timing, we adopt previous approaches of spike time latency codes, in which earlier spikes represent larger magnitudes [10, 16]. To represent negative amplitudes, the number of inputs is double the dimension D of the coefficient vector. Negative amplitudes are turned into positive amplitudes and represented as dual components in the second half of the input vector. Thus large positive and negative amplitudes in the coefficients both result in early spikes. The inputs therefore have antagonistic receptive fields, like the 'on-cells' and 'off-cells' seen in neuroscience.
A search key is represented by a spike pattern within an input window of length T. In this demonstration the window length is T = 60 timesteps:

s_j(t) = delta(t - t_j), with t_j = T (1 - |c_j|), (5)

where c_j are the coefficients of the encoded search key and delta denotes the Kronecker delta function over discrete timesteps t = 1, ..., T. The inputs j = 1, ..., D encode positive c_j, while the inputs j = D+1, ..., 2D encode negative c_j. The larger the absolute value of c_j, the earlier the corresponding spike. Pruning of small components is implemented by the input threshold theta_in: components with absolute values smaller than the threshold are dropped. This reduces spike traffic on the hardware, though not all information in the input vector is transmitted. The setting used typically removes about one quarter to one third of the spikes.
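The latency code of Eq. (5) can be sketched as a function mapping a coefficient vector to a list of (input index, spike time) pairs. The on/off duplication, pruning threshold, and linear latency map follow the text; the exact rounding and clipping conventions are assumptions:

```python
import numpy as np

def encode_key(c, T=60, theta_in=0.05):
    """Latency-code coefficients c (assumed scaled to [-1, 1]) as spikes.

    Input j < D carries positive c[j] ('on' cell); input j + D carries
    negative c[j] ('off' cell). Larger |c[j]| -> earlier spike; components
    with |c[j]| < theta_in are pruned and emit no spike at all.
    """
    D = len(c)
    spikes = []
    for j, cj in enumerate(c):
        if abs(cj) < theta_in:
            continue                          # pruned component: no spike
        idx = j if cj > 0 else j + D          # on-cell or off-cell index
        t = int(round(T * (1.0 - abs(cj))))   # latency: big values spike early
        spikes.append((idx, min(max(t, 0), T - 1)))
    return spikes

# c[0] = 1.0 spikes immediately, c[1] = -0.5 becomes an off-cell spike
# at mid-window, c[2] = 0.01 is pruned
print(encode_key(np.array([1.0, -0.5, 0.01]), T=60))  # [(0, 0), (4, 30)]
```

Note how the sparsity of the ICA coefficients translates directly into fewer spike messages on the mesh.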
The synaptic weight matrix is a concatenation of the encoded data matrix and a sign-inverted copy of it, matching the doubled on/off input representation.
Each input spike is broadcast to all pattern match neurons, where it is weighted by the synaptic strength and added to each neuron's synaptic current. The postsynaptic currents are in turn integrated by a standard integrate-and-fire neuron. To perform these computations, the Loihi chip is configured to implement neurons with the following discrete-time dynamics:

u_i(t) = u_i(t-1) + sum_j W_ij s_j(t), (6)
v_i(t) = v_i(t-1) + u_i(t), (7)

where u_i and v_i represent the synaptic current and voltage in neuron i. When the voltage crosses the threshold theta_v, the neuron emits a spike and v_i is reset to 0. A long refractory period prevents pattern match neurons from spiking more than once.
The spike encoding of the search key (5), together with synaptic multiplication and neuronal integrate-and-fire dynamics, leads to a temporal code of the output spikes that reflects the order of the dot products between the search key and the data points. Note that the area under the curve of a neuron's synaptic current, (6), can be written as sum_j W_ij (T - t_j), which is proportional to the dot product. This quantity is computed by the integration of the current in the voltage variable (7) (Fig. 3, right). The temporal order of output spikes, generated when the voltages exceed the threshold theta_v, reflects the approximate order of matches in the search. Thus, the detection of the first k output spikes implements k-NN classification. Neurons that are too weakly activated by the search key to surpass the threshold do not spike at all. They represent weak matches that are excluded from even being ranked.
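The dynamics of Eqs. (6)-(7) can be simulated directly to check the key property: neurons storing closer matches reach threshold earlier. The sketch below uses the latency code of Eq. (5) with assumed parameter values:

```python
import numpy as np

def first_spike_times(W, spikes, T=60, theta=200.0):
    """Leak-less integrate-and-fire layer, Eqs. (6)-(7).

    W      : (N, 2D) weight matrix (data and its sign-inverted copy)
    spikes : list of (input index, spike time) pairs for the search key
    Returns each neuron's first-spike time (np.inf if it never fires).
    """
    N = W.shape[0]
    u = np.zeros(N)                        # synaptic current, Eq. (6)
    v = np.zeros(N)                        # membrane voltage, Eq. (7)
    t_out = np.full(N, np.inf)
    for t in range(2 * T):                 # integrate past the input window
        for idx, t_sp in spikes:
            if t_sp == t:
                u += W[:, idx]             # event-driven weight lookup
        v += u                             # leak-less integration
        fired = (v >= theta) & np.isinf(t_out)
        t_out[fired] = t                   # refractory: record first spike only
    return t_out

# Two stored patterns in D = 2: neuron 0 matches the key, neuron 1 does not
P_enc = np.array([[1.0, 0.0], [0.0, 1.0]])
W = np.hstack([P_enc, -P_enc]) * 10        # integer-like synaptic weights
key_spikes = [(0, 0)]                      # key c = (1, 0): one early spike
t = first_spike_times(W, key_spikes, theta=100.0)
print(t[0] < t[1])                         # closest match fires first -> True
```

Here neuron 1 receives no net drive and never crosses threshold, illustrating how weak matches are excluded from ranking altogether.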
The approach presents tradeoffs between computation precision, energy consumption, and time, which can be adjusted to a particular problem by parameter settings:
The input threshold theta_in governs the trade-off between sparsity of the input spike patterns and the fidelity with which small-amplitude components of the search key are represented.
The spiking threshold theta_v governs the average integration time and thereby the precision of the returned match list: raising the threshold increases precision at the cost of compute time.
The length T of the input window determines the discretization error in the representation of the search key components by spike times.
The synaptic resolution determines the discretization error in the representation of the stored data points.
The threshold for synaptic pruning determines how many small weights are dropped from the stored patterns.
The adjustments of certain parameters should be coordinated to achieve the best performance at minimal resource use: the integration window and the synaptic resolution determine the resolutions of the search key and the data points, respectively, and it is reasonable to choose similar resolutions for both. Similarly, the input threshold and the synaptic pruning threshold should be set to similar cut-off levels. The dot product is computed most precisely in neurons that spike exactly at the end of the input window, so the spike threshold should be tuned jointly with the input window.
To map nearest neighbor search to Pohoiki Springs, we take a modular approach. Subsets of the data are stored on individual Loihi chips, and the full database is distributed across the 768 chip mesh. The module defines the architecture for one chip, e.g. it sets neuron model parameters, sets the weights, and instantiates the neurons used to broadcast the input keys.
To execute a search, the input query vector is converted into a temporal spike code, and a network of routing neurons distributes the query spikes throughout the mesh. The similarity comparison is computed by a layer of output neurons that integrate the contribution of the input spikes. The spike times of the output neurons are detected by spike counters in the x86 processors embedded in each Loihi chip, which send the results back to the hosts and super host over message-passing channels. Finally, the super host merges and filters the top matches based on the timestamps attached to the returned messages.
4.1 Single chip nearest neighbor search
The k-NN module for a single chip consists of 2D spiking inputs and one spiking output neuron per stored pattern, with almost full connectivity. A subset of the patterns is stored on each chip in the weights between input and output neurons. The data to store on a chip is represented by a submatrix of the full data matrix, which is encoded by the PCA-ICA encoding matrix, as in (4).
The input weight matrix is encoded in the same manner as the search key coefficient vector and is correspondingly doubled in length to match the negative components. The weights are rescaled to the signed integer range supported by the hardware, rounded to integer values, and stored as synaptic weights on a chip. We tuned the system such that the best matches produce spikes around timestep 60. The relationship between the timing of the output spikes from a single-chip query and the dot product is visualized in Fig. 5.
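The rescaling of encoded patterns to integer synaptic weights can be sketched as follows; the symmetric 8-bit range and the rounding mode are assumptions (Loihi's weight precision is configurable):

```python
import numpy as np

def quantize_weights(P_enc, n_bits=8):
    """Map real-valued encoded patterns to signed integer weights.

    Scales the matrix so its largest magnitude hits the top of the
    integer range, then rounds; also returns the scale factor needed
    to interpret results in original units.
    """
    w_max = 2 ** (n_bits - 1) - 1           # e.g. 127 for 8-bit signed
    scale = w_max / np.max(np.abs(P_enc))
    W = np.round(P_enc * scale).astype(np.int32)
    return W, scale

P_enc = np.array([[0.5, -1.0], [0.25, 0.1]])
W, scale = quantize_weights(P_enc)
print(W)  # [[  64 -127] [  32   13]]
```

The rounding step is the source of the synaptic-resolution discretization error discussed among the parameter trade-offs above.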
4.2 Distributing the search over many Loihi chips
The full pipeline of the execution phase is illustrated in Fig. 6. Given a query vector, the super host CPU computes its ICA-transformed coefficients and the corresponding spike encoding, which are then sent to the Pohoiki Springs host CPUs for further distribution into the Loihi mesh as a list of spike indices and spike times. Each host sends the list to the first x86 processor embedded in its column of 256 Loihi chips (Fig. 6, K). The embedded processor K injects the spikes into a network of routing neurons (Fig. 6, R) that distribute the spikes to the routing neurons of neighboring Loihi chips. The neighboring chips in turn route these spikes onward to their neighbors, propagating through the column as a wavefront of query spikes that advances one layer of chips per timestep.
The routing neurons in each chip also project to a local population of integrate-and-fire neurons implementing that chip's similarity calculations over its stored patterns. The spikes activate each chip's subset of the weight matrix in parallel throughout the mesh, driving the temporal integration of all pattern match neurons as illustrated in Fig. 3 (Fig. 6, MVP).
4.3 Detecting and aggregating match results
As the pattern match neurons integrate to threshold, signifying close matches, they send spikes to hardware spike counters contained within their local embedded x86 processors (Fig. 6, M) for match aggregation. On each timestep, the processors detect candidate pattern matches by nonzero counter values and send these as messages to their neighboring processors in the same propagating wavefront manner as for the query distribution.
Each processor in the wavefront sequence asynchronously aggregates all of the results it receives before communicating them onward to the next processor. Once a processor has sent k results, it stops sending messages and signals the processors before it to also stop sending results. The final root processor on the last Loihi chip sends the fully aggregated result back to the host and super host as soon as it has k results to send.
The output message from the last processor is an ordered list of the first k matches (or more than k if there are ties). The ordering directly reflects the order in which matches were found and does not require the host and super host CPUs to do any sorting. Each match also includes the timestep on which it was found, which can be used to identify and break ties for greater recall accuracy.
The super host is responsible for aggregating the final results by merging and filtering the three ordered lists of matches it receives from the Arria 10 ARM hosts, a negligible extra computation. Although our search implementation seamlessly scales up to the entire 768-chip mesh, controlled by a single host, the gain in extra I/O bandwidth from partitioning the Loihi mesh into three host columns outweighs the cost of merging three sequences of matches into one. In fact, due to the highly asymmetric dimensions of each column of Loihi chips (4x64), the barrier synchronization time per column of 14.3 us is only marginally faster than the barrier synchronization time across all 768 chips (12x64), 16.2 us.
For latency-optimized searches with coarse temporal discretization of the input window, the final timestep will typically include a number of tied entries. For the best possible recall accuracy, the super host can perform a final k'-NN search over the tied entries, where k' is the number needed to complete a full set of k nearest neighbors. Since the number of ties to search is orders of magnitude smaller than the size of the full dataset, this extra postprocessing step adds only a small additional latency to the query, which is included in the results that follow.
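The tie-breaking step on the super host can be sketched as an exact re-scoring of only the tied candidates. The function and variable names are hypothetical; CPU-side copies of the encoded key and data are assumed available:

```python
import numpy as np

def break_ties(c, P_enc, confirmed, tied, k):
    """Complete a top-k list by exactly re-scoring tied candidates.

    confirmed : indices already ordered by spike time (earlier = better)
    tied      : candidate indices sharing the final timestep
    Only k' = k - len(confirmed) of the tied entries are needed.
    """
    k_prime = k - len(confirmed)
    scores = P_enc[tied] @ c                # exact dot products, tied set only
    order = np.argsort(-scores)[:k_prime]   # best k' of the tied candidates
    return list(confirmed) + [tied[i] for i in order]

c = np.array([1.0, 0.0])
P_enc = np.array([[1.0, 0.0], [0.9, 0.1], [0.5, 0.5], [0.0, 1.0]])
# The mesh confirmed index 0 first; indices 1-3 tied on the last timestep
print(break_ties(c, P_enc, confirmed=[0], tied=[1, 2, 3], k=3))  # [0, 1, 2]
```

Because the tied set is tiny relative to the dataset, this exact pass costs microseconds rather than the milliseconds a full brute-force search would take.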
5 Experimental results
For benchmarking, we follow the procedures described in Aumüller et al. Additionally, we measure and estimate power consumption in order to compare energy expenditures between different implementations. The ground truth is based on the normalized dot product (cosine distance) as computed on a CPU. We validate the algorithm on the Tiny Images dataset, as well as GIST-960 and GloVe.
5.1 Performance evaluation
Our first experiment measures the recall performance of k-NN, that is, how well the algorithm returns the same results as the ground truth. In Fig. 7, left, we show the results of searching Tiny Images datasets of varying sizes from 76,800 to one million. For one case, we classify an input that is randomly chosen from the dataset and pixel-wise corrupted with Gaussian noise (as in Fig. 6). For the other cases, we query the dataset with an input that was excluded from the dataset. Recall is calculated as the fraction of the returned data points that are no further from the search key than any data point in the ground-truth top-k set (Fig. 7, left).
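The recall definition above (a returned point counts if it is no farther from the key than the worst ground-truth neighbor) can be written directly; the tolerance constant is an assumption to absorb floating-point round-off:

```python
import numpy as np

def recall(returned, truth, dists):
    """Fraction of returned points at least as close as the worst true neighbor.

    returned, truth : index lists of equal length k
    dists           : distance from the search key to every data point
    """
    eps = 1e-12                              # tolerate float round-off on ties
    worst_true = max(dists[i] for i in truth)
    hits = sum(1 for i in returned if dists[i] <= worst_true + eps)
    return hits / len(truth)

dists = np.array([0.1, 0.2, 0.3, 0.3, 0.9])
# Index 3 ties index 2 at distance 0.3, so swapping them costs no recall
print(recall(returned=[0, 1, 3], truth=[0, 1, 2], dists=dists))  # 1.0
```

This distance-based definition is why exchanging tied entries, as the postprocessing step may do, does not reduce the reported recall.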
For three one-million-pattern datasets, we also evaluated the (1+eps)-approximate recall performance, which defines an expanded window in which the top k nearest neighbors may be found. Fig. 7, right, shows approximate recall as a function of eps for the Tiny Images, GIST-960, and GloVe datasets.
We characterize the system's query latency over the range of times at which Pohoiki Springs responds with the first and the k-th match. Since the neuromorphic algorithm identifies solutions in temporal order, the closest match is always found before the last match, and this latency spread increases with dataset size (Fig. 8, right).
Depending on spike traffic, each barrier-synchronized timestep of the computation can have a different duration (Fig. 8, left). In the absence of excessive spike activity, the system typically sustains just over 13 us per timestep for the 1M-pattern dataset workload and 5.8 us per timestep when processing 76,800-pattern datasets. However, slowdowns are observed during other periods. The first slowdown is noticeable near the end of the input window, when many of the smaller coefficients above threshold are communicated as spikes that have to be routed throughout the mesh. Output spikes begin to arrive near timestep 80, indicating nearest neighbors and slowing down the system; more time is needed to collect output spikes for larger k. Interestingly, the observed slowdowns are due to the load on Loihi's embedded x86 cores from processing incoming and outgoing spikes, not congestion in the neuromorphic mesh interconnect or cores.
5.2 Power and energy
The total power of Pohoiki Springs, including power supplies, FPGAs, ARM hosts, ATX motherboard, and Ethernet switch, is measured at the plug while running queries at maximum throughput. Estimates of the different power components for the Loihi chips are obtained by extrapolating measurements on an instrumented board containing 32 Loihi chips and running 76,800-pattern search queries.
Table I provides a breakdown of the Loihi mesh power consumption for a variety of sustained query workloads. Static power is due to leakage when all circuits are fully powered. Almost all leakage can be attributed to the neuromorphic cores, which dominate chip area. The x86 power is dynamic power consumed by the x86 processors, approximately 90% of which is idle power. Neuro power is dynamic power attributed to the neuromorphic cores.
Dataset size    Static power    x86 dynamic    Neuro dynamic
76,800          3.34 W          2.09 W         1.56 W
76,800          3.34 W          2.10 W         1.80 W
76,800          3.34 W          2.14 W         1.58 W
1M              53.4 W          32.2 W         16.2 W
1M              53.4 W          31.7 W         17.1 W
1M              53.4 W          31.2 W         10.8 W

Pohoiki Springs wall power (1M workload): 258 W
CPU1 TDP*: 140 W
Table II further breaks down the energy consumption of a single search query. A reset phase occurs after each query to prepare the system for the next query. Total dynamic energy therefore includes both the energy required to reset and the energy required to query. The query dynamic energy is further broken down into x86 and Neuro components by isolating the embedded x86 processor workload and measuring it separately.
For extrapolation to the 1M-pattern workload, static power and x86 idle power are assumed to remain constant per chip. Neuro and x86 dynamic energy per chip (in excess of idle activity) are assumed to scale linearly with the number of Loihi timesteps. Reset energy per chip is constant for every query.
Table II also provides the approximate energy that a Core i9-7920X CPU1 requires to perform the same matrix-vector product k-NN search that Pohoiki Springs computes with spiking neurons. The energy is estimated based on the measured runtime of a NumPy float32 implementation (non-batched) multiplied by the CPU’s thermal design power.
* Does not include system DRAM energy.
5.3 Dataset processing and programming
Before searches may be executed, a given dataset must be processed and Pohoiki Springs must be configured. This happens over a series of three steps: (1) a dataset preprocessing step to compute the encoding matrix , (2) an index build step to compute the weights to be programmed into Pohoiki Springs, and (3) a programming step that writes all computed weights to the Loihi mesh.
Dataset preprocessing entails computing PCA and ICA on a subset of the dataset. This step optimizes the data encoding for the Pohoiki Springs algorithm and only needs to be computed once per class of data. For data with a few thousand dimensions such as images, typically a subset of 10 thousand or more is needed. Here, we use 20,000 samples for computing the encoding matrix. It takes 68, 71, and 132 seconds to compute the matrices for GloVe, GIST-960 and Tiny Images, respectively, using an Intel Core i9-7920X CPU1.
The index build step involves transforming the given dataset by the encoding matrix, i.e. computing (4), and writing the resulting weight submatrices for each Loihi chip to disk. This is implemented as a batched NumPy computation for each chip's subset of the dataset. For comparison to conventional k-NN implementations, we measure the time required to generate a single chip's weights and scale the time to the size of the dataset.
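The index build amounts to a batched encode-and-shard loop; a minimal sketch with illustrative sizes (the per-chip capacity and the in-memory sharding, rather than writing to disk, are assumptions):

```python
import numpy as np

def build_index(X, A, patterns_per_chip):
    """Encode dataset X (M, d) with encoding matrix A (D, d), as in Eq. (4),
    and split the result into per-chip weight submatrices."""
    P_enc = X @ A.T                        # batched NumPy encode, (M, D)
    return [P_enc[i:i + patterns_per_chip] # shard rows across chips
            for i in range(0, len(P_enc), patterns_per_chip)]

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 6))               # 10 patterns, d = 6
A = rng.normal(size=(3, 6))                # D = 3 encoding matrix
chips = build_index(X, A, patterns_per_chip=4)
print([c.shape for c in chips])            # [(4, 3), (4, 3), (2, 3)]
```

Since each shard is independent, the encode step parallelizes trivially, which is what makes the index build time scale linearly with dataset size.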
In the final programming phase, the encoded dataset weights are loaded and written to each chip in the mesh along with all other routing tables and register values required to configure the -NN application. This is a very slow step due to the current unoptimized state of the Pohoiki Springs I/O subsystem. The programming time for a 192-chip column was measured to be 893 seconds, or about 4.6 seconds per Loihi chip. Incrementally adding additional data points to the system requires on the order of 1ms to encode and program.
5.4 Comparison to state-of-the-art
In Table III, we compare the system's performance results on GIST-960 to state-of-the-art k-NN implementations: Annoy, Inverted file with exact post-verification (IVF), and Hierarchical Navigable Small World Graph (HNSW) [4, 14]. Comparison results were taken from Aumüller et al. Note that our algorithm computes angular distance (cosine similarity) and uses angular distance as ground truth, while the conventional implementations operate on Euclidean distances. As a baseline reference point, we also compare performance results to our PCA/ICA-compressed brute-force algorithm executed on an Intel i9-7920X CPU1.
Table III reports, for each method: recall, query latency (ms), throughput (queries/s), index build time (s), index size (kB), and whether incremental insertions are supported.
The Pohoiki Springs query latency includes 220 us of preprocessing on the CPU to compute the ICA-transformed key, Eq. (3), and 300 us of CPU postprocessing to exhaustively break ties in the final timestep. These extra times contribute to the search latency but do not affect throughput, since they can be computed concurrently with unrelated Pohoiki Springs queries. Conversely, search throughput is degraded by the 230 us reset time on Pohoiki Springs, which falls off the latency-critical path.
Our results show that neuromorphic k-NN classification achieves recall accuracy comparable to the other algorithms, reporting 77-97% of the true top-k results, with 3-4x better search latency and throughput than Annoy and IVF.
Our algorithm is also favorable in its simplicity, which supports a fast index build time and the smallest memory footprint (Table III, Index size). Hence, while the highly query-optimized HNSW algorithm outperforms Pohoiki Springs in search speed by about 2x, it vastly underperforms it in index build time. Further, because the Pohoiki Springs implementation organizes its index as a simple distributed array of data points, encoded by dense network weights, inserting a new point online during execution is an operation that requires negligible time (feasibly under 1ms).
Fundamentally, the computation of (1) and the subsequent top-k search can be parallelized to a very fine level of granularity. This property is difficult to exploit with conventional architectures because communication and processor overhead come to dominate at high levels of concurrency. The Pohoiki Springs neuromorphic architecture supports sparse spiking representations and low-overhead barrier synchronization, and these features can be harnessed to provide a finely parallelized implementation of k-NN classification that is fast, scalable, and energy efficient.
6.1 Neuromorphic algorithm and data encoding
Here we propose a simple neuromorphic implementation of k-NN classification using a layer of conventional spiking neurons, each receiving inputs through synapses whose strengths represent a stored data point. The algorithmic innovation lies in how the input to these neurons is encoded and combined with the synaptic and neural dynamics to produce the desired computation with minimal spike traffic. We use a latency code in which larger amplitudes spike early and spikes representing small amplitudes are suppressed. With this input encoding and leak-less integration, the resulting membrane voltages exactly represent the matches (dot products) at the end of the input window. However, the computation becomes approximate due to discretization and the translation of the membrane voltages into output spikes based on a fixed chosen threshold.
Data preprocessing consists of PCA and ICA for dimensionality reduction and sparse spike encoding. The computation is relatively cheap since it only needs to be computed once on a representative sample of the data. The procedure is optional in cases where the data is already sparse with manageable dimensionality.
The search implementation uses brute-force parallelism to perform a simple dot product computation, compared to the complex hashing and search strategies of conventional state-of-the-art nearest neighbor search algorithms. Such a brute-force neuromorphic implementation achieves efficiency at scale on Pohoiki Springs by taking advantage of the architecture’s fine granularity of distributed, co-located memory and computing elements in combination with rapid synchronization of temporally coded and integrated spike timing patterns.
6.2 Nearest-neighbor search results
Our neuromorphic approximate k-NN implementation on Pohoiki Springs uniquely optimizes both index build time and search speed compared to state-of-the-art approximate nearest neighbor search implementations at equal recall accuracy. Although batched implementations on both CPUs and GPUs can boost query throughput well beyond the levels evaluated here, up to 1,000-50,000 queries per second, the latencies of those implementations are 100x or more worse.
Additionally, our neuromorphic implementation supports adding new patterns to the search dataset in O(1) complexity, on the timescale of milliseconds. Conventionally, only brute-force k-NN implementations can support O(1) pattern insertion, which then comes at the cost of O(M) search latency. For the million-pattern datasets evaluated, this difference represents over 20x slower search speeds. The ability to add to the search database without interrupting online operation may be highly desirable in latency-sensitive settings where the database needs to include points derived from events happening in real time. Such applications could include algorithmic trading of financial assets, security monitoring, and anomaly detection in general.
The neuromorphic approach described also allows simple adjustments that trade accuracy for improvements in latency. These adjustments can be made dynamically, without requiring hours of index re-building. By stretching or compressing the encoding of the input spike times (which is done on the CPU), as well as adjusting the thresholds of the output neurons, one may dynamically configure k-NN search for higher resolution or lower latency, as desired. However, accuracy is limited by various sources of noise.
One source of noise in the implementation comes from discretization error. The exact timing of input and output spikes is locked to discrete timesteps. With an integration window of 60 timesteps, the dynamic range of each input dimension is approximately six bits. Similarly, output spikes are discretized into timesteps, giving finite resolution. Increasing the time scale reduces this discretization noise, at the cost of longer execution time.
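The six-bit figure follows from the window length (log2 60 ≈ 5.9). A small sketch of this discretization and its effect on a dot product; only the 60-timestep window comes from the text, the rest is illustrative:

```python
import numpy as np

WINDOW = 60   # integration window in timesteps; log2(60) ≈ 5.9 bits per dimension

def quantize_to_timesteps(x, window=WINDOW):
    """Map values in [0, 1] onto discrete spike timesteps and back."""
    steps = np.clip(np.round(x * (window - 1)), 0, window - 1)
    return steps / (window - 1)

rng = np.random.default_rng(1)
a, b = rng.random(128), rng.random(128)
exact = a @ b
coarse = quantize_to_timesteps(a) @ b
# Relative error remains small, consistent with ~6-bit input resolution.
print(abs(exact - coarse) / exact)
```

Doubling the window would add roughly one bit of resolution per dimension while doubling the integration time, which is the trade-off described above.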
The more significant source of error is the temporal coding of output spikes, which is critical for efficiently identifying the top matches. In the computation, the desired dot product is exactly proportional to a pattern match neuron’s membrane voltage only at the end of the input window. However, in order to search for near matches, the parameters must be tuned to permit spiking at times away from the exact end-of-window, thereby introducing inaccuracies. In general, the thresholds should be tuned so that the pattern match neurons spike near the end of the integration window. Threshold re-tuning is easily executed and can be rapidly broadcast to all cores in the system.
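A toy model of this end-of-window readout, under the simplifying assumption that each input spike switches on a persistent current equal to its synaptic weight (a sketch only, not the actual Loihi microcode):

```python
import numpy as np

def temporal_match(weights, values, threshold, window=10):
    """Toy time-to-first-spike matcher.

    Inputs in [0, 1] are encoded as spike times t_i = window * (1 - x_i), so
    larger values spike earlier. Each input spike switches on a persistent
    current equal to its synaptic weight, so a match neuron's voltage at the
    end of the window is proportional to the dot product weights @ values.
    Neurons that cross `threshold` earliest are the best matches; raising the
    threshold restricts spiking to near matches, lowering it admits weaker ones.
    """
    spike_time = np.round(window * (1.0 - np.asarray(values))).astype(int)
    voltage = np.zeros(len(weights))
    first_spike = np.full(len(weights), np.inf)
    for t in range(1, window + 1):
        voltage += weights[:, spike_time < t].sum(axis=1)  # active input currents
        newly = (voltage >= threshold) & np.isinf(first_spike)
        first_spike[newly] = t
    return first_spike  # earlier spike = better match; inf = never spiked

W = np.array([[1.0, 0.0, 0.0, 0.0],   # stored patterns, one per match neuron
              [0.0, 1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
spikes = temporal_match(W, values=[1.0, 0.2, 0.0, 0.0], threshold=5.0)
print(spikes)   # best match (row 0) fires first; row 1 never reaches threshold
```

In this toy setting the threshold plays the role described in the text: set near the end-of-window voltage of a perfect match, only near matches spike, but any spike before the window closes reflects a partially integrated (hence inexact) dot product.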
The main shortcoming of the demonstrated implementation is that it only supports a dot product (cosine) distance metric. Many practical k-NN applications require Euclidean, Hamming, or other distance metrics. This limits the application space for the neuromorphic implementation in its current form.
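Note that for data that can be unit-normalized, Euclidean rankings coincide with dot-product rankings, since ‖a − b‖² = ‖a‖² + ‖b‖² − 2a·b = 2 − 2a·b on unit vectors; the restriction therefore bites mainly when normalization is not acceptable. A quick numerical check of this equivalence:

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.standard_normal((500, 64))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # unit-normalize stored patterns
q = rng.standard_normal(64)
q /= np.linalg.norm(q)                           # unit-normalize the query

by_dot = np.argsort(-(P @ q))                            # descending similarity
by_euclid = np.argsort(np.linalg.norm(P - q, axis=1))    # ascending distance
assert (by_dot == by_euclid).all()   # identical rankings on unit-norm data
```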
The approximate k-NN classification implementation developed here exploits some, though certainly not all, of the fundamental neuromorphic properties of Pohoiki Springs. First, it exploits fine-grain hardware parallelism with fully integrated memory and computation. The computation of the closest matches is distributed over Pohoiki Springs’ 100,000 cores in which the patterns themselves are stored. Second, the algorithm uses the timing of events to encode information and to simplify computation. In this case, the multiply-accumulations of a conventional matrix-vector multiply operation are replaced by event-driven weight lookups and integration over time. Finally, the implementation intentionally introduces and exploits computational sparsity. The algorithm transforms the input data representations to prefer zero components over nonzero ones, which the hardware then exploits by implicitly skipping all computation related to the zeros.
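The replacement of multiply-accumulates by event-driven lookups can be sketched as follows: only nonzero input components generate events, and each event triggers one weight-column lookup and add, so all work tied to zero components simply never happens (illustrative NumPy sketch, not the hardware's actual dataflow):

```python
import numpy as np

def event_driven_matvec(weights, sparse_input):
    """Accumulate weight columns only for nonzero input components.

    Equivalent to weights @ sparse_input when the nonzero entries are binary
    events; zero components trigger no lookups and no additions at all.
    """
    acc = np.zeros(weights.shape[0])
    for i in np.flatnonzero(sparse_input):   # iterate over events, skipping zeros
        acc += weights[:, i]                 # weight-column lookup + add
    return acc

W = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([0.0, 1.0, 0.0, 1.0])           # sparse, binary event vector
print(event_driven_matvec(W, x))             # matches W @ x: [ 4. 12. 20.]
```

The work scales with the number of events rather than the input dimension, which is why sparsifying the input representation directly reduces computation on the hardware.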
On the other hand, this example does not exploit many other important neuromorphic properties provided by Loihi. All weights and network parameters in the system are precomputed offline and remain static once loaded into the system. This leaves the plasticity features of the hardware untouched. The computation is shallow and feed-forward, with the neuromorphic domain only responsible for computing a single matrix-vector product. Search latency and dynamic energy remain dominated by von Neumann processing, which is not ideal. In general, we expect greater gains as a greater proportion of the overall application falls within the neuromorphic domain, especially as recurrent feedback loops are introduced to accelerate convergence and to support pattern superposition. Such enhancements, the focus of ongoing work, promise to greatly boost the network’s storage capacity and performance.
Some aspects of the results suffer from a lack of optimization at both hardware and software levels, a consequence of the early prototype status of the Pohoiki Springs system. Full utilization of the system resources would increase pattern capacity by at least 6x. Programming times could be reduced by well over 10x with optimized software and I/O infrastructure. Much of the algorithmic latency and energy is dominated by the relatively trivial ancillary computation mapped to Loihi’s embedded x86 processors, which were only minimally customized for their role in neuromorphic interfacing. Loihi itself is research silicon and factors of performance improvement are feasible with design optimizations, especially relating to multi-chip scaling.
Nevertheless, the k-NN implementation prototyped here as the first application to run on Pohoiki Springs compares favorably to state-of-the-art solutions running on highly mature and optimized conventional computing systems. The nearest neighbor search problem is central to a large variety of applications, and this is just one of a wide space of algorithms supported by Loihi and Pohoiki Springs. This suggests a promising future for neuromorphic systems as the technology is further matured and advanced to production standards.
All CPU performance measurements referenced in this work were obtained from two systems. The systems, as annotated in the text, have the following properties:
CPU1: Intel Core i9-7920X CPU (12 cores, Hyper Threading enabled, 2.90GHz, 16.5 MB cache) with 128 GB RAM. OS: Ubuntu 16.04.6 LTS. Python version 3.5.2, NumPy version 1.18.2. Energy measurements were obtained using Intel SoC Watch version 2.7.0 over a duration of 120 seconds with continuously repeating workloads.
CPU2: Used to obtain measurements referenced from . As described in that work, evaluations were performed in Docker containers on Amazon EC2 c5.4xlarge instances equipped with an Intel Xeon Platinum 8124M CPU (16 cores, 3.00 GHz, 25 MB cache) and 32 GB of RAM.
All software run on Pohoiki Springs used a development version of Intel’s Nx SDK advanced from release 0.9.5.rc1.
With the exception of CPU2 measurements quoted from , all performance results are based on testing as of March 2020 and may not reflect all publicly available security updates. No product can be absolutely secure.
-  (2015) Practical and Optimal LSH for Angular Distance. NIPS 28, pp. 1–9. Cited by: §3.
-  (2017) ANN-benchmarks: a benchmarking tool for approximate nearest neighbor algorithms. In International Conference on Similarity Search and Applications, pp. 34–49. Cited by: §3, §5.4, TABLE III, §5, 2nd item, §7.
-  (2018) Annoy: approximate nearest neighbors in C++/Python. Note: Python package version 1.13.0. Cited by: §5.4.
-  (2013) Engineering efficient and effective non-metric space library. In Similarity Search and Applications - 6th International Conference, SISAP 2013, A Coruña, Spain, October 2-4, 2013, Proceedings, N. R. Brisaboa, O. Pedreira, and P. Zezula (Eds.), Lecture Notes in Computer Science, Vol. 8199, pp. 280–293. Cited by: §5.4.
-  (2002) Similarity Estimation Techniques from Rounding Algorithms. STOC, pp. 380–388. Cited by: §3.
-  (1998) Enhanced nearest neighbour search on the R-tree. ACM SIGMOD Record 27 (3), pp. 16–21. Cited by: §3.
-  (2018) Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38 (1), pp. 82–99. Cited by: §2.
-  (2012) Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality. Theory of Computing 8, pp. 321–350. Cited by: §3.
-  (2012) Computing nearest-neighbor fields via propagation-assisted kd-trees. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 111–118. Cited by: §3.
-  (1995) Pattern recognition computation using action potential timing for stimulus representation. Nature 376 (6535), pp. 33–36. Cited by: §3.2.
-  (1999) Fast and Robust Fixed-Point Algorithms for Independent Component Analysis. IEEE Transactions on Neural Networks 10 (3), pp. 626–634. Cited by: §3.1.
-  (2011) Product Quantization for Nearest Neighbor Search. IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (1), pp. 117–128. Cited by: §3.1, §5.
-  (2017) Billion-scale similarity search with GPUs. CoRR abs/1702.08734. Cited by: §5.4, §6.2.
-  (2016) Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. CoRR abs/1603.09320. Cited by: §5.4, §6.2.
-  (2014) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Cited by: §3.1, §5.
-  (1996) Speed of processing in the human visual system. Nature 381. Cited by: §3.2.
-  (2008) 80 million tiny images: a large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (11), pp. 1958–1970. Cited by: §5.
-  (1958) The computer and the brain. Yale University Press, reprint 2012. Cited by: §1.