
On the computational power and complexity of Spiking Neural Networks

The last decade has seen the rise of neuromorphic architectures based on artificial spiking neural networks, such as the SpiNNaker, TrueNorth, and Loihi systems. The massive parallelism and co-location of computation and memory in these architectures potentially allows for an energy usage that is orders of magnitude lower compared to traditional Von Neumann architectures. However, to date a comparison with more traditional computational architectures (particularly with respect to energy usage) is hampered by the lack of a formal machine model and a computational complexity theory for neuromorphic computation. In this paper we take the first steps towards such a theory. We introduce spiking neural networks as a machine model where—in contrast to the familiar Turing machine—information and the manipulation thereof are co-located in the machine. We introduce canonical problems, define hierarchies of complexity classes and provide some first completeness results.


1. Introduction

Moore’s law (Moore75) stipulates that the number of transistors in integrated circuits (ICs) doubles about every two years. With transistors becoming faster, IC performance doubles every 18 months, at the cost of increased energy consumption as transistors are added (Shalf15). Moore’s law is slowing down and is expected to end by 2025 (https://www.economist.com/technology-quarterly/2016-03-12/after-moores-law). Traditional (“Von Neumann”) computer architectures separate computation and memory by a bus, requiring both data and algorithm to be transferred from memory to the CPU with every instruction cycle. This has been described, already in 1978, as the Von Neumann bottleneck (Backus78). While CPUs have grown faster, transfer speed and memory access lagged behind (Hennessy11), making this bottleneck an increasingly difficult obstacle to overcome.

In summary, while more data than ever before is produced, we are simultaneously faced with the end of Moore’s law, limited performance due to the Von Neumann bottleneck, and an increasing energy consumption (with corresponding carbon footprint) (OakRidge16). These issues have accelerated the development of several generations of so-called neuromorphic hardware (Mead90; Indivieri11; Davies18). Inspired by the structure of the brain (largely parallel computations in neurons, low power consumption, event-driven communication via synapses), these architectures co-locate computation and memory in artificial (spiking) neural networks. The spiking behavior allows for potentially energy-lean computations (Maass14) while still allowing for, in principle, any conceivable computation (Maass96). However, we do not yet fully understand the potential (and limitations) of these new architectures. Benchmarking results suggest that event-driven information processing (e.g. in neuromorphic robotics or brain-computer interfacing) and energy-critical applications might be suitable candidate problems, whereas ‘deep’ classification and pattern recognition (where spiking neural networks are outperformed by convolutional deep neural networks) and applications that value precision over energy usage may be less natural problems to solve on neuromorphic hardware. Although several algorithms have been developed to tackle specific problems, there is currently no insight into the potential and limitations of neuromorphic architectures.

The emphasis on energy as a vital resource, in addition to the more traditional time and space, suggests that the traditional models of computation (i.e., Turing machines and Boolean circuits) and the corresponding formal machinery (reductions, hardness proofs, complete problems etc.) are ill-matched to capture the computational power of spiking neural networks. What is lacking is a unifying computational framework and structural complexity results that can demonstrate what can and cannot be done in principle with bounded resources with respect to convergence time, network size, and energy consumption (Haxhimusa14). Previous work is mostly restricted to variations of Turing machine models within the Von Neumann architecture (Graves14) or energy functions defined on threshold circuits (Uchizawa09) and as such unsuited for studying spiking neural networks. This is nicely illustrated by the following quote:

“It is …likely that an entirely new computational theory paradigm will need to be defined in order to encompass the computational abilities of neuromorphic systems” (OakRidge16, p.29)

In this paper we propose a model of computation for spiking neural network-based neuromorphic architectures and lay the foundations for a neuromorphic complexity theory. In Section 2 we will introduce our machine model in detail. In Section 3 we further elaborate on the resources time, space, and energy relative to our machine model. In Section 4 we will explore the complexity classes associated with this machine model and derive some basic structural properties and hardness results. We conclude the paper in Section 5.

2. Machine model

In order to abstract away from the actual computation on a neuromorphic device, in a similar vein as the Turing machine acts as an abstraction of computations on traditional hardware architectures, we introduce a novel notion of computation based on spiking neural networks. We will first elaborate on the network model and then proceed to translate that to a formal machine model.

2.1. Spiking neural network model

We will first introduce the specifics of our spiking neural network model, which is a variant of the leaky integrate-and-fire model introduced by Severa and colleagues at Sandia National Labs (Severa16). This model defines a discrete-timed spiking neural network as a labeled finite digraph comprised of a set of neurons as vertices and a set of synapses as arcs. Every neuron $v$ is a triple $(\theta_v, R_v, m_v)$ representing respectively its threshold, reset voltage, and leakage constant, while a synapse is a $4$-tuple $(u, v, d, w)$ for the pre-synaptic neuron, post-synaptic neuron, synaptic delay, and weight respectively. We will use the notation $(u, v)$ as a shorthand to refer to specific synapses and the shorthands $d_{u,v}$ and $w_{u,v}$ to refer to the synaptic delay and weight of a specific synapse.
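To make the data model concrete, the following minimal Python sketch (our own illustrative representation, reused in the later sketches; the field and neuron names are assumptions, not the paper's notation) shows one way to describe such a network. The self-loop realizes a continuously firing neuron in the spirit of Figure 1, though the paper's exact construction may differ.

```python
# A spiking neural network as a labeled digraph: regular neurons carry a
# threshold, a reset voltage (assumed 0 below) and a leakage constant;
# synapses carry a delay and a weight. "Programmed" neurons fire at
# predetermined timesteps instead of following the membrane potential.
example_net = {
    "neurons": {
        "v": {"threshold": 1.0, "leak": 1.0},   # regular neuron, no leakage
    },
    "programmed": {
        "p": 1,                                  # fires once, at timestep 1
    },
    "synapses": [
        ("p", "v", 1, 1.0),                      # (pre, post, delay, weight)
        ("v", "v", 1, 1.0),                      # self-loop: v keeps firing
    ],
}
```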

The basic picture is thus that any spikes of a neuron $u$ are carried along its outgoing synapses to serve as inputs to the receiving neurons $v$. The behavior of a spiking neuron $v$ at time $t$ is typically defined using its membrane potential, which is the integrated weighted sum of the neuron's inputs (taking into account synaptic delay) plus an additional bias term. Whether a neuron spikes or not at any given time is dependent on this membrane potential, either deterministically (i.e., the membrane potential acts as a threshold function for the spike) or stochastically (i.e., the probability of a spike being released is proportional to the potential); in this paper we assume deterministic spike responses. A spike is abstracted here to be a singular discrete event, that is, $x_v(t) = 1$ if a spike is released by neuron $v$ at time $t$ and $x_v(t) = 0$ otherwise. Figure 2 gives an overview of this spiking neuron model.
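Written out for this discrete-time model, the update rule takes the following form (the symbol names are ours, chosen for consistency with the sketches in this section; they may differ from those used in Figure 2):

```latex
% Discrete-time leaky integrate-and-fire update with deterministic spiking.
% P_v(t): membrane potential, m_v: leakage constant, w_{u,v}: synaptic weight,
% d_{u,v}: synaptic delay, x_u(t): spike indicator, theta_v: threshold,
% R_v: reset voltage. The potential is clamped to be non-negative.
\begin{align*}
  P_v(t) &= \max\!\Bigl(0,\; m_v\, P_v(t-1) \;+\; \sum_{(u,v)} w_{u,v}\, x_u\bigl(t - d_{u,v}\bigr)\Bigr),\\
  x_v(t) &= \begin{cases} 1 & \text{if } P_v(t) \geq \theta_v \text{ (after which } P_v(t) \text{ is set to } R_v\text{)},\\
                          0 & \text{otherwise.} \end{cases}
\end{align*}
```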

One can also define the spiking behavior of a neuron programmatically rather than through its membrane potential, involving so-called spike trains, i.e. predetermined spiking schedules. Importantly, such neurons allow for a means of providing the input to a spiking neural network. Furthermore, for regular (non-programmed) neurons the bias term can be replaced by an appropriately weighted connection stemming from a continuously firing programmed neuron; for convenience this bias term will thus be omitted from the model. Figure 1 introduces the notational conventions that we use for graphically depicting networks, along with a few simple networks as an illustration. As a convention, unless otherwise depicted, neuron and synapse parameters take their default values.

Figure 1. Notational conventions for (top to bottom on the left) a regular neuron, a programmed neuron, dedicated notation for programmed neurons firing once at timestep $t$, and dedicated acceptance and rejection neurons. To the right we show simple circuits realizing a continuously firing neuron, a clock neuron firing every $n$ time steps, and a temporal representation of a natural number relative to a clock.

 

Figure 2. A spiking neuron model with deterministic spiking behavior, describing the membrane potential of a leaky integrate-and-fire neuron over time, based on the integrated weighted sum of incoming post-synaptic potentials. We enforce that the membrane potential is non-negative. Spikes are emitted when the membrane potential reaches its threshold and arrive at the post-synaptic neurons after the synaptic delay.

 

For every spiking neural network we require the designation of two specific neurons as the acceptance neuron $n_{\mathrm{accept}}$ and the rejection neuron $n_{\mathrm{reject}}$. The idea is that the firing of the corresponding neuron signifies acceptance or rejection respectively, at which point the network is brought to a halt. In the absence of either one of those neurons, we can impose a time constraint and include a new neuron which fires precisely when $n_{\mathrm{accept}}$ or $n_{\mathrm{reject}}$ (whichever is present) did not fire within the allotted time, thus adding the missing counterpart. In this way, we ensure that this model is a specific instantiation of Wolfgang Maass’ generic spiking neural network model, which was shown to be Turing complete (Maass96); hence, these spiking neural networks can in principle (when provided the necessary resources) compute anything a Turing machine can. More interesting is the question of whether we can design smart algorithms that minimize the use of resources, for example, minimize energy usage within given bounds on time and network size. In order to answer this question we need to define a suitable formal abstraction of what constitutes a computational problem on a spiking neural network.

2.2. Canonical problems

Canonical computational problems on Turing machines typically take the following form: “Given machine $M$ and input $x$ on its tape, does $M$ accept $x$ using resources at most $r$?” Here, $L$ is the language that $M$ should accept, and the job of $M$ is to decide whether $x \in L$. To translate such problems to a spiking neural network model one needs to define the machine model, the resources that the machine may use, how the input $x$ is encoded, and what it means for the machine to accept the input using resources at most $r$.

This is a non-trivial problem. In a Turing machine the input is typically taken to be encoded in binary notation and written on the machine’s tape, while the algorithm for accepting inputs is represented by the state machine of $M$. However, in spiking neural networks both the problem input and the algorithm operating on it are encoded in the network structure and parameters. While the most straightforward way of encoding the input is through programming a spike train on a set of input neurons, in some cases it might be more efficient to encode it otherwise, such as at the level of synaptic weights or even delays. In that sense a spiking neural network differs from both a Turing machine and a family of Boolean circuits, as depicted in Table 1.

Model | Character of device(s) | Input representation | Resources | Canonical problem
Turing machine | One machine $M$ deciding all instances $x$. | Input is presented on the machine's tape. | Time, size of the tape, transition properties, acceptance criteria. | Does $M$ decide whether $x \in L$ using resources at most $r$?
Family of Boolean circuits | One circuit $C_n$ for every input size $n$. | Input is represented as special gates. | Circuit size and depth, size and fan-in of the gates. | Does, for each $n$, the corresponding circuit $C_n$ decide whether $x \in L$ using resources at most $r$?
Collection of SNNs | One network $N_x$ for every input $x$ or set of inputs. | Input is encoded in the network structure and/or presented as spike trains on input neurons. | Network size, time to convergence, total number of spikes. | Is there a resource-bounded Turing machine that, given $x$, generates (using resources $r_1$) a network $N_x$ which decides whether $x \in L$ using resources at most $r_2$?
Table 1. Overview of machine models: Turing machines, Boolean circuits, and families of spiking neural networks.

Hence, we introduce a novel computational abstraction, suitable for describing the behavior of neuromorphic architectures based on spiking neural networks. We postulate that a network $N_x$ encodes both the input $x$ and the algorithm deciding whether $x \in L$. What it means to decide a problem using a spiking neural network now becomes the following: that there is an $r_1$-resource-bounded Turing machine $M$ that generates a spiking neural network $N_x$ for every input $x$, such that $N_x$ decides whether $x \in L$ using resources at most $r_2$. Note that in this definition the workload is shared between the Turing machine $M$ and the network $N_x$, and that the definition naturally allows for trading off generality of the network (accepting different inputs by the same network) and generality of the machine (generating different networks for each distinct input), with the traditional Turing machine and the family of Boolean circuits being special cases of this trade-off. We can informally see the Turing machine as a sort of pre-processing computation generating the spiking neural network and then deferring the actual decision to accept or reject the input to this network. We will use the notation $\mathsf{SNN}(r_1, r_2)$ to refer to the class of decision problems that can be decided in this way.
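Spelled out as a definition, using the class notation introduced above (our reconstruction of the paper's shorthand):

```latex
% A language L belongs to SNN(r_1, r_2) iff some r_1-bounded Turing machine M
% maps every input x to a network N_x that decides membership of x in L
% within the network resource bounds r_2.
L \in \mathsf{SNN}(r_1, r_2) \iff \exists M \;\forall x:\;
   M(x) = N_x \text{ using resources at most } r_1(|x|) \;\wedge\;
   N_x \text{ decides } x \in L \text{ using resources at most } r_2(|x|).
```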

There is typically a trade-off between generality and efficiency of a network. Figure 3 provides a simple comparison between three implementations of the Array Search problem: given an array $A$ of integers and a number $k$, does $A$ contain $k$? Note that in the rightmost example a ‘circuit approach’ is emulated. There is no straightforward way to simulate the entire computation for arrays of arbitrary size in the network other than simulating the behaviour of the machine and its input as per the proof in (Maass96).

(a) All computation in the network
(b) The value $k$ offered to the network as input
(c) Both the value $k$ and the array $A$ offered to the network as input
Figure 3. Three spiking neural networks designed to decide whether an array of natural numbers contains a given value $k$. Note that in network 3(a) both $k$ and $A$ as well as the parallel comparison are encoded in the network; in network 3(b) the search value $k$ is offered as input (using a spike train consisting of a single spike with delay $k$), and in network 3(c) both the search value and the integers in the array are offered as input to the network, while the size of the array is fixed. The generality of the network increases from (a) to (c), but this comes at the price of an increasing number of spikes in the computation.
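To illustrate the temporal encoding used here, the following Python sketch (our own construction following the idea of network 3(b), not the paper's exact network; it reuses the dict layout from Section 2.1) shows the pre-processing step that builds the network for a given array, with the search value $k$ later offered as a single input spike with delay $k$:

```python
def build_array_search_net(array):
    # Pre-processing (Turing-machine side): encode the array in the network.
    # Each entry a is represented by a programmed neuron firing once at
    # timestep a; a comparison neuron with threshold 2 and full decay fires
    # only if the array spike and the input spike (at timestep k) coincide.
    net = {"neurons": {}, "programmed": {}, "synapses": []}
    for i, a in enumerate(array):
        net["programmed"][f"p{i}"] = a
        net["neurons"][f"cmp{i}"] = {"threshold": 2.0, "leak": 0.0}
        net["synapses"] += [(f"p{i}", f"cmp{i}", 1, 1.0),   # (pre, post, delay, weight)
                            ("input", f"cmp{i}", 1, 1.0),
                            (f"cmp{i}", "accept", 1, 1.0)]  # any match triggers acceptance
    net["neurons"]["accept"] = {"threshold": 1.0, "leak": 0.0}
    return net
```

Note that in this encoding the number of spikes grows linearly with the size of the array, in line with the trade-off discussed above.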

In addition to the ‘pre-processing’ model we can also allow an iterative interaction between $M$ and an oracle capable of deciding whether a spiking neural network accepts, such that the computation carried out by $M$ is interleaved with oracle calls whose results can be acted on accordingly. Before we can properly define this interactive model of neuromorphic computation, we will first discuss the class $\mathsf{SNN}(r_1, r_2)$ in further detail. In Section 4 we will cover the formal aspects involved in these definitions; we start by considering the resources that we wish to allocate to these machines.

3. Resources

We denote the resource constraints of the Turing machine $M$ with the tuple $r_1$. We allow the decision of the network to take resources $r_2$; this can be further specified to be a tuple $r_2 = (t, s, e)$, referring to the number of time steps $t$ the network may use, the total network size $s$, and the total number of spikes $e$ that it is allowed to use, all as functions of the size of the input $|x|$. Note that in practice $e \leq t \cdot s$, since any neuron can fire at most once per time step. Furthermore, note that $s$ is similarly upper bounded by $r_1$, as for example we cannot in polynomial time construct a network with an exponential number of neurons. We assume in the remainder that the constraints can be described by their asymptotic behavior, and in particular that they are closed under scalar and additive operations; we will describe $r_1$ and $r_2$ as being well-behaved if they adhere to this assumption. (To clarify, here we restrict ourselves to considering only deterministic resources for both the Turing machine and the network, just as we consider only deterministic membrane potential functions.)

Observe that we really need the pre-processing to be part of the definition of the model for neuromorphic computation to meaningfully define resource-bounded computations, as we are allowed in principle to define a unique network per instance $x$. Otherwise, the mapping between $x$ and $N_x$ could be the trivial and uninformative mapping

$x \mapsto \begin{cases} N_{\mathrm{accept}} & \text{if } x \in L,\\ N_{\mathrm{reject}} & \text{otherwise,} \end{cases}$

where $N_{\mathrm{accept}}$ and $N_{\mathrm{reject}}$ are fixed networks that immediately accept and reject respectively.

3.1. Clock and meter

We will assume that all Turing machines have access to a clock and a ruler and enter their rejection state immediately when these bounds are violated (Hartmanis65). In a similar vein, it is possible to build into a spiking neural network both a meter to monitor energy usage as well as a timer which counts down the allotted time steps, though they will not be part of our baseline assumption. Given an upper bound on the number of spikes, we can construct an energy counter neuron with incoming synapses from all neurons in the network and appropriately weighted outgoing synapses to the acceptance and rejection neurons, where applicable. This ensures that if at some time step the permitted number of spikes has been reached without accepting or rejecting (which itself involves a spike from the corresponding neuron), from the next time step on the energy counter will inhibit the acceptance neuron and excite the rejection neuron if present. Similarly, given an upper bound on the number of time steps, we can include a programmed timer neuron which fires once at the first time step, along with appropriately delayed synapses to the acceptance and rejection neurons, where applicable (Figure 4). Observe that these constructions add only two neurons, a proportionate number of synapses, and (in the presence of a rejection neuron) only a few additional spikes expended, hence the network size and in particular its construction time remain the same asymptotically.

Figure 4. Adding a timer and a meter to an arbitrary spiking neural network
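As an illustration of this construction, here is a minimal Python sketch, reusing the dict layout from the earlier sketches and assuming designated neurons named "accept" and "reject"; the weights and thresholds are our own choices and may differ from those in Figure 4:

```python
def add_timer_and_meter(net, max_steps, max_spikes, big=1e6):
    # Timer: a programmed neuron firing at the first timestep; its spike
    # reaches the rejection neuron after max_steps, forcing a timeout.
    net["programmed"]["timer"] = 1
    net["synapses"].append(("timer", "reject", max_steps, big))
    # Meter: a non-leaky neuron receiving one unit per spike elsewhere in the
    # network; once max_spikes spikes have been counted it fires, inhibiting
    # acceptance and exciting rejection on the next timestep.
    net["neurons"]["meter"] = {"threshold": float(max_spikes), "leak": 1.0}
    for name in list(net["neurons"]) + list(net["programmed"]):
        if name not in ("meter", "accept", "reject"):
            net["synapses"].append((name, "meter", 1, 1.0))
    net["synapses"].append(("meter", "accept", 1, -big))   # inhibitory
    net["synapses"].append(("meter", "reject", 1, big))    # excitatory
    return net
```

Only two neurons and a number of synapses proportional to the network size are added, as noted above.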

 

4. Structural complexity

Now that we have specified what we mean by the resources $r_1$ and $r_2$, it is time to take a closer look at the class $\mathsf{SNN}(r_1, r_2)$, starting with some initial observations. To begin with, it makes little sense to allow the pre-processing to operate with at least as many resources as the spiking neural network, since otherwise the execution of the spiking neural network can be simulated classically; this remark is illustrated in Theorem 4.1 below. For this reason we typically choose $r_1$ to be only polynomial time and polynomial or even logarithmic space, corresponding to the classes $\mathsf{P}$ and $\mathsf{L}$ respectively. When the constraints are such that $\mathsf{SNN}(r_1, r_2)$ characterizes a familiar complexity class we will use the common notation for that class from here on; as an abuse of notation we will also use this notation as a shorthand for the resources themselves.

Theorem 4.1.

$\mathsf{SNN}(r_1, r_2) = \mathsf{P}$ whenever $r_1$ and $r_2$ involve at most polynomial time constraints.

Proof.

As the inclusion $\mathsf{P} \subseteq \mathsf{SNN}(r_1, r_2)$ is obvious, we focus on proving the inclusion in the other direction. The crucial observation is that for a Turing machine with polynomial time constraints it is impossible to construct a larger-than-polynomial network, rendering the space constraints actually imposed moot. Recalling our earlier observation that the energy consumption of a spiking neural network is upper bounded in terms of (the product of) its size and time constraints, this implies that the constructed spiking neural network is effectively polynomially (or more tightly) bounded on all resources. Now it suffices to show that a deterministic Turing machine can simulate in polynomial time the execution of a spiking neural network of polynomial size for at most polynomially many time steps. This can be done by explicitly iterating over the neurons at every time step, determining whether they fire and scheduling the transmission of each spike along the outgoing synapses, until the network terminates or the time bounds are reached. By thus absorbing the decision procedure carried out by the network into the classical polynomial-time computation carried out by the machine we arrive at the stated inclusion. ∎
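The simulation step in this proof can be made concrete with a short sketch (again using the dict layout of the earlier examples; the names "accept" and "reject" and the zero reset voltage are our assumptions):

```python
from collections import defaultdict

def simulate(net, input_spikes=(), max_steps=100):
    # Each timestep costs time polynomial in the network size, so simulating
    # a polynomial-size network for polynomially many steps takes polynomial
    # time overall. input_spikes is an iterable of (neuron, timestep) pairs.
    ext = defaultdict(list)
    for name, t in input_spikes:
        ext[t].append(name)
    outgoing = defaultdict(list)
    for pre, post, delay, weight in net["synapses"]:
        outgoing[pre].append((post, delay, weight))
    incoming = defaultdict(float)        # (neuron, timestep) -> summed weight
    potential = defaultdict(float)
    for t in range(1, max_steps + 1):
        fired = [n for n, ft in net["programmed"].items() if ft == t] + ext[t]
        for name, params in net["neurons"].items():
            p = max(0.0, params["leak"] * potential[name] + incoming[(name, t)])
            if p >= params["threshold"]:
                fired.append(name)
                p = 0.0                  # reset voltage assumed to be zero
            potential[name] = p
        for name in fired:               # schedule delayed spike arrivals
            for post, delay, weight in outgoing[name]:
                incoming[(post, t + delay)] += weight
        if "accept" in fired:
            return True
        if "reject" in fired:
            return False
    return False                         # time bound reached without a verdict
```

For instance, combined with the earlier Array Search sketch, simulate(build_array_search_net([3, 1, 4]), input_spikes=[("input", 4)]) returns True, whereas offering the value 2 instead returns False.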

This theorem serves as a reminder that spiking neural networks are no magical devices: while there is a potential efficiency gain, mostly in terms of energy usage relative to computations on traditional hardware, neuromorphic computations with at most polynomial time constraints cannot achieve more than their classical counterparts. It remains to be determined to what extent the classes $\mathsf{SNN}(r_1, r_2)$ exhibit any hierarchical behavior based on the constraints $r_1$ and $r_2$; in particular, it is still unclear whether there is an energy hierarchy analogous to the classical time hierarchy. We can however note that for well-behaved resource constraints the classes are closed under operations such as intersection and complement, since spiking neural networks themselves are, so that decision procedures can be adjusted or combined at the network level.

Observe that using different resource constraints $r_1$ and $r_2$ we can define a lattice of complexity classes $\mathsf{SNN}(r_1, r_2)$, including such degenerate cases as the one where the constructed network is only finitely dependent on the actual input (and thus can be constructed in constant time). It is therefore natural to consider the notions of reduction and hardness in this context, which is what we will do next.

4.1. Completeness for $\mathsf{SNN}(r_1, r_2)$

In order to arrive at a canonical complete problem for the class $\mathsf{SNN}(r_1, r_2)$, it makes sense to consider the analogy with other models of computation, where one asks whether the given procedure (be it machine, circuit, or otherwise) accepts the provided input. Since even for this class it is not a spiking neural network but a Turing machine which controls how the input is handled, the resulting candidate for a complete problem will involve the latter and not the former. This means that to distinguish this problem from its classical equivalent we must include the promise that the Turing machine is indeed of the kind associated with the class, in that it generates an $r_2$-bounded spiking neural network using resources $r_1$ (this construction is similar to the one required for the promise classes associated with probabilistic Turing machines). In other words, we claim that the following problem is complete under polynomial-time reductions for the promise version of the class $\mathsf{SNN}(r_1, r_2)$.

$\mathsf{SNN}(r_1, r_2)$-Halting

Instance: Turing machine $M$ along with input string $x$.

Promise: $M$ is an $\mathsf{SNN}(r_1, r_2)$-machine.

Question: Does $M$ accept $x$?

Theorem 4.2.

$\mathsf{SNN}(r_1, r_2)$-Halting is complete under polynomial-time reductions for the promise version of $\mathsf{SNN}(r_1, r_2)$.

Proof.

Membership of this problem is established as follows: with a universal machine one can take the machine $M$ and simulate it on the input $x$. If $M$ is indeed an $\mathsf{SNN}(r_1, r_2)$-machine as per the promise, then this simulation will succeed within the permitted resource bounds and we can simply return the answer given by $M$. In case the promise fails to hold, we only need to ensure that the (unsuccessful) simulation does not exceed the resource bounds, since it is otherwise irrelevant which response is ultimately given. For the hardness of this problem, we observe that every problem in $\mathsf{SNN}(r_1, r_2)$ is by definition solvable by an $\mathsf{SNN}(r_1, r_2)$-machine, hence the straightforward reduction from any such problem to $\mathsf{SNN}(r_1, r_2)$-Halting consists of taking the input $x$ and passing it along accompanied by a particular $\mathsf{SNN}(r_1, r_2)$-machine which decides the problem. ∎

However, for particular assignments of $r_1$ we can actually replace the Turing machine by a spiking neural network and still end up with a complete (promise) problem. We will illustrate this construction for $r_1$ being linear time (and space); the same result also holds for $r_1$ corresponding to $\mathsf{P}$ and $\mathsf{L}$, under polynomial-time and logspace reductions respectively.

$r_2$-Network Halting

Instance: Network $N$ along with input string $x$.

Promise: $N$ terminates within the resource bounds $r_2$, expressed as a function of $|x|$.

Question: Does $N$ accept?

Theorem 4.3.

$r_2$-Network Halting is complete under linear-time reductions for the promise version of $\mathsf{SNN}(r_1, r_2)$ with $r_1$ linear time and space.

Proof.

Membership follows from the observation that a Turing machine can in linear time discard the input string $x$, such that what it is left with is a network $N$ promised to be $r_2$-constrained that accepts precisely when $N$ does, as it is $N$ itself. To prove hardness we reduce $\mathsf{SNN}(r_1, r_2)$-Halting (with $r_1$ linear time and space) to $r_2$-Network Halting. Let $(M, x)$ be an instance of the former. By simulating the application of $M$ on $x$ and replacing it with the resulting network $N_x$ (which by the promise can be done in linear time), we obtain an instance $(N_x, x)$ of $r_2$-Network Halting where the promise for $N_x$ is inherited from that for $M$ and the decision of $N_x$ is that of $M$ on $x$ by definition. ∎

This completeness result shows that for those choices of $r_1$ that we were likely to consider anyway (cf. the remark at the beginning of this section) we are justified in taking spiking neural networks as computationally primitive in a sense relevant for our treatment. In particular, this allows us to round off our discussion by exploring the interactive model of neuromorphic computation.

4.2. Interactive computation

We will formalize the interactive model of neuromorphic computation in terms of Turing machines equipped with an oracle for the relevant class of spiking neural networks. This involves augmenting a deterministic Turing machine with a query tape, an oracle-query state, and an oracle-result state. We can then select the problem $r_2$-Network Halting, for a suitable choice of $r_2$, to serve as an oracle to our machine. Now when a machine with such an oracle enters the oracle-query state with a network on its query tape it proceeds to the oracle-result state, at which point it will replace the contents of the query tape with 1 if the queried network accepts and with 0 if it rejects (given that the promise holds; the outcome otherwise returned is unspecified). With a slight abuse of notation, we can thus define $\mathsf{SNN}^{\mathrm{O}}(r_1, r_2)$ to be the class of decision problems that can be solved by a Turing machine with resource constraints $r_1$ equipped with an oracle for $r_2$-Network Halting. It follows immediately that $\mathsf{SNN}^{\mathrm{O}}(r_1, r_2)$ is a superclass of $\mathsf{SNN}(r_1, r_2)$, though again the exact relations between these two kinds of classes, and between these neuromorphic complexity classes and the classical complexity classes, remain to be determined. In closing we can however offer an example of a potential use for the interactive model of neuromorphic computation.
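Operationally, this oracle model corresponds to a host/co-processor pattern. The following sketch (with the simulate function from Section 4 standing in for the neuromorphic oracle, and with build_subnet and interpret as hypothetical problem-specific helpers) illustrates how classical computation and oracle calls may be interleaved:

```python
def interactive_decide(instance, build_subnet, interpret, max_rounds=100):
    # Host (Turing-machine) side of the interactive model: classical
    # pre-processing, an oracle call on a generated network, and classical
    # post-processing, repeated until a verdict is reached.
    state = instance
    for _ in range(max_rounds):
        net, spikes, bound = build_subnet(state)         # classical pre-processing
        verdict = simulate(net, spikes, bound)           # oracle call
        state, done, answer = interpret(state, verdict)  # classical post-processing
        if done:
            return answer
    return False
```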

Example 4.4.

Suppose we are interested in the behavior of $\mathsf{P}$-complete problems on neuromorphic oracle Turing machines. Given that such problems are assumed to be inherently serial and cannot be computed with only a logarithmic amount of working memory, one might suggest looking at a suitable trade-off between computations on a regular machine and on a neuromorphic device. One way of doing this would be to constrain the working memory of the Turing machine to be logarithmic in the input size, so that $r_1$ characterizes the complexity class $\mathsf{L}$. Then, if all network resources are linear in the size of the input, we obtain the complexity class $\mathsf{SNN}^{\mathrm{O}}(\mathsf{L}, (n, n, n))$. In a related paper we will show that the $\mathsf{P}$-complete Network Flow problem indeed resides in this class (Ali19).

5. Conclusion

In this paper we proposed a machine model to assess the potential of neuromorphic architectures with energy as a vital resource in addition to time and space. We introduced a hierarchy of computational complexity classes relative to these resources and provided some first structural results and canonical complete problems for these classes.

We already hinted at some future structural complexity work, most urgently an energy analogue of the time complexity hierarchy and a notion of amortization of resources. The latter is crucial when considering local changes to the network, such as adapting the weights when learning, or when reusing a network with a new set of spike trains rather than recreating everything from scratch.

In addition, providing concrete hardness proofs, as well as populating classes using neuromorphic algorithms, should be high on the agenda for the neuromorphic research community.

References