Learning spatiotemporal signals using a recurrent spiking network that discretizes time

07/20/2019, by Amadeus Maes, et al.



Abstract

Learning to produce spatiotemporal sequences is a common task the brain has to solve. While many sequential behaviours differ superficially, the underlying organization of the computation might be similar. The way the brain learns these tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network drives a read-out layer. Plastic synapses follow common Hebbian learning rules. The dynamics of the recurrent network is constrained to encode time while the read-out neurons encode space. Space is then linked with time through Hebbian learning. Here we demonstrate that the model is able to learn spatiotemporal dynamics on a timescale that is behaviorally relevant. Learned sequences are robustly replayed during a regime of spontaneous activity.

Author summary

The brain has the ability to learn flexible behaviours on a wide range of time scales. Previous studies have successfully built spiking network models that learn a variety of computational tasks. However, the plasticity involved is often not local, but instead mathematically optimizes a cost function. Here, we investigate a model using typical Hebbian plasticity rules for a specific computational task: spatiotemporal sequence learning. The architecture separates time and space into two different parts, which allows Hebbian learning to bind space to time. Importantly, the time component is encoded in a recurrent network which exhibits sequential dynamics on a behavioural time scale. This network is then used as an engine to drive spatial read-out neurons. We demonstrate that the model can learn interesting spatiotemporal spiking dynamics and replay these dynamics robustly.

Introduction

Neuronal networks perform flexible computations on a wide range of time scales. While individual neurons operate on the millisecond time scale, behavior typically spans from a few milliseconds to hundreds of milliseconds and longer. Building functional models that bridge this time gap is of increasing interest [Abbott et al., 2016], especially now that the activity of many neurons can be recorded simultaneously [Jun et al., 2017, Maass, 2016]. Many tasks and behaviors essentially amount to producing spatiotemporal sequences. For example, songbirds produce their songs through a specialized circuit: neurons in the HVC nucleus burst sparsely at very precise times to drive the robust nucleus of the arcopallium, which in turn drives motor neurons [Hahnloser et al., 2002, Leonardo and Fee, 2005]. In addition, sequential neuronal activity has been recorded in various brain regions for different motor tasks [Pastalkova et al., 2008, Itskov et al., 2011, Harvey et al., 2012, Peters et al., 2014, Adler et al., 2019]. While the different tasks possibly involve different sets of muscles, the underlying computation might be similar on a more fundamental level [Rhodes et al., 2004].

Theoretical and computational studies have shown that the synaptic weights of recurrent networks can be set appropriately such that dynamics on a wide range of time scales is produced [Litwin-Kumar and Doiron, 2014, Zenke et al., 2015, Tully et al., 2016]. In general, these synaptic weights are engineered to generate a range of interesting dynamics: slow-switching dynamics [Schaub et al., 2015] and different types of sequential dynamics [Chenkov et al., 2017, Billeh and Schaub, 2018, Setareh et al., 2018]. However, it is unclear how the brain learns these dynamics, as most current approaches use non-biologically-plausible ways to set or "train" the synaptic weights. For example, FORCE training [Sussillo and Abbott, 2009, Laje and Buonomano, 2013, Nicola and Clopath, 2017] and backpropagation through time use non-local information either in space or in time.

Here, we propose to learn a task in a spiking recurrent network which drives a read-out layer. All synapses are plastic under typical Hebbian learning rules [Clopath et al., 2010, Litwin-Kumar and Doiron, 2014]. We train the recurrent network to generate sequential activity which serves as a temporal backbone. This sequential activity is generated by clusters of neurons activated one after the other. As clusters are highly recurrently connected, each cluster undergoes reverberating activity that lasts longer than the neural time scale. Therefore, the sequential activation of the clusters leads to a sequence long enough to be behaviourally relevant. This allows us to bridge the neural and the behavioural time scales. We then use Hebbian learning to learn the target spatiotemporal dynamics in the read-out neurons. Thus, the recurrent network encodes time and the read-out neurons encode ’space’. ’Space’ can mean spatial position, but it can also mean frequency or a more abstract state space. Similar to the liquid state machine, we can learn different dynamics in parallel in different read-out populations [Jaeger et al., 2007]. We show that learning in the recurrent network is stable during spontaneous activity and that the model is robust to synaptic failure.

Results

In the following we will build our model step-by-step in a bottom-up manner.

Model architecture

The model consists of two separate parts (Fig 1A). The first part is a recurrent network which serves to encode time in a discretized manner. The second is a read-out layer, which learns a multivariate signal of time. The neurons in the read-out layer encode the different dimensions of the signal. Time is discretized in the recurrent spiking network by clusters of excitatory neurons. This means that at each time only a subset of neurons spike, i.e. the neurons belonging to the same cluster. Given that the first cluster is active during the first time interval, each subsequent cluster is active during the following interval. These excitatory clusters drive the read-out neurons through all-to-all feedforward connections. At any given time the read-out neurons are driven solely by the neurons in the currently active cluster, with some additional background noise. This discretization enables Hebbian plasticity to bind the read-out neurons to the neurons active in the relevant time bin. The read-out neurons are not interconnected and receive input during learning from a set of excitatory supervisor neurons. In previous models, the learning schemes are typically not biologically plausible because the plasticity depends on non-local information. Here, however, we use the voltage-based STDP plasticity rule at the excitatory-to-excitatory synapses (Fig 1B). This is paired with weight normalization in the recurrent network (Fig 1C) and weight-dependent potentiation in the read-out synapses (Fig 1D). Additionally, inhibitory plasticity [Vogels et al., 2011] prevents runaway dynamics.
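The time discretization at the heart of the architecture can be sketched in a few lines. This is a minimal sketch with assumed values (the cluster count, bin width and the helper `active_cluster` are hypothetical, not the paper's):

```python
import numpy as np

# Minimal sketch: the recurrent network discretizes time into n_clusters bins
# of width dt_cluster; at time t only the neurons of one cluster are active,
# and the read-out neurons are driven by that cluster alone. The sequence is
# circular, so the index wraps around after the last cluster.

def active_cluster(t, dt_cluster, n_clusters):
    """Index of the cluster that is active at time t (circular sequence)."""
    return int(t // dt_cluster) % n_clusters

n_clusters, dt_cluster = 30, 10e-3  # assumed: 30 clusters, 10 ms per cluster
print(active_cluster(0.005, dt_cluster, n_clusters))  # first bin -> cluster 0
print(active_cluster(0.305, dt_cluster, n_clusters))  # wraps around -> cluster 0
```

Hebbian plasticity then only has to associate each read-out neuron with the cluster that is active in the relevant bin.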

Figure 1: (A.) The recurrent network consists of both inhibitory (in blue) and excitatory (in red) neurons. The connectivity is sparse in the recurrent network. The temporal backbone is established in the recurrent network after a learning phase. Inset: zoom of the recurrent network showing the macroscopic recurrent structure after learning. The excitatory neurons in the recurrent network project all-to-all to the read-out neurons. The read-out neurons are not interconnected. (B.) All excitatory-to-excitatory connections are plastic under the voltage-based STDP rule (see Methods for details). The red lines are the spikes of two neurons (top and bottom). When the two neurons are very active together, they form bidirectional connections, strengthening the weights in both directions. Connections are strengthened unidirectionally when one neuron consistently fires before the other. (C.) The incoming excitatory weights are normalized in the recurrent network, i.e. the sum of all incoming excitatory weights is kept constant. (D.) Potentiation of the plastic read-out synapses is linearly dependent on the weight. This gives weights a soft upper bound.

A recurrent network that encodes discrete time

Learning happens in a two-stage process. In the first stage, a feedforward weight structure is embedded into the recurrent network. The excitatory neurons are divided into disjoint clusters and neurons in the same cluster share the same input. The recurrent network is initialized as a balanced network with random connectivity. To embed a feedforward structure, the network is stimulated in a sequential manner (Fig 2A). Neurons in the first cluster each receive external Poisson spike trains. After this, there is a time gap where no clusters receive input. This is followed by a stimulation of the next cluster. This continues until the last cluster is reached, and then it links back to the first cluster (i.e. a circular boundary condition). During the stimulation, neurons in the same cluster fire spikes together, which strengthens the intra-cluster connections bidirectionally through the voltage-based STDP rule [Clopath et al., 2010, Ko et al., 2013]. Additionally, there is a pre/post pairing between adjacent clusters. The weights from each cluster to the next strengthen unidirectionally (Fig 2B). When the time gap between sequential stimulations is increased during the training phase, there is no pre/post pairing between clusters anymore. This leads to slow-switching dynamics as opposed to sequential dynamics [Litwin-Kumar and Doiron, 2014, Schaub et al., 2015] (Fig S2). After the weights have converged, the external sequential input is shut down and spontaneous dynamics is simulated. External excitatory Poisson spike trains without spatial or temporal structure drive the spontaneous dynamics. During that time, the sequence of clusters reactivates spontaneously and ensures that both the intra-cluster and the feedforward connections remain strong. The connectivity pattern is therefore stable (Fig S3). The feedforward weight embedding changes the spectrum of the recurrent weight matrix. The dominant eigenvalues lie on a circle in the complex plane (Fig 2C). Analysis of a simplified linear rate model shows that the imaginary parts depend linearly on the strength of the feedforward embedding (see supplementary material, Fig S1). Intuitively, this means that the imaginary parts of the dominant eigenvalues determine the period of the sequential activity (Fig S3). Under spontaneous activity, each cluster is active for a short, fixed duration, which is due to the recurrent connectivity within the cluster. A large adaptation current counteracts the recurrent reverberating activity to turn the activity reliably off. Therefore, as each cluster is spontaneously active in a sequence, this leads to a sequence length that reaches behavioural time scales (Fig 2D). In summary, the feedforward block connectivity structure embedded in the recurrent network results in sequential dynamics that discretizes time.
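The spectral signature of the feedforward embedding can be illustrated with a toy linear rate model in which each cluster is reduced to a single rate unit. This is an assumption-level sketch, not the paper's spiking network; the function name and parameter values are hypothetical:

```python
import numpy as np

# Toy rate model: a circulant matrix with intra-cluster strength `a` on the
# diagonal and feedforward strength `b` from each cluster to the next
# (wrapping around). Its eigenvalues are a + b * exp(2*pi*1j*k/n): a circle of
# radius b centred at a, so the imaginary parts grow with the feedforward
# strength b, consistent with the spectrum in Fig 2C.

def block_circulant_spectrum(n_clusters, a, b):
    # np.roll turns the identity into the cyclic permutation matrix P,
    # with P[k+1, k] = 1 and P[0, n-1] = 1 (circular boundary).
    W = a * np.eye(n_clusters) + b * np.roll(np.eye(n_clusters), 1, axis=0)
    return np.linalg.eigvals(W)

eig = block_circulant_spectrum(10, a=1.0, b=0.5)
radius = np.abs(eig - 1.0)          # distance from the circle centre a
print(np.allclose(radius, 0.5))     # all eigenvalues lie on a circle of radius b
```

The larger the imaginary parts (i.e. the stronger the feedforward embedding), the faster the activity rotates through the clusters, which sets the period of the sequence.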

Figure 2: Learning sequential dynamics that is stable under plasticity. (A.) The excitatory neurons receive sequential clustered inputs. (B.) The weight matrix after training shows the feedforward embedding on the upper diagonal (only the first five clusters displayed), e.g. neurons in cluster 1 are highly bidirectionally connected and also project to neurons in cluster 2, and likewise for cluster 2 to cluster 3, etc. (C.) The spectrum of the full weight matrix after training. The dominant eigenvalues lie on a circle in the complex plane, indicating the feedforward embedding on top of the diagonal clustering. (D.) Raster plot of the total network consisting of excitatory (in red) and inhibitory (in blue) neurons. The spontaneous dynamics exhibits a stable periodic trajectory. The excitatory clusters discretize time and the network activity is periodic. Inset: zoom of the raster plot.
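The sequential stimulation protocol of the first learning stage (Fig 2A) can be sketched as a schedule of stimulation windows. The timings and the helper `stimulation_schedule` are hypothetical, not the paper's values:

```python
# Hypothetical protocol sketch: each cluster in turn receives an external
# Poisson spike train for `stim_dur` seconds, followed by a silent gap,
# wrapping from the last cluster back to the first (circular boundary).

def stimulation_schedule(n_clusters, stim_dur, gap, n_laps=1):
    """Return a list of (cluster_index, t_start, t_end) stimulation windows."""
    windows, t = [], 0.0
    for _ in range(n_laps):
        for k in range(n_clusters):
            windows.append((k, t, t + stim_dur))
            t += stim_dur + gap
    return windows

sched = stimulation_schedule(n_clusters=5, stim_dur=0.01, gap=0.005, n_laps=2)
# The second lap starts again at cluster 0 (circular boundary condition).
print(sched[5][0])  # cluster index of the sixth window -> 0
```

Stimulating adjacent clusters with a short gap gives the pre/post pairing that the voltage-based STDP rule converts into a unidirectional feedforward weight structure.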

Learning a non-Markovian sequence

After the temporal backbone in the recurrent network is established, we learn a simple yet non-Markovian sequence in the read-out neurons. During training, the read-out neurons receive additional input from supervisor neurons and from interneurons (Fig 3A). The supervisor neurons receive external Poisson input. The rate of the Poisson input is modulated by the target sequence that needs to be learned (Fig 3B). The target sequence is composed of different states that are activated in a fixed order. This is a non-Markovian state sequence because at least one transition requires knowledge of more than the current state. Each neuron in the supervisor encodes one state. This task is non-trivial and it is unclear how to solve this problem in a generic recurrent spiking network with local plasticity rules. However, separating time and space solves it in a natural way. Our recurrent network (Fig 2) is used here, where the first cluster is activated at the starting time. The supervisor is activated at the same time. The underlying assumption is that a starting signal activates both the first cluster of the recurrent network and the external input to the supervisor neurons simultaneously. The read-out neurons are activated by the supervisor neurons. Due to the voltage-based plasticity, synapses from recurrent-network neurons to read-out neurons that fire at the same time are potentiated. After the training period, the interneurons and supervisor neurons stop firing (Fig 3C). The target sequence is now stored in the read-out weight matrix (Fig 3D). As clusters in the recurrent network reactivate sequentially during spontaneous activity, they also reactivate the learned sequence in the read-out neurons. The spike sequence of the read-out neurons is a noisy version of the target signal (Fig 3E). Learning the same target signal several times results in slightly different read-out spike sequences each time (Fig S4). The firing rates of the neurons in the read-out correspond to the target sequence (Fig 3F). In summary, our results show that the model is able to learn simple but non-trivial spike sequences.
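A toy sketch shows why discretized time makes a non-Markovian sequence learnable with purely Hebbian binding. The example sequence A-B-C-B-A is assumed for illustration (the paper's actual target states are not reproduced here); each state is one read-out neuron and each time bin is one cluster:

```python
import numpy as np

states = ['A', 'B', 'C']
target = ['A', 'B', 'C', 'B', 'A']   # non-Markovian: B is followed by C or A
n_clusters = len(target)

# Hebbian binding: W[s, k] grows when state s and cluster k are co-active
# during supervised training. No temporal credit assignment is needed,
# because the cluster index already encodes "when".
W = np.zeros((len(states), n_clusters))
for k, s in enumerate(target):
    W[states.index(s), k] += 1.0

# Replay: clusters reactivate in order; the read-out follows the stored column.
replay = [states[np.argmax(W[:, k])] for k in range(n_clusters)]
print(replay)  # ['A', 'B', 'C', 'B', 'A']
```

The ambiguous transition out of B is resolved by the time index alone: cluster 2 projects most strongly to C, cluster 4 to A, so no state-to-state memory is required.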

Figure 3: Learning a non-Markovian sequence. (A.) Excitatory neurons in the recurrent network are all-to-all connected to the read-out neurons. The read-out neurons receive additional excitatory input from the supervisor neurons and inhibitory input from interneurons. The supervisor neurons receive spike trains that are drawn from a Poisson process with a rate determined by the target sequence. The read-out synapses are plastic under the voltage-based STDP rule. (B.) The rate of the input signal to the supervisor neurons, where each letter of the supervisor sequence represents an external stimulation of the respective supervisor neuron. (C.) After learning, the supervisor input and plasticity are turned off. The read-out neurons are now solely driven by the recurrent network. (D.) The read-out weight matrix after learning. (E.) Under spontaneous activity, the spikes of the recurrent network (top) and read-out (bottom) neurons. Excitatory neurons in the recurrent network reliably drive sequence replays. (F.) The target rate (above) and the rate of the read-out neurons (below), computed using one sequence replay and normalized. The spikes of the read-out neurons are convolved with a Gaussian kernel.

Properties of the model

We then investigate the robustness of our model by silencing neurons in the recurrent network after learning. The first type of robustness we test is the robustness of the read-out. To study this, the presynaptic spike trains from the recurrent network are slightly perturbed. We compared two different networks: 1) a large network and 2) a small network, in both of which the excitatory neurons are grouped in clusters. In order to have flexibility, we did not simulate our recurrent network dynamics but simply forced the neurons of each cluster to spike randomly within the corresponding time bin. As before, we learn the simple sequence in the read-out neurons (Fig 4A). Note that the larger network learns more quickly than the smaller one. The strengths of the read-out synapses of the smaller clusters need to be larger to be able to drive the read-out neurons. After learning, random neurons are silenced in each cluster at two different time points. While the effect of silencing these neurons is very small in the larger network, it is very visible in the smaller network: multiple spikes disappear at each read-out neuron. These results show that, not surprisingly, larger clusters drive read-out neurons more robustly.

We then wanted to test whether time discretization is important in our model. To that end, we trained our recurrent network with clusters as small as one neuron. In that extreme case, the clusters and recurrent connections effectively disappear and our network becomes a synfire chain with a single neuron in every layer [Abeles, 1991]. Randomly removing a synapse in the network will break the sequential dynamics (Fig 4B). Although a spatiotemporal signal can be learned in the read-out neurons, the signal is not stable under a perturbation as the synfire-chain is not stable (Fig 4B). In summary, the choice of cluster size is a trade-off between network size on the one hand and robustness on the other hand. Large clusters require a large network but learn faster, are less prone to a failure of the sequential dynamics and drive the read-out neurons more robustly.

The sequential activity in the recurrent network shows some variability in terms of sequence duration. The neural activity does not move along the periodic trajectory with exactly the same speed in each reactivation. We wondered how the variance in our network compares to Weber’s law. According to Weber’s law, the standard deviation of timed responses in a timing task grows linearly with time [Gibbon, 1977, Hardy and Buonomano, 2018]. Since our model operates on a time scale that is behaviourally relevant, it is interesting to look at how the variability increases with time. Time is discretized by clusters of neurons that activate each other sequentially; as such, the variability accumulates naturally over time. As in a standard diffusion process, this increase is expected to grow with the square root of time rather than linearly. This is indeed what our network displays (Fig 4C). Here, we increased the period of the recurrent network by increasing the network size (80 excitatory clusters of 80 neurons each, see Methods for details). By doing so, we can look at how the standard deviation of cluster activation times grows with time.
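The square-root scaling follows from treating each cluster's dwell time as an independent noisy duration. A minimal simulation sketch, with assumed per-cluster mean and jitter (the values are placeholders, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# If each cluster stays active for an independent noisy duration, the time of
# the k-th cluster activation is a sum of k i.i.d. durations, so its standard
# deviation grows like sqrt(k), as in a diffusion process, rather than
# linearly as Weber's law would predict.

n_trials, n_clusters = 2000, 80
mean_dur, sd_dur = 10e-3, 2e-3       # hypothetical 10 ms +/- 2 ms per cluster
durations = rng.normal(mean_dur, sd_dur, size=(n_trials, n_clusters))
activation_times = np.cumsum(durations, axis=1)   # time of each activation
sd = activation_times.std(axis=0)                 # trial-to-trial jitter

# The std at cluster k should track sd_dur * sqrt(k), i.e. sub-linear growth.
print(sd[0] < sd[-1])  # variability accumulates along the sequence
```

In this independent-durations picture the jitter at the end of an 80-cluster sequence is only about sqrt(80) times the single-cluster jitter, matching the square-root fit reported in Fig 4C.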

Figure 4: Robustness and time variability of the model. (A.) The sequential dynamics of the large (left) and small (right) network are artificially elicited: the neurons of each cluster fire in the corresponding time bin, each bin having a fixed duration. After learning the non-Markovian sequence, neurons are randomly silenced in each cluster at two time points. Top: spike raster of the excitatory neurons of the recurrent network. Bottom: spike raster of the read-out neurons. (B.) Excitatory neurons are connected in one large chain and driven by a low external input. After learning, spontaneous dynamics is simulated and a single synapse in the chain is deleted. The dynamics breaks down and parts of the sequence are randomly activated by the external input. The learned read-out dynamics is fractured into subsequences as a consequence. Top: spike raster of the excitatory neurons of the recurrent network. Bottom: spike raster of the read-out neurons. The maximum synaptic read-out strength is increased to be able to learn large read-out weights. (C.) Temporal variability in the sequential activity of the recurrent network, measured over trials. A histogram of the periods of the sequences (top) shows the mean and standard deviation of the period. The standard deviation of cluster activation times increases roughly with the square root of time (bottom): a square-root fit yields a lower root mean squared error than a linear fit.
The lowest time constant in the plasticity of the read-out synapses is m.

Learning a complex sequence

In the non-Markovian target sequence, the states have the same duration and the same amplitude (Fig 3B). We wanted to test whether we could learn more complicated sequences. In this second task, the model is trained using a spatiotemporal signal that has components of varying durations and amplitudes. As an example, we use a meadowlark song (Fig 5A). The spectrogram of the sound is normalized and used as the time- and amplitude-varying rate of the external Poisson input to the supervisor neurons. Each read-out and supervisor neuron encodes a different frequency range. The song is longer than the period of the original recurrent network (Fig 2). Therefore, we trained on the larger network of excitatory neurons (Fig 4C). After training, the read-out weight matrix reflects the structure of the target sequence (Fig 5B). Under spontaneous activity, the supervisor neurons and interneurons stop firing and the recurrent network drives song replays (Fig 5C). The learned spatiotemporal signal broadly follows the target sequence (Fig 5A). The model performs worse when the target dynamics has fast time variations. This is because these variations are faster than, or on the same order as, the time discretization in the recurrent network. Thus, we conclude that the model can learn interesting spiking dynamics. The performance is qualitatively better when the changes in the target dynamics are slower than the time discretization in the recurrent network.
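The supervisor drive for the song task can be sketched as follows. This is an assumed pipeline with placeholder shapes and rates, standing in for the real spectrogram: the normalized spectrogram provides a time- and frequency-varying rate, and each supervisor neuron receives an inhomogeneous Poisson spike train for its frequency band:

```python
import numpy as np

rng = np.random.default_rng(3)

n_freq, n_bins, dt = 8, 200, 5e-3                  # hypothetical sizes
spectrogram = rng.random((n_freq, n_bins))         # stand-in for the song
rate = spectrogram / spectrogram.max() * 50.0      # normalize to a 0-50 Hz rate

# One Bernoulli draw per (neuron, time bin) approximates an inhomogeneous
# Poisson process when rate * dt is small.
spikes = rng.random((n_freq, n_bins)) < rate * dt
print(spikes.shape)                                # one spike train per band
```

Each row of `spikes` is then the supervisor input for one frequency band, so the read-out neuron for that band learns the song's time-varying amplitude in its range.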

Figure 5: (A.) Target sequence (top). The amplitude shows the rate of the Poisson input to the supervisor neurons and is normalized between and k. Rate of read-out neurons for one sample reactivation after learning seconds (bottom). read-out neurons encode the different frequencies in the song. Neuron encodes a frequency interval of . (B.) The read-out weight matrix after learning seconds. (C.) Sequence replays showing the spike trains of both the recurrent network neurons (top, excitatory neurons in red and inhibitory neurons in blue), and the read-out neurons (bottom).

Discussion

We proposed an architecture that consists of two parts. The first part is a temporal backbone, implemented by a recurrent network. Excitatory neurons in the recurrent network are organized into disjoint clusters. These clusters are sequentially active because a feedforward connectivity structure is embedded into the weights of the network. We showed that ongoing spontaneous activity does not degrade this structure: the set of plasticity rules and the sequential dynamics reinforce each other, as was shown before for slow-switching dynamics [Litwin-Kumar and Doiron, 2014] (Fig S2). The stable sequential dynamics allows a downstream linear decoder to read out time at the behavioural timescale.

The second part is a set of read-out neurons that encode multiple dimensions of a signal, which we also call "space". Connecting the first to the second part binds space to time. The read-out neurons learn spike sequences in a supervised manner. More specifically, we investigated a simple state transition sequence and a sequence that has a more complex dynamics. The supervisor sequence is encoded into the read-out weight matrix.

The presented model brings elements from different studies together. Recurrent network models that learn sequential dynamics are widely studied [Fiete et al., 2010, Rajan et al., 2016, Hardy and Buonomano, 2018]. Separate from this, models exist that aim to learn spatiotemporal dynamics [Brea et al., 2013, Nicola and Clopath, 2017, Gilra and Gerstner, 2017]. Combining the two, the dynamics of the recurrent network is exploited by the read-out neurons to perform the computational task. In this paper, we use local Hebbian learning such as the voltage-based STDP [Clopath et al., 2010].

Importantly, the dynamics of the recurrent network spans three time scales. First, individual neurons fire at the fastest, millisecond time scale. Second, one level slower, clusters of neurons fire at a time scale that determines the time discretization of our temporal backbone, or, if you will, the "tick of the clock". Third, the slowest time scale is at the level of the entire network, i.e. the period of the sequential activity. The latter two time scales depend on the cluster and network size, the average connection strengths within the clusters, and adaptation. Smaller cluster sizes lead to a finer time discretization, and the period increases with network size when the cluster size is fixed.

The recurrent network is the "engine" that, once established, drives the read-out dynamics. Our model can learn different read-out synapses in parallel and is robust to synapse failure. The robustness is a consequence of the clustered organization of the recurrent network. The clusters in the recurrent network provide a large drive for the read-out neurons while keeping the individual synaptic strengths reasonably small. Smaller clusters, on the other hand, require larger read-out synaptic strengths, and the dynamics are thus more prone to small changes. In the limit, every cluster has exactly one neuron; the sequential dynamics is especially fragile in this case. Learning is faster with more neurons per cluster, as relatively small changes in the synapses are then sufficient to learn the target. This is consistent with the intuitive idea that some redundancy in the network can lead to an increased learning speed [Raman et al., 2019].

Previous studies have discussed whether sequences are learned and executed serially or hierarchically [Lashley, 1951]. Our recurrent network has a serial organization. When the sequential activity breaks down halfway, the remaining clusters are not activated anymore. A hierarchical structure would avoid such complete breakdowns, at the cost of a more complicated structure to control the system. Sequences that are chunked into sub-sequences can be learned separately and chained together; errors in early sub-sequences are then less likely to affect later sub-sequences. Evidence for hierarchical structures is found throughout the literature [Sakai et al., 2003, Glaze and Troyer, 2006, Okubo et al., 2015]. The basal ganglia, for example, are thought to play an important role in shaping and controlling action sequences [Tanji, 2001, Jin and Costa, 2015, Geddes et al., 2018]. Another reason why a hierarchical organization seems beneficial is inherent to the sequential dynamics: the time variability of the sequential activity grows approximately with the square root of time. While on a time scale of a few hundred milliseconds this does not yet pose a problem, for longer target sequences this variability would reach and exceed the plasticity time constants. The presented model can serve as an elementary building block of a more complex hierarchy.

In summary, we have shown that a clustered network organization can be a powerful substrate for computations. Specifically, the model dissociates time and space and can therefore use Hebbian learning to learn spatiotemporal sequences. More generally, the backbone of clustered sequential activity might encode any variable and enable downstream read-out neurons to learn and compute functions of this variable.

Methods

Excitatory neurons are modelled with the adaptive exponential integrate-and-fire model. A classical integrate-and-fire model is used for the inhibitory neurons. Excitatory-to-excitatory recurrent synapses are plastic under the voltage-based STDP rule [Clopath et al., 2010]. This enables the creation of neuronal clusters and a feedforward embedding. Normalization and weight bounds are used to introduce competition and keep the recurrent network stable. Synapses from inhibitory to excitatory neurons in the recurrent network are also plastic, under a local plasticity rule [Vogels et al., 2011] that prevents runaway dynamics. The connections from the recurrent network to the read-out neurons are plastic under the same voltage-based STDP rule; however, potentiation of the read-out synapses depends linearly on the strength of the synapses. There is no normalization here, to allow a continuous weight distribution.
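The voltage-based STDP rule can be sketched schematically. This assumes the standard form of the rule in [Clopath et al., 2010]; all parameter values below are illustrative placeholders, not the paper's:

```python
# Schematic Euler step of a voltage-based STDP update (assumed standard form):
# depression is triggered by a presynaptic spike when a slow low-pass filter
# of the postsynaptic voltage (u_minus) is above a threshold; potentiation is
# continuous and requires a depolarized instantaneous voltage u, a faster
# voltage filter u_plus, and a presynaptic spike trace x_trace.

def stdp_step(w, pre_spike, x_trace, u, u_minus, u_plus,
              A_ltd=1e-4, A_ltp=1e-4, theta_minus=-70.0, theta_plus=-45.0,
              dt=1e-4):
    """One weight update; voltages in mV, w dimensionless (placeholder units)."""
    ltd = A_ltd * pre_spike * max(u_minus - theta_minus, 0.0)
    ltp = A_ltp * x_trace * max(u - theta_plus, 0.0) * max(u_plus - theta_minus, 0.0)
    return w - ltd + dt * ltp

# A depolarized postsynaptic neuron with a recent presynaptic spike trace
# potentiates the synapse; a presynaptic spike at modest postsynaptic
# depolarization depresses it.
w_up = stdp_step(0.5, pre_spike=0, x_trace=1.0, u=-40.0, u_minus=-70.0, u_plus=-50.0)
w_down = stdp_step(0.5, pre_spike=1, x_trace=0.0, u=-70.0, u_minus=-60.0, u_plus=-70.0)
print(w_up > 0.5, w_down < 0.5)
```

Because both terms depend only on the local voltage and presynaptic activity, the rule is local in space and time, in contrast to FORCE training or backpropagation through time.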

Network dynamics

Recurrent network

A network of excitatory and inhibitory neurons is homogeneously recurrently connected with a fixed connection probability (see Table 1). The weights are initialized such that the network is balanced.

Read-out neurons

The excitatory neurons of the recurrent network are all-to-all connected to the excitatory read-out (R) neurons. During learning, the read-out neurons receive supervisory input from excitatory supervisor (S) neurons. The connection from supervisor neurons to read-out neurons is one-to-one and fixed. Interneurons (H) are one-to-one and recurrently connected to the read-out neurons with fixed connection strengths (see Table 1 for the recurrent network and read-out parameters). As described above, the connections from the recurrent network to the read-out neurons are plastic.
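The initial connectivity can be sketched as follows. Sizes, probability and initial strengths are illustrative placeholders, not the paper's values (those are in Table 1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse random recurrent E-to-E connectivity plus all-to-all feedforward
# weights from the excitatory neurons to the read-out neurons.
n_exc, n_readout, p = 400, 3, 0.2   # hypothetical sizes and probability
w_ee_init, w_er_init = 0.1, 0.05    # placeholder initial strengths

mask = rng.random((n_exc, n_exc)) < p
np.fill_diagonal(mask, False)               # no self-connections
W_ee = w_ee_init * mask                     # sparse recurrent E-to-E weights
W_er = np.full((n_readout, n_exc), w_er_init)  # all-to-all E-to-R (plastic)

print(W_ee.shape, W_er.shape)
```

Both matrices then evolve under the plasticity rules: `W_ee` forms the clustered feedforward backbone, while `W_er` stores the target sequence.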

Constant Value Description
Number of recurrent E neurons
Number of recurrent I neurons
Recurrent network connection probability
p Initial E to E synaptic strength
p E to I synaptic strength
p Initial I to E synaptic strength
p I to I synaptic strength
p Initial E to R synaptic strength
p S to R synaptic strength
p H to R synaptic strength
p R to H synaptic strength
Table 1: Initialization of network

Membrane potential dynamics

The membrane potential of the excitatory neurons () in the recurrent network has the following dynamics:

(1)

where is the membrane time constant, is the reversal potential, is the slope of the exponential, is the capacitance, are synaptic input from excitatory and inhibitory neurons respectively and are the excitatory and inhibitory reversal potentials respectively. When the membrane potential diverges and exceeds mV, the neuron fires a spike and the membrane potential is reset to . This reset potential is the same for all neurons in the model. There is an absolute refractory period of . The parameter is adaptive for excitatory neurons and set to after a spike, relaxing back to with a time constant :

(2)

The adaptation current for recurrent excitatory neurons follows:

(3)

where is the time constant for the adaptation current and is the slope. The adaptation current is increased with a constant when the neuron spikes. The membrane potential of the read-out () neurons has no adaptation current:

\frac{dV_i}{dt} = \frac{(E_L^E - V_i) + \Delta_T \exp\!\left(\frac{V_i - V_T^i}{\Delta_T}\right)}{\tau_E} + \frac{\left(g_i^{ER}(t) + g_i^{SR}(t)\right)(E^E - V_i) + g_i^{HR}(t)\,(E^I - V_i)}{C} \qquad (4)

where τ_E, E_L^E, Δ_T, C, E^E and E^I are as defined before. g_i^{ER}(t) is the excitatory input from the recurrent network, g_i^{SR}(t) is the excitatory input from the supervisor neuron (non-zero only during learning), and g_i^{HR}(t) is the inhibitory input from the interneuron (non-zero only during learning). The absolute refractory period is given in table 2. The threshold follows the same dynamics as in equation (2), with the same parameters. The membrane potential of the supervisor neurons (S) has no inhibitory input and no adaptation current:

\frac{dV_i}{dt} = \frac{(E_L^E - V_i) + \Delta_T \exp\!\left(\frac{V_i - V_T^i}{\Delta_T}\right)}{\tau_E} + \frac{g_i^{ext}(t)\,(E^E - V_i)}{C} \qquad (5)

where the constant parameters are defined as before and g_i^{ext}(t) is the external excitatory input encoding the target sequence. The absolute refractory period is given in table 2. The threshold again follows the same dynamics as in equation (2), with the same parameters. The membrane potential of the inhibitory neurons (I) in the recurrent network has the following dynamics:

\frac{dV_i}{dt} = \frac{E_L^I - V_i}{\tau_I} + \frac{g_i^E(t)\,(E^E - V_i) + g_i^I(t)\,(E^I - V_i)}{C} \qquad (6)

where τ_I is the inhibitory membrane time constant, E_L^I is the inhibitory resting potential, and E^E and E^I are the excitatory and inhibitory reversal potentials respectively. g_i^E(t) and g_i^I(t) are the synaptic inputs from recurrent excitatory and inhibitory neurons respectively. Inhibitory neurons spike when the membrane potential crosses the threshold, which is non-adaptive. After this, there is an absolute refractory period (table 2). There is no adaptation current. The membrane potential of the interneurons (H) follows the same dynamics with the same parameters, but there is no inhibitory input:

\frac{dV_i}{dt} = \frac{E_L^I - V_i}{\tau_I} + \frac{\left(g_i^{RH}(t) + g_i^{ext}(t)\right)(E^E - V_i)}{C} \qquad (7)

where the excitatory input comes both from the read-out neuron the interneuron is attached to and from external input. After the threshold is crossed, the interneuron spikes and an absolute refractory period follows (table 2). The interneurons inhibit the read-out neurons more strongly when they receive stronger inputs from the read-out neurons. This slows down the potentiation of the read-out synapses and prevents the synapses from potentiating exponentially (see table 2 for the parameters of the membrane dynamics).

Constant Value Description
m E membrane potential time constant
m I membrane potential time constant
m Refractory period of E and I neurons
m R neurons refractory period
m S neurons refractory period
m H neurons refractory period
m excitatory reversal potential
m inhibitory reversal potential
m excitatory resting potential
m inhibitory resting potential
m Reset potential (for all neurons the same)
p Capacitance
m Exponential slope
m Adaptive threshold time constant
m Membrane potential threshold
m Adaptive threshold increase constant
m Adaptation current time constant
n Slope adaptation current of recurrent network neurons
p Adaptation current increase constant of recurrent network neurons
Table 2: Neuronal membrane dynamics parameters
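To make the integration scheme concrete, here is a minimal single-neuron sketch of the adaptive exponential dynamics above (membrane potential, adaptive threshold and adaptation current) integrated with forward Euler. All parameter values are illustrative placeholders, not the values of table 2, and the refractory period is omitted for brevity.

```python
import math

# Illustrative placeholder parameters (NOT the values of table 2)
TAU_E = 20.0                     # membrane time constant (ms)
E_L = -70.0                      # resting potential (mV)
DELTA_T = 2.0                    # slope of the exponential (mV)
C = 300.0                        # capacitance (pF)
E_EXC, E_INH = 0.0, -75.0        # excitatory/inhibitory reversal potentials (mV)
V_RESET = -65.0                  # reset potential (mV)
VT_REST, VT_MAX = -52.0, -40.0   # adaptive threshold: resting and post-spike values (mV)
TAU_VT = 30.0                    # threshold relaxation time constant (ms)
TAU_A, BETA, B_SPIKE = 100.0, 4.0, 80.0  # adaptation: time constant (ms), slope (nS), spike increment (pA)
DT = 0.1                         # forward Euler time step (ms)

def step(v, vt, a, g_exc, g_inh):
    """One forward Euler step of membrane potential, threshold and adaptation."""
    exp_term = DELTA_T * math.exp(min((v - vt) / DELTA_T, 20.0))  # clipped for numerical safety
    dv = (E_L - v + exp_term) / TAU_E \
         + (g_exc * (E_EXC - v) + g_inh * (E_INH - v) - a) / C
    dvt = (VT_REST - vt) / TAU_VT            # threshold relaxes back to rest
    da = (BETA * (v - E_L) - a) / TAU_A      # subthreshold adaptation dynamics
    v, vt, a = v + DT * dv, vt + DT * dvt, a + DT * da
    if v > 20.0:                             # diverging potential -> spike
        return V_RESET, VT_MAX, a + B_SPIKE, True
    return v, vt, a, False

# Drive the neuron with a constant excitatory conductance and count spikes
v, vt, a, spikes = E_L, VT_REST, 0.0, 0
for _ in range(int(500 / DT)):               # 500 ms of simulated time
    v, vt, a, spiked = step(v, vt, a, g_exc=10.0, g_inh=0.0)
    spikes += spiked
print(spikes)
```

Because each spike both raises the threshold and increments the adaptation current, the firing rate adapts over time, as in the model neurons above.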

Synaptic dynamics

The synaptic conductance of a neuron is time dependent: it is a convolution of a kernel K with the total spiking input to the neuron:

g_i^{XY}(t) = K * \left( \sum_j W_{ij}^{XY}\, s_j^Y(t) \right) \qquad (8)

where X and Y denote two neuron types in the model (E, I, R, S or H) and s_j^Y(t) is the spike train of presynaptic neuron j. K is the difference-of-exponentials kernel, K(t) = \frac{e^{-t/\tau_{decay}} - e^{-t/\tau_{rise}}}{\tau_{decay} - \tau_{rise}}, with a decay time τ_{decay} and a rise time τ_{rise} that depend only on whether the presynaptic neuron is excitatory or inhibitory. There is no external inhibitory input to the supervisor and interneurons. During spontaneous activity, the recurrent network receives no external inhibitory input and the external excitatory input has a fixed rate. The external input to the interneurons also has a fixed rate during learning. The external input to the supervisor neurons depends on the specific learning task. There is no external input to the read-out neurons. The externally incoming spike trains are generated from Poisson processes and enter the network through fixed synapses (see table 3 for the parameters of the synaptic dynamics).

Constant Value Description
m E decay time constant
m E rise time constant
m I decay time constant
m I rise time constant
p External input synaptic strength to E neurons
k Rate of external input to E neurons
p External input synaptic strength to I neurons
k Rate of external input to I neurons
p External input synaptic strength to S neurons
p External input synaptic strength to H neurons
k Rate of external input to H neurons
Table 3: Synaptic dynamics parameters
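As a sketch of equation 8, the conductance can be written as an explicit convolution of the difference-of-exponentials kernel with a weighted spike train. The time constants below are placeholders, not the values of table 3; the kernel is normalized to unit area, so each unit-weight spike contributes unit conductance-time.

```python
import math

TAU_DECAY, TAU_RISE = 6.0, 1.0   # placeholder decay/rise times (ms)

def kernel(t):
    """Difference-of-exponentials synaptic kernel K(t), defined for t >= 0."""
    return (math.exp(-t / TAU_DECAY) - math.exp(-t / TAU_RISE)) / (TAU_DECAY - TAU_RISE)

def conductance(spike_times, weights, t):
    """g(t): the kernel convolved with a weighted presynaptic spike train."""
    return sum(w * kernel(t - ts) for ts, w in zip(spike_times, weights) if t >= ts)

# Check the unit-area normalization numerically (integrate out to 100 ms)
area = sum(kernel(i * 0.01) * 0.01 for i in range(10000))
print(round(area, 3))
```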

Plasticity

Excitatory plasticity

The voltage-based STDP rule is used [Clopath et al., 2010]. The synaptic weight W_{ij} from excitatory neuron j to excitatory neuron i changes according to the following differential equation:

\frac{dW_{ij}}{dt} = -A_{LTD}\, X_j(t)\, \left[\bar{u}_i^-(t) - \theta_-\right]_+ + A_{LTP}\, \bar{x}_j(t)\, \left[u_i(t) - \theta_+\right]_+ \left[\bar{u}_i^+(t) - \theta_-\right]_+ \qquad (9)

A_{LTD} and A_{LTP} are the amplitudes of depression and potentiation respectively. θ_- and θ_+ are the voltage thresholds to recruit depression and potentiation respectively, and [y]_+ = y H(y), where H denotes the Heaviside function. u_i is the postsynaptic membrane potential, and \bar{u}_i^- and \bar{u}_i^+ are low-pass filtered versions of u_i, with time constants τ_- and τ_+ respectively:

\tau_- \frac{d\bar{u}_i^-}{dt} = u_i - \bar{u}_i^- \qquad (10)
\tau_+ \frac{d\bar{u}_i^+}{dt} = u_i - \bar{u}_i^+ \qquad (11)

X_j is the presynaptic spike train and \bar{x}_j is the low-pass filtered version of X_j with time constant τ_x:

\tau_x \frac{d\bar{x}_j}{dt} = X_j - \bar{x}_j \qquad (12)

where the time constant τ_x depends on whether learning happens inside (E to E) or outside (E to R) the recurrent network. X_j(t) = 1 if neuron j spikes at time t and zero otherwise. Competition between synapses is created in the recurrent network. This is done by a hard normalization at fixed intervals, keeping the sum of all weights onto a neuron constant: \sum_j W_{ij} = const. E to E weights have a lower bound W_{min}^{EE} and an upper bound W_{max}^{EE}. The minimum and maximum strengths are important parameters and determine the position of the dominant eigenvalues of the recurrent weight matrix. Potentiation of the read-out synapses is weight dependent. Assuming that stronger synapses are harder to potentiate [Debanne et al., 1999], A_{LTP} decreases linearly with W_{ij}:

A_{LTP}(W_{ij}) = A_{LTP}^{max}\, \frac{W_{max}^{ER} - W_{ij}}{W_{max}^{ER} - W_{min}^{ER}} \qquad (13)

The maximum LTP amplitude A_{LTP}^{max} is reached when W_{ij} = W_{min}^{ER} (see table 4 for the parameters of the excitatory plasticity rule).

Constant Value Description
p m LTD amplitude
p m LTP amplitude
m LTD threshold
m LTP threshold
m Time constant of low pass filtered membrane potential (LTD)
m Time constant of low pass filtered membrane potential (LTP)
m Time constant of low pass filtered spike train in recurrent network (LTP)
m Time constant of low pass filtered spike train for read-out synapses (LTP)
p Minimum E to E weight
p Maximum E to E weight
p Minimum E to R weight
p Maximum E to R weight
Table 4: Excitatory plasticity parameters
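A discrete-time sketch of the voltage-based rule (equations 9-13), including the low-pass filters, hard bounds and the weight-dependent LTP amplitude, is given below. Amplitudes, thresholds and time constants are placeholders, not the values of table 4.

```python
DT = 0.1                                      # Euler step (ms)
A_LTD, A_LTP_MAX = 1e-4, 1e-4                 # placeholder plasticity amplitudes
THETA_MINUS, THETA_PLUS = -65.0, -45.0        # placeholder voltage thresholds (mV)
TAU_MINUS, TAU_PLUS, TAU_X = 10.0, 7.0, 15.0  # filter time constants (ms)
W_MIN, W_MAX = 0.0, 30.0                      # hard weight bounds

def relu(x):                                  # the rectification [y]_+
    return x if x > 0.0 else 0.0

def a_ltp(w):
    """Weight-dependent LTP amplitude: maximal at W_MIN, decreasing linearly."""
    return A_LTP_MAX * (W_MAX - w) / (W_MAX - W_MIN)

def stdp_step(w, u, ubar_m, ubar_p, xbar, pre_spike):
    """One Euler step of the plasticity rule and its low-pass filters."""
    dw = -A_LTD * pre_spike * relu(ubar_m - THETA_MINUS)   # depression at presynaptic spikes
    dw += DT * a_ltp(w) * xbar * relu(u - THETA_PLUS) * relu(ubar_p - THETA_MINUS)  # potentiation
    w = min(max(w + dw, W_MIN), W_MAX)
    ubar_m += DT * (u - ubar_m) / TAU_MINUS                # slow voltage trace
    ubar_p += DT * (u - ubar_p) / TAU_PLUS                 # fast voltage trace
    xbar += -DT * xbar / TAU_X + pre_spike / TAU_X         # filtered presynaptic spike train
    return w, ubar_m, ubar_p, xbar

# Pairing a depolarized postsynaptic neuron with regular presynaptic spikes
w, ubar_m, ubar_p, xbar = 10.0, -70.0, -70.0, 0.0
for i in range(2000):                          # 200 ms
    pre = 1.0 if i % 10 == 0 else 0.0          # one presynaptic spike per ms
    w, ubar_m, ubar_p, xbar = stdp_step(w, -40.0, ubar_m, ubar_p, xbar, pre)
print(round(w, 2))
```

With the postsynaptic voltage held above both thresholds, potentiation dominates and the weight grows toward, but never beyond, the hard upper bound.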

Inhibitory plasticity

Inhibitory plasticity acts as a homeostatic mechanism, preventing runaway dynamics [Morrison et al., 2007, Vogels et al., 2011, Litwin-Kumar and Doiron, 2014, Zenke et al., 2015]. It is not necessary for the stability of the sequential dynamics, but it is necessary for slow-switching dynamics (Fig S2). Excitatory neurons that fire at a higher frequency receive more inhibition. The I to E weights W_{ij} are changed when the presynaptic inhibitory neuron or the postsynaptic excitatory neuron fires [Vogels et al., 2011]:

W_{ij} \rightarrow W_{ij} + \eta\,(\bar{y}_i - 2\,r_0\,\tau_y) \quad \text{when presynaptic neuron } j \text{ spikes} \qquad (14)
W_{ij} \rightarrow W_{ij} + \eta\,\bar{x}_j \quad \text{when postsynaptic neuron } i \text{ spikes} \qquad (15)

where r_0 is a constant target rate for the postsynaptic excitatory neuron and η is the amplitude of the plasticity. Y_i and X_j are the spike trains of the postsynaptic and presynaptic neuron respectively. The spike trains are low-pass filtered with time constant τ_y to obtain \bar{y}_i and \bar{x}_j (as in equation 12). Table 5 shows the parameter values for the inhibitory plasticity rule. The I to E synapses have a lower bound W_{min}^{IE} and an upper bound W_{max}^{IE}.

Constant Value Description
Amplitude of inhibitory plasticity
Target firing rate
m Time constant of low pass filtered spike train
p Minimum I to E weight
p Maximum I to E weight
Table 5: Inhibitory plasticity parameters
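A sketch of the symmetric inhibitory rule (equations 14 and 15): presynaptic inhibitory spikes change the weight in proportion to the postsynaptic trace relative to the target, and postsynaptic spikes always potentiate. Parameters are placeholders, not the values of table 5.

```python
DT = 0.1                   # Euler step (ms)
ETA = 1e-3                 # placeholder amplitude of inhibitory plasticity
R0 = 3.0 / 1000.0          # placeholder target rate (spikes/ms, i.e. 3 Hz)
TAU_Y = 20.0               # low-pass filter time constant (ms)
W_MIN, W_MAX = 0.0, 50.0   # hard weight bounds

def inhib_step(w, xbar, ybar, pre_spike, post_spike):
    """One step of inhibitory plasticity with a postsynaptic rate set point."""
    if pre_spike:          # presynaptic (I) spike: sign depends on trace vs target
        w += ETA * (ybar - 2.0 * R0 * TAU_Y)
    if post_spike:         # postsynaptic (E) spike: always potentiates
        w += ETA * xbar
    w = min(max(w, W_MIN), W_MAX)
    xbar += -DT * xbar / TAU_Y + pre_spike / TAU_Y   # filtered presynaptic train
    ybar += -DT * ybar / TAU_Y + post_spike / TAU_Y  # filtered postsynaptic train
    return w, xbar, ybar

# A postsynaptic neuron firing far above the target rate recruits more inhibition
w, xbar, ybar = 10.0, 0.0, 0.0
for i in range(5000):                                # 500 ms
    pre = 1.0 if i % 50 == 0 else 0.0                # I spike every 5 ms
    post = 1.0 if i % 10 == 0 else 0.0               # E spike every 1 ms
    w, xbar, ybar = inhib_step(w, xbar, ybar, pre, post)
w_fast = w

# A silent postsynaptic neuron loses inhibition
w, xbar, ybar = 10.0, 0.0, 0.0
for i in range(5000):
    pre = 1.0 if i % 50 == 0 else 0.0
    w, xbar, ybar = inhib_step(w, xbar, ybar, pre, 0.0)
w_silent = w
print(round(w_fast, 3), round(w_silent, 3))
```

The set point 2 r_0 τ_y makes the weight grow when the postsynaptic rate exceeds the target and shrink when the neuron is too quiet, which is the homeostatic action described above.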

Learning protocol

Learning happens in two stages. First, a feedforward structure that produces reliable sequential dynamics is embedded in the recurrent network. Once these dynamics are established, the connections to the read-out neurons can be learned. Read-out neurons are not interconnected and can learn in parallel.

Recurrent network

The network is divided into disjoint clusters of neurons. The clusters are sequentially stimulated for several minutes by a large external current, where the externally incoming spikes are drawn from a high-rate Poisson process. This high input rate does not originate from a single external neuron but rather assumes a large external input population. Each cluster is stimulated for a short window, with gaps in between cluster stimulations. During excitatory stimulation of a cluster, all other clusters receive an external inhibitory input. There is a periodic boundary condition: the last cluster activates the first cluster again. After the sequential stimulation, the network is spontaneously active for several minutes, and the connectivity stabilizes during these spontaneous dynamics. The recurrent weight matrix of the large network (Fig 5) is learned using the same protocol; it reaches a stable structure after three hours of sequential stimulation followed by three hours of spontaneous dynamics. Parameters that change are summarized in table 6. For slow-switching dynamics, a similar protocol is followed, with minor changes (Fig S2). The weight matrix obtained after this training is used to plot the spectrum of the recurrent network in Figs 2 and S3.

Constant Value Description
Number of recurrent E neurons
Number of recurrent I neurons
p baseline E to E synaptic strength
p E to I synaptic strength
p Initial I to E synaptic strength
p I to I synaptic strength
p Minimum E to E weight
p Maximum E to E weight
p Minimum I to E weight
p Maximum I to E weight
p Maximum E to R weight
Table 6: Large recurrent network changed parameters
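The sequential stimulation schedule of the protocol (cluster windows separated by gaps, with a periodic boundary) can be sketched as follows; the cluster count and timings below are placeholders, not the paper's values.

```python
def stimulation_schedule(n_clusters, stim_ms, gap_ms, total_ms):
    """Sequential cluster stimulation with gaps and a periodic boundary:
    returns a list of (t_start, t_end, cluster_id) windows."""
    windows, t, c = [], 0.0, 0
    while t + stim_ms <= total_ms:
        windows.append((t, t + stim_ms, c))
        t += stim_ms + gap_ms
        c = (c + 1) % n_clusters        # the last cluster wraps to the first
    return windows

def active_cluster(windows, t):
    """Cluster receiving excitatory drive at time t (None during gaps); in the
    protocol, all other clusters would receive the external inhibitory input."""
    for t0, t1, c in windows:
        if t0 <= t < t1:
            return c
    return None

# Placeholder example: 3 clusters, 10 ms stimulation windows, 5 ms gaps
windows = stimulation_schedule(3, 10.0, 5.0, 100.0)
print([c for _, _, c in windows])  # -> [0, 1, 2, 0, 1, 2, 0]
```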

Read-out network

During learning of the read-out synapses, external input drives the supervisor and interneurons. The rate of the external Poisson input to the supervisor neurons reflects the sequence that has to be learned; the rate is normalized between a minimum and a maximum value. During learning, the read-out weight matrix changes. After learning, the external input to the supervisor and interneurons is turned off and both stop firing. The read-out neurons are then driven solely by the recurrent network. Plasticity in the read-out synapses is frozen after learning: with plasticity on during spontaneous dynamics, the read-out synapses would continue to potentiate because of the coactivation of clusters in the recurrent network and read-out neurons, leading to read-out synapses that all saturate at the upper weight bound.
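A small sketch of how a target signal could be mapped onto supervisor Poisson rates normalized between a minimum and a maximum; `supervisor_rates` is a hypothetical helper, not a function from the paper's code.

```python
def supervisor_rates(signal, r_min, r_max):
    """Linearly map an arbitrary target signal onto Poisson rates in
    [r_min, r_max] (hypothetical helper; rate units follow r_min/r_max)."""
    lo, hi = min(signal), max(signal)
    if hi == lo:                                   # flat signal: use the midpoint rate
        return [0.5 * (r_min + r_max)] * len(signal)
    return [r_min + (s - lo) * (r_max - r_min) / (hi - lo) for s in signal]

# Example target trajectory mapped to rates between 1 and 9 (arbitrary units)
print(supervisor_rates([0.0, 5.0, 10.0], 1.0, 9.0))  # -> [1.0, 5.0, 9.0]
```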

Simulations

The code used for the training and simulation of the recurrent network is built on top of the code from [Litwin-Kumar and Doiron, 2014] in Julia. The code used for learning spatiotemporal sequences using read-out neurons is written in Matlab. Recurrent networks learned in Julia can be saved and loaded into the Matlab code, or networks with a manually set connectivity can be explored as in [Schaub et al., 2015]. Forward Euler discretization with a fixed time step (ms) is used. The code is available for reviewers on ModelDB with access code XYZ and password XYZ. The code will be made public on ModelDB after publication.

Acknowledgements

References

  • [Abbott et al., 2016] Abbott, L. F., DePasquale, B., and Memmesheimer, R. M. (2016). Building functional networks of spiking model neurons. Nature Neuroscience, 19(3):350–355.
  • [Abeles, 1991] Abeles, M. (1991). Corticonics: Neural circuits of the cerebral cortex. Cambridge University Press.
  • [Adler et al., 2019] Adler, A., Zhao, R., Shin, M. E., Yasuda, R., and Gan, W.-B. (2019). Somatostatin-Expressing Interneurons Enable and Maintain Learning-Dependent Sequential Activation of Pyramidal Neurons. Neuron, 102:1–15.
  • [Billeh and Schaub, 2018] Billeh, Y. N. and Schaub, M. T. (2018). Feedforward architectures driven by inhibitory interactions. Journal of Computational Neuroscience, 44(1):63–74.
  • [Brea et al., 2013] Brea, J., Senn, W., and Pfister, J.-P. (2013). Matching Recall and Storage in Sequence Learning with Spiking Neural Networks. Journal of Neuroscience, 33(23):9565–9575.
  • [Chenkov et al., 2017] Chenkov, N., Sprekeler, H., and Kempter, R. (2017). Memory replay in balanced recurrent networks. PLoS Computational Biology, 13(1):e1005359.
  • [Clopath et al., 2010] Clopath, C., Büsing, L., Vasilaki, E., and Gerstner, W. (2010). Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nature Neuroscience, 13(3):344–352.
  • [Debanne et al., 1999] Debanne, D., Gähwiler, B. H., and Thompson, S. M. (1999). Heterogeneity of Synaptic Plasticity at Unitary CA3–CA1 and CA3–CA3 Connections in Rat Hippocampal Slice Cultures. The Journal of Neuroscience, 19(24):10664–10671.
  • [Fiete et al., 2010] Fiete, I. R., Senn, W., Wang, C. Z., and Hahnloser, R. H. (2010). Spike-Time-Dependent Plasticity and Heterosynaptic Competition Organize Networks to Produce Long Scale-Free Sequences of Neural Activity. Neuron, 65(4):563–576.
  • [Geddes et al., 2018] Geddes, C. E., Li, H., and Jin, X. (2018). Optogenetic Editing Reveals the Hierarchical Organization of Learned Action Sequences. Cell, 174(1):32–43.e15.
  • [Gibbon, 1977] Gibbon, J. (1977). Scalar expectancy theory and Weber’s law in animal timing. Psychological Review, 84(3):279–325.
  • [Gilra and Gerstner, 2017] Gilra, A. and Gerstner, W. (2017). Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network. Elife, 6(e28295):1–38.
  • [Glaze and Troyer, 2006] Glaze, C. M. and Troyer, T. W. (2006). Temporal Structure in Zebra Finch Song: Implications for Motor Coding. Journal of Neuroscience, 26(3):991–1005.
  • [Hahnloser et al., 2002] Hahnloser, R. H., Kozhevnikov, A. A., and Fee, M. S. (2002). An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature, 419(6902):65–70.
  • [Hardy and Buonomano, 2018] Hardy, N. and Buonomano, D. (2018). Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model. Neural Computation, 30(2):378–396.
  • [Harvey et al., 2012] Harvey, C. D., Coen, P., and Tank, D. W. (2012). Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature, 484(7392):62–68.
  • [Itskov et al., 2011] Itskov, V., Curto, C., Pastalkova, E., and Buzsaki, G. (2011). Cell Assembly Sequences Arising from Spike Threshold Adaptation Keep Track of Time in the Hippocampus. Journal of Neuroscience, 31(8):2828–2834.
  • [Jaeger et al., 2007] Jaeger, H., Maass, W., and Principe, J. (2007). Special issue on echo state networks and liquid state machines. Neural Networks, 20(3):287–289.
  • [Jin and Costa, 2015] Jin, X. and Costa, R. M. (2015). Shaping action sequences in basal ganglia circuits. Current Opinion in Neurobiology, 33:188–196.
  • [Jun et al., 2017] Jun, J. J., Steinmetz, N. A., Siegle, J. H., Denman, D. J., Bauza, M., Barbarits, B., Lee, A. K., Anastassiou, C. A., Andrei, A., Aydin, Ç., Barbic, M., Blanche, T. J., Bonin, V., Couto, J., Dutta, B., Gratiy, S. L., Gutnisky, D. A., Häusser, M., Karsh, B., Ledochowitsch, P., Lopez, C. M., Mitelut, C., Musa, S., Okun, M., Pachitariu, M., Putzeys, J., Rich, P. D., Rossant, C., Sun, W. L., Svoboda, K., Carandini, M., Harris, K. D., Koch, C., O’Keefe, J., and Harris, T. D. (2017). Fully integrated silicon probes for high-density recording of neural activity. Nature, 551(7679):232–236.
  • [Ko et al., 2013] Ko, H., Cossell, L., Baragli, C., Antolik, J., Clopath, C., Hofer, S. B., and Mrsic-Flogel, T. D. (2013). The emergence of functional microcircuits in visual cortex. Nature, 496(7443):96–100.
  • [Laje and Buonomano, 2013] Laje, R. and Buonomano, D. V. (2013). Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature neuroscience, 16(7):925–33.
  • [Lashley, 1951] Lashley, K. S. (1951). The Problem of Serial Order in Behavior. Cerebral Mechanisms in Behavior, 21:112–146.
  • [Leonardo and Fee, 2005] Leonardo, A. and Fee, M. S. (2005). Ensemble Coding of Vocal Control in Birdsong. Journal of Neuroscience, 25(3):652–661.
  • [Litwin-Kumar and Doiron, 2014] Litwin-Kumar, A. and Doiron, B. (2014). Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications, 5(May):1–12.
  • [Maass, 2016] Maass, W. (2016). Searching for principles of brain computation. Current Opinion in Behavioral Sciences, 11:81–92.
  • [Nicola and Clopath, 2017] Nicola, W. and Clopath, C. (2017). Supervised learning in spiking neural networks with FORCE training. Nature Communications, 8(1):1–15.
  • [Okubo et al., 2015] Okubo, T. S., Mackevicius, E. L., Payne, H. L., Lynch, G. F., and Fee, M. S. (2015). Growth and splitting of neural sequences in songbird vocal development. Nature, 528(7582):352–357.
  • [Pastalkova et al., 2008] Pastalkova, E., Itskov, V., Amarasingham, A., and Buzsáki, G. (2008). Internally generated cell assembly sequences in the rat hippocampus. Science, 321(5894):1322–1327.
  • [Peters et al., 2014] Peters, A. J., Chen, S. X., and Komiyama, T. (2014). Emergence of reproducible spatiotemporal activity during motor learning. Nature, 510(7504):263–267.
  • [Rajan et al., 2016] Rajan, K., Harvey, C. D., and Tank, D. W. (2016). Recurrent Network Models of Sequence Generation and Memory. Neuron, 90(1):128–142.
  • [Raman et al., 2019] Raman, D. V., Rotondo, A. P., and O’Leary, T. (2019). Fundamental bounds on learning performance in neural circuits. Proceedings of the National Academy of Sciences, 201813416.
  • [Rhodes et al., 2004] Rhodes, B. J., Bullock, D., Verwey, W. B., Averbeck, B. B., and Page, M. P. (2004). Learning and production of movement sequences: Behavioral, neurophysiological, and modeling perspectives. Human Movement Science, 23(5):699–746.
  • [Morrison et al., 2007] Morrison, A., Aertsen, A., and Diesmann, M. (2007). Spike-timing dependent plasticity in balanced random networks. Neural Computation, 19:1437–1467.
  • [Sakai et al., 2003] Sakai, K., Kitaguchi, K., and Hikosaka, O. (2003). Chunking during human visuomotor sequence learning. Experimental Brain Research, 152(2):229–242.
  • [Schaub et al., 2015] Schaub, M. T., Billeh, Y. N., Anastassiou, C. A., Koch, C., and Barahona, M. (2015). Emergence of Slow-Switching Assemblies in Structured Neuronal Networks. PLoS Computational Biology, 11(7):1–28.
  • [Setareh et al., 2018] Setareh, H., Deger, M., and Gerstner, W. (2018). Excitable neuronal assemblies with adaptation as a building block of brain circuits for velocity-controlled signal propagation. PLoS Computational Biology, 14(7):e1006216.
  • [Sussillo and Abbott, 2009] Sussillo, D. and Abbott, L. F. (2009). Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron, 63(4):544–557.
  • [Tanji, 2001] Tanji, J. (2001). Sequential Organization of Multiple Movements : Involvement of Cortical Motor Areas. Annual Review of Neuroscience, 24:631–651.
  • [Tully et al., 2016] Tully, P. J., Lindén, H., Hennig, M. H., and Lansner, A. (2016). Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Computational Biology, 12(5).
  • [Vogels et al., 2011] Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C., and Gerstner, W. (2011). Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science, 334(December 2011):1569–1573.
  • [Zenke et al., 2015] Zenke, F., Agnes, E. J., and Gerstner, W. (2015). Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nature Communications, 6:1–13.

Supporting information

Linear rate model: analysis of the spectrum

A linear rate model can give insight into the dynamics of a large nonlinear structured spiking network [Schaub et al., 2015]. Sequential dynamics in a linear rate network can emerge from the same feedforward structure as studied in this paper. The dynamics of a simplified rate model follows:

\tau \frac{d\mathbf{x}}{dt} = -\mathbf{x} + W\mathbf{x} + \boldsymbol{\xi}(t) \qquad (16)

where \mathbf{x} is a multidimensional variable consisting of the rates of all excitatory and inhibitory groups, W is the weight matrix and \boldsymbol{\xi} is noise. A minimal model for sequential firing consists of three groups of excitatory neurons and one group of inhibitory neurons (Fig S1). We are interested in a general connectivity structure:

(17)

where the connection strengths are chosen with a feedforward asymmetry for sequential dynamics and with inhibition scaled for balance. The Schur decomposition gives both the eigenvalues and an orthonormal basis for the eigenvectors:

(18)
(19)

where the basis for the complex conjugate eigenvalues is a plane and can be rotated. The first mode decays fast and uniformly over the different neuronal groups. The second mode decays more slowly and reflects the interplay between the excitatory and inhibitory groups. Modes three and four span the eigenspace of the complex conjugate pair. This space is localized on the three excitatory groups: it shows that an increase of activity in one group is coupled with decreased activity in the other groups. The real part must be smaller than one for linear stability; the closer it is to one, the slower this mode decays. Importantly, the imaginary part depends linearly on the amount of feedforward structure (Fig S1A). This leads to an oscillation and essentially determines the frequency of sequential switching.
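The linear scaling of the imaginary part with the feedforward asymmetry can be checked on the excitatory part of the connectivity alone, which is a 3×3 circulant matrix with closed-form eigenvalues; the inhibitory group is omitted here for brevity and the weights are arbitrary placeholders.

```python
import cmath
import math

def circulant_eigs(a, b, c):
    """Eigenvalues of the 3x3 circulant matrix circ(a, b, c): a is the within-group
    weight, b the forward (clockwise) and c the backward (anticlockwise) weight."""
    omega = cmath.exp(2j * math.pi / 3)   # primitive cube root of unity
    return [a + b * omega**k + c * omega**(2 * k) for k in range(3)]

# The complex pair's imaginary part is (sqrt(3)/2)(b - c): linear in the
# feedforward asymmetry, which sets the sequential switching frequency.
for asym in (0.0, 0.1, 0.2):
    eigs = circulant_eigs(0.5, 0.3 + asym, 0.3 - asym)
    imag = max(abs(e.imag) for e in eigs)
    print(round(imag, 6), round(math.sqrt(3) * asym, 6))
```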

Figure S1: (A.) Cartoon of simple linear model, the cyclic connections are stronger clockwise than anticlockwise. (B.) Eigenvalue spectrum shows a conjugate complex eigenvalue pair. The imaginary part scales linearly with .
Figure S2: Slow-switching dynamics. The recurrent network is stimulated with external input that is spatially clustered but temporally uncorrelated. Each cluster is stimulated for a short window, with gaps in between stimulations. The rate of external stimulation during training is fixed and there is no simultaneous inhibition of other clusters. This is repeated for several minutes, after which the network stabilizes during spontaneous activity. (A.) A diagonal structure is embedded in the recurrent network. Since there are no temporal correlations in the external input, there is no off-diagonal structure. (B.) The spectrum shows an eigenvalue gap. This indicates the emergence of a slower time scale. The leading eigenvalues have no imaginary part, pointing to the absence of feedforward structure, and thus there is no sequential dynamics. (C.) Under a regime of spontaneous dynamics (i.e. uncorrelated Poisson inputs), the clusters are randomly reactivated. The high adaptation means a neuron fires only once during each reactivation.
Figure S3: The feedforward embedding is stable. After minutes of training (i.e. sequential stimulation as described in the methods), the network stabilizes during spontaneous activity. During the first minutes of spontaneous dynamics, the connectivity still changes. More specifically, the imaginary parts of the leading eigenvalues increase. This leads to a higher switching frequency and as such a smaller period in the sequential activity. After around minutes, a fixed point is reached. The first row shows the spectra of the weight matrix at different times. The second row shows the spike trains at those times, for one second of spontaneous activity.
Figure S4: The sequence is relearned four times, for the same duration each time. Before relearning, the read-out weight matrix was always reset. When active, read-out neurons fire one to two spikes on average. This variability is a consequence of the noisy learning process.
Figure S5: The dynamics of the recurrent network is forced to overlap. Excitatory neurons are grouped in clusters. Whereas in the trained networks each cluster is discretely active, here their activity overlaps: neurons in a cluster can fire both in their own time bin and in the next. Learning with such overlapping dynamics is still possible. (A.) A non-Markovian sequence is learned, where each letter corresponds to a stimulation of the corresponding supervisor neuron. (B.) The read-out weight matrix after learning. (C.) The forced dynamics in the recurrent network drives spiking activity in the read-out neurons. The zoom of activity in the recurrent network shows the overlapping dynamics.