Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily, because their axons and dendrites are co-localized within the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this co-localization assumption is not valid. In this paper we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron, and even across different cortical areas. We prove by non-linear contraction analysis, and demonstrate by simulation, that distributed WTA sub-systems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
Many models of neuronal computation involve the interaction of a population of excitatory neurons whose outputs drive inhibitory neuron(s), which in turn provide global negative feedback to the excitatory pool Amari and Arbib (1977); Douglas et al. (1995); Hahnloser et al. (2000); Yuille and Geiger (2003); Maass (2000); Hertz et al. (1991); Rabinovich et al. (2000); Rutishauser et al. (2011); Coultrip et al. (1992). Practical implementation of such circuits in biological neural tissue depends on co-localization of the excitatory and inhibitory neurons, an assumption which studies of the extents of axonal and dendritic trees of neurons in the neocortex show is not valid Katzel et al. (2011); Binzegger et al. (2004); Shepherd et al. (2005); Douglas and Martin (2004). Firstly, a substantial fraction of the axonal arborization of a typical excitatory 'spiny' pyramidal neuron extends well beyond the range of the arborization of a typical 'smooth' inhibitory neuron, particularly in the populous superficial layers of the neocortex Yabuta (1998); Binzegger et al. (2004). This spatial arrangement means that excitatory effects can propagate well outside the range of the negative feedback provided by a single inhibitory neuron. Secondly, the horizontally disposed 'basket' type of inhibitory neuron, which is a prime candidate for performing normalization, makes multiple synaptic contacts with its excitatory targets, so that even within the range of its axonal arborization, not all the members of an excitatory population can be covered by its effect. This connection pattern means that excitatory neurons within some local population must either be partitioned functionally, or multiple smooth cells must co-operate to cover the entire population of excitatory cells.
In previous publications we have shown how winner-take-all (WTA) circuits composed of a small population of excitatory neurons and a single inhibitory neuron can be combined to construct super-circuits that exhibit finite-state-machine (FSM) like behavior Rutishauser and Douglas (2009); Neftci et al. (2010). The super-circuits made use of sparse excitatory cross-connections between WTA modules to express the required states of the FSM. These excitatory connections can extend well outside the range of the local WTA connections, and so are consistent with the observed long-range lateral excitatory connections referred to above. On the other hand, we have not previously confronted the question of whether the WTA is necessarily localized to the extent of the smooth-cell arborization, or whether the WTA can itself be distributed in space within or between cortical laminae, or even between cortical areas. In this paper we describe a simple modification to the WTA circuit design that couples the effects of distributed inhibitory neurons through synchronization, and so permits a WTA to be widely distributed in cortical space, well beyond the range of the axonal arborization of any single inhibitory neuron, and even across cortical areas. We also demonstrate that such a distributed WTA is inherently stable in its operation.
We have considered a number of circuits that could be used to distribute the WTA behavior spatially (Fig 1). However, we will describe and analyze only the circuit shown in Fig. 2, which we consider to be the most elegant of the distributive mechanisms (notice the similarity to Fig. 1B). The key insight is the following: under normal operating conditions, all the participating distributed inhibitory neurons should receive the same summed excitatory input. We achieve this by interposing an excitatory neuron in the negative feedback loop from the excitatory population to its local inhibitory neuron. Instead of the local inhibitory neuron summing over its excitatory population, the interposed neuron performs the summing and passes its result to the inhibitory neuron. This result is also copied to the more distant inhibitory neurons in the spatially distributed WTA. In this way the inhibitory neuron of each sub-WTA sums over the projections from the interposed excitatory neurons of all other sub-WTAs, including its own. Thus, each inhibitory neuron is able to provide feedback inhibition to its local sub-WTA that is proportional to the total excitation provided by all excitatory neurons participating in the entire distributed WTA. We will show that functionally this amounts to a form of synchrony between all the inhibitory units.
3.1 Connectivity and dynamics - single WTA
All the circuits of Fig 1 can achieve a distributed WTA by merging several independent WTAs, but we consider the circuit shown in Fig 2B to be the most feasible, and so our analysis will focus on this one. However, similar reasoning could be applied to all the variants shown. Note that our chosen circuit is similar to that of Fig 1B, but has a more realistic connectivity pattern in that the summed excitatory activity is projected onto only a single unit, which requires less wiring specificity than Fig 1B.
The dynamics of a single WTA (Fig 2A), consisting of excitatory units, one inhibitory unit, and one intermediary 'interconnect' excitatory unit, are
Each excitatory unit receives recurrent input from itself and its neighbors (see Fig 2A). For simplicity, only self-recurrence is considered here, but very similar arguments obtain when recurrence from neighboring units is included. Using the weight matrix, the dynamics of this system are described as
where the rows and columns of the weight matrix follow the order of units given above (the first column and row correspond to the first excitatory unit, the last column/row to the inhibitory unit). The firing-rate activation function is a non-saturating rectification non-linearity, f(x) = max(0, x). We assume the same fixed parameter values throughout unless mentioned otherwise.
The remaining term is a vector of the constant activation thresholds.
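The single-WTA dynamics above can be sketched numerically. The following is a minimal rate-based simulation; the parameter names and values (alpha for self-excitation, beta1 for inhibition, beta2 for the drive onto the interconnect unit) are our own illustrative assumptions, not the paper's exact constants.

```python
import numpy as np

# Minimal single-WTA sketch (cf. Fig 2A): two excitatory units x1, x2,
# an interconnect unit xs that sums the excitatory pool, and an
# inhibitory unit xI driven by xs. Parameter values are illustrative.
alpha, beta1, beta2, T = 1.1, 3.0, 0.25, 0.0

def f(u):
    # non-saturating rectification non-linearity
    return np.maximum(u, 0.0)

def step(x, I, dt=0.01):
    x1, x2, xs, xI = x
    dx = np.array([
        -x1 + f(alpha * x1 - beta1 * xI + I[0] - T),  # excitatory unit 1
        -x2 + f(alpha * x2 - beta1 * xI + I[1] - T),  # excitatory unit 2
        -xs + f(beta2 * (x1 + x2)),                   # interconnect sums the pool
        -xI + f(xs),                                  # inhibition driven by xs alone
    ])
    return x + dt * dx

x = np.zeros(4)
for _ in range(20000):            # Euler integration, t = 200 time constants
    x = step(x, I=(1.0, 0.8))
# the unit with the larger input wins; the other is suppressed below threshold
```

With these assumed values the winner settles near 1/(1 - alpha + beta1*beta2), roughly 1.54, while the losing unit is driven to zero.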
3.2 Connectivity and dynamics - coupled WTA
Two identical single WTAs, each described by the weight matrix above, can be combined into one distributed WTA that acts functionally as a single WTA by adding a recurrent excitatory feedback loop between the two WTAs (Fig 2B). The weight matrix of the merged system is
The dynamics of this system are as shown in Eq 2, using the merged weight matrix.
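As a concrete illustration, the merged weight matrix can be assembled from two copies of a single-WTA matrix plus the cross-coupling entries. The unit order per WTA ([x1, x2, xs, xI]) and all weights are illustrative assumptions of this sketch.

```python
import numpy as np

# Two identical single-WTA weight matrices combined into one merged
# system: block-diagonal copies plus cross-connections from each
# interconnect unit to the remote inhibitory unit (weight gamma).
alpha, beta1, beta2, gamma = 1.1, 3.0, 0.25, 1.0

W_single = np.array([
    [alpha, 0.0,   0.0, -beta1],  # x1: self-excitation, inhibited by xI
    [0.0,   alpha, 0.0, -beta1],  # x2
    [beta2, beta2, 0.0,  0.0],    # xs: sums the excitatory pool
    [0.0,   0.0,   1.0,  0.0],    # xI: driven by the local xs
])

n = W_single.shape[0]
W = np.zeros((2 * n, 2 * n))
W[:n, :n] = W_single          # WTA 1
W[n:, n:] = W_single          # WTA 2
W[n + 3, 2] = gamma           # xs of WTA 1 -> xI of WTA 2
W[3, n + 2] = gamma           # xs of WTA 2 -> xI of WTA 1
```

Note that the only off-block-diagonal entries are the two sparse long-range projections onto the remote inhibitory units.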
3.3 Stability analysis
The stability analysis, using non-linear contraction analysis (see Appendix) Lohmiller and Slotine (1998, 2000); Slotine (2003); W. Wang and Slotine (2005), consists of three steps: i) demonstrate contraction of a single WTA, ii) merge two WTAs by demonstrating that the inhibitory units synchronize, and iii) demonstrate contraction of the combined WTAs. We have previously shown how contraction analysis can be applied to reasoning over the stability and functionality of WTA circuits Rutishauser et al. (2011). Here, we apply and extend the same methods to this new circuit.
Contraction analysis is based on the Jacobians of the system. For purposes of analysis, but without loss of generality, we will base this section on a reduced system with only one possible winner for each WTA, as shown in Fig 2C.
The Jacobian of a single system is
In a stable network, a constant external input to the first unit will lead to a constant steady-state amplitude that is either an amplified or suppressed version of its input.
The activation function is not continuously differentiable, but it is continuous in both space and time, so that contraction results can still be applied directly Lohmiller and Slotine (2000). Furthermore, the activation function is piecewise linear with a derivative of either 0 or 1. We exploit this property by inserting dummy terms that take the value 0 or 1 according to the derivative of the activation function. In the case considered here, all units are active and thus all dummy terms equal 1.
A system with a given Jacobian is contracting if
where a constant transformation defines an appropriate metric and yields the generalized Jacobian. If the generalized Jacobian is negative definite, the system is said to be contracting. We have shown previously Rutishauser et al. (2011) (section 2.4) how to choose the constant transformation and the conditions that guarantee contraction for a WTA circuit in which all excitatory units provide direct input to the inhibitory unit (Fig 1A). In summary, the transformation is defined based on the eigendecomposition of the Jacobian. In this case, the conditions
guarantee contraction for any such WTA of any size Rutishauser et al. (2011).
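The metric construction can be illustrated numerically. The sketch below, our own reconstruction with illustrative parameter values, diagonalizes the Jacobian of the reduced three-unit loop (all units active) and confirms that the Hermitian part of the resulting generalized Jacobian is negative definite.

```python
import numpy as np

# Contraction check in a diagonalizing metric: Jacobian of the reduced
# loop [x_winner, x_interconnect, x_inhibitory] with all units active.
# alpha, beta1, beta2, tau are illustrative assumptions.
alpha, beta1, beta2, tau = 1.1, 3.0, 0.25, 1.0

J = (1.0 / tau) * np.array([
    [alpha - 1.0, 0.0,  -beta1],   # excitatory (winner) row
    [beta2,      -1.0,   0.0],     # interconnect row
    [0.0,         1.0,  -1.0],     # inhibitory row
])

lam, Q = np.linalg.eig(J)
Theta = np.linalg.inv(Q)                 # metric from the eigendecomposition
F = Theta @ J @ np.linalg.inv(Theta)     # generalized Jacobian (diagonal)
H = 0.5 * (F + F.conj().T)               # Hermitian part
rate = -np.max(np.linalg.eigvalsh(H))    # contraction rate (positive if contracting)
```

For these assumed weights the Hermitian part is negative definite and the contraction rate equals the magnitude of the slowest eigenmode's real part.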
Structurally, the two versions of the WTA are equivalent in that an additional unit was added in the pathway of recurrent inhibition, but no inhibition is added or removed (compare Fig 1A to Fig 2A). Thus, we can apply the same constraints by replacing the direct excitatory-to-inhibitory weight with the product of the two weights along the interposed pathway in the above equations. This product is equivalent to the inhibitory loop gain. This reduction is verified as follows. Using the notation shown in Fig 2C, assume that the activation function is in its linear range for the active units and zero for the other units. Then,
At steady state, the activity of the interconnect unit is proportional to the summed excitatory activity, showing that the interconnect and inhibitory units can be merged into a single unit by providing the summed excitatory input directly to the inhibitory unit (Fig 2D). The first key result of this paper is the following set of limits for contraction of a single such WTA (Fig 2A):
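The steady-state reduction described above can be checked numerically: simulating the full loop (excitatory, interconnect, inhibitory) and the reduced loop (the inhibitory unit receiving the pooled excitation directly) yields the same fixed point. All parameter values here are illustrative assumptions.

```python
import numpy as np

# Steady-state check of the interconnect reduction (cf. Fig 2C vs 2D).
alpha, beta1, beta2, I1, dt = 1.1, 3.0, 0.25, 1.0, 0.01
f = lambda u: max(u, 0.0)

# full loop: excitatory x1 -> interconnect xs -> inhibitory xI -> x1
x1 = xs = xI = 0.0
for _ in range(30000):
    x1, xs, xI = (
        x1 + dt * (-x1 + f(alpha * x1 - beta1 * xI + I1)),
        xs + dt * (-xs + f(beta2 * x1)),
        xI + dt * (-xI + f(xs)),
    )

# reduced loop: the inhibitory unit receives the pooled excitation
# (weight beta2) directly, merging xs and xI into one unit
y1 = yI = 0.0
for _ in range(30000):
    y1, yI = (
        y1 + dt * (-y1 + f(alpha * y1 - beta1 * yI + I1)),
        yI + dt * (-yI + f(beta2 * y1)),
    )
```

Both versions settle at 1/(1 - alpha + beta1*beta2), so the interposed unit leaves the steady state unchanged while the product beta1*beta2 plays the role of the inhibitory loop gain.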
3.3.1 Synchronizing two WTAs
Next, we show that connecting two WTAs in the manner illustrated in Fig 2C results in synchronization of the two inhibitory units, which in turn leads to the two WTAs merging into a single WTA. Note that by synchronization we mean that two variables have the same trajectory, or more generally that their difference is constant (in contrast to other meanings of synchronization, e.g. in population coding). The approach is to show that adding excitatory connections of sufficient strength will lead the difference between the activities of the two inhibitory units to approach a constant.
The Jacobian of the coupled system as shown in Fig 2C is
with the subsystem Jacobians as in Eq 6, and
where the transformation defines an invariant subset of the system in which the corresponding difference is constant. Here, we define synchrony as a regime where the differences between the inhibitory units and between the interconnect units are constant (although not necessarily zero). This results in
which embeds the two conditions.
Condition (13) is satisfied if
The condition on the interconnect weight guarantees that the dynamics are stable and that the inhibitory units synchronize. As long as the coupling weight is sufficiently small but non-zero, the inhibitory parts of the system will synchronize. Realistically, the coupling also needs to be sufficiently large to drive the other inhibitory neuron above threshold, and will thus be a function of the threshold (see Rutishauser et al. (2011), Eq 2.51). Here, synchrony is defined as the difference between the inhibitory activities being constant. This in turn shows that the two WTAs have merged into a single WTA, since the definition of a WTA is that each excitatory unit receives an equivalent amount of inhibition (during convergence, but not necessarily afterwards; see simulations). This is our second key result.
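A simulation sketch illustrates the synchronization claim. We assume two excitatory units per WTA and equal local and remote coupling onto the inhibitory units, the special case in which the two inhibitory trajectories coincide exactly; all parameter values are illustrative assumptions.

```python
import numpy as np

# Two coupled WTAs (cf. Fig 2B): per WTA, two excitatory units (rows of
# x), one interconnect unit s, and one inhibitory unit v. Each v sums
# its local s and the remote s (coupling weight gamma).
alpha, beta1, beta2, gamma, dt = 1.1, 3.0, 0.25, 1.0, 0.01
f = lambda u: np.maximum(u, 0.0)
I = np.array([[1.0, 0.8],     # inputs to WTA 1 (holds the overall maximum)
              [0.7, 0.6]])    # inputs to WTA 2

x = np.zeros((2, 2)); s = np.zeros(2); v = np.zeros(2)
for _ in range(60000):
    x, s, v = (
        x + dt * (-x + f(alpha * x - beta1 * v[:, None] + I)),
        s + dt * (-s + f(beta2 * x.sum(axis=1))),
        v + dt * (-v + f(s + gamma * s[::-1])),   # local + remote drive
    )
# a single winner emerges across BOTH WTAs, and the two inhibitory
# units follow identical trajectories
```

With equal local and remote weights and equal initial conditions, the two inhibitory units receive identical input at every step, so their activities stay exactly equal while only one excitatory unit in the whole coupled system survives.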
3.3.2 Stability of pairwise combinations of WTAs
The final step of the stability analysis is to derive conditions on the coupling strength such that the coupled system shown in Fig 2C is contracting. The reasoning in this section assumes that the individual subsystems are contracting (as shown above).
The Jacobian of the combined system remains Eq 11, where the diagonal blocks are the Jacobians of the individual systems and the off-diagonal blocks are the coupling terms. Rewriting the Jacobian of the second subsystem by variable permutation allows expression of the system in the form of
where the coupling term is as given in Eq 12. This transformation of the second subsystem (the variable permutation is equivalent to a transformation by the corresponding permutation matrix) is functionally equivalent to the original system (thus, its contraction limits remain), but it allows expression of the connection between the systems in the symmetric form of Eq 17. This functionally equivalent system can now be analyzed using the approach that follows.
A symmetric block matrix of this form is negative definite if the individual diagonal blocks are negative definite and the coupling is sufficiently weak Horn (1985) (page 472). Following Slotine (2003) (Section 3.4) and Rutishauser et al. (2011) (Section 2.8), this implies that a sufficient condition for contraction is as follows, where
the first quantity is the largest singular value of the coupling block, which in our case equals the coupling weight (all other elements of the coupling block are zero), and the second is the contraction rate of the individual subsystems. Since the two subsystems are equivalent, their contraction rates are the same. It thus follows that the coupled systems are stable if the coupling is weaker than the contraction rate.
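The block-matrix criterion can be illustrated with a small numerical example, entirely our own construction with random matrices standing in for the Hermitian parts: if each diagonal block is negative definite with contraction rate lam, and the largest singular value sigma of the coupling block stays below lam, the combined matrix is negative definite.

```python
import numpy as np

# Negative definiteness of [[A, G], [G^T, A]] from block properties.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
A = -(M @ M.T) - 3.0 * np.eye(3)          # symmetric negative definite block
lam = -np.max(np.linalg.eigvalsh(A))      # contraction rate of each block
G = 0.5 * rng.normal(size=(3, 3))         # weak coupling block
sigma = np.linalg.svd(G, compute_uv=False)[0]   # largest singular value

coupled = np.block([[A, G], [G.T, A]])
max_eig = np.max(np.linalg.eigvalsh(coupled))   # negative => negative definite
```

Here sigma < lam, so max_eig is negative; indeed, by Weyl's inequality, max_eig is at most -lam + sigma.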
3.3.3 Summary of boundary conditions
In summary, the following conditions guarantee stability of both the single and the combined system, as well as hard competition between the two coupled systems (that is, only one of the WTAs can have a winner). These conditions can be relaxed to permit a soft winner-take-all.
The lower bound on the coupling weight follows from the synchronization analysis, whereas the upper bound follows from the stability analysis. These results illustrate the critical tradeoff: the coupling must be strong enough to ensure functionality, yet weak enough to exclude instability.
3.4 Speed of winner selection
How quickly will a system select a winner? For a single WTA, this question is answered by how quickly the system contracts toward a winner; for a coupled system, by how quickly the two systems synchronize. One of the key advantages of contraction analysis is that the rate of contraction, and in this case the rate of synchronization, can be calculated explicitly. We will express the contraction and synchronization rates in terms of the time constant and its inverse, the decay constant. The time constant refers to the mean lifetime of an exponential decay. Both the contraction and the synchronization rate are expressed in the form of a decay constant; for example, the contraction rate of a first-order exponentially decaying system is simply its decay constant.
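The relation between time constant and decay constant can be made concrete with a first-order example (the numerical values are arbitrary):

```python
import numpy as np

# For xdot = -lam * x, the decay constant lam is the inverse of the
# time constant tau: after time tau the state has decayed to 1/e.
lam = 0.5                             # decay constant (illustrative)
tau = 1.0 / lam                       # mean lifetime of the exponential decay
t = np.linspace(0.0, 10.0, 1001)      # time grid, step 0.01
x = np.exp(-lam * t)                  # x(t) = x(0) * exp(-t / tau)
x_at_tau = x[np.argmin(np.abs(t - tau))]   # should be close to 1/e
```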
Physiologically, the time constants in our system are experimentally determined membrane time constants that are typically in the range of 10-50 ms Koch (1998); McCormick et al. (1985); Koch et al. (1996); Brown et al. (1981). For simplicity, we assume that all excitatory units share one time constant and all inhibitory units another. While the exact values depend on the cell type and state of the neurons, the inhibitory time constant is generally the shorter of the two, due to the smaller cell bodies of inhibitory neurons McCormick et al. (1985); Koch (1998).
The bounds (19) were calculated assuming equal time constants for all units. However, the same calculations yield very similar bounds when different time constants are assumed for inhibitory and excitatory units (Appendix C, Rutishauser et al. (2011)). In this case, the ratio of the time constants becomes an additional parameter in the bounds.
3.4.1 Speed of synchronization
The synchronization rate is equivalent to the contraction rate of the system defined in Eq 13 Pham and Slotine (2007), which is the absolute value of the maximal eigenvalue of the Hermitian part of the corresponding generalized Jacobian. Here, the original Jacobian is replaced by a diagonal matrix with the appropriate terms on the diagonal (for the example of Eq 11, these are the diagonal terms of the subsystem Jacobians). The projection matrix used is an orthonormal version of the transformation defined in Eq 14.
The resulting synchronization rate (sync rate) is a function of the weights of the local and the remote inhibitory loops. We assume that the remote connectivity is weaker than the local connectivity; however, qualitatively similar results can be found using the opposite assumption. Under this assumption, the sync rate is
Note the tradeoff between local and remote connectivity: stronger remote connectivity increases, and stronger local connectivity decreases, the speed of synchronization (the larger the sync rate, the quicker the system synchronizes). For approximately equal connectivity strengths, the sync rate is approximately equal to the decay constant of the inhibitory units.
Similar expressions obtain for the general case, and for the parameter ranges considered here the sync rate remains approximately equal to the contraction rate of the inhibitory units. Synchronization therefore occurs very quickly (20-50 ms for typical membrane time constants).
3.4.2 Speed of contraction
The speed of selecting a winner (the contraction rate) for a single WTA can similarly be calculated as the absolute value of the maximal eigenvalue of the Hermitian part of the generalized Jacobian (Eq 7).
Under the parameter assumptions above, the contraction rate is
Note that the larger the excitatory gain, the longer it takes until the system converges. Qualitatively similar findings result for other parameter ratios. For typical parameter values (see simulations below), the contraction rate is only a small fraction of the decay constant of an individual unit, corresponding to a convergence time constant of several membrane time constants. The time it takes to find a winner is thus a multiple of the membrane time constant and substantially slower than the time it takes to synchronize the network. In conclusion, synchronization is achieved first, followed by winner selection.
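The two timescales can be compared numerically. In this sketch (illustrative parameters, unit time constants), the winner-selection rate is the slowest eigenmode of the active WTA loop, while with symmetric coupling the difference between the two inhibitory units decays at roughly the inhibitory leak rate.

```python
import numpy as np

# Compare winner-selection (contraction) rate with the synchronization
# rate; alpha, beta1, beta2, tau are illustrative assumptions.
alpha, beta1, beta2, tau = 1.1, 3.0, 0.25, 1.0

# Jacobian of the active loop [winner, interconnect, inhibitory]
J = (1.0 / tau) * np.array([
    [alpha - 1.0, 0.0, -beta1],
    [beta2, -1.0, 0.0],
    [0.0, 1.0, -1.0],
])
selection_rate = -np.max(np.linalg.eigvals(J).real)   # slowest eigenmode

# With symmetric coupling, the inhibitory difference obeys a pure leak
# equation, so the sync rate is about 1/tau.
sync_rate = 1.0 / tau
ratio = sync_rate / selection_rate
```

For these assumed values the ratio is nearly an order of magnitude, consistent with synchronization completing well before a winner is selected.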
3.5 Coupling more than two WTAs
So far we have shown how two different WTAs compete with each other after their inhibitory neurons are coupled. Similarly, more than two WTAs can compete with each other through all-to-all coupling of the inhibitory units, i.e. every WTA is connected by two connections, one from and one to every other WTA. The wiring complexity of this system thus scales quadratically with the number of WTAs (note that this is the number of WTAs, not the number of units). Notice also that the all-to-all coupling concerns only the sparse long-range excitatory connections, and not the internal connectivity of the WTAs themselves.
The same principle can be used to embed hierarchies or sequences of competition. In a network of such WTAs, some WTAs could be in direct competition with each other while others are not. For example, in a network of three WTAs A, B, and C, relationships such as 'A competes with B' and 'B competes with C' are possible. In this case A does not directly compete with C, so if A has a winner, C can also have a winner. If B has a winner, however, neither A nor C can have one (see Fig 4D-F for a demonstration).
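This selective-competition example can be sketched with three single-winner WTAs and chain coupling A-B and B-C (no A-C link). The parameters, inputs, and coupling graph are illustrative assumptions.

```python
import numpy as np

# Three single-unit WTAs with selective coupling: A <-> B and B <-> C,
# but no A <-> C coupling, so A and C may both hold winners while B is
# suppressed by both of its coupled neighbors.
alpha, beta1, beta2, gamma, dt = 1.1, 3.0, 0.25, 0.5, 0.01
C_graph = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])        # who couples to whom
I = np.array([1.0, 0.7, 0.9])             # strongest inputs on A and C
f = lambda u: np.maximum(u, 0.0)

x = np.zeros(3); s = np.zeros(3); v = np.zeros(3)
for _ in range(60000):
    x, s, v = (
        x + dt * (-x + f(alpha * x - beta1 * v + I)),
        s + dt * (-s + f(beta2 * x)),
        v + dt * (-v + f(s + gamma * (C_graph @ s))),
    )
# A and C keep winners; B loses to the combined inhibition from both
```

Swapping the coupling graph for an all-to-all matrix recovers a single winner across the whole system.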
Regardless of how many WTAs are combined, and whether all compete with all or more selectively, the stability of the aggregated system is guaranteed if the individual sub-systems are stable and the coupling strengths observe the derived bounds. While combinations of stable modules are not in general guaranteed to be stable, certain combinations of contracting systems (such as the one we utilize) are guaranteed to be stable Slotine and Lohmiller (2001). This is a key benefit of contraction analysis for the analysis of neural circuits.
3.6 Numerical simulations
We simulated several cases of the network to illustrate its qualitative behavior, using Euler integration with a fixed time step. The analytically derived bounds offer a wide range of parameters for which stability as well as function is guaranteed. For the simulations, we chose parameters that satisfy all bounds discussed.
First, we explored a simple system consisting of two WTAs with two possible winners each (Fig 3), with parameters chosen within the derived bounds. We found that any of the four possible winners can compete with any other, irrespective of whether they reside on the first or the second WTA (Fig 3B-D shows an example). The inhibitory units quickly synchronized their activity (Fig 3C) and reached the same steady-state amplitude because the local and remote coupling weights were equal; in this special case, it can be verified that the two inhibitory units have identical trajectories whenever their initial values are equal, and thus become exactly equivalent.
Second, we simulated a system with 3 WTAs using the same parameters (Fig 4). For all-to-all coupling, all 3 WTAs directly compete with each other (Fig 4A,B), i.e. there can be only one winner across the entire system. Again, the inhibitory units all synchronize quickly, during and after convergence (Fig 4C). We also simulated the same system with more selective connectivity, eliminating competition between WTAs 1 and 3 (Fig 4D). This arrangement allows either one winner, if it is on WTA 2, or two winners, if they are on WTAs 1 and 3. If the maximal input is not on WTA 2, the network permits two winning states; if the maximal input is on WTA 2, only one winner is permitted (see Fig 4E for an illustration). This configuration allows partial competition.
Neural circuits commonly depend on negative feedback loops. Such recurrent inhibition is a crucial element of microcircuits in a wide range of species and brain structures Shepherd and Grillner (2010), and enables populations of neurons to compute non-linear operations such as competition, decision making, gain control, filtering, and normalization. However, when considering biologically realistic versions of such circuits, additional factors such as wiring length, specificity, and complexity become pertinent. Here, we are principally concerned with the superficial layers of neocortex, where the average distance of intracortical inhibitory connections is typically much shorter than that of excitatory connections Bock et al. (2011); Binzegger et al. (2004); Perin et al. (2011); Katzel et al. (2011); Adesnik and Scanziani (2010). In contrast, in invertebrates an inhibitory neuron has been identified that receives input from, and projects back to, all Kenyon cells (which are excitatory) Papadopoulou et al. (2011). This neuron has been demonstrated to perform response normalization, making this system a direct experimental demonstration of competition through shared inhibition. No such system has yet been identified in the cortex.
The number of excitatory neurons that can be contacted by an inhibitory neuron thus poses a limit on how many excitatory neurons can compete directly with one another (in terms of both numbers and distance). Other models, such as those based on Mexican-hat type inhibitory surrounds Hertz et al. (1991); Willshaw and von der Malsburg (1976); Soltani and Koch (2010), even require that inhibitory connectivity be of longer range than the excitatory. These anatomical constraints have been used to argue that models such as the WTA are biologically unrealistic and as such of limited use.
We have demonstrated here, by theoretical analysis and simulation, that it is possible to extend such circuits by functionally merging several independent circuits through synchronization of their inhibitory interneurons. This extension allows the construction of large, spatially distributed circuits that are composed of small pools of excitatory units, each pool sharing an inhibitory neuron. We have proved by non-linear contraction analysis that systems combined in this manner are inherently stable, and that arbitrary aggregation of such sub-systems by inhibitory synchrony results in a stable system. This composition of subcircuits removes the limits on maximal circuit size imposed by the anatomical wiring constraints on inhibitory connectivity, because the synchrony between local inhibitory neurons is achieved entirely by excitatory connectivity, which can be long-range, thus permitting competition between excitatory units that are separated by long distances, for example in different cortical areas. We show that the time necessary to achieve synchronization is much shorter than the time required to select a winner. Thus, synchronization is faster than winner selection, which can therefore proceed robustly across the long-range connections that enforce synchronization. Further, selective synchronization between some WTAs but not others allows partial competition (see Fig 4). The strength of these long-range connections could be modulated dynamically to enable or disable particular competitions between two populations, conditional on some other brain state. This modulation could be implemented by a state-dependent routing mechanism Rutishauser and Douglas (2009).
There are several possibilities for mapping the abstract units in our model onto real physiological neurons. Our units are mean-rate approximations of small groups of neurons. In terms of intra-cortical inhibition, these would lie anatomically close to each other within the superficial layers of neocortex. Since such inhibitory connectivity has only limited reach, each inhibitory subunit can enforce competition across only a limited number of nearby excitatory units. Competition between different areas is made possible by synchronizing remote populations through long-range excitatory mechanisms in the way we propose. Direct long-range inhibition, on the other hand, is unlikely both intracortically and subcortically, since all known connections from the thalamus and basal ganglia to cortex are excitatory. Networks such as the LEGION network D. Wang and Terman (1995) assume global inhibitory input to all excitatory units in the network, which for the reasons we discuss is unlikely in the case of cortex. It would, however, be possible to implement a feasible version of the global inhibitory input by synchronizing many local inhibitory neurons using the mechanism we describe, resulting in an anatomically realistic version of the LEGION network.
Functionally, the model presented here makes several testable predictions. Consider a sensory area with clearly defined features as possible winners, such as orientations. The model predicts that the inhibitory units would not be tuned to these features, particularly if the number of possible winners is large, because the connectivity to the inhibitory units is not feature specific. Experimental studies indicate that this is indeed the case: units that functionally represent different tunings project to the same inhibitory unit, resulting in untuned inhibitory activity Bock et al. (2011); Fino and Yuste (2011); Kerlin et al. (2010); Kuhlman et al. (2011); Hofer et al. (2011). Secondly, the model predicts that inhibitory activity between two different areas, or between parts of the same area, can be either highly synchronous or completely decoupled, depending on whether the two are currently competing or functioning independently. It thus predicts that the synchrony of inhibitory units should be affected by manipulations that alter competition, such as top-down attention.
Our model suggests that synchronized populations of inhibitory neurons are crucial for enforcing competition across several subpopulations of excitatory neurons. It further suggests that the larger the number and spatial distribution of such synchronized inhibitory units, the larger the number of units that compete with each other. Experimentally, synchronized modulation of inhibitory neurons is a common phenomenon that is believed to generate the prominent gamma rhythm triggered by sensory stimulation in many areas Fries et al. (2007); Whittington et al. (1995); Traub et al. (1996). Recent experiments have used stimulation of inhibitory neurons Cardin et al. (2009); Sohal et al. (2009); Szucs et al. (2009) to increase or decrease their synchronization, with directly observable effects on nearby excitatory neurons, such as increased or decreased amplitude and precision of evoked responses according to how strongly the inhibitory neurons were synchronized. Note that our proposal for this function of inhibitory synchrony is distinct from, and independent of, the proposal that gamma-band synchrony serves to increase readout efficacy by making spikes from a large number of distributed sources arrive coincidentally Tiesinga et al. (2008); Singer and Gray (1995). Here, we propose that an additional function of such synchrony is to allow select populations of excitatory neurons to compete with each other, because they each receive inhibition at the same time.
5 Appendix: Contraction Analysis
This section provides a short summary of contraction analysis. We have previously published detailed methods for applying contraction theory to WTA circuits Rutishauser et al. (2011). Essentially, a nonlinear time-varying dynamic system is called contracting if arbitrary initial conditions or temporary disturbances are forgotten exponentially fast, i.e., if trajectories of the perturbed system return to their unperturbed behavior with an exponential convergence rate. Relatively simple algebraic conditions can be given for this stability-like property, and the property is preserved through basic system combinations and aggregations. For contracting systems:
- global exponential convergence and stability are guaranteed;
- convergence rates can be explicitly computed as eigenvalues of well-defined Hermitian matrices;
- combinations and aggregations of contracting systems are also contracting;
- robustness to variations in dynamics can be easily quantified.
Before stating the main contraction theorem, recall first the following properties. The symmetric part of a matrix is half the sum of the matrix and its transpose. A complex square matrix is Hermitian if it equals its conjugate transpose. The Hermitian part of any complex square matrix is half the sum of the matrix and its conjugate transpose. All eigenvalues of a Hermitian matrix are real numbers. A Hermitian matrix is said to be positive definite if all its eigenvalues are strictly positive; this condition implies in turn that the associated quadratic form is strictly positive for any non-zero real or complex vector. A Hermitian matrix is called negative definite if its negation is positive definite.
A Hermitian matrix dependent on state or time will be called uniformly positive definite if there exists a strictly positive constant such that, for all states and all times, the eigenvalues of the matrix remain larger than that constant. A similar definition holds for uniform negative definiteness.
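These definitions translate directly into code; the example matrix below is arbitrary.

```python
import numpy as np

# Hermitian part and negative definiteness, per the definitions above.
# For a complex square matrix A, herm(A) = (A + A^H) / 2; A is negative
# definite when all eigenvalues of herm(A) are strictly negative.
def hermitian_part(A):
    return 0.5 * (A + A.conj().T)

def is_negative_definite(A):
    return bool(np.max(np.linalg.eigvalsh(hermitian_part(A))) < 0.0)

A = np.array([[-2.0, 1.0],
              [0.0, -2.0]])
H = hermitian_part(A)        # [[-2.0, 0.5], [0.5, -2.0]]
```

The eigenvalues of H are -2.5 and -1.5, both negative, so this example matrix is negative definite.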
Consider now a general dynamical system dx/dt = f(x, t), with f a smooth non-linear function. The central result of contraction analysis, derived in Lohmiller and Slotine (1998) in both real and complex forms, can be stated as:
Theorem. Denote by J the Jacobian matrix of f with respect to x. Assume that there exists a complex square matrix Theta(x, t) such that the Hermitian matrix Theta^* Theta is uniformly positive definite, and the Hermitian part of the matrix
F = (d(Theta)/dt + Theta J) Theta^{-1}
is uniformly negative definite. Then all system trajectories converge exponentially to a single trajectory, with a convergence rate equal to the absolute value of the largest eigenvalue of the Hermitian part of F (the eigenvalue closest to zero, although still negative). The system is said to be contracting, F is called its generalized Jacobian, and Theta its contraction metric.
In the linear time-invariant case, a system is globally contracting if and only if it is strictly stable, and the metric can be chosen as the coordinate transformation to a normal Jordan form of the system Lohmiller and Slotine (1998). Alternatively, if the system is diagonalizable, the metric can be chosen as the transformation that diagonalizes the system; in that case, the generalized Jacobian is a diagonal matrix composed of the real parts of the eigenvalues of the original system matrix.
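A concrete diagonalizable example (our own, with an arbitrary strictly stable matrix) makes this explicit:

```python
import numpy as np

# Linear time-invariant case: choosing the metric as the inverse of the
# eigenvector matrix diagonalizes the system, so the generalized
# Jacobian's Hermitian part is diag(Re(lambda_i)) and the contraction
# rate is the magnitude of the largest (least negative) real part.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])          # strictly stable, eigenvalues -1 and -3
lam, V = np.linalg.eig(A)
Theta = np.linalg.inv(V)             # contraction metric (diagonalizing)
F = Theta @ A @ np.linalg.inv(Theta) # generalized Jacobian (diagonal)
H = 0.5 * (F + F.conj().T)           # Hermitian part = diag(Re(lambda_i))
rate = -np.max(np.linalg.eigvalsh(H))
```

Here the contraction rate is 1, set by the slowest eigenvalue -1.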
- Adesnik, H., & Scanziani, M. (2010). Lateral competition for cortical space by layer-specific horizontal circuits. Nature, 464, 1155–1160.
- Amari, S., & Arbib, M. (1977). Competition and cooperation in neural nets. In J. Metzler (Ed.), Systems Neuroscience (pp. 119–165). San Diego, CA: Academic Press.
- Binzegger, T., Douglas, R. J., & Martin, K. A. (2004). A quantitative map of the circuit of cat primary visual cortex. J Neurosci, 24(39), 8441–8453.
- Bock, D. D., Lee, W. C., Kerlin, A. M., Andermann, M. L., Hood, G., Wetzel, A. W., et al. (2011). Network anatomy and in vivo physiology of visual cortical neurons. Nature, 471, 177–182.
- Brown, T. H., Fricke, R. A., & Perkel, D. H. (1981). Passive electrical constants in three classes of hippocampal neurons. Journal of Neurophysiology, 46(6), 1360.
- Cardin, J. A., Carlen, M., Meletis, K., Knoblich, U., Zhang, F., Deisseroth, K., et al. (2009). Driving fast-spiking cells induces gamma rhythm and controls sensory responses. Nature, 459, 663–667.
- Coultrip, R., Granger, R., & Lynch, G. (1992). A cortical model of winner-take-all competition via lateral inhibition. Neural Networks, 5, 47–54.
- Douglas, R., Koch, C., Mahowald, M., Martin, K., & Suarez, H. (1995). Recurrent excitation in neocortical circuits. Science, 269(5226), 981–985.
- Douglas, R., & Martin, K. (2004). Neuronal circuits of the neocortex. Annu Rev Neurosci, 27, 419–451.
- Fino, E., & Yuste, R. (2011). Dense inhibitory connectivity in neocortex. Neuron, 69, 1188–1203.
- Fries, P., Nikolić, D., & Singer, W. (2007). The gamma cycle. Trends Neurosci, 30, 309–316.
- Hahnloser, R., Sarpeshkar, R., Mahowald, M., Douglas, R., & Seung, H. S. (2000). Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789), 947–951.
- Hertz, J., Krogh, A., & Palmer, R. (1991). Introduction to the Theory of Neural Computation. Redwood City, CA: Addison-Wesley.
- Hofer, S. B., Ko, H., Pichler, B., Vogelstein, J., Ros, H., Zeng, H., et al. (2011). Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex. Nature Neuroscience, 14, 1045–1052.
- Horn, R. (1985). Matrix Analysis. Cambridge University Press.
- Katzel, D., Zemelman, B. V., Buetfering, C., Wolfel, M., & Miesenbock, G. (2011). The columnar and laminar organization of inhibitory connections to neocortical excitatory cells. Nature Neuroscience, 14, 100–107.
- Kerlin, A. M., Andermann, M. L., Berezovskii, V. K., & Reid, R. C. (2010). Broadly tuned response properties of diverse inhibitory neuron subtypes in mouse visual cortex. Neuron, 67(5), 858–871.
- Koch, C. (1998). Biophysics of Computation: Information Processing in Single Neurons (Computational Neuroscience). Oxford University Press.
- Koch, C., Rapp, M., & Segev, I. (1996). A brief history of time (constants). Cerebral Cortex, 6(2), 93–101.
- Kuhlman, S. J., Tring, E., & Trachtenberg, J. T. (2011). Fast-spiking interneurons have an initial orientation bias that is lost with vision. Nat Neurosci, 14, 1121–1123.
- Lohmiller, W., & Slotine, J. (1998). On contraction analysis for nonlinear systems. Automatica, 34(6).
- Lohmiller, W., & Slotine, J. (2000). Nonlinear process control using contraction theory. AIChE Journal, 46(3), 588–596.
- Maass, W. (2000). On the computational power of winner-take-all. Neural Computation, 12, 2519–2536.
- McCormick, D. A., Connors, B. W., Lighthall, J. W., & Prince, D. A. (1985). Comparative electrophysiology of pyramidal and sparsely spiny stellate neurons of the neocortex. Journal of Neurophysiology, 54(4), 782–806.
- Neftci, E., Chicca, E., Indiveri, G., Cook, M., & Douglas, R. J. (2010). State-dependent sensory processing in networks of VLSI spiking neurons. In IEEE International Symposium on Circuits and Systems (ISCAS 2010).
- Papadopoulou, M., Cassenaer, S., Nowotny, T., & Laurent, G. (2011). Normalization for sparse encoding of odors by a wide-field interneuron. Science, 332, 721–725.
- Perin, R., Berger, T. K., & Markram, H. (2011). A synaptic organizing principle for cortical neuronal groups. Proc. Natl. Acad. Sci. U.S.A., 108, 5419–5424.
- Pham, Q., & Slotine, J. (2007). Stable concurrent synchronization in dynamic system networks. Neural Networks, 20(1), 62–77.
- Rabinovich, M. I., Huerta, R., Volkovskii, A., Abarbanel, H. D. I., Stopfer, M., & Laurent, G. (2000). Dynamical coding of sensory information with competitive networks. Journal of Physiology-Paris, 94(5–6), 465–471.
- Rutishauser, U., & Douglas, R. (2009). State-dependent computation using coupled recurrent networks. Neural Computation, 21(2).
- Rutishauser, U., Douglas, R., & Slotine, J. (2011). Collective stability of networks of winner-take-all circuits. Neural Computation, 23(3).
- Shepherd, G. M., & Grillner, S. (2010). Handbook of Brain Microcircuits. Oxford University Press.
- Shepherd, G. M., Stepanyants, A., Bureau, I., Chklovskii, D., & Svoboda, K. (2005). Geometric and functional organization of cortical circuits. Nat Neurosci, 8(6), 782–790.
- Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annu. Rev. Neurosci., 18, 555–586.
- Slotine, J. (2003). Modular stability tools for distributed computation and control. International Journal of Adaptive Control and Signal Processing, 17, 397–416.
- Slotine, J., & Lohmiller, W. (2001). Modularity, evolution, and the binding problem: a view from stability theory. Neural Networks, 14, 137–145.
- Sohal, V. S., Zhang, F., Yizhar, O., & Deisseroth, K. (2009). Parvalbumin neurons and gamma rhythms enhance cortical circuit performance. Nature, 459, 698–702.
- Soltani, A., & Koch, C. (2010). Visual saliency computations: mechanisms, constraints, and the effect of feedback. Journal of Neuroscience, 30(38), 12831–12843.
- Szucs, A., Huerta, R., Rabinovich, M. I., & Selverston, A. I. (2009). Robust microcircuit synchronization by inhibitory connections. Neuron, 61, 439–453.
- Tiesinga, P., Fellous, J. M., & Sejnowski, T. J. (2008). Regulation of spike timing in visual cortical circuits. Nat. Rev. Neurosci., 9, 97–107.
- Traub, R. D., Whittington, M. A., Stanford, I. M., & Jefferys, J. G. (1996). A mechanism for generation of long-range synchronous fast oscillations in the cortex. Nature, 383, 621–624.
- Wang, D., & Terman, D. (1995). Locally excitatory globally inhibitory oscillator networks. IEEE Transactions on Neural Networks, 6(1), 283–286.
- Wang, W., & Slotine, J. (2005). On partial contraction analysis for coupled nonlinear oscillators. Biological Cybernetics, 91(1).
- Whittington, M. A., Traub, R. D., & Jefferys, J. G. (1995). Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature, 373, 612–615.
- Willshaw, D. J., & Malsburg, C. von der (1976). How patterned neural connections can be set up by self-organization. Proc. R. Soc. Lond. B, Biol. Sci., 194, 431–445.
- Yabuta, N. H., & Callaway, E. M. (1998). Cytochrome-oxidase blobs and intrinsic horizontal connections of layer 2/3 pyramidal neurons in primate V1. Visual Neuroscience, 15, 1007–1027.
- Yuille, A., & Geiger, D. (2003). Winner-take-all networks. In M. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks (pp. 1228–1231). MIT Press.