Memory loss or Echo State Property
In dynamical systems theory, memory loss arises for two diametrically opposite reasons. When a system has sensitive dependence on initial conditions, as in chaotic systems, small errors multiply within a short time, so that it is unfeasible to track a specific trajectory (e.g., [5, 15]). On the other hand, the diameter of the state space may asymptotically decrease to zero, so that trajectories tend to coalesce into a single trajectory. In this paper, we consider the latter case of memory loss, when a dynamical system is driven exogenously.
A feature of memory loss in driven systems is that a single bounded trajectory emerges as a representative of the drive/input. To illustrate this idea, we consider a continuous-time driven system, in particular a scalar differential equation , where . The solution of the differential equation with an initial condition can be shown to be , where . For a given , regardless of . Hence, in this case, or in cases similar to it (for a schematic see Fig. 2), for a fixed initial choice , setting the starting time further back in time brings closer to for a given . It follows that different solutions asymptotically forget, or lose the memory of, their initial condition, and a single bounded attractive solution emerges as a “proxy” for the drive (in the state space). This is the essence of memory loss w.r.t. the input . In general, such memory loss is not exhibited by all driven systems, nor is it observed generically. Moreover, the phenomenon usually depends both on the exogenous input and on the equations governing the system. In cases where there is no memory loss w.r.t. the input, it is not a single trajectory like , but a set-valued function comprising multiple trajectories (solutions), that attracts nearby solutions.
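A minimal numerical sketch of this coalescence, using the assumed concrete instance dx/dt = -x + u(t) (the paper's equation is more general; this form is chosen for illustration only). Two trajectories started far apart, driven by the same input, become numerically indistinguishable:

```python
import numpy as np

# Assumed toy driven system: dx/dt = -x + u(t), integrated by forward Euler.
# The difference between any two solutions decays like exp(-t), so all
# trajectories coalesce onto a single bounded "proxy" of the input.
def simulate(x0, t0, t1, dt=1e-3, u=np.sin):
    x, t = x0, t0
    while t < t1:
        x += dt * (-x + u(t))  # Euler step of dx/dt = -x + u(t)
        t += dt
    return x

a = simulate(5.0, 0.0, 20.0)   # trajectory from x(0) = 5
b = simulate(-5.0, 0.0, 20.0)  # trajectory from x(0) = -5
assert abs(a - b) < 1e-3       # initial conditions have been forgotten
```

Starting the integration further back in time (smaller t0) only strengthens the agreement at a fixed final time, which is the memory-loss mechanism described above.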
The idea of relating memory loss to the presence of a single proxy corresponding to an input extends to discrete-time driven dynamical systems (representative work is in [21]). In the RC literature, the idea of memory loss or fading memory, popularly called the echo state property (ESP), was treated as a stability property of the reservoir or network per se. In such a case, ESP guarantees that the entire past history of the input determines the state of the system precisely, i.e., there is no possibility of two or more reachable states at any given point in time [13] if the entire past values of the input have influenced the dynamics. The formal definition linking ESP intrinsically to an input is available in [21]. Here, we extend the results of [21]
and consider a general setup to analyze the effect of both an input and a design parameter. A design parameter (not necessarily a scalar; it includes the case of a vector or a matrix)
, an input value , a state variable in a compact state space , and a continuous function that takes values in constitute a parametric driven system (we denote this system by throughout, with the other entities silently understood). The dynamics of a parametric driven system is rendered through a temporal input via the equation . Given an and , any sequence in the state space that satisfies the equation for all integers is an image of in the space (also called a solution of ). Having fixed and , suppose we collect the component of all images and denote it by ; then we have a collection of sets , which we call the representation of . It turns out that the components of this representation satisfy (see [21, Proposition 2] and [23] for details). Following [21], we say that the driven system has the echo state property (ESP) with respect to (w.r.t.) an input at if it has exactly one solution or one image in the space , or (equivalently) if for each , is a singleton of . The emergence of a single image or proxy does not, in general, mean that it captures the “information” content of the input. For instance, consider the discrete-time system , where and with . It may be verified that for every input , the map is a contraction on for each , and it easily follows that there is exactly one solution in (by applying Lemma 2 in [21]). However, every solution in converges monotonically to as . Such convergence means that no image of reflects the temporal complexity of , nor is there any good separability of the representations of the inputs in the space . Such a driven system is not useful for applications. On that account, in RC, one employs driven dynamical systems with a higher-dimensional state space.

Representation and Encoding maps
Given an and and a driven system , the set , a component of the representation, can equivalently be defined in two other useful ways. For the first one, given , denote its left-infinite subset by . If the entire left-infinite sequence were fed to the system, let the set of all reachable states at time be denoted by , and call it the encoding of and . The set turns out to be the nested intersection of the sets , and so on (see Fig. 3 for a schematic description), and is identical to the set . The second interpretation is that is the limit of the sequence of sets (defined using the Hausdorff metric on the space of subsets of ; see [23] for details). Thus, from the point of view of computer simulations, given an input segment of and a large number of initial conditions : if the driven system is fed the input on the system states to get new states , then fed on those states to get new states , and so on, then represents the simulated approximation of , and also an approximation of itself.
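The simulation procedure just described can be sketched as follows, for an assumed one-dimensional toy system x_{n+1} = tanh(a*x_n + u_n) (illustrative only; the paper's system and state space are more general). A cloud of initial conditions is pushed forward by the same input segment; the surviving cloud approximates the encoding set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate the encoding: feed one input segment to many initial states.
# Assumed toy update rule: x_{n+1} = tanh(a * x_n + u_n).
def approximate_encoding(u_segment, a=0.5, n_init=1000):
    x = rng.uniform(-1.0, 1.0, size=n_init)  # many initial conditions
    for u in u_segment:                      # iterate the driven system
        x = np.tanh(a * x + u)
    return x                                 # approximation of the encoding set

u = rng.uniform(-1.0, 1.0, size=200)
cloud = approximate_encoding(u)
# With a = 0.5 the update is a contraction, so the encoding is a singleton
# and the simulated cloud collapses to (numerically) a single point:
assert cloud.max() - cloud.min() < 1e-10
```

When the map is not contracting for the given input and parameter, the final cloud stays spread out, approximating a multi-valued encoding instead.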
Our approach is to understand the influence of the input and the parameter separately on the dynamics in the space , and in particular on the encoding . To do this, we vary the input keeping the parameter constant, i.e., study the set-valued function , which we call the input-encoding map (given the parameter). Next, we vary the parameter keeping the input constant, i.e., study , which we call the parameter-encoding map (given the input). When we study the entire representations in these two cases, i.e., for a given parameter and for a given input, we call them the input-representation and parameter-representation maps respectively.
The analysis and simulations to follow would be much simpler if the single set determined the ESP, rather than it being inferred from the representation maps. This is possible when we restrict to an open-parametric driven system. A parametric driven system is an open-parametric driven system if the mapping does not map any set with a nonempty interior (e.g., [25]) to a singleton of . An example of an open-parametric driven system is a recurrent neural network (RNN) of the form , where (the nonlinear activation) is performed component-wise on , a real-valued parameter corresponds to the scaling of the (invertible) reservoir matrix of dimension , is a matrix of input connections, and (the cartesian product of copies of ). It can be readily verified that, in the case of an open-parametric driven system, has ESP w.r.t. at if and only if , i.e., is a singleton of . Note that the parameter need not always be a scalar; it can even represent either of the two matrices or in the RNN.

Neuromorphic computing schemes do not express unbounded dynamics. This boundedness is normally due to a nonlinearity, such as a saturation function like , that subdues or squashes the effect of large-amplitude inputs or, in other words, produces contractions on subsets of the space when an input value is of large amplitude. We characterize the ability to obtain such contracting dynamics by the term contractibility. Given an open-parametric driven system and an , if there exists an input such that has ESP w.r.t. at , then we say is contractible at . The RNN is contractible at every value of as long as the matrix has no rows consisting only of zeroes (formal proof in [21, (ii) of Theorem 2]). This condition on ensures that when the input is of large amplitude, the states of the system are made to coalesce owing to the squashing of the activation functions, so that ESP is obtained. Note that being contractible at does not mean that is a contraction mapping; it only asserts the existence of some input that could drive the system into a subdomain where contracting dynamics takes place.

Continuity of the encoding and representation maps
In a practical application, heuristically speaking, it is desirable that the reachable system states change only by a small amount when the change in the input is small (as in Fig. 1). When this heuristic holds, it translates into the mathematical notion of continuity of the input-encoding or input-representation map, which underpins Theorem 1. We first present both the mathematical and the intuitive idea of continuity (Fig. 4) of the encoding maps from the set-valued-functions point of view; for a formal treatment, see [23].

Continuity of set-valued functions [3] can be defined with the notion of open neighborhoods, as for ordinary functions. A set-valued function is continuous at a point if it is both upper-semicontinuous and lower-semicontinuous there. A set-valued function is upper-semicontinuous (u.s.c.) at if for every open set containing there is a neighborhood of such that is contained in ; it is lower-semicontinuous (l.s.c.) at if for every open set that intersects there is a neighborhood of such that also has nonempty intersection with (see Fig. 4, and [23] for a formal definition). If is u.s.c. but not l.s.c. at then it has an explosion at , and if is l.s.c. but not u.s.c. at then it has an implosion at . The function is called continuous, u.s.c., or l.s.c. if it is respectively continuous, u.s.c., or l.s.c. at all points of its domain.
A property that we use in this paper is that if is u.s.c. at and is single-valued there, then it is continuous at . In addition to the few cases illustrated in Fig. 4, we point out that the “graphs” of set-valued maps can behave wildly. One such case, which we will refer to later, is a map that is u.s.c. everywhere but not continuous on any open set. For example, the set-valued function defined by
F(x) = [0, 1/q] if x = p/q is a nonzero rational in lowest terms (q > 0), and F(x) = {0} if x is irrational or x = 0,     (1)

is upper-semicontinuous everywhere but continuous only at the irrationals and at 0.
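A small numerical probe of such a map. We assume the standard example F(x) = [0, 1/q] at a nonzero rational x = p/q in lowest terms and F(x) = {0} otherwise (this F, including the helper `sup_F`, is our illustrative choice, not taken verbatim from the text): approaching 1/2 through rationals with growing denominators, the supremum of F collapses towards 0 while F(1/2) = [0, 1/2], exhibiting the failure of lower-semicontinuity (an explosion) at 1/2.

```python
from fractions import Fraction

# sup of the assumed set-valued map F: [0, 1/q] at p/q (lowest terms), {0} at 0.
def sup_F(x: Fraction) -> Fraction:
    if x == 0:
        return Fraction(0)
    return Fraction(1, x.denominator)  # Fraction keeps lowest terms

assert sup_F(Fraction(1, 2)) == Fraction(1, 2)  # F(1/2) = [0, 1/2]
# Rational approximants 1/2 + 1/n have large denominators, so F shrinks:
vals = [sup_F(Fraction(1, 2) + Fraction(1, n)) for n in (10, 100, 1000)]
assert all(v < Fraction(1, 2) for v in vals)    # nearby values miss most of F(1/2)
```

The same probe at an irrational-like limit (denominators growing without bound) shows F shrinking to {0}, consistent with continuity at the irrationals.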
Theorem 1 (proof in [23]) states that undesirable responses to the input can be avoided, or robustness to the input obtained, if and only if the set of reachable system states is a singleton, i.e., when ESP holds.
Theorem 1.
(Continuity of the input-representation map and the input-encoding map) For an open-parametric driven system that is contractible at , the following are equivalent: has ESP w.r.t. at ; the input-representation map is continuous at ; the input-encoding map is continuous at .
Details of how the above theorem fails when the contractibility hypothesis is removed are given in [23]. Further, when the input-encoding map is discontinuous at , it can be shown that it is u.s.c. but not l.s.c. at , which means that there is an explosion in the set (see [23, Lemma 2]). A mathematically pedantic reader may note that, at a point of discontinuity, the input-encoding map behaves wildly, similar to the set-valued function in equation (1) at its points of discontinuity.
It may not be possible to simulate the input-encoding or input-representation maps by fixing and varying the inputs, since each input lies in an infinite-dimensional space. However, to practically verify robustness of to a given input at a parameter value , one can infer whether is a singleton of (robust) or multi-valued (not robust) by numerically observing whether clusters towards a single point. A quantity that measures the deviation from a single point-cluster [23, Eqn. 7] can be used.
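A simple stand-in for such a deviation measure ([23, Eqn. 7] is not reproduced here; the maximum pairwise distance used below is our own illustrative choice): it is near zero exactly when the simulated cloud of reachable states is a single point-cluster.

```python
import numpy as np

# Deviation of a simulated state cloud from a single point-cluster,
# measured (illustratively) as the largest pairwise Euclidean distance.
def cluster_deviation(states: np.ndarray) -> float:
    # states: (n_points, dim) array of simulated reachable states
    diffs = states[:, None, :] - states[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).max())

tight = np.full((50, 3), 0.7) + 1e-12 * np.arange(150).reshape(50, 3)
spread = np.vstack([np.zeros((25, 3)), np.ones((25, 3))])
assert cluster_deviation(tight) < 1e-9   # effectively a singleton: ESP plausible
assert cluster_deviation(spread) > 1.0   # multi-valued: no ESP
```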
It is also undesirable in an application to observe large changes in the system response for small changes in the parameter. Theorem 2 (proof in [23]), although not as assertive as Theorem 1, states that ESP implies continuity of the parameter-encoding map; the contractibility condition is not needed, so the theorem applies even to systems that do not satisfy it. The converse of Theorem 2 is not true (see examples in [23]) and is addressed later.
Theorem 2.
(Continuity of the parameter-encoding and the parameter-representation map) Fix an and hence . If an open-parametric driven system has ESP w.r.t. at , then the parameter-encoding map is continuous at . More generally, if has ESP w.r.t. at , then the parameter-representation map is continuous at .
It is possible to simulate the parameter-encoding map at discrete steps, when is a scalar, to identify ESP. When ESP is satisfied, i.e., when is a singleton of for some , the numerically obtained contains a single, highly clustered set of points; it does not when is multi-valued. Such a distinction can be made even with a small number of initial conditions. Fig. 5 shows two coordinates of the parameter-encoding map evaluated at a discrete set of points for two different RNNs. We consider with neurons and with neurons, a randomly generated input of length , an input matrix and a reservoir matrix (chosen randomly, but with unit spectral radius). For the plot on the left in Fig. 5, we use 1000 initial conditions chosen on the boundary of (to exploit the fact that is an open mapping [25] and hence preserves the boundary). For the plot on the right, only samples were chosen in . Clearly, in both plots of Fig. 5, the clustering towards a single point for smaller values of is evident. Hence, one need not worry about simulating the parameter-encoding map accurately if the task is only to identify whether has a single point-cluster. On the other hand, to identify instabilities due to the parameter when there is no ESP, one needs to analyze the parameter-encoding map further.
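The simulation just described can be sketched as follows for a small assumed RNN x_{n+1} = tanh(a*W x_n + W_in u_n), with a the scalar scaling of the reservoir matrix (the network sizes, input length, and the spread statistic are our illustrative choices, not the exact ones behind Fig. 5):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, n_init = 20, 1000, 50
W = rng.standard_normal((N, N))
W /= np.abs(np.linalg.eigvals(W)).max()      # normalize to unit spectral radius
W_in = rng.standard_normal((N, 1))           # input connections
u = rng.uniform(-1, 1, size=L)               # one fixed random input

# Final cloud of reachable states for a given reservoir scaling a.
def final_states(a):
    x = rng.uniform(-1, 1, size=(n_init, N))  # many initial conditions
    for un in u:
        x = np.tanh(a * x @ W.T + un * W_in.T)
    return x

# Spread of the cloud: small means a single point-cluster (ESP plausible).
def spread(x):
    return float(np.linalg.norm(x - x.mean(0), axis=1).max())

# Small scaling gives contracting dynamics, hence a single point-cluster:
assert spread(final_states(0.1)) < 1e-6
```

Sweeping `a` over a grid and plotting `spread(final_states(a))` against `a` gives a crude picture of where the parameter-encoding map stops being single-valued.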
Instability due to the parameter, edge-of-criticality and the ESP threshold
If the parameter-encoding map is multi-valued at an (i.e., there is no ESP) but is continuous at , then small fluctuations around are not a practical concern, although the concern due to a failure of input robustness, in view of Theorem 1, remains. If one could identify the continuities and discontinuities of the parameter-encoding map through its finite-time approximations , the problem of identifying the parameters that cause instability would be solved, since in practice only the continuous finite-time approximations can be computed. Theorem 3 (proof in [23]) accomplishes this task using the notion of equicontinuity of a family of functions. In essence, equicontinuity means we can describe the continuity of the entire family at once (formal definition in [25, 23]). To explain the idea, consider the family of functions defined by , on , that is not equicontinuous at ; the limit of this sequence of functions is not continuous at (see [23]). If we take equally spaced points in the interval and compute for large , we observe that , as a function of , turns distinctly positive at , which is precisely the point where the family fails to be equicontinuous in . In general, given a monotonic sequence of continuous functions like this, continuity of the limit at can be verified numerically by checking whether is close to zero – this works since and evaluated in a neighborhood of remain close. Conversely, discontinuity of the limit at is indicated by being distinctly positive, since then differs conspicuously from in a neighborhood of owing to the non-equicontinuity of the family at . Now, since , the sets form a monotonically decreasing sequence, and when is distinctly positive at , we can identify as a point of discontinuity of the parameter-encoding map.
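This detection scheme can be illustrated on the classic family f_n(x) = x**n on [0, 1] (assumed here as a stand-in for the family described in the text), which is not equicontinuous at x = 1 and whose pointwise limit is discontinuous there:

```python
import numpy as np

# Family f_n(x) = x**n on [0, 1]; the gap |f_{2n} - f_n| on a grid stays
# near zero except close to x = 1, where equicontinuity fails and the
# pointwise limit (0 on [0,1), 1 at 1) is discontinuous.
x = np.linspace(0.0, 1.0, 101)
n = 200
gap = np.abs(x ** (2 * n) - x ** n)

assert gap[:-5].max() < 1e-3  # away from x = 1 successive members stay close
assert gap.max() > 0.1        # conspicuously positive near x = 1
```

The same recipe, with `gap` replaced by a set-distance between successive finite-time approximations, underlies the parameter-stability plot used later.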
This idea works because the discontinuities of the parameter-encoding map arise only from the failure of equicontinuity of , as delineated in Theorem 3. More generally, Theorem 3 addresses the converse of Theorem 2.
Suppose is a scalar parameter in an interval . Given an open-parametric driven system and an input , we define to be the ESP threshold if has ESP w.r.t. at every while it does not have ESP w.r.t. at any . In other words, is an ESP threshold if is a singleton of for and is multi-valued for . Note that, without loss of generality, we adopt the convention that the parameter-encoding map loses stability when the parameter is above the threshold; the analysis to follow holds equally if it loses stability below a threshold instead.
Suppose is the ESP threshold in an interval . If the parameter-encoding map is also continuous at , we call a soft-ESP threshold; when the parameter-encoding map is discontinuous at , we call it a hard-ESP threshold (examples in [23]). If is a soft-ESP threshold, there is a seamless transition of the parameter-encoding map from being single-valued to being multi-valued (as in (a) of Fig. 4) as crosses the threshold, and hence no instabilities (discontinuous changes) in the system response due to fluctuations in . We conjecture that a soft-ESP threshold is observed only in very specific cases, when the input is constant or periodic, or in the limit turns out to be constant or periodic (see [23]). If is a hard-ESP threshold, there is a discontinuous change, and we call the input-specific edge-of-criticality, a folklore term used to describe a sudden onset of “chaotic” behavior in complex systems.
Theorem 3.
Consider an open-parametric driven system and an input . The parameter-encoding map is continuous at if and only if is equicontinuous at . In particular, when is a real parameter in the interval and is the ESP threshold, then is a soft-ESP threshold when is equicontinuous at , and a hard-ESP threshold otherwise.
In the literature, various information-theoretic measures have been used to demonstrate quantitatively that the performance of RNNs, and of many other complex systems, is enhanced when they operate close to the edge-of-criticality (e.g., [4, 18]). However, there is no clear mathematical definition of the edge-of-criticality in that literature; heuristic methods to approximate it likewise lack a clear definition. Here, we have mathematically identified the input-specific edge-of-criticality in an interval of parameters as the smallest value of for which equicontinuity fails. Non-equicontinuity is essential for sensitive dependence on initial conditions, an ingredient of chaos [1].
A stepwise procedure to numerically identify the discontinuity points of the parameter-encoding map, and the edge-of-criticality or hard-ESP threshold, for a given (when the parameter is a scalar) is presented next (Fig. 6, discussed later, employs this procedure):

Fix equally spaced points in an interval ;

With initial conditions of the reservoir and an input of length , simulate ;

Find , where in practice the distance between two finite sets and of identical cardinality can be computed as ; here denotes the minimum of each row of a matrix , the element of is the norm between the th element of and the th element of , i.e., (any norm can be employed in finite dimensions), and is the transpose of ;
Obtain the parameter-stability plot, i.e., against , and identify the smallest at which the plot of turns conspicuously positive as the edge-of-criticality or hard-ESP threshold in the interval , and the other values where the plot is conspicuously positive as points of discontinuity of the parameter-encoding map. If the plot remains close to zero throughout, then the parameter-encoding map is continuous everywhere.
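The set-distance in the procedure above is the Hausdorff distance between two finite state clouds; a minimal sketch (using the Euclidean norm and the row-minimum construction described in the step) is:

```python
import numpy as np

# Hausdorff distance between finite point sets X and Y: the larger of the
# two directed distances, each being the maximum over one set of the
# distance to the nearest point of the other set.
def hausdorff(X: np.ndarray, Y: np.ndarray) -> float:
    # D[i, j] = ||X[i] - Y[j]|| (Euclidean; any norm works in finite dim.)
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.5]])
assert hausdorff(X, Y) == 0.5  # farthest point of X is 0.5 away from Y
assert hausdorff(X, X) == 0.0
```

In the parameter-stability plot, this distance is evaluated between successive finite-time approximations of the encoding at each grid value of the parameter.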
Numerical Results
Here we use the parameter-stability plot to find the hard-ESP threshold of an RNN. Upper bounds on attributes of the reservoir matrix (like the spectral radius) that ensure ESP for all possible inputs, i.e., ESP w.r.t. the entire space , are available (e.g., [28]). More commonly, the criterion of unit spectral radius of the matrix is currently used, with the intuition that ESP then holds w.r.t. all possible inputs. The disadvantage of these bounds or criteria is that they do not depend on the input, and in particular on the input’s temporal properties. In a practical application, often only a class of inputs with specific templates is employed for a given task. Hence the bounds are suboptimal on two accounts: (i) the temporal relation in the input is ignored; (ii) the gap between the actual edge-of-criticality and the bound (like unit spectral radius of the reservoir) is unknown.
With the data used in generating the plot (on the right) in Fig. 5, and noting that the reservoir matrix has unit spectral radius, we obtain the parameter-stability plot shown in Fig. 6. The smallest for which the plot turns conspicuously positive can be identified as the hard-ESP threshold. Wherever the parameter-encoding map is discontinuous, it fails to be lower-semicontinuous and hence has wildly behaving explosions; thus the parameter-stability plot in Fig. 6 is wiggly, in addition to being positive, for . We remark that the scenario at a point of discontinuity of the parameter-encoding map is similar to the behavior of the example in (1) at its points of discontinuity. Note that, during simulations, a very accurate approximation of turns out not to be essential when one is only interested in determining the continuity and discontinuity points of the parameter-encoding map. Hence, even with 50 initial conditions, we can identify the discontinuity points in the parameter-stability plot. All numerical simulations are computationally inexpensive and quick ( minutes).
Several cross-checks can be made to ensure that the determination of the ESP threshold via the parameter-stability plot in Fig. 6 is not heavily influenced by the finite approximations of the parameter-encoding map. These include varying the step-size of (see [23, Fig. S4]) and complementary plots that show a coordinate of the parameter-encoding map and a coefficient measuring the deviation from a single point-cluster alongside the parameter-stability plot (see [23, Fig. S5]). Lastly, based on [21, (ii) of Theorem 4.1], we verify that the ESP threshold increases when the input is scaled (see [23, Fig. S6] for details).
Discussion
We have provided a mathematical framework in which the stability of a parametric driven system to a perturbation in the input or in a design parameter can be analyzed, and presented the results in Theorem 1 and Theorem 2. Our results are applicable beyond RC, to schemes such as fully trained recurrent neural networks [6] and newer adaptations like deep recurrent neural networks [9], conceptors [14], and data-based evolving network models [22]. As an application to the design of an optimal scalar design parameter, we have shown how the hard-ESP threshold or edge-of-criticality is identified as a particular discontinuity of the parameter-encoding map (Theorem 3). We remark that when inputs are drawn from a stationary ergodic process, the echo state property threshold determined with a typical input is valid for all inputs, in view of the probabilistic law of the ESP [21].
Our result for identifying instabilities due to a parameter is universal and not limited to a scalar parameter – it can be applied to any parameter of the system; for instance, the input matrix or the reservoir matrix itself can be treated as a parameter. Theorem 2 then guarantees stability with respect to these matrices (parameters) whenever ESP is satisfied. This explains the empirically observed robustness of echo state networks with respect to the random choice of reservoir or input matrices in applications. Such stability was explained in the context of a class of standard feedforward neural networks [11], but no such result has been available for recurrent neural networks.
An enterprising direction of future work is to find necessary conditions under which any input can be embedded into the reservoir. We caution the reader that the word embedding is used colloquially in some RC literature – embedding would mean that the input and its proxy or image in the reservoir subspace are homeomorphic images of each other. Some results on the existence of networks that give an embedding are available in [8]. However, practically more useful work would be directed towards designing the reservoir so that inputs can be embedded into the reservoir space.
Acknowledgements. The author thanks the National Research Foundation, South Africa, for supporting travel to the University of Pretoria in 2017 during which the work was initiated.
References
[1] (2018) A note on nonautonomous discrete dynamical systems. In The Meeting in Topology and Functional Analysis, pp. 29–41.
[2] (2011) Information processing using a single dynamical node as complex system. Nature Communications 2, pp. 468.
[3] (2009) Set-valued analysis. Springer Science & Business Media.
[4] (2004) Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation 16 (7), pp. 1413–1436.
[5] (1985) Decay of correlations in dynamical systems with chaotic behavior. Zh. Eksp. Teor. Fiz. 89, pp. 1452–1471.
[6] (1994) Pruning recurrent neural networks for improved generalization performance. IEEE Transactions on Neural Networks 5 (5), pp. 848–851.
[7] (2018) Echo state networks are universal. Neural Networks 108, pp. 495–508.
[8] (2019) Embedding and approximation theorems for echo state networks. arXiv preprint arXiv:1908.05202.
[9] (2013) Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 190–198.
[10] (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79 (8), pp. 2554–2558.
[11] (2006) Extreme learning machine: theory and applications. Neurocomputing 70 (1–3), pp. 489–501.
[12] (2004) Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304 (5667), pp. 78–80.
[13] (2001) The “echo state” approach to analysing and training recurrent neural networks – with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report 148 (34), pp. 13.
[14] (2017) Using conceptors to manage neural long-term memories for temporal patterns. The Journal of Machine Learning Research 18 (1), pp. 387–429.
[15] (2003) Chaotic itinerancy. Chaos: An Interdisciplinary Journal of Nonlinear Science 13 (3), pp. 926–936.
[16] (2011) Nonautonomous dynamical systems. American Mathematical Society.
[17] (2016) Design and analysis of a neuromemristive reservoir computing architecture for biosignal processing. Frontiers in Neuroscience 9, pp. 502.
[18] (1990) Computation at the edge of chaos: phase transitions and emergent computation. Physica D: Nonlinear Phenomena 42 (1–3), pp. 12–37.
[19] (2009) Reservoir computing approaches to recurrent neural network training. Computer Science Review 3 (3), pp. 127–149.
[20] (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation 14 (11), pp. 2531–2560.
[21] (2013) Echo state property linked to an input: exploring a fundamental characteristic of recurrent neural networks. Neural Computation 25 (3), pp. 671–696.
[22] (2017) Evolving network model that almost regenerates epileptic data. Neural Computation 29 (4), pp. 937–967.
[23] (2019) Supporting information that would be available soon.
[24] (1990) Neuromorphic electronic systems. Proceedings of the IEEE 78 (10), pp. 1629–1636.
[25] (2014) Topology. Pearson Education.
[26] (2014) Experimental demonstration of reservoir computing on a silicon photonics chip. Nature Communications 5, pp. 3541.
[27] (2013) Topological and dynamical complexity of random neural networks. Physical Review Letters 110 (11), pp. 118101.
[28] (2012) Revisiting the echo state property. Neural Networks 35, pp. 1–9.