Reservoir Computing (RC) denotes a class of Recurrent Neural Networks (RNNs) characterized by untrained recurrent connections. Essentially, RC networks are composed of a non-linear dynamical system, called reservoir, and a (typically) linear output layer, called readout. Training is restricted to the readout component alone, leading to an extremely efficient way of designing RNNs. When the dynamical reservoir is described in terms of discrete-time iterated maps, the model is known under the name of Echo State Network (ESN). Besides the striking efficiency advantage, the study of ESNs allows investigating the architectural properties of RNN systems apart from (or prior to) learning of recurrent connections. A major question arising in this context is how to enforce stability of dynamics at the stage of network initialization. This is typically approached by requiring the reservoir to satisfy an asymptotic stability property called the Echo State Property (ESP). Known literature conditions for the ESP mainly focus on analyzing algebraic properties of the recurrent weight matrix, but typically fail to effectively account for the fundamental effects of driving input signals on the reservoir behavior. In this paper, we take an empirical perspective on the study of stability of input-driven dynamical reservoir systems and propose a simple approach to assess the validity of the ESP for given input signals. The proposed methodology is demonstrated on two benchmark real-world datasets, showing interesting insights on the dynamical properties of ESN models.
2 Echo State Property Index
We study the behavior of non-linear discrete-time dynamical reservoirs, described in terms of a state-space model by a state transition function (bias terms are omitted for conciseness):

x(t) = F(x(t-1), u(t)) = f(W_in u(t) + W x(t-1)),   (1)
where X ⊆ R^{N_R} is the state space, U ⊆ R^{N_U} is the input space, x(t) ∈ X is the state at time t, and u(t) ∈ U is the external input at time t. The reservoir parameters are collected in W ∈ R^{N_R × N_R} and W_in ∈ R^{N_R × N_U}, respectively denoting the recurrent reservoir matrix and the input-to-reservoir weight matrix. Crucially, both W and W_in are kept untrained after initialization. In our notation, N_R and N_U are used to indicate the reservoir size and the input dimension, respectively. We use f to represent the element-wise application of a non-linear activation function. In our analysis, we shall assume that both the input and the state spaces are compact sets, where the latter condition is always ensured when f is a bounded non-linearity. In particular, as is common in ESN settings, we use f = tanh. The reservoir system is coupled with a readout tool, which at every time step t computes the output of the ESN, y(t) ∈ Y, through the linear combination y(t) = W_out x(t). We use Y ⊆ R^{N_Y} to denote the output space. The elements in W_out represent the only trainable weights of the model, which are typically found in closed form by using direct methods, e.g. ridge regression.
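The model described above can be sketched in a few lines of NumPy. The function and variable names below (reservoir_step, run_reservoir, ridge_readout, and the toy target) are illustrative choices of ours, not notation from the paper, and the hyper-parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N_R, N_U = 100, 1  # reservoir size and input dimension (illustrative values)

# Untrained reservoir parameters: recurrent matrix W and input matrix W_in.
W = rng.uniform(-1, 1, (N_R, N_R))
W_in = rng.uniform(-1, 1, (N_R, N_U))

def reservoir_step(x, u):
    """One application of the state transition function of eq. 1 (bias omitted)."""
    return np.tanh(W_in @ u + W @ x)

def run_reservoir(u_seq, x0=None):
    """Drive the reservoir with an input sequence, collecting the orbit of states."""
    x = np.zeros(N_R) if x0 is None else x0
    states = []
    for u in u_seq:
        x = reservoir_step(x, u)
        states.append(x)
    return np.array(states)

def ridge_readout(states, targets, lam=1e-6):
    """Closed-form ridge regression for the linear readout weights W_out."""
    return np.linalg.solve(states.T @ states + lam * np.eye(states.shape[1]),
                           states.T @ targets)

u_seq = rng.uniform(-1, 1, (200, N_U))
states = run_reservoir(u_seq)
W_out = ridge_readout(states, u_seq)  # toy target: reproduce the input itself
y = states @ W_out                    # readout output at every time step
```

Note how only ridge_readout involves any fitting: the recurrent and input matrices are drawn once and never updated, which is the defining trait of the RC approach.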
Given an input signal s = [u(1), u(2), ...] and an initial state x(0) ∈ X, the dynamics of the reservoir ruled by eq. 1 evolve describing an orbit x(1), x(2), .... Typically, a zero initial state is used, i.e. x(0) = 0. In this context, it is also convenient to introduce an iterated version of eq. 1, i.e. F̂: X × U^N → X, such that for any initial state x(0) and any input sequence of length N, s_N = [u(1), ..., u(N)], F̂(x(0), s_N) is the network state that results after N iterated applications of eq. 1 under the influence of the input signal s_N. As the reservoir's parameters in W and W_in do not undergo a training process, it is fundamental to impose constraints on their initialization, in order to ensure dynamical stability in applications. This requirement is typically expressed by means of the Echo State Property (ESP) [3]. An ESN whose reservoir dynamics are governed by eq. 1 is said to satisfy the ESP whenever for every pair of initial conditions x(0), z(0) ∈ X, and for any input sequence s_N of length N, it holds that:

lim_{N → ∞} ‖ F̂(x(0), s_N) − F̂(z(0), s_N) ‖ = 0.   (2)
Essentially, the ESP expresses the requirement of reservoir dynamics characterized by global asymptotic (Lyapunov) stability under the influence of the driving input. In other words, the orbit of the reservoir in the state space should be asymptotically determined uniquely by the input signal, and the influence of initial conditions should progressively fade away.
Literature works have provided two kinds of conditions for the ESP, typically in relation to algebraic properties of W. Sufficient conditions for the ESP were initially developed by studying eq. 1 as a contraction mapping; in that case, indeed, the reservoir can be shown to satisfy the ESP for any input sequence. By nature, this kind of condition is very restrictive, and the tightest bound known to date is related to the diagonal Schur stability of W [5, 6]. More recently, the work in [7] shifted the attention to analyzing the properties of the reservoir dynamics under the influence of driving external signals, providing an improved, input-dependent formulation of the sufficient condition for the ESP (eq. 3); in its statement, 1{·} denotes the indicator function, equal to 1 when its argument is true and to 0 otherwise. Besides, a well-known necessary condition for the ESP [3] originates from the study of the asymptotic stability of the reservoir system in eq. 1, linearized around the zero state and in the presence of zero input. Under these assumptions, the necessary condition for the ESP reads:

ρ(W) < 1,   (4)

where ρ(·) denotes the spectral radius of its matrix argument, i.e. the maximum among the absolute values of its eigenvalues. Although the condition in eq. 4 is the one used in most practical cases, it is worth noticing that for non-trivial (i.e., non-zero) input signals it is neither sufficient nor necessary for the ESP.
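In practice, the necessary condition of eq. 4 is enforced by rescaling a randomly drawn recurrent matrix to a target spectral radius, exploiting the fact that ρ(cW) = |c| ρ(W). A minimal sketch (function names are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def spectral_radius(M):
    """Maximum absolute value among the eigenvalues of M."""
    return float(np.max(np.abs(np.linalg.eigvals(M))))

def rescale_to(W, rho_target):
    """Rescale a recurrent matrix to a desired spectral radius,
    using the fact that rho(c * W) = |c| * rho(W)."""
    return W * (rho_target / spectral_radius(W))

W = rng.uniform(-1, 1, (100, 100))
W_nec = rescale_to(W, 0.9)   # satisfies the necessary condition rho(W) < 1
W_big = rescale_to(W, 1.5)   # violates it; as discussed above, the ESP may
                             # nevertheless hold under non-trivial driving inputs
```
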
In this paper, we take a concrete perspective on the problem of assessing whether the ESP is satisfied for a specific input signal by a given dynamical reservoir, aiming at estimating its degree of global asymptotic stability. To this end, we introduce an ESP index, which expresses the average deviation of reservoir orbits generated from random initial conditions with respect to a reference orbit obtained starting from the zero state. Essentially, if there exists a unique orbit that globally attracts all the reservoir dynamics under the influence of the same input signal (i.e., if the ESP index is zero), then we can claim that the ESP is empirically satisfied. The steps used to compute the proposed ESP index are detailed in Algorithm 1. In evaluating both the randomly initialized and the reference reservoir orbits, we consider dynamics ruled by eq. 1 driven by the same input signal, discarding the states in an initial transient. The ESP index is averaged over the randomly generated initial conditions, and the deviation between orbits is measured in terms of Euclidean distance.
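The computation just described can be sketched as follows. This is only an illustrative rendering of Algorithm 1 under our own naming and default parameter choices (transient length, number of initial conditions), not a verbatim transcription:

```python
import numpy as np

def esp_index(W, W_in, u_seq, transient=100, n_inits=10, seed=0):
    """Empirical ESP index (a sketch of Algorithm 1): average Euclidean deviation
    of orbits started from random initial conditions with respect to the reference
    orbit started from the zero state, after discarding an initial transient."""
    rng = np.random.default_rng(seed)
    N_R = W.shape[0]

    def orbit(x0):
        x, xs = x0, []
        for u in u_seq:
            x = np.tanh(W_in @ u + W @ x)   # reservoir dynamics of eq. 1
            xs.append(x)
        return np.array(xs)[transient:]     # discard the initial transient

    ref = orbit(np.zeros(N_R))              # reference orbit from the zero state
    deviations = []
    for _ in range(n_inits):
        x0 = rng.uniform(-1, 1, N_R)        # random initial condition
        deviations.append(np.mean(np.linalg.norm(orbit(x0) - ref, axis=1)))
    return float(np.mean(deviations))       # 0 means the ESP holds empirically
```

For a reservoir rescaled well inside the contractive regime the index is numerically zero, while strongly expanding reservoirs typically yield positive values.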
We applied the introduced methodology for the empirical assessment of the ESP considering two real-world benchmark datasets in the RC area. The first one is the Santa Fe Laser dataset, which consists of a time-series of normalized sampled intensities from an infra-red laser in a chaotic regime. The second one is a Sunspot dataset, i.e. a time-series of monthly averaged sunspot numbers (taken from http://sidc.be/silso/home), considered from January 1749 to September 2018. In our experimental setting, we considered ESNs with 100 (fully connected) reservoir units. We initialized the reservoir matrices in eq. 1 randomly from a uniform distribution over [-1, 1], re-scaling them to control two major reservoir hyper-parameters, namely the spectral radius ρ and the input scaling ω_in, each explored over a fine grid of values. For all the considered reservoir configurations, we computed the value of the ESP index according to Algorithm 1, using an initial portion of the input from each dataset, discarding an initial transient, and averaging over multiple random initial states. For every configuration we also assessed whether the corresponding reservoir satisfies the sufficient and the necessary conditions, respectively expressed by eq. 3 and 4. Moreover, in the explored settings, we evaluated the performance of the ESNs on the next-step prediction tasks defined on the considered datasets. For the Laser task we used the first 5000 time-steps for training and the remaining 5092 for test. For the Sunspot task, training and test sets contained respectively 3000 and 237 time-steps, and input values were scaled by a factor of 1000. Readout training was performed by ridge regression, with the regularization coefficient determined using the OCReP method [8]. As is common in ESN applications, training discarded an initial washout (of length 1000 in our experiments). For each reservoir configuration, all results were averaged over multiple realizations of the random initialization.
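The experimental pipeline can be sketched end-to-end for a single configuration. The following is a simplified illustration under our own assumptions: the hyper-parameter defaults are arbitrary rather than the explored grid, and a fixed ridge coefficient lam stands in for the OCReP selection used in the experiments:

```python
import numpy as np

def next_step_esn(series, n_train, washout=200, rho=0.9, omega_in=0.5,
                  N_R=100, lam=1e-6, seed=0):
    """Sketch of the pipeline: build a randomly initialized reservoir, drive it
    with the series, discard a washout, fit a ridge readout for next-step
    prediction, and return the test mean squared error."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (N_R, N_R))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))     # set the spectral radius
    W_in = rng.uniform(-omega_in, omega_in, N_R)        # set the input scaling

    # Collect reservoir states along the whole series (eq. 1 dynamics).
    x, states = np.zeros(N_R), []
    for u in series[:-1]:
        x = np.tanh(W_in * u + W @ x)
        states.append(x)
    states, targets = np.array(states), series[1:]

    # Ridge readout fitted on the training portion, after the washout.
    X, y = states[washout:n_train], targets[washout:n_train]
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(N_R), X.T @ y)

    pred = states[n_train:] @ W_out                     # next-step predictions
    return float(np.mean((pred - targets[n_train:]) ** 2))
```

On an easy noiseless signal such as a sine wave, this sketch attains a very small test error, which is the qualitative behavior expected from stable reservoir configurations.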
Fig. 1 shows the achieved results on the Laser and Sunspot datasets. The main plots show the averaged ESP index values achieved in correspondence of the explored reservoir settings (clipped to a common maximum value for graphical convenience), where darker colors indicate smaller values, and a value of 0 indicates that the ESP is empirically verified. For convenience, upper bounds to the sets of configurations satisfying the necessary and the sufficient conditions are graphically shown as well. The smaller plots show the test errors achieved on the prediction tasks in correspondence of the same reservoir configurations.
Results in Fig. 1 are illuminating in several ways. First of all, we can observe that in both the analyzed cases the set of hyper-parametrizations that empirically satisfies the ESP comprises reservoir configurations well beyond those encompassed by the commonly adopted ESP conditions. Interestingly, the ESP index plots also empirically confirm the intrinsic stabilizing effect of increasing input scaling on RNN dynamical regimes. Extending our analysis to the performance plots, we see that the domain of empirical ESP validity, bounded by a region characterized by a sharp stable-unstable transition of the recurrent dynamics, corresponds nicely to the set of reservoir configurations with lower generalization error (darker points in the smaller plots), with a sharp performance decay around the ESP validity border. In this sense, our analysis also reveals that a large portion of "good" reservoirs is usually neglected in common RC practice.
We have studied global asymptotic stability constraints for untrained reservoir dynamics driven by external input. Taking a concrete perspective in our analysis, we have proposed an empirical ESP index, which makes it easy to assess whether an ESN shows stable dynamics in the presence of given input signals. The proposed approach has been demonstrated on RC benchmark datasets, revealing interesting insights on the actual stability properties of input-driven reservoirs. Looking ahead, the work described in this paper allows us to foresee automatic task-dependent approaches for effectively setting RC configurations. Besides, it provides a useful starting point for theoretically-oriented studies aiming at unlocking the full potential of RC models in real-world applications.
- [1] M. Lukoševičius and H. Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009.
- [2] H. Jaeger and H. Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304(5667):78–80, 2004.
- [3] H. Jaeger. The "echo state" approach to analysing and training recurrent neural networks - with an erratum note. Technical report, GMD - German National Research Institute for Computer Science, 2001.
- [4] C. Gallicchio and A. Micheli. Architectural and Markovian factors of Echo State Networks. Neural Networks, 24(5):440–456, 2011.
- [5] I.B. Yildiz, H. Jaeger, and S.J. Kiebel. Re-visiting the echo state property. Neural Networks, 35:1–9, 2012.
- [6] M. Buehner and P. Young. A tighter bound for the echo state property. IEEE Transactions on Neural Networks, 17(3):820–824, 2006.
- [7] G. Manjunath and H. Jaeger. Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks. Neural Computation, 25(3):671–696, 2013.
- [8] R. Cancelliere, M. Gai, P. Gallinari, and L. Rubini. OCReP: An Optimally Conditioned Regularization for pseudoinversion based neural training. Neural Networks, 71:76–87, 2015.