Analysis of Memory Capacity for Deep Echo State Networks

06/11/2019, by Xuanlin Liu et al., Virginia Polytechnic Institute and State University

In this paper, the echo state network (ESN) memory capacity, which represents the amount of input data an ESN can store, is analyzed for a new type of deep ESNs. In particular, two deep ESN architectures are studied. First, a parallel deep ESN is proposed in which multiple reservoirs are connected in parallel, allowing the outputs of multiple ESNs to be averaged and thus decreasing the prediction error. Then, a series ESN architecture is proposed in which ESN reservoirs are placed in cascade so that the output of each ESN is the input of the next ESN in the series. This series ESN architecture can capture more features between the input sequence and the output sequence, thus improving the overall prediction accuracy. Fundamental analysis shows that the memory capacity of parallel ESNs is equivalent to that of a traditional shallow ESN, while the memory capacity of series ESNs is smaller than that of a traditional shallow ESN. In terms of normalized root mean square error, simulation results show that the parallel deep ESN achieves a 38.5% reduction compared with the traditional shallow ESN, while the series deep ESN achieves a 16.8% reduction.


I Introduction

Reservoir computing (RC) is a state-space model that follows a fixed state transition structure (known as a reservoir) and an adaptable output [1]. The echo state network (ESN) is considered one of the simplest forms of the RC model and is well suited to processing time series. An ESN typically uses ten times fewer neurons than other recurrent neural networks (RNNs). Furthermore, only the output weight matrix of an ESN needs to be trained [2]. Due to their effectiveness and training simplicity, ESNs have been widely used in many fields [3, 4, 5, 6, 7, 8, 9, 2, 10], including time series prediction, wireless networks, and unmanned aerial vehicle control and communication. However, due to their simple structure and the randomness of their weight matrices, ESNs face many challenges, including minimizing prediction errors and enhancing prediction accuracy for highly complex systems.

The existing literature, such as [11, 12, 9, 8, 13, 14, 15], has studied a number of problems related to ESNs. The authors in [8] introduced a bat algorithm to overcome the influence of the initial random weights, thereby improving the effectiveness and robustness of the ESN prediction system. The work in [9] proposed a Kalman filter to improve ESN predictions by recursively training the network output weights. The authors in [11] and [12] proposed simple cycle reservoirs and cycle reservoirs with jumps so as to shorten the trial session for a specific system. The authors in [13] proposed a scale-free highly-clustered ESN (SHESN) to capture a large number of features of the ESN input stream for better predictions. In [14], the authors proposed a deep self-organizing SHESN so as to construct a large system from a stack of well-trained reservoirs, which improves the prediction ability of the network. However, most of the existing literature, such as [11, 12, 9, 8, 13, 14], only tested the prediction capability of ESNs on various datasets and did not focus on theoretical analysis. The author in [15] introduced the concept of the short-term memory capacity of an ESN to provide a quantitative measure of its prediction capability. The work in [16] proposed an ESN-based algorithm and analyzed the memory capacity of the ESN to predict the content request distribution and mobility pattern of mobile users in cloud radio access networks. However, the works in [15] and [16] did not propose a structural solution for minimizing the prediction errors of ESNs, particularly when dealing with highly complex systems.

The main contribution of this paper is to propose new ESN architectures and to evaluate their capability of recording historical data. To the best of our knowledge, this is the first work that analyzes the prediction capability of deep ESNs. In this regard, our key contributions include:

  • To improve and stabilize the prediction accuracy of ESNs, we propose deep ESN architectures composed of multiple reservoirs in parallel connection and series connection, respectively. The parallel ESN decreases the prediction error by averaging the outputs of multiple ESNs. The series ESN captures more features of complex systems than the traditional shallow ESN and improves the prediction accuracy.

  • We analyze the memory capacity [15] of deep ESNs and provide a measure to evaluate how much historical data a deep ESN retains. The parallel architecture preserves the memory capacity of the traditional shallow ESN for recording historical data, while the series architecture loses more historical data than the shallow ESN.

  • Simulation results show that the normalized root mean square error (NRMSE) is reduced by 38.5% in the parallel ESN and 16.8% in the series ESN compared with the traditional shallow ESN.

The rest of this paper is organized as follows. The preliminaries of ESN and the proposed parallel and series deep ESN architectures are introduced in Section II. In Section III, the memory capacity of deep ESNs is analyzed. Numerical simulation results are presented and analyzed in Section IV. Finally, conclusions are drawn in Section V.

II Deep ESNs

In this section, we first introduce the architecture of a traditional shallow ESN. To improve the prediction accuracy of ESNs, we propose two novel deep architectures that rely on ESN: a parallel architecture and a series architecture.

II-A Echo State Networks: Preliminaries

Fig. 1: The echo state network architecture.

An ESN consists of input units, reservoir units, and output units, as shown in Fig. 1. The activations of the input, reservoir, and output units at time $t$ are denoted by $u(t)$, $x(t)$, and $y(t)$, respectively. The input matrix $W^{\mathrm{in}}$ represents the transformation from the input units to the reservoir units. The reservoir updating matrix $W$ represents the updating rule of the reservoir units over time. The output matrix $W^{\mathrm{out}}$ represents the transformation from the reservoir units to the output units.

$W^{\mathrm{in}}$ and $W$ are constant matrices whose elements are generated randomly before the training of the ESN. The reservoir updating matrix is scaled as $W \leftarrow \gamma W / \rho(W)$, where $\rho(W)$ is the spectral radius of $W$ and $\gamma$ is a scaling parameter [17]. The reservoir units and output units are updated over time as follows:

$x(t+1) = f\big(W^{\mathrm{in}} u(t+1) + W x(t)\big),$  (1)
$y(t+1) = W^{\mathrm{out}} x(t+1),$  (2)

where $f(\cdot)$ is the activation function, typically a sigmoid or $\tanh$ function [1].

An ESN can be trained offline, and the output weight matrix $W^{\mathrm{out}}$ is calculated using ridge regression [18] as follows:

$W^{\mathrm{out}} = Y X^{\mathrm{T}} \big(X X^{\mathrm{T}} + \lambda I\big)^{-1},$  (3)

where $X$ holds the reservoir states, $I$ is the identity matrix, $\lambda > 0$ is the regularization factor, and $Y$ is the vector of outputs of the training sequence. Since the ESN is one of the simplest forms of reservoir computing, its prediction accuracy is affected by the randomly generated architecture of the ESN as well as by the complexity of the predicted system. Next, we introduce deep ESNs as a structural solution to improve the prediction accuracy.
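To make the preliminaries concrete, the following is a minimal sketch of a shallow ESN with the update rule (1)–(2) and the ridge-regression readout (3). All constants (reservoir size, spectral radius, regularization factor, washout length) and the helper names (init_esn, run_reservoir, train_readout) are illustrative assumptions rather than the settings used in this paper.

```python
# Minimal shallow ESN sketch (illustrative; all constants are assumptions, not the paper's settings).
import numpy as np

rng = np.random.default_rng(0)

def init_esn(n_in, n_res, spectral_radius=0.9):
    """Randomly generate W_in and W, then rescale W to the desired spectral radius (cf. [17])."""
    W_in = rng.uniform(-1.0, 1.0, size=(n_res, n_in))
    W = rng.uniform(-1.0, 1.0, size=(n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, u):
    """Drive the reservoir with an input sequence u (T x n_in); return the states X (n_res x T). Eq. (1)."""
    n_res = W.shape[0]
    x = np.zeros(n_res)
    X = np.zeros((n_res, len(u)))
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)
        X[:, t] = x
    return X

def train_readout(X, Y, lam=1e-6):
    """Ridge-regression readout, eq. (3): W_out = Y X^T (X X^T + lam I)^(-1)."""
    n_res = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n_res))

# Toy usage: reconstruct the 1-step delayed input from the reservoir states.
u = rng.uniform(0.0, 0.5, size=(1000, 1))
W_in, W = init_esn(1, 50)
X = run_reservoir(W_in, W, u)
target = np.roll(u[:, 0], 1)[None, :]                 # 1-step delayed input as target
W_out = train_readout(X[:, 100:], target[:, 100:])    # discard the initial transient (washout)
y_hat = W_out @ X                                     # eq. (2)
```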

II-B Parallel ESN

Fig. 2: Parallel ESN architecture with $K$ reservoirs.

The architecture of a parallel ESN is shown in Fig. 2. An input sequence enters $K$ reservoirs simultaneously. $W^{\mathrm{in}}_k$ and $W^{\mathrm{out}}_k$ represent the input matrix and output matrix of reservoir $k$, $k = 1, \dots, K$. The input matrices and reservoir updating matrices are constant and generated randomly before training. The training process is similar to that of a shallow ESN, and all $K$ reservoirs can be trained simultaneously. The output matrices of the $K$ reservoirs are determined after training. The units in reservoir $k$ and the corresponding output units are updated as follows:

$x_k(t+1) = f\big(W^{\mathrm{in}}_k u(t+1) + W_k x_k(t)\big),$  (4)
$y_k(t+1) = W^{\mathrm{out}}_k x_k(t+1).$  (5)

The output of a parallel ESN is the arithmetic mean of the $K$ reservoir outputs, which is given by:

$y(t) = \frac{1}{K} \sum_{k=1}^{K} y_k(t).$  (6)

By taking the average of the $K$ reservoir outputs, the parallel architecture decreases the prediction error and improves the accuracy. Furthermore, compared to a shallow ESN with $KN$ neurons, a parallel ESN with $K$ reservoirs of size $N$ costs less in training since it only needs to train $K$ output matrices of dimension $N$ instead of one relatively large output matrix of dimension $KN$.
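The sketch below illustrates how the parallel architecture of (4)–(6) could be implemented: $K$ independently generated reservoirs are driven by the same input, each readout is trained separately, and the network output is the arithmetic mean of the $K$ reservoir outputs. It reuses the init_esn, run_reservoir, and train_readout helpers from the shallow-ESN sketch above; the parameter values are illustrative.

```python
# Parallel ESN sketch: K reservoirs driven by the same input, outputs averaged (eqs. (4)-(6)).
# Assumes init_esn, run_reservoir, train_readout and rng from the shallow-ESN sketch above.
import numpy as np

def train_parallel_esn(u, target, K=3, n_res=50, washout=100):
    """Train K independent reservoirs on the same (input, target) pair."""
    models = []
    for _ in range(K):
        W_in, W = init_esn(u.shape[1], n_res)                    # each reservoir is random and independent
        X = run_reservoir(W_in, W, u)
        W_out = train_readout(X[:, washout:], target[:, washout:])
        models.append((W_in, W, W_out))
    return models

def predict_parallel_esn(models, u):
    """Eq. (6): the parallel output is the arithmetic mean of the K reservoir outputs."""
    outputs = [W_out @ run_reservoir(W_in, W, u) for (W_in, W, W_out) in models]
    return np.mean(outputs, axis=0)
```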

II-C Series ESN

Fig. 3: Series ESN architecture with $K$ reservoirs.

A series ESN consists of $K$ reservoirs, as shown in Fig. 3. An input sequence enters the first reservoir, and the output of each reservoir becomes the input of the following one. As in the parallel case, the input matrices and reservoir updating matrices are constant and generated randomly before training. A series ESN is trained sequentially: the first reservoir is trained to predict the output sequence based on the input sequence, and all $K$ reservoirs are trained to predict the output of the nonlinear system. The difference between training a series ESN and a parallel ESN is that the first reservoir is trained with the input of the system, while each subsequent reservoir is trained with the output of the previous reservoir. The output matrices are determined after training. The units in reservoir $k$ and the output of reservoir $k$ are given by:

$x_k(t+1) = f\big(W^{\mathrm{in}}_k y_{k-1}(t+1) + W_k x_k(t)\big),$  (7)
$y_k(t+1) = W^{\mathrm{out}}_k x_k(t+1),$  (8)

where $k = 1, \dots, K$ and $y_0(t) = u(t)$. $y_K(t)$ is the output of the series ESN.
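A corresponding sketch of the series architecture of (7)–(8) is given below: the stages are trained sequentially, each stage takes the previous stage's output as its input, and every stage is trained against the system output, matching the description above. The helper functions and parameter values are the same illustrative assumptions as in the earlier sketches.

```python
# Series ESN sketch: reservoirs in cascade, trained sequentially (eqs. (7)-(8)).
# Assumes init_esn, run_reservoir, train_readout from the shallow-ESN sketch above.
import numpy as np

def train_series_esn(u, target, K=3, n_res=50, washout=100):
    """Each stage k is trained to predict the system output from the previous stage's output."""
    models, stage_input = [], u
    for _ in range(K):
        W_in, W = init_esn(stage_input.shape[1], n_res)
        X = run_reservoir(W_in, W, stage_input)
        W_out = train_readout(X[:, washout:], target[:, washout:])
        models.append((W_in, W, W_out))
        stage_input = (W_out @ X).T                  # y_k becomes the input of reservoir k+1
    return models

def predict_series_esn(models, u):
    """Feed the input through the cascade; the last stage's output is the network output."""
    stage_input = u
    for (W_in, W, W_out) in models:
        stage_input = (W_out @ run_reservoir(W_in, W, stage_input)).T
    return stage_input.T                             # shape (n_out, T), matching the parallel sketch
```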

III Short-Term Memory Capacity of Deep ESN Architectures

In this section, we analyze the short-term memory capacity (MC) of deep ESNs. The short-term MC is used to quantify the memory capability of a recurrent network architecture for recording information from the input stream [1].

The delay-$i$ MC is defined as the squared correlation coefficient between the desired output (the $i$-time-step delayed input signal $u(t-i)$) and the observed network output $y(t)$, which is given by [15]:

$\mathrm{MC}_i = \dfrac{\mathrm{cov}^2\big(u(t-i),\, y(t)\big)}{\mathrm{var}\big(u(t)\big)\,\mathrm{var}\big(y(t)\big)},$  (9)

where $\mathrm{cov}(\cdot,\cdot)$ and $\mathrm{var}(\cdot)$ denote the covariance and variance operators, respectively. The short-term MC is then defined as:

$\mathrm{MC} = \sum_{i=1}^{\infty} \mathrm{MC}_i.$  (10)
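The short-term MC can also be estimated numerically from (9) and (10) by training one readout per delay and truncating the infinite sum at a finite maximum delay. The sketch below does this for a single reservoir; it reuses the helpers from the shallow-ESN sketch, and since that sketch uses a tanh activation while the analysis below assumes a linear cyclic reservoir, the estimate is only indicative.

```python
# Numerical estimate of the short-term MC, eqs. (9)-(10): for each delay i, train a readout
# to reproduce u(t-i) and accumulate the squared correlation coefficient MC_i; the infinite
# sum in (10) is truncated at max_delay.
# Assumes init_esn, run_reservoir, train_readout and rng from the shallow-ESN sketch above.
import numpy as np

def memory_capacity(W_in, W, T=4000, washout=200, max_delay=100):
    u = rng.uniform(-0.5, 0.5, size=(T, 1))               # zero-mean i.i.d. input stream
    X = run_reservoir(W_in, W, u)
    mc = 0.0
    for i in range(1, max_delay + 1):                     # requires washout >= max_delay
        target = u[washout - i:T - i, 0][None, :]         # u(t-i) aligned with X[:, washout:]
        states = X[:, washout:]
        W_out = train_readout(states, target)
        y = (W_out @ states).ravel()
        mc += np.corrcoef(y, target.ravel())[0, 1] ** 2   # MC_i, eq. (9)
    return mc

# Example: memory_capacity(*init_esn(1, 50)) stays below the reservoir size 50.
```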

To quantify the MC of deep ESNs, we first assume that, as in [1], the reservoir updating matrix of each reservoir is a simple cycle reservoir whose nonzero elements are all equal to the reservoir weight $\alpha$, which is set to a constant value prior to ESN training, i.e.,

$W = \alpha P,$  (11)

where $P$ is the $N \times N$ cyclic permutation matrix. Based on [1, Appendix B], we define several auxiliaries to simplify the derivation: 1) a rotation operator used for cyclically rotating the elements of a vector a given number of places to the right; 2) an extension matrix of the input matrix; 3) a diagonal matrix; and 4) an invertible matrix constructed from them. Furthermore, we leverage the identities among these auxiliaries established in [1], from which additional relations can be deduced. Then, the MC of a parallel ESN is given by the following theorem.

Theorem.

In a parallel ESN that consists of $K$ single ESNs connected in parallel, we assume that each single ESN's input matrix guarantees a regular (invertible) auxiliary matrix as defined above, for all $k = 1, \dots, K$. Then, the MC of the parallel ESN is equal to that of a single shallow ESN with the same reservoir size $N$ and reservoir weight $\alpha$.

Proof:

The input stream is a zero-mean, real-valued sequence. The activations of the units in reservoir $k$ at time $t$ are given by:

(12)

The output matrix of reservoir $k$ depends on the covariance matrix of reservoir $k$'s activations:

(13)

and on the expectation of the product of reservoir $k$'s activations and the $i$-slot delayed source:

(14)

Hence, the output matrix of reservoir $k$ will be:

(15)

The output of reservoir $k$ at time $t$ is then $y_k(t) = W^{\mathrm{out}}_k x_k(t)$. In the parallel architecture, the total output is defined as in (6), i.e., $y(t) = \frac{1}{K}\sum_{k=1}^{K} y_k(t)$. Hence, the covariance of the output with the $i$-slot delayed source can be calculated as:

(16)

The variance of the observed output will be:

(17)

where

(18)

and the remaining terms take a form similar to (12).

Hence,

(19)

The squared correlation coefficient between the desired output and the network output is given by:

(20)

The MC of the parallel ESN can be derived as follows:

(21)

From Theorem III, we can see that the theoretical MC of the parallel ESN increases with the reservoir size $N$ and the reservoir weight $\alpha$. However, the MC of the parallel ESN is similar to that of the single ESN derived in [1, Theorem 1]. From the perspective of the network structure, the parallel ESN duplicates $K$ ESNs and averages their results, instead of changing the internal structure of the ESN. The advantage of the parallel structure is that it can reduce the prediction error by averaging several reservoir outputs, thereby improving the prediction accuracy.

Theorem III also shows that the theoretical MC of an ESN can never exceed the reservoir size $N$, which is also the reservoir's maximum storage for recording the input stream. The difference between the maximum storage and the theoretical MC implies that an ESN cannot record all the historical data from the input sequence. In the cascaded architecture of the series ESN, the MC is constrained by each of the $K$ reservoirs, i.e., it decreases $K$ times. Consequently, the MC of the series ESN will be smaller than that of the traditional shallow ESN and the parallel ESN. Due to space limitations, the theoretical analysis of the MC of the series ESN is left for future work.
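As a rough numerical illustration of the claim that the parallel architecture preserves the MC of a single reservoir, the sketch below estimates the MC of a parallel ESN by averaging the per-delay reconstructions of its $K$ reservoirs as in (6), so that it can be compared with the single-reservoir estimate above. It is an empirical check under the same i.i.d.-input assumption, not part of the analysis, and the finite data and tanh nonlinearity of the helper sketches make the estimates approximate.

```python
# Numerical check (not part of the analysis): MC of a parallel ESN, with the per-delay
# reconstructions of the K reservoirs averaged as in eq. (6).
# Assumes init_esn, run_reservoir, train_readout, memory_capacity and rng from the sketches above.
import numpy as np

def parallel_memory_capacity(K=3, n_res=20, T=4000, washout=200, max_delay=60):
    u = rng.uniform(-0.5, 0.5, size=(T, 1))                        # zero-mean i.i.d. input
    states = [run_reservoir(*init_esn(1, n_res), u)[:, washout:] for _ in range(K)]
    mc = 0.0
    for i in range(1, max_delay + 1):
        target = u[washout - i:T - i, 0][None, :]                  # u(t-i) aligned with the kept states
        y = np.mean([(train_readout(X, target) @ X).ravel() for X in states], axis=0)
        mc += np.corrcoef(y, target.ravel())[0, 1] ** 2
    return mc

# Example comparison (values vary with the random draw):
# print(parallel_memory_capacity(K=3, n_res=20), memory_capacity(*init_esn(1, 20)))
```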

IV Simulation Results

TABLE I: Simulation Parameters (values: 3; 1; 1; 500 (parallel) / 700 (series); 500 (parallel) / 700 (series); 100).

Next, we evaluate the prediction capability of the proposed deep ESNs using the normalized root mean square error (NRMSE) metric:

$\mathrm{NRMSE} = \sqrt{\dfrac{\big\langle (\hat{y}(t) - y(t))^2 \big\rangle}{\mathrm{var}\big(y(t)\big)}},$  (22)

where $\hat{y}(t)$ is the predicted output, $y(t)$ is the target output, $\langle \cdot \rangle$ is the mean operator, and $\mathrm{var}(\cdot)$ is the variance operator.
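A direct implementation of the metric in (22) is straightforward; the helper below is one possible form (the function name is ours).

```python
# NRMSE as in eq. (22): root of the mean squared prediction error normalized by the target variance.
import numpy as np

def nrmse(y_pred, y_true):
    """Normalized root mean square error between predicted and target sequences."""
    y_pred, y_true = np.asarray(y_pred, float).ravel(), np.asarray(y_true, float).ravel()
    return np.sqrt(np.mean((y_pred - y_true) ** 2) / np.var(y_true))

# Example: nrmse([0.1, 0.2, 0.3], [0.1, 0.25, 0.28])
```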

The dataset is produced by the nonlinear autoregressive moving average (NARMA) system [19]. The NARMA system is a discrete-time system whose current output depends on both the current input and the previous outputs. The nonlinearity and recursiveness of NARMA make it difficult to model. We use a fixed-order NARMA time series as the dataset:

$y(t+1) = 0.3\, y(t) + 0.05\, y(t) \sum_{i=0}^{n-1} y(t-i) + 1.5\, u(t-n+1)\, u(t) + 0.1,$  (23)

where $y(t)$ is the system output at time $t$, $u(t)$ is the system input at time $t$, which is generated randomly from a uniform distribution over $[0, 0.5]$, and $n$ captures the dependency length. We denote the lengths of the training and test sequences by $T_{\mathrm{train}}$ and $T_{\mathrm{test}}$, respectively. The initial predicted outputs of the training and test sequences are ignored in the calculation of the NRMSE, because the reservoir is unstable during the initial transient period and the predicted results during this period are meaningless. Here, we note that the number of ignored outputs in a series ESN should be $K$ times larger than that in a parallel ESN, because the data in a series ESN passes through the $K$ reservoirs sequentially, and each time the data enters a reservoir, the first outputs are inaccurate and should be ignored. In this way, the prediction accuracy of the series ESN can be compared fairly with that of the parallel ESN. Our detailed simulation parameters are listed in Table I.
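For reference, the following sketch generates a fixed-order NARMA sequence following (23), under the assumption stated above that the input is drawn uniformly from [0, 0.5]; the zero initial conditions and the seed are illustrative choices.

```python
# NARMA-n generator matching eq. (23); the input is assumed i.i.d. uniform on [0, 0.5],
# and the first n outputs are initialized to zero by convention.
import numpy as np

def narma(T, n=10, seed=0):
    """Generate a fixed-order NARMA sequence of length T; returns (inputs u, outputs y)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=T)
    y = np.zeros(T)
    for t in range(n, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - n + 1:t + 1])   # sum over the last n outputs
                    + 1.5 * u[t - n + 1] * u[t]
                    + 0.1)
    return u, y

# Example: u, y = narma(2000, n=10); an ESN is then trained to predict y(t) from u(t).
```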

Fig. 4: Training result of a parallel ESN: (a) training set; (b) test set.
Fig. 5: Training result of a series ESN: (a) training set; (b) test set.

Figs. 4 and 5 show the training results of a parallel ESN and a series ESN by contrasting the predicted sequence with the target sequence. Figs. 4(a) and 5(a) show that, on the training sets, the predicted sequences almost exactly fit the target sequences, which indicates that the parallel ESN and the series ESN are well trained. In Figs. 4(b) and 5(b), we can see a gap between the predicted sequence and the target sequence, and the NRMSE of the outputs increases accordingly. This is because the prediction accuracy on the test sets is slightly lower than on the training sets. This moderate reduction in prediction accuracy indicates that the trained parallel ESN and series ESN are not overfitted.

Fig. 6 shows that the NRMSE decreases as the reservoir size increases for all considered ESN architectures. This is because the ESN memory capacity is proportional to the reservoir size. Fig. 6 also shows that parallel ESNs have the lowest NRMSE among the three kinds of ESNs. This is because, compared to a traditional shallow ESN and a series ESN, a parallel ESN not only mitigates the prediction error but also has the best MC among all considered ESNs. From Fig. 6, we can see that the NRMSE of series ESNs becomes lower than that of single ESNs once the reservoir size increases to 30. This is because the series architecture extracts new features and relations between the input and output sequences so as to achieve a more accurate prediction. Fig. 6 also shows that the NRMSE of series ESNs is larger than that of shallow ESNs when the reservoir size is 10 or 20. This is because, when the reservoir size is small, the prediction accuracy of a single ESN is low, so the series ESN captures inaccurate features between the input and output sequences, and these inaccurate features lead to a low prediction accuracy for the series ESN. When the reservoir size is small and the prediction result is relatively inaccurate, a single prediction error has a significant impact on the overall prediction performance. At the largest considered reservoir size, the NRMSE of the traditional ESN is 0.2048, while the NRMSE of the parallel ESN and the series ESN is 0.1259 and 0.1704, respectively. Compared to the traditional ESN, the parallel ESN thus achieves a 38.5% reduction and the series ESN a 16.8% reduction in NRMSE.

Fig. 6: NRMSE as the reservoir size varies.

Fig. 7: NRMSE as the dependency length varies.

Fig. 7 shows how the prediction accuracy of all considered ESNs changes as the dependency length of the NARMA system varies. In Fig. 7, as the dependency length increases, the prediction accuracy decreases. This is because a larger dependency length leads to a more complex, higher-order NARMA system, and, with the same internal structure, the prediction accuracy of an ESN decreases as the target system becomes more complex. Fig. 7 also shows that series ESNs have the lowest NRMSE as the dependency length increases. This is because the series ESN cascades multiple reservoirs, so that more features and relations between the input and the output are extracted. Consequently, the series ESN yields a higher prediction accuracy for more complex systems than the traditional ESN and the parallel ESN.

Fig. 8: NRMSE as the reservoir weight varies.

Fig. 8 shows that the NRMSE of traditional ESNs and parallel ESNs decreases as the reservoir weight increases, whereas the NRMSE of series ESNs increases. This is because the MC of parallel ESNs, like that of traditional ESNs, is proportional to the reservoir weight $\alpha$.

V Conclusion

In this paper, we have proposed two novel deep ESN architectures: the parallel ESN and the series ESN. Compared to a traditional shallow ESN, the parallel ESN decreases the prediction error by averaging multiple separate reservoir outputs, while the series ESN captures new features for predicting the system output through cascaded training. We have also analyzed the MC of the parallel deep ESN and shown that its MC does not exceed, but can be arbitrarily close to, the reservoir size. Simulation results show that deep ESNs can decrease the prediction error compared to a traditional shallow ESN. In particular, the parallel ESN yields a 38.5% reduction in NRMSE, and the series ESN yields a 16.8% reduction.

References

  • [1] A. Rodan and P. Tiňo, “Minimum complexity echo state network,” IEEE Transactions on Neural Networks, vol. 22, no. 1, pp. 131–144, Dec. 2011.
  • [2] H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science, vol. 304, no. 5667, pp. 78–80, 2004.
  • [3] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Machine learning for wireless networks with artificial intelligence: A tutorial on neural networks,” arXiv preprint arXiv:1710.02913, Oct. 2017.
  • [4] I. Abadlia, M. Adjabi, I. Benouareth, and H. Bouzeria, “ESN for MPP tracking and power management in photovoltaic — hydrogen hybrid system,” in Proc. of 5th International Conference on Electrical Engineering – Boumerdes, Boumerdes, Algeria, Oct. 2017.
  • [5] M. Chen, M. Mozaffari, W. Saad, C. Yin, M. Debbah, and C. S. Hong, “Caching in the sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-of-experience,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 5, pp. 1046–1061, May 2017.
  • [6] B. Pugach, B. Beallo, D. Bement, S. McGough, N. Miller, J. Morgan, L. Rodriguez, K. Winterer, T. Sherman, S. Bhandari, and Z. Aliyazicioglu, “Nonlinear controller for a UAV using echo state network,” in Proc. of International Conference on Unmanned Aircraft Systems, Miami, FL, USA, June 2017.
  • [7] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Unmanned aerial vehicle with underlaid device-to-device communications: Performance and tradeoffs,” IEEE Transactions on Wireless Communications, vol. 15, no. 6, pp. 3949–3963, June 2016.
  • [8] L. Pan, J. Cheng, H. Li, Y. Zhang, and X. Chen, “An improved echo state network based on variational mode decomposition and bat optimization for internet traffic forecasting,” in Proc. of IEEE 17th International Conference on Communication Technology, Chengdu, China, Oct. 2017.
  • [9] X. Peng, H. Dong, and B. Zhang, “Echo state network ship motion modeling prediction based on Kalman filter,” in Proc. of IEEE International Conference on Mechatronics and Automation, Takamatsu, Japan, Aug. 2017.
  • [10] M. Chen, W. Saad, and C. Yin, “Virtual reality over wireless networks: Quality-of-service model and learning-based resource management,” IEEE Transactions on Communications, to appear, 2018.
  • [11] A. Rodan and P. Tiňo, “Simple deterministically constructed recurrent neural networks,” in Proc. of International Conference on Intelligent Data Engineering and Automated Learning, Paisley, UK, Sep. 2010.
  • [12] ——, “Simple deterministically constructed cycle reservoirs with regular jumps,” Neural Computation, vol. 24, no. 7, pp. 1822–1852, July 2012.
  • [13] Z. Deng and Y. Zhang, “Complex systems modeling using scale-free highly-clustered echo state network,” in Proc. of IEEE International Joint Conference on Neural Network, Vancouver, BC, Canada, July 2006.
  • [14] Z. Deng, C. Mao, and X. Chen, “Deep self-organizing reservoir computing model for visual object recognition,” in Proc. of International Joint Conference on Neural Networks, Vancouver, BC, Canada, July 2016.
  • [15] H. Jaeger, “Short term memory in echo state networks,” in GMD Report, Jan. 2001.
  • [16] M. Chen, W. Saad, C. Yin, and M. Debbah, “Echo state networks for proactive caching in cloud-based radio access networks with mobile users,” IEEE Transactions on Wireless Communications, vol. 16, no. 6, pp. 3520–3535, June 2017.
  • [17] H. Jaeger, “Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the echo state network approach,” in GMD Report, Jan. 2002.
  • [18] F. Wyffels, B. Schrauwen, and D. Stroobandt, “Stable output feedback in reservoir computing using ridge regression,” in Proc. of International Conference on Artificial Neural Networks, Prague, Czech Republic, Sep. 2008.
  • [19] A. F. Atiya and A. G. Parlos, “New results on recurrent network training: unifying the algorithms and accelerating convergence,” IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 697–709, May 2000.