Critical Echo State Networks that Anticipate Input using Morphable Transfer Functions

06/12/2016 · Norbert Michael Mayer, et al.

The paper investigates a new type of truly critical echo state networks in which the individual transfer function of every neuron can be modified to anticipate the expected next input. Deviations from the expected input are forgotten only slowly, in a power-law fashion. The paper outlines the theory, numerically analyzes a one-neuron model network, and finally discusses technical as well as biological implications of this type of approach.


1 Introduction

Recurrent neural networks (RNNs) with input are examples of non-autonomous dynamical systems. One fundamental property is their dependence on the initial state (i.e. the initial setting of the recurrent-layer neurons) with regard to a given input sequence. On the one hand, and for obvious reasons, networks that depend sensitively and for all future states on the setting of the initial state will not work very well. On the other hand, if the network forgets information about the past too quickly, it essentially works in the same way as a feed-forward network, and if that is good enough for the given task it is much easier to replace the recurrent network with the feed-forward solution. In the field of reservoir computing [1, 2], and in particular in the case of echo state networks (ESNs) [3, 4, 5], much effort has been undertaken to quantify to what extent an RNN is sensitive to its initial state. As a result, several methods exist to detect the fine line between network parameters that – in combination with a given input sequence – finally result in a forgetting of the past within the network, versus parameter values for which differences in the initial settings essentially prevail for all future times. More interestingly, heuristics show that parameter settings near this border line, though on the side of the forgetting type of networks, show the best performance for certain relevant tasks [6, 7, 8]. These networks are called near-critical networks. An important observation from experimental biology is that the statistics of the dynamics of neurons in brain slices also hint towards a near-critical or even critical tuning of biological neurons in the brain [9]. Practical state-of-the-art near-critical networks usually require a certain margin towards the critical state because, by design, unexpected input deviations may push the state of the network over the critical point, in which case the performance deteriorates. In contrast to near-critical networks, a relatively new study [10, 11] brought up the idea of training the synaptic weights of the recurrent layer in such a way that certain points (so-called epi-critical points, ECPs) within the transfer function are hit. If the network receives unexpected input, these special points are missed, which results in an under-critical behavior. Given the expected input, the resulting network is tuned exactly to the critical point; a further feature of such a network is power-law forgetting of an unexpected input if it is succeeded by a sequence of expected input. Although that approach lines up a complete and new concept of designing critical ESNs, for practical purposes there are still some problems. Most importantly, the proposed learning algorithm does not guarantee a good performance of the network for many tasks. Different from that approach, the present work does not apply learning to the input weights and the recurrent weights. Rather, it proposes adaptive transfer functions for each neuron, where the ECPs are always shifted towards the next expected transition point.

2 Echo State Networks and Criticality

The system is intended to resemble the dynamics of a biological recurrent neural network. We follow here the notation of Jäger:

$z_{t+1} = W x_t + w^{\mathrm{in}} u_{t+1}$   (1)
$x_{t+1} = f(z_{t+1})$   (2)

where the $u_t$ are items that form a left-infinite time series which in total is called $\bar{u}$. Supervised learning is done by linear regression using $x_t$ as input [3]; $x_t$ represents the activity in the hidden layer, and $W$ and $w^{\mathrm{in}}$ are matrices that represent (constant) synaptic weights. In principle, the complete time series $\bar{x}$ is determined by the tuple of the initial state $x_0$ and the input time series $\bar{u}$. Comparing two time series $\bar{x}$ and $\bar{x}'$ that start from any combination of two different initial states $x_0 \neq x'_0$, one can quantify how the difference develops over time. Important for the definition of echo state networks is the concept of state contraction, that is, if

$\lim_{t \to \infty} \left\| x_t - x'_t \right\| = 0$   (3)

i.e. the Euclidean distance converges to zero. In combination with the assumption that the processing of the neural network acts in a time-invariant manner, this concept has been named a uniformly state contracting system [3]. Uniformly state contracting networks are echo state networks and thus capable of learning by linear regression. There has been some confusion about the definition of uniformly state contracting networks. Some researchers define it with respect to the dynamics of a network with regard to a specific input sequence (cf. [12]). Within this paper, networks are called uniformly state contracting only if, for a given network, the relation of eq. 3 holds for any input. Some calculus shows that a network with the dynamics of eq. 2 is always an ESN if the recurrent connectivity matrix is orthogonal ($W^\top W = \mathbb{1}$) and the absolute value of the derivative of the transfer function is smaller than one. Single inflection points, where $|f'| = 1$, may be permissible [3, 11]. These points are important in the following considerations and are called epi-critical points (ECPs). In analogy to the calculation of the Lyapunov exponent in autonomous dynamical systems, one can also define a Lyapunov exponent for ESNs with regard to a certain input sequence $\bar{u}$ [13, 11]:

$\lambda(\bar{u}) = \lim_{t \to \infty} \; \lim_{\|x_0 - x'_0\| \to 0} \; \frac{1}{t} \, \ln \frac{\left\| x_t - x'_t \right\|}{\left\| x_0 - x'_0 \right\|}$   (4)

If the Lyapunov exponent is negative for all $\bar{u}$, the network is shown to be uniformly state contracting and thus is an ESN. If the Lyapunov exponent is positive for any input time series, the network is not an ESN. In addition, it is worthwhile to introduce the following definition: a network that has a Lyapunov exponent of zero for some input sequences, but for which eq. 3 still holds for any input sequence, shall be called a critical ESN. According to this definition, an ESN is critical with respect to a particular input sequence. Technically, this can be achieved by training the network towards a setting where $\lambda = 0$ for some input sequences, that is, by directing the linear responses to those single inflection points of the transfer function. A deeper insight into the theory behind this can be found in [10, 11].
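As a concrete numerical illustration of eqs. 1–4 (a minimal sketch, not taken from the paper; the reservoir size, input scaling and input distribution are arbitrary assumptions), the following snippet builds a reservoir with an orthogonal $W$ and a $\tanh$ transfer function, checks the state contraction of eq. 3 for two different initial states, and estimates the input-dependent Lyapunov exponent of eq. 4 by repeatedly renormalizing a small perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                     # reservoir size (arbitrary choice for this sketch)
# random orthogonal recurrent matrix, i.e. W^T W = 1, as in the sufficient condition above
W, _ = np.linalg.qr(rng.standard_normal((N, N)))
w_in = 0.5 * rng.standard_normal(N)        # input weights (arbitrary scale)

def step(x, u):
    """One update of eqs. (1)-(2): z = W x + w_in * u, followed by x' = tanh(z)."""
    return np.tanh(W @ x + w_in * u)

# (a) state contraction, eq. (3): two different initial states, identical input sequence
x_a, x_b = rng.standard_normal(N), rng.standard_normal(N)
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    x_a, x_b = step(x_a, u), step(x_b, u)
print("distance after 200 steps:", np.linalg.norm(x_a - x_b))

# (b) Lyapunov exponent, eq. (4): average log growth rate of a small perturbation
eps, T, acc = 1e-8, 5000, 0.0
x = rng.standard_normal(N)
v = rng.standard_normal(N)
x_p = x + eps * v / np.linalg.norm(v)      # perturbed copy at distance eps
for _ in range(T):
    u = rng.uniform(-1.0, 1.0)
    x, x_p = step(x, u), step(x_p, u)
    d = np.linalg.norm(x_p - x)
    acc += np.log(d / eps)
    x_p = x + (eps / d) * (x_p - x)        # renormalise the perturbation back to eps
print("Lyapunov exponent estimate:", acc / T)   # negative here, i.e. under-critical
```

Because $|f'| < 1$ away from $z = 0$ and $W$ is orthogonal, the estimated exponent comes out negative, i.e. this generic network is under-critical rather than critical.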

Figure 1: The plots show examples of two versions of a transfer function according to eqs. 5–6 (blue curves), in which the ECPs can be organized in an adaptive way. The green curve depicts the underlying $\tanh$ function on which all ECPs are located. Green dots indicate the ECPs for this particular example. The version on the right side is the version that is used in sect. 3. Here, areas of the transfer function that would violate the conditions on the derivative have been avoided, which leads to better results in the following graphs. Since the derivative of $\tanh$ at $0$ is $1$, the point $0$ is marked as an additional ECP in both plots.

3 Adaptive transfer functions

Instead of implementing plasticity on $W$ and $w^{\mathrm{in}}$, the proposal of the current work is to implement the plasticity in the shape of the transfer function $f$. It is assumed that each neuron has an intrinsic mechanism to predict several possible values of its next linear response (that is, of one item of the vector $z_{t+1}$), and that the prediction happens before the input is perceptible to the neuron (compare [14, 15] for another scenario for self-prediction in ESNs). The neuron does not need to restrict itself to one prediction; instead, a list of such values is possible. The ECPs should then be shifted towards the predicted values of the linear response. The transfer function around an ECP $\Theta_i$ can thus be defined as

(5)

or, else

(6)

The detailed arithmetic of such a function is more complicated and would take too much space to be spelled out here (for example, it needs to be defined what the function value is between two ECPs that are located close to each other). Fig. 1 depicts two possible resulting transfer functions. The transfer function is designed in such a way that $f(\Theta_i) = \tanh(\Theta_i)$ and $f'(\Theta_i) = 1$ at every ECP, while $|f'| < 1$ elsewhere.
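Since the exact form of eqs. 5–6 is not reproduced here, the following snippet sketches one hypothetical realization of such a morphable transfer function: the ECPs lie on a $\tanh$ backbone, $f(\Theta_i) = \tanh(\Theta_i)$ and $f'(\Theta_i) = 1$, with $|f'| \le 1$ elsewhere. The open question of how to join the branches between two nearby ECPs, mentioned above, is resolved here by a simple nearest-ECP rule; the function name and the concrete ECP positions are assumptions for illustration only.

```python
import numpy as np

def morphable_tf(z, ecps):
    """Hypothetical 'morphable' transfer function (not the paper's exact eqs. 5-6):
    around the nearest epi-critical point Theta the function follows
    tanh(Theta) + tanh(z - Theta), so that f(Theta) = tanh(Theta) lies on the
    tanh backbone, f'(Theta) = 1, and |f'| <= 1 everywhere else."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    ecps = np.asarray(ecps, dtype=float)
    # nearest-ECP selection; how to join the branches between two nearby ECPs
    # is exactly the open detail mentioned in the text above
    theta = ecps[np.argmin(np.abs(z[:, None] - ecps), axis=1)]
    return np.tanh(theta) + np.tanh(z - theta)

# example: ECPs at the predicted linear responses +-1.2, plus the point 0,
# where the tanh backbone already has slope 1 (cf. Fig. 1)
ecps = [-1.2, 0.0, 1.2]
print(morphable_tf(np.linspace(-3.0, 3.0, 7), ecps))
```

For a symmetric pair of ECPs the nearest-ECP rule happens to join the two branches continuously; in general, smoothing the transition between close ECPs is part of the "detailed arithmetic" the text alludes to.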

4 Synthetic one neuron reservoirs

In order to illustrate the proposed methodology we designed a reservoir with a single neuron that expects an alternating input of $+1$s and $-1$s. The following update equation can then be used:

$x_{t+1} = f(\alpha\, x_t + \beta\, u_{t+1})$   (7)

where the factor $\alpha$ takes the role of the matrix $W$ and the factor $\beta$ the role of $w^{\mathrm{in}}$. Note that if $|\alpha| = 1$ one may call $\alpha$ a one-dimensional orthogonal matrix. Since $|f'(z)| \le 1$ for any $z$, and since in case the network receives the expected alternating input the linear response converges to an alternating pair of values $\pm\zeta$, two ECPs at $\pm\zeta$ may be used and $\beta$ is set to $1$. With regard to the resulting transfer function (which includes the ECPs) one can now measure the Lyapunov exponent according to eq. 4 for differing values of $\alpha$. Fig. 2 (left) depicts the results. One can see that, although for the predicted input the dynamics always alternate between two fixed values independently of $\alpha$, this dynamic is only stable for values of $\alpha$ in the range between $-1$ and $1$. In this range the network is an echo state network, which is under-critical if $|\alpha| < 1$. Further numerical tests and also theoretical considerations show that at the point $|\alpha| = 1$ the network is still an ESN, however a critical one. For values $|\alpha| > 1$ the network is not an ESN anymore. The purpose of this work is to propose ESNs at $|\alpha| = 1$, i.e. exactly at the critical point, as optimal critical ESNs.
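To see why the transition sits exactly at the orthogonality condition, one can evaluate the Lyapunov exponent of eq. 4 along the expected orbit by the chain rule. The following is a short reasoning sketch under the scalar form of eq. 7 assumed above, with the ECPs placed at the predicted linear responses; it is not a derivation taken verbatim from the paper:

```latex
% Assumption: eq. (7) reads x_{t+1} = f(\alpha x_t + \beta u_{t+1}) and, for the
% expected input, every linear response z_t coincides with an ECP, where f' = 1.
\lambda(\bar{u}_{\mathrm{expected}})
   \;=\; \lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\ln\bigl|\alpha\, f'(z_t)\bigr|
   \;=\; \ln|\alpha|
```

Hence, under these assumptions, $\lambda < 0$ (under-critical ESN) for $|\alpha| < 1$, $\lambda = 0$ (critical) at $|\alpha| = 1$, and $\lambda > 0$ for $|\alpha| > 1$, which matches the zero crossing described for Fig. 2 (left).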

Figure 2: Left: Depicted is the Lyapunov exponent for the example system of eq. 7 for different values of the recurrent factor $\alpha$. At $|\alpha| = 1$ the Lyapunov exponent crosses zero if the input sequence is the expected alternating sequence of 1s and -1s. Right: Lyapunov exponents for two one-neuron networks: in both cases the amplitude of the alternating input is varied in a series of measurements of the Lyapunov exponent. In the case of the network according to eq. 7 (blue) one can see that the Lyapunov exponent never becomes larger than 0. In the case of eq. 8 (green), positive Lyapunov exponents occur if the amplitude of the input is larger than the critical value.

For comparison, one may consider a one-neuron version of a traditional near-edge-of-chaos approach, which basically relates to the common experience that the given theoretical limits for ESNs can be significantly overtuned for many practical time series. Such overtuned ESNs in many cases show a much better performance than those that actually obey Jäger's initial limit. Thus, researchers recently came up with theoretical insights with regard to ESNs that are subject to a particular input statistics [12], which fundamentally relate a given network and input statistics to such a limit. One might assume that those approaches show similar properties as the one that has been presented above. However, for a good reason those approaches are all coined 'near edge of chaos' approaches. In order to illustrate the problems that arise from them, one may consider what happens if such an overtuned ESN is set exactly to the critical point. Here, just for the general understanding, one may consider again a one-neuron network with a $\tanh$ as the transfer function, so

$x_{t+1} = \tanh(w\, x_t + u_{t+1})$   (8)

Note that the ESC limit outlined above requires that the recurrent connectivity should be $|w| \le 1$. One can now take the input time series from the previous section, i.e. an alternating sequence, and vary its amplitude. Slightly tedious but basically simple calculus results in a critical value of the input amplitude for this input time series. For the following results the amplitude is always set to this critical value. In this situation one can test for convergence of two slightly different initial conditions, and one gets a power-law decay of the difference. However, setting the amplitude of the input just a tiny bit higher is going to result in two diverging time series $\bar{x}$ and $\bar{x}'$. So: if the conditions of the ESN are chosen to be exactly at the critical point, it is possible that an untrained input sequence very near to the trained input sequence turns the ESN into a state where Jäger's echo state condition is not fulfilled anymore, i.e. the Lyapunov exponent is positive for the given network in combination with these input sequences. In order to illustrate this difference, numerical experiments (cf. Fig. 2, right side) have been done where both the network according to eq. 7 and the network of eq. 8 receive input with a slightly higher or lower amplitude, i.e. an input sequence $\kappa \cdot \bar{u}$ is perceived, where $\kappa$ is a constant factor and $\bar{u}$ in both cases is the expected input that produces the critical behavior. Here, the amplitude is used as an example of an arbitrary continuous parameter that defines properties of the input sequences. If $\kappa$ is equal to one, the resulting input sequence for both the example of eq. 7 and that of eq. 8 results in a critical dynamics with a Lyapunov exponent of $0$. The difference between the two networks is that in the case of eq. 8 positive Lyapunov exponents are possible for factors $\kappa$ larger than one, whereas in the case of the proposed network the Lyapunov exponent is smaller than or equal to zero for any input sequence. This means that the network of eq. 8 is not an ESN according to the definition given in sect. 2, while the proposed network is an ESN if the convergence condition of eq. 3 holds. Analytic calculus [11] shows that in the critical case the nature of the transfer function determines whether a network is an ESN or not. Fig. 3 depicts the convergence process for two exemplary start values of the critical one-neuron network of eq. 7 and compares the results for the expected alternating input ($+1$s and $-1$s) with those for constant input of the same amplitude and for an i.i.d. random sequence of $+1$s and $-1$s. In the first case, double-logarithmic plots reveal that the difference vanishes following a power law, i.e. forgetting is slow. Thus, we get $d_t \propto t^{-\gamma}$ with a constant $\gamma > 0$, where $d_t$ denotes the difference between the two time series. The other types of input statistics result in faster forgetting. Here, every input value may be seen as an event that demands memory capacity. The result is effectively an exponential forgetting, i.e. $d_t \propto \exp(-t/\tau)$ with a constant $\tau > 0$, which is the same result one would also expect for memory decay in under-critical networks. Exponential decay appears as a straight line in semi-logarithmic plots. The single-neuron network simulations have been done using double-precision floating-point variables, i.e. with 64 bits. Since the experimental setting in fig. 3 encodes the initial difference between the two networks in the same way as the randomness of the subsequent inputs (which are identical for both networks), one would expect the difference between the two networks to vanish roughly within about 64 iterations. Considering the results of fig. 3, one can see that the difference indeed vanishes in about 64 to 200 iterations. The fact that the forgetting process is slower than 64 iterations may indicate that several variant input histories can result in the same identical reservoir state.
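The forgetting behaviour of Fig. 3 can be reproduced qualitatively with a few lines of code. The sketch below rests on stated assumptions rather than on the paper's exact definitions: eq. 7 is taken in the scalar form $x_{t+1} = f(\alpha x_t + \beta u_{t+1})$ with $\alpha = \beta = 1$ (the critical case), and $f$ is the hypothetical morphable transfer function introduced in the sketch of sect. 3, with two ECPs at the linear responses $\pm\zeta$ expected for the alternating $\pm 1$ input.

```python
import numpy as np

# Sketch of the Fig. 3 experiment under explicit assumptions (not taken from the
# paper): eq. (7) is used in the scalar form x_{t+1} = f(alpha*x_t + beta*u_{t+1})
# with alpha = beta = 1 (critical case), and f is the hypothetical morphable
# transfer function from the earlier sketch, with two ECPs at the linear
# responses +-zeta expected for the alternating +1/-1 input.

def f(z, zeta):
    theta = zeta if abs(z - zeta) <= abs(z + zeta) else -zeta   # nearest ECP
    return np.tanh(theta) + np.tanh(z - theta)

# self-consistent expected orbit: x* = tanh(1 - x*), hence zeta = 1 - x*
x_star = 0.0
for _ in range(200):
    x_star = np.tanh(1.0 - x_star)
zeta = 1.0 - x_star

def run(x0, inputs):
    xs = [x0]
    for u in inputs:
        xs.append(f(xs[-1] + u, zeta))     # alpha = beta = 1 (assumed)
    return np.array(xs)

T = 5000
rng = np.random.default_rng(1)
alternating = np.array([(-1.0) ** t for t in range(T)])   # expected input
random_pm1 = rng.choice([-1.0, 1.0], size=T)              # iid +-1 input

for name, u in [("expected alternating", alternating), ("iid random", random_pm1)]:
    diff = np.abs(run(0.9, u) - run(-0.9, u))             # two different initial states
    print(name, [f"{diff[t]:.1e}" for t in (1, 10, 100, 1000, T)])
# expected behaviour: the difference shrinks only slowly (roughly power-law)
# for the expected alternating input and very quickly for the random input.
```

Under these assumptions the difference between the two runs decays only slowly, roughly as a power law, for the expected alternating input, while for the random input it drops to the numerical noise floor of double precision within a few hundred iterations at most, in line with the behaviour described for Fig. 3.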

Figure 3: The graphs depict different versions of the same data. Each red curve is the forgetting curve of the initial difference between two networks if the input sequence alternates between $+1$s and $-1$s. The orange curve depicts the forgetting curve for constant input of the same amplitude. The other curves show several i.i.d. random sequences of $+1$s and $-1$s with equal probability. The left plot is a log-log plot with a focus on the alternating and constant input. The red curve converges towards a straight line, which indicates an underlying power law in this data. The curve resulting from constant input shows large values even at later time steps; however, the convergence appears to be faster than a power law, hence the curve bends toward the bottom. Finally, random input shows the fastest convergence. The middle and right-side graphs clearly indicate that all curves except for the alternating input show (roughly) an exponential decay. Both the middle and the right-side plot show a scale down to about $10^{-16}$, which is about the limit of precision of double-precision floating-point numbers. Once a difference between two initial states reaches zero it is beyond the logarithmic scale and not plotted anymore, so the corresponding curve ends at that iteration.

5 Discussion

ESNs can be tuned to the critical value on the spot. At the same time it can be guaranteed that no input can push the network over the permissible limit. The setting of the ECPs leads to new insights into the network dynamics and relates them to information theory. If the next input is predictable, the next state of the network is going to hit one ECP exactly. One may interpret the resulting network in the way that predictable input is always directed to the ECPs and in this way prevented from consuming too much space (i.e. entropy) in the reservoir. Instead, deviations from the predicted input materialize in the reservoir as distances to the ECPs. These deviations then prevail in a power-law fashion. This is true for both the present approach and the approach proposed in [10]. Different from [10], the approach here focuses on an adaptive transfer function. Overall, there is very limited literature about adaptive transfer functions in neural networks (e.g. [16]). With regard to reservoir computing, investigations into adaptive transfer functions may be promising. In the present approach, one target of the investigation was to find a way of training where the position of the transition point remains unchanged and only its environment is transformed in such a way that the derivative at the transition point becomes one. This method in some sense changes the topology of the reservoir: by design, in every transition reservoirs lose information about previous inputs; however, this information loss is neither homogeneous nor independent of the input time series. Rather, it varies depending on features of the network, on the current input value and on other parameters. Using the method of ECPs, the reservoir then transforms into a magnifying glass around those predicted states, which allows the network to look deep into the past if aberrations from the predicted values are rare. So, aberrations from the predicted states can leave traces in the reservoir for very long times – if they are rare. In this sense the input-driven network turns into an event-driven network, i.e. a system that reacts strongly to an unpredicted event, in contrast to the everyday and usual input. Or, to put it in other words, it implements a lossy memory compression of a sliding window, with an infinite but increasingly lossy reproducibility of the far past.

Acknowledgements. This manuscript has been posted at arxiv.org. The authors thank MOST of Taiwan for financial support and O. Obst for all his help.

References

  • [1] Mantas Lukoševičius and Herbert Jäger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009.
  • [2] Benjamin Schrauwen, David Verstraeten, and Jan Van Campenhout. An overview of reservoir computing: theory, applications and implementations. In Proceedings of the 15th European symposium on artificial neural networks. Citeseer, 2007.
  • [3] Herbert Jäger. The “echo state” approach to analysing and training recurrent neural networks – with an erratum note. GMD Report 148, GMD German National Research Institute for Computer Science, 2010. http://www.gmd.de/People/Herbert.Jaeger/Publications.html.
  • [4] Herbert Jäger. Adaptive nonlinear system identification with echo state networks. In Proc. of NIPS 2002, 2003. AA14.
  • [5] Herbert Jäger, Wolfgang Maass, and Jose Principe. Special issue on echo state networks and liquid state machines. Neural Networks, 20(3):287–289, 2007.
  • [6] T. Natschläger, N. Bertschinger, and R. Legenstein. At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks. In Advances in Neural Information Processing Systems 17, 2005.
  • [7] Márton Albert Hajnal and András Lörincz. Critical echo state networks. In S. Kollias et al. (Eds.): ICANN 2006, Part I, LNCS 4131, pages 658 – 667, 2006.
  • [8] Joschka Boedecker, Oliver Obst, Joseph Lizier, N. Mayer, and Minoru Asada. Information processing in echo state networks at the edge of chaos. Theory in Biosciences, 131:205–213, 2012.
  • [9] J. Beggs and D. Plenz. Neuronal avalanches in neocortical circuits. J. Neurosci., 24(22):5216–5229, 2004.
  • [10] N. M. Mayer. Adaptive critical reservoirs with power law forgetting of unexpected input events. Neural Computation, 27:1102–1119, May 2015.
  • [11] N. M. Mayer. Critical echo state networks that anticipate input using adaptive transfer functions. arXiv preprint, http://arxiv.org/abs/1606.03674, 2016.
  • [12] G Manjunath and Herbert Jaeger. Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks. Neural computation, 25(3):671–696, 2013.
  • [13] Gilles Wainrib and Mathieu N. Galtier. A local echo state property through the largest Lyapunov exponent. Neural Networks, 2016.
  • [14] N. M. Mayer and Matthew Browne. Self-prediction in echo state networks. In Proceedings of The First International Workshop on Biological Inspired Approaches to Advanced Information Technology (BioAdIt2004), Lausanne, 2004.
  • [15] N. M. Mayer and Minoru Asada. Is self-prediction a useful paradigm for echo state networks that are driven by robotic sensory input? In 20th Neural Information Processing Systems Conference (NIPS 2006): Workshop on Echo State Networks and Liquid State Machines, H. Jaeger, W. Maass, Jose C. Principe (organisers), December 2006.
  • [16] Liansheng Wang, Xucan Chen, Sikun Li, and Xun Cai. General adaptive transfer functions design for volume rendering by using neural networks. In Neural Information Processing, pages 661–670. Springer, 2006.