1. Introduction
Even though recurrent neural networks seem to provide an efficient way to process data in biological organisms, artificial models of recurrent networks are currently considered more difficult to train than their feedforward counterparts (Pascanu et al., 2013). In feedforward networks, input data propagate through the network in the feedforward direction only; hence, the data pass through each neuron only once. In recurrent neural networks, the data might flow through the network back and forth for an extended time period, each neuron might process the data multiple times, and data from different time points can be combined to build various time-dependent relations. In other words, recurrent networks implicitly benefit from a memory capability that allows them to solve various non-Markovian tasks
(in Markovian tasks, the input at a single time point provides the complete state information necessary to solve the task, as opposed to non-Markovian tasks, where a longer history of input values might be required; for instance, deducing whether a car has reached its known destination, knowing the position of the car in each time step, is a simple Markovian task, whereas if only the velocity of the car is known in each time step, the task is non-Markovian, since the velocity has to be integrated over time to obtain the position) without this information being explicitly provided by additional inputs.

Recurrent networks are harder to train; however, they provide a set of very desirable properties. To overcome the cumbersome training, Jaeger developed a new approach known as echo state networks (Jaeger, 2001). This model significantly speeds up training and avoids most of the training pitfalls at the cost of a decreased adaptation ability. The key element of echo state networks is a large, randomly generated recurrent network. Echo state networks rely on the assumption that this large random network nonlinearly transforms the input into so many variations that the extraction of useful information becomes simple.
There are, however, some restrictions on the random weights in order for the recurrent network to behave reasonably. When the weights are too large, the network's output resembles white noise and its dynamics is called chaotic (Figure 0(c)). In the opposite case, where the weights are too small, the activity of the network tends to die out and its dynamics is called ordered (Figure 0(a)). In neither of the two cases is it possible to extract anything useful out of the network. According to the papers by Bertschinger and Natschläger (Bertschinger and Natschläger, 2004) and Boedecker et al. (Boedecker et al., 2012), the best properties are obtained when the recurrent network dynamics is on the very transition between order and chaos (Figure 0(b)). This particular transition was given the name edge of chaos (Langton, 1990).
2. Problem Specification
Our primary goal is to evolve the recurrent part of echo state networks using neuroevolution and analyze the computational power of such evolved networks and their relation to the edge of chaos. Both subjects, the computational power and the edge of chaos relation, will be compared with the corresponding properties of the original, pure random, echo state networks.
Echo state networks in combination with the edge of chaos suggest an idea of how biological networks might achieve such an unbeaten performance. Instead of training each neural synapse separately, the brain tissue might grow more or less randomly and still obtain great results by remaining on the edge of chaos. Unfortunately, the papers by
Bertschinger and Natschläger (Bertschinger and Natschläger, 2004) and Boedecker et al. (Boedecker et al., 2012), which evaluate the performance of echo state networks on the edge of chaos, only consider pure random networks where all pairs of neurons have the same probability of being connected. Such networks have a structure with no regularities, no repeating patterns, and no locality dependencies. This model does not comply with the knowledge of biological neural tissue, which might exhibit a higher degree of regularity (Squire et al., 2013).

To allow the evolution of biologically plausible networks, we will use the HyperNEAT algorithm (Gauci and Stanley, 2007), which provides the means to build complex regular structures. We are interested in whether the biologically plausible networks will perform comparably to their pure random counterparts and whether their performance will relate to the edge of chaos.
Before the main experiment, we will replicate the original results of Bertschinger and Natschläger (Bertschinger and Natschläger, 2004) and Boedecker et al. (Boedecker et al., 2012), who propose that the computational power of echo state networks is maximized on the edge of chaos.
3. Methods
In this section, we explain in greater depth all the methods used in the experiments.
3.1. Echo State Networks
An echo state network, defined by Jaeger (Jaeger, 2001), consists of a recurrent network with a weight matrix W, a vector of input coefficients w^in, and a vector of readout coefficients w^out (Figure 2). The activations of the neurons in the recurrent network, the input value, and the output value in time t are denoted by x(t), u(t), and y(t), respectively. The activations and the output value are calculated as follows:

x(t+1) = f(W x(t) + w^in u(t+1)),
y(t) = w^out x(t),

where f denotes the transfer function, applied element-wise. The recurrent network and the input coefficients are generated randomly and never change; the only part trained for the given problem are the readout coefficients. They are chosen so that the squared distance between the predicted output sequence and the desired output sequence is minimized.
For the echo state network to work properly, the recurrent network cannot be completely random. Instead, it shall have a so-called echo state property (sometimes called fading memory). Informally, this means that the state of the network only depends on a finite history of its inputs. A more formal definition will not be provided, as it is equivalent to the definition of a network with ordered dynamics described in the next section (Bertschinger and Natschläger, 2004). More information about echo state networks and their training can be found in the original paper by Jaeger (Jaeger, 2001).
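As an illustration, the update rule and readout training can be sketched in a few lines of Python. The reservoir size, weight scales, and the delayed-recall target below are illustrative choices, not the values used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50                                   # reservoir size (illustrative)
W = rng.normal(0.0, 0.1, (n, n))         # recurrent weights, fixed after generation
w_in = rng.uniform(-0.1, 0.1, n)         # input coefficients, fixed after generation

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its states."""
    x = np.zeros(n)
    states = np.empty((len(u), n))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)  # x(t+1) = f(W x(t) + w_in u(t+1))
        states[t] = x
    return states

u = rng.uniform(-1.0, 1.0, 500)
X = run_reservoir(u)

# Only the readout is trained: a least-squares fit of y(t) = w_out . x(t),
# here on the illustrative task of recalling the input delayed by 3 steps.
target = np.concatenate([np.zeros(3), u[:-3]])
w_out, *_ = np.linalg.lstsq(X[100:], target[100:], rcond=None)
y = X @ w_out
```

The first 100 states are excluded from the fit, mirroring the washout phase used throughout the paper.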
3.2. Chaotic and Ordered Dynamics
Let us briefly introduce chaotic and ordered dynamics. A system in which a sufficiently small perturbation of the initial parameters disappears in a finite time is called ordered. A system in which such a perturbation amplifies is called chaotic.
In this section, we will describe a measure of chaoticity called the Lyapunov exponent (denoted by λ) in the context of neural networks. Its rationale is to let a neural network run from two slightly perturbed initial states and measure the distance between the two network states from that moment on. If the two network states tend to converge, the system is in the ordered phase and λ < 0. If they diverge, the system is in the chaotic phase and λ > 0. The edge of chaos lies right in the middle, where the two states tend to keep the same distance from each other and λ = 0.

The formal definition of the Lyapunov exponent is the following:

λ = lim_{t→∞} (1/t) ln( γ(t) / γ(0) ),

where γ(t) is the distance of the two initially perturbed states in time t and γ(0) is the distance of the initial states, i.e., the size of the perturbation.
We will adopt the algorithm by Sprott (Sprott, 2015) for the numerical estimation of the Lyapunov exponent. Let us explain the process.

1. A random sequence of 2000 values is generated to drive the neural network.

2. The network is run for 1000 time steps and its outputs are discarded. This is done to let the network stabilize.

3. After the 1000 steps, the network is duplicated. In the second network, the activation value of one of its neurons is perturbed by a small value γ_0.

4. Both networks proceed one time step forward and the distance between their states is calculated. Formally, the distance is denoted by γ(t) = ‖x_1(t) − x_2(t)‖, where x_1(t) and x_2(t) are the activations of the neurons in the first and the second network, respectively, and ‖·‖ denotes the Euclidean distance. This distance is recorded for later use.

5. The state of the perturbed network is renormalized to the initial distance γ_0, i.e., x_2(t) ← x_1(t) + (γ_0 / γ(t)) (x_2(t) − x_1(t)). This is done because the activation value of a neuron usually has a limited range (e.g., (−1, 1) for the hyperbolic tangent transfer function) and therefore the distance between the states is also limited. The renormalization ensures that the two states do not diverge close to this limit and also avoids numerical overflows. Figure 3 visualizes this operation.

6. Steps 4 and 5 are repeated until the end of the input sequence.

7. The process returns to step 3 and a different neuron is chosen to be perturbed.
This process is repeated for each neuron in the network. The final Lyapunov exponent is the average logarithm of the distance of the two trajectories relative to the initial perturbation, averaged over time and over all neurons:

λ = ⟨ ln( γ_k(t) / γ_0 ) ⟩_{t,k},

where γ_k(t) denotes the distance of the two states in time t with the perturbed neuron k, and ⟨·⟩_{t,k} denotes the arithmetic average over time and all neurons.
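The estimation procedure above can be sketched in Python as follows. The network size, the weight scale (chosen well inside the ordered regime), and the perturbation size γ_0 = 1e-12 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 30                                          # network size (illustrative)
W = rng.normal(0.0, 0.5 / np.sqrt(n), (n, n))   # small weights: ordered dynamics
w_in = rng.uniform(-0.1, 0.1, n)
gamma0 = 1e-12                                  # initial perturbation size (assumed)

def step(x, u_t):
    return np.tanh(W @ x + w_in * u_t)

def lyapunov_exponent(u, washout=1000):
    """Estimate lambda by perturb-measure-renormalize, averaged over neurons."""
    logs = []
    for k in range(n):                          # perturb each neuron in turn
        x1 = np.zeros(n)
        for u_t in u[:washout]:                 # let the network stabilize first
            x1 = step(x1, u_t)
        x2 = x1.copy()
        x2[k] += gamma0                         # perturb neuron k
        for u_t in u[washout:]:
            x1, x2 = step(x1, u_t), step(x2, u_t)
            gamma = np.linalg.norm(x1 - x2)
            logs.append(np.log(gamma / gamma0))
            # renormalize the perturbed state back to distance gamma0 from x1
            x2 = x1 + (gamma0 / gamma) * (x2 - x1)
    return float(np.mean(logs))

u = rng.uniform(-1.0, 1.0, 2000)
lam = lyapunov_exponent(u)
```

With the small weights chosen here the perturbation shrinks at every step, so the estimate comes out negative, as expected for an ordered network.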
The theory behind order and chaos is, of course, much more extensive and far beyond the scope of this paper. For more information, please refer to (Sprott, 2003).
4. Information Theory Measures
To gain additional insight into what is happening inside a recurrent network, we will present two measures from the information-theoretical framework defined by Lizier et al. (Lizier et al., 2014). The first measure is called active information storage (AIS) and it denotes the average mutual information between the past states of a random process and its next state. Its definition is the following:

A_X = ⟨ log_2 ( p(x_n^{(∞)}, x_{n+1}) / ( p(x_n^{(∞)}) p(x_{n+1}) ) ) ⟩_n,

where x_n^{(∞)} denotes the semi-infinite past of the process and p denotes the probability function. In the context of neural networks, the active information storage of a neuron measures how much the neuron's history influences its future state. Self-links or transfers of information to other neurons and back are also considered.
The second measure is called transfer entropy (TE). It always regards two random processes, a source and a destination, between which the transfer entropy is measured. It denotes the amount of information from the source which determines the value of the destination and was not already provided in the destination's history. In other words, it is the mutual information between the current state of the source process and the next state of the destination process, conditioned on the history of the destination process:

T_{Y→X} = ⟨ log_2 ( p(x_{n+1} | x_n^{(k)}, y_n) / p(x_{n+1} | x_n^{(k)}) ) ⟩_n   for k → ∞.

In the context of neural networks, the transfer entropy measures how much the current state of the source neuron influences the next state of the destination neuron. This measure was first introduced by Schreiber (Schreiber, 2000) without the limit of k → ∞, which was suggested later by Lizier et al. (Lizier et al., 2008).
The aforementioned measures represent a universal tool for analyzing random processes. Please refer to (Lizier et al., 2014) for a unified overview.
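To make the definitions concrete, here is a naive plug-in (histogram) estimator in Python for a discrete-valued series with a finite history length k (the experiments in this paper use a history of size 2). This is only an illustration of the formulas; dedicated toolkits such as JIDT implement proper estimators:

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Plug-in Shannon entropy (bits) of a list of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def histories(x, k):
    """k-step pasts: histories(x, k)[j] = (x[j], ..., x[j+k-1])."""
    return list(zip(*[x[i:len(x) - k + i] for i in range(k)]))

def active_information_storage(x, k=2):
    """AIS = I(past_k; next) = H(past) + H(next) - H(past, next)."""
    past = histories(x, k)
    nxt = list(x[k:])
    return entropy(past) + entropy(nxt) - entropy(list(zip(past, nxt)))

def transfer_entropy(src, dst, k=2):
    """TE = I(next; source | past_k of the destination)."""
    past = histories(dst, k)
    nxt = list(dst[k:])
    s = list(src[k - 1:len(src) - 1])   # source value aligned with the past
    return (entropy(list(zip(nxt, past))) - entropy(past)
            + entropy(list(zip(past, s))) - entropy(list(zip(nxt, past, s))))
```

For example, a perfectly periodic 0,1,0,1,… series has an AIS of exactly 1 bit, and a destination that copies its source with a one-step lag yields a TE of 1 bit.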
5. Neuroevolution
In this section, we will present two genetic algorithms specialized for neural networks. The first one is called NeuroEvolution of Augmenting Topologies (NEAT) and the second one is its extension called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT).

The NEAT algorithm was introduced by Stanley and Miikkulainen in 2002 and it is still widely used. It has proven to be an efficient method to simultaneously evolve both the weights of a neural network and the network's topology. Please refer to the original paper by Stanley and Miikkulainen (Stanley and Miikkulainen, 2002) for a thorough, yet succinct explanation.

The original NEAT algorithm evolves each connection independently, which is sufficient for small-scale neural networks. However, in the case of larger networks (i.e., hundreds of neurons), the number of connections is too large for the evolution to succeed in a reasonable time. To overcome this issue, Gauci and Stanley exploited the idea of indirect genetic encoding and proposed an algorithm called HyperNEAT. This algorithm uses the original NEAT to evolve a population of neural networks called compositional pattern producing networks (CPPNs). The CPPNs are later presented with a user-defined set of neurons on a Cartesian plane, called the substrate, and they are queried for the weight and the presence of each potential connection between all pairs of neurons. The substrate may be much larger than the CPPN itself, and the CPPNs may thus compactly represent complex structures with repeating patterns and geometric regularities. For a more thorough description, please refer to the original paper by Gauci and Stanley (Gauci and Stanley, 2007).
6. Performance Measures
To measure the computational power of a neural network, we have selected the following three benchmarks: Memory Capacity (MC) (Jaeger, 2002), Nonlinear Auto-Regressive Moving Average (NARMA) (Boedecker et al., 2012), and a novel measure called Negative Ratio (NR). They assess the network's ability to store data in a short-term memory (MC), operate with the memory (NARMA), and analyze it (NR).
The memory capacity (MC) task evaluates the maximum duration for which the network is capable of remembering its inputs. The evaluated network is driven by a single input sequence and predicts an infinite number of output sequences. The desired value of the k-th output sequence is an exact copy of the input sequence delayed by k time steps. For each of the output sequences, the delay-k memory capacity MC_k is calculated as the squared Pearson correlation coefficient between the predicted output and the desired output:

MC_k = cov²(ỹ_k, y_k) / ( var(ỹ_k) var(y_k) ),

where cov² denotes the squared sample covariance, var denotes the sample variance, ỹ_k is the k-th desired output sequence, and y_k is the k-th output sequence predicted by the network. The total memory capacity value is the sum of these delayed memory capacities:

MC = Σ_{k=1}^{∞} MC_k.

During our experiments, we found this measure to be numerically unstable. When one of the output sequences has a very low variance (e.g., if the network always predicts a value close to 1.0), the MC_k value is unpredictable and can go up to infinity. This may represent a problem especially in the case of evolutionary algorithms. Whenever an evolutionary algorithm detects such an instability of the fitness function, the instability is quickly exploited and the evolution converges to an undesired result. For this particular reason, we propose a numerically stable alternative of the memory capacity task called memory mean squared error (MMSE).

The evaluation of the MMSE is very similar to the evaluation of the MC. However, this time the network predicts only a finite number of output sequences. The desired value of the k-th output sequence is, again, the input sequence delayed by k time steps. The final MMSE value is the normalized root mean squared error of the predicted sequences with respect to the corresponding desired output sequences, as defined in the following formula:

MMSE = sqrt( ⟨ (ỹ_k(t) − y_k(t))² ⟩_{t,k} / var(u) ),

where ỹ_k(t) and y_k(t) are the values of the k-th desired sequence and the k-th predicted sequence in time t, respectively, u is the input sequence, and ⟨·⟩_{t,k} denotes the arithmetic average over time and all output sequences.
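Both measures can be computed directly from the predicted and desired sequences. A minimal sketch (the delay set and data are illustrative; `preds[k-1]` is assumed to hold the prediction for delay k):

```python
import numpy as np

def memory_capacity_k(u, pred, k):
    """Delay-k memory capacity: squared correlation of pred with u delayed by k."""
    r = np.corrcoef(u[:-k], pred[k:])[0, 1]
    return r**2      # unstable when var(pred) is near zero (r becomes ~0/0)

def mmse(u, preds):
    """Normalized RMSE over the delayed-recall outputs, averaged over delays."""
    sq = [np.mean((u[:-k] - pred[k:])**2) for k, pred in enumerate(preds, start=1)]
    return np.sqrt(np.mean(sq) / np.var(u))
```

A perfect delayed-recall predictor yields MC_k = 1 for every delay and an MMSE of zero, while a near-constant predictor makes MC_k ill-defined, which is exactly the instability discussed above.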
In the Nonlinear Auto-Regressive Moving Average (NARMA) task, the network is driven by a single input sequence and the task is to predict the following nonlinear combination of the past 30 inputs:

ỹ(t+1) = 0.2 ỹ(t) + 0.004 ỹ(t) Σ_{i=0}^{29} ỹ(t−i) + 1.5 u(t−29) u(t) + 0.001,

where ỹ(t) and u(t) are the values of the desired output sequence and the input sequence in time t, respectively. The performance on this task is measured using the normalized root mean squared error:

NRMSE = sqrt( ⟨ (ỹ(t) − y(t))² ⟩_t / var(ỹ) ),

where y(t) is the value of the predicted sequence in time t and ⟨·⟩_t denotes the arithmetic average over time.
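The desired NARMA sequence and the error measure can be generated as follows. The recurrence coefficients below are the standard 30th-order NARMA ones from the literature and are an assumption here, as is the input range:

```python
import numpy as np

def narma30_target(u):
    """Desired output of the 30th-order NARMA task (standard coefficients assumed)."""
    y = np.zeros_like(u)
    for t in range(29, len(u) - 1):
        y[t + 1] = (0.2 * y[t]
                    + 0.004 * y[t] * np.sum(y[t - 29:t + 1])
                    + 1.5 * u[t - 29] * u[t]
                    + 0.001)
    return y

def nrmse(desired, pred):
    """Normalized root mean squared error."""
    return np.sqrt(np.mean((desired - pred)**2) / np.var(desired))
```

Note that the recurrence can diverge for large inputs, so the input sequence is typically drawn from a small positive range.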
In the negative ratio (NR) task, the network shall estimate the ratio of negative numbers among the last n inputs. NR is the only one of the proposed measures where the exact input values are not important; instead, the values are conditioned on a specific property. Formally, the desired output sequence is defined as follows:

ỹ(t) = (1/n) Σ_{i=0}^{n−1} [u(t−i) < 0],

where u(t) and ỹ(t) are the values of the input sequence and the desired output sequence in time t, respectively, and [u(t−i) < 0] is equal to 1 iff u(t−i) < 0 and 0 otherwise. The performance on this task is measured using the normalized root mean squared error:

NRMSE = sqrt( ⟨ (ỹ(t) − y(t))² ⟩_t / var(ỹ) ),

where y(t) is the value of the predicted sequence in time t.
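Computing the desired NR sequence is a simple sliding-window count; a sketch in Python, where the window length is left as a parameter (the default of 30 is an illustrative assumption) and the window is truncated at the start of the sequence:

```python
import numpy as np

def negative_ratio_target(u, window=30):
    """Ratio of negative values among the last `window` inputs (truncated at start)."""
    y = np.empty(len(u))
    for t in range(len(u)):
        lo = max(0, t - window + 1)
        y[t] = np.mean(u[lo:t + 1] < 0)
    return y
```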
7. Random Echo State Networks
In this section, the results of the related works by Bertschinger and Natschläger (Bertschinger and Natschläger, 2004) and Boedecker et al. (Boedecker et al., 2012) are replicated. Both papers suggest that the computational power of randomly generated echo state networks is maximized on the transition between order and chaos. Furthermore, Boedecker et al. suggest that there is a peak of active information storage and transfer entropy right on the edge of chaos.
7.1. Experimental Settings
The experiment is conducted by generating a large set of random echo state networks with different parameters. All the evaluated echo state networks have 151 neurons (including the input neuron) and use the hyperbolic tangent transfer function with no bias. The weights from the input neuron to all the other neurons are drawn uniformly from a fixed range. All the other weights are drawn from a normal distribution with zero mean and variance σ². The σ² values are chosen so that the Lyapunov exponent λ covers a range spanning from the ordered to the chaotic regime, increasing its value in regular steps. For each σ², we generate and evaluate 10 random networks. The input sequences for the MC and NARMA tasks are drawn uniformly from fixed ranges. The length of the input sequences is 3000 time steps. The first 1000 time steps are used to stabilize the network, i.e., the network is driven by the sequence, but its outputs are discarded. The next 1000 time steps are used to train the linear readout coefficients and the last 1000 time steps are used to evaluate the network's performance. The MC is evaluated only up to the delay limit of 300 time steps.

To evaluate AIS and TE, an input sequence of length 3000 time steps is generated uniformly from a fixed range. The first 1000 steps are used to stabilize the network and the remaining 2000 time steps are used to calculate the measures. A history of size 2 is used for both measures, instead of infinity. The AIS is averaged over all the neurons in the network and the TE is averaged over all the nonzero connections.
The experiment consumed approximately 120 CPU days.
7.2. Results
The memory capacity (MC) against the Lyapunov exponent λ is plotted in Figure 3(a). The MC increases until the edge of chaos, where it reaches its maximum value. After the performance peak, there is a sharp drop, which suggests that in the chaotic regime, even very close to the transition, no data survive the surrounding noise for long.
Our results of the MC task actually differ from the original paper by Boedecker et al. (Boedecker et al., 2012), where the MC only rarely reached a value higher than 10. On the other hand, our results are in accordance with the paper by Barančok and Farkaš (Barančok and Farkaš, 2014) who investigated the effect of structured input sequences (i.e., sequences which are not purely random) on the MC task.
The NARMA error against λ is plotted in Figure 3(b). The error decreases until the edge of chaos, where it reaches its minimum. After the transition to the chaotic regime, the error sharply rises to the same value as if the network's output were completely random. This observation supports the idea that in the chaotic regime, the network output resembles white noise.
We have also evaluated the MMSE and NR tasks and the results are very similar. It seems that for all the evaluated tasks, the performance is indeed maximized on the edge of chaos. A rigorous reason for this behaviour remains an open question.

Let us analyze the effects of randomly removing the majority of the connections between the neurons. According to our simulations, such a restriction of the connectivity does not significantly influence the network's performance. Figure 3(c) demonstrates this phenomenon on the MC task, and a similar pattern appears on the other evaluated performance tasks as well. It should be noted that restricting the number of connections while keeping the same weights makes the network dynamics more ordered (as stated by, e.g., Bertschinger and Natschläger (Bertschinger and Natschläger, 2004)).
Boedecker et al. (Boedecker et al., 2012) provide additional insight into what is happening within a network on the edge of chaos. In the paper, the AIS and TE are measured relative to λ. The results are replicated in Figure 4(a) and Figure 4(b). Both of the measured entropies slowly decrease through the ordered regime until the edge of chaos, where they form a high, sharp peak. In the chaotic regime, their value drops and remains at its lowest level. There does not seem to be a direct correlation between the network's performance and the AIS or the TE. Nevertheless, the high peak suggests that there is an unexpected irregularity at the transition between order and chaos.

In the original paper by Boedecker et al. (Boedecker et al., 2012), the AIS and TE were plotted separately from the performance measures. Let us, instead, draw the entropies and the performance measures in a single plot focused tightly on the edge of chaos (Figure 4(c)). A careful analysis of the figure shows that the best performance on the NARMA task ends immediately before the peaks of AIS and TE. When the entropies increase, the NARMA error increases with them. The reason may be that once the entropies reach a critical threshold, the network performance is impaired.
8. Evolved Echo State Networks
In this section, we are going to evolve the recurrent part of echo state networks via the HyperNEAT algorithm. To avoid overfitting to one of the given tasks, the evolution in our experiment is instructed to maximize the performance on the MMSE and the NARMA tasks simultaneously. We believe that these two tasks require contradictory properties of the evolved network, which may reduce overfitting by balancing between memory capacity and computational performance. Furthermore, the NR task is hidden from the evolution and is instead used to validate the performance of the evolved networks on a task never seen before. The MC task is not evaluated at all because of its numerical instability discussed in Section 6; the MMSE task is used to assess the network's short-term memory instead.
8.1. Experimental Settings
Five runs of evolution are executed, each of which evolves a population of 150 CPPNs for 2000 generations. The substrate used in our experiment is a "golden angle spiral", in which the coordinates of the i-th neuron are defined as x_i = r_i cos(iψ + φ) and y_i = r_i sin(iψ + φ), with the radius r_i growing with i up to the edge of the substrate, where n is the number of neurons, ψ is the "golden angle" equal to π(3 − √5), and φ is the rotation angle (i.e., the phase) of the whole spiral. The rotation angle is generated randomly for each of the five evolutionary runs.

The size of the substrate is 151 neurons. A single input neuron, whose activation always corresponds to the current input value, is placed in the centre of the substrate, at coordinate (0, 0). When using a CPPN to build a connection between two neurons, the CPPN is fed the distance between the neurons in addition to their coordinates. The networks generated on the substrate use the hyperbolic tangent transfer function with no bias. Neurons disconnected from the input are not considered for the λ, AIS and TE measures.
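For illustration, such a golden-angle spiral substrate can be generated as follows. The square-root radius law is an assumption of this sketch (in the style of Vogel's sunflower model); only the golden angle π(3 − √5) and the centred input neuron come from the description above:

```python
import math

def golden_spiral_substrate(n, phi=0.0):
    """n neuron coordinates on a golden-angle spiral; neuron 0 is the input."""
    psi = math.pi * (3.0 - math.sqrt(5.0))   # the "golden angle", ~137.5 degrees
    coords = [(0.0, 0.0)]                    # input neuron in the centre
    for i in range(1, n):
        r = math.sqrt(i / (n - 1))           # assumed radius law (Vogel-style)
        theta = i * psi + phi
        coords.append((r * math.cos(theta), r * math.sin(theta)))
    return coords

coords = golden_spiral_substrate(151, phi=1.0)
```

The irrational golden angle guarantees that no two neurons share an angular direction, which spreads them evenly over the unit disc.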
The fitness function of a CPPN is derived from the MMSE and NARMA errors of the network generated by the CPPN on the aforementioned substrate. The fitness value is evaluated three times and the results are averaged. For the first 15 generations of the life of a genome, its fitness is boosted by 10%. The number of species is kept between 5 and 10. The difference between two genomes is defined as a weighted combination of D/N and W̄, where N is the number of genes in the larger genome, D is the number of non-matching genes, and W̄ is the average weight difference of matching genes. The difference threshold for creating a new species begins at a fixed value and may be dynamically increased or decreased in each generation by a small step.
Only the fittest 25% genomes of a species are allowed to reproduce. The elitism is set to 5%. If a species does not improve its fitness for 20 generations, it is forbidden to reproduce. To select the best genomes, tournament selection of size four is used on the top 25% of the genomes in the species. Overall crossover probability is 70%. There is only a 0.01% chance of interspecies mating.
Overall mutation probability is 15%. If a mutation occurs, the chance of adding a new neuron is 1%, the chance of adding a new connection is 8%, the chance of removing a connection is 2%, and each weight has a 90% chance of being perturbed by a uniformly distributed random value from a fixed range. The probability of mutating the bias of a neuron is 1% and the mutation process is the same as for the weight mutation. Additionally, if a mutation occurs, the transfer function of each neuron might mutate as well. The probability of mutating the transfer function is 3%, in which case the function is replaced by one of the following: hyperbolic tangent, sine, signed step, signed Gaussian, or linear transfer function.
We have evaluated a few more different configurations and the results seemed to be insensitive to small parameter changes. The experiment consumed approximately 80 CPU days.
8.2. Results
The evolution of the fitness value is plotted in Figure 5(a). All the evolutionary runs converged stably to a similar value. The improving evolution of the MMSE is depicted in Figure 5(b); NARMA manifested similar results (not depicted). Since MMSE and NARMA form the fitness function, it is no surprise that they consistently improve throughout the evolution. However, the NR task (Figure 5(c)), which was not optimized directly, also slightly improved and stabilized, suggesting that the evolved networks are not overfitted to the fitness function.
          | MMSE                | NARMA               | NR                  | Lyap. exp.
evolved   | 0.0094 (0.0009 std) | 0.4119 (0.0297 std) | 0.5403 (0.0688 std) | 0.2652 (0.0000 std)
random    | 0.2625 (0.0098 std) | 0.4150 (0.0171 std) | 0.5728 (0.0730 std) | 0.0280 (0.0000 std)
p-value   |                     | 0.2066              |                     |

Table 1. Comparison of the best evolved network and the best random echo state network over 50 full evaluation cycles (stabilize, train, evaluate). The p-values for the hypothesis that the evolved network outperforms the random network are calculated using a one-tailed one-sample t-test.
To compare the evolved networks with the original random echo state networks, the best representatives of both categories are selected. The selection criterion is the average of ten evaluations of the fitness function. The two best representatives are statistically compared in Table 1. The performance on the NARMA task is similar for both the random and the evolved network. However, on the MMSE task, the evolved network outperforms the random network by more than one order of magnitude. On the NR task, which has not been explicitly optimized by either of the two approaches, the evolved network also performs significantly better.
Now, we will address the question whether the evolution has any relation to the edge of chaos. Figure 7 plots the NARMA error versus λ and the histogram of the λ values encountered during any of the five runs of the evolution. Only very few networks have strongly ordered dynamics and their performance is rather poor. The vast majority of the evolved networks are concentrated on the ordered side of the edge of chaos, in a narrow range of λ. It is clear that the evolution avoided the chaotic regime at all costs. The results might suggest that the ordered side of the edge of chaos is a part of the search space favoured by evolutionary algorithms. One explanation may be that if the evolution is completely unable to solve the given task, it may generate a network more or less randomly. If this random network is on the edge of chaos, its performance is still better compared to the performance of an overly ordered or a chaotic network.
8.3. Topology
To better understand the main difference between the best pure random echo state network and the best evolved network, their visualization is provided in Figure 8. The random network has more than 23 thousand neural connections, in contrast with the evolved network, which has only 383. The evolved network only connects neurons which are spatially close and, furthermore, it has no intersecting connections. It should be noted that the evolved network has only a single connection heading out of the input neuron and this connection has a low weight compared to the other connections in the network. The evolution of the best network is depicted in Figure 9. We have also visualized all the other evolutionary runs and found that the most successful networks share very similar visual features.
8.4. Locally Connected Echo State Networks
A natural question is whether the topological features of the most successful evolved networks could be used to improve the pure random fully connected echo state networks as well. We attempt to answer this question by restricting the pure random networks to only build local connections between the neurons. We use the same neural substrate as in the case of the evolved networks and limit the length of the connections to 0.25. Additionally, only a single connection heading out of the input neuron is allowed.
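The locality restriction can be implemented as a simple mask over candidate connections, applied before the random weights are drawn; a minimal sketch (the helper name is ours):

```python
import math

def local_connection_mask(coords, max_len=0.25):
    """Allow a connection i -> j only if the two neurons lie within max_len."""
    n = len(coords)
    return [[i != j and math.dist(coords[i], coords[j]) <= max_len
             for j in range(n)]
            for i in range(n)]
```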
According to the plot of the NARMA task in Figure 10, the performance of the locally connected networks is again maximized on the ordered side of the edge of chaos. The other tasks manifested similar results.
          | MMSE                | NARMA               | NR                  | Lyap. exp.
local     | 0.0104 (0.0003 std) | 0.4173 (0.0171 std) | 0.5325 (0.0653 std) | 0.1451 (0.0000 std)
full      | 0.2625 (0.0098 std) | 0.4150 (0.0171 std) | 0.5728 (0.0730 std) | 0.0280 (0.0000 std)
p-value   |                     | 0.1318              |                     |

Table 2. Comparison of the best locally connected network and the best fully connected random echo state network.
To compare locally connected and fully connected echo state networks, we again select the best candidates of both categories according to the fitness function. The best locally connected network is visualized in Figure 11 and its comparison with the fully connected network is provided in Table 2. On the MMSE task, the locally connected network outperforms the fully connected network by one order of magnitude. On the NR task, the difference is less noticeable, yet still statistically significant. On the NARMA task, the difference is not statistically significant. The corresponding results and p-values are provided in Table 2. To summarize, the performance of the best locally connected network is close to the performance of the best evolved network. Locally connected networks thus provide a convenient alternative to neuroevolution when the available time and resources are limited.
9. Conclusions
Echo state networks represent a fast and powerful approach to time series analysis and prediction. However, it is difficult to choose the right set of parameters for this approach to maximize its computational performance. To simplify the parameter selection, it has been proposed that the performance of echo state networks is maximized when the network's dynamics is on the transition between order and chaos. We have confirmed this statement in a comprehensive set of experiments. A rigorous reason for this behaviour remains an open question.
Even though echo state networks were designed as a model of the biological brain, their fully connected topology does not appear to be biologically plausible. We have addressed this issue via evolutionary algorithms and created a network with a more "organic" layout. The evolved network turned out to significantly outperform the fully connected echo state networks. Furthermore, we have demonstrated that the evolution favoured the ordered side of the edge of chaos and avoided chaotic and overly ordered networks.
We have transferred the properties of the most successful evolved networks back to the original echo state networks and introduced an approach called locally connected echo state networks. This model has also proven to significantly outperform the fully connected networks and provides a convenient alternative to neuroevolution when the computational resources are limited.
10. Future Work
For comparison with other methods from the literature, both proposed models need to be evaluated on a well-known benchmark. An example of such a benchmark is the set of LSTM tasks defined by Hochreiter and Schmidhuber (Hochreiter and Schmidhuber, 1997), on which the fully connected echo state networks have already been evaluated by Jaeger (Jaeger, 2012). Moreover, the proposed models may be evaluated on a set of real-world problems, such as speech prediction and music prediction (similarly to Martens and Sutskever (Martens and Sutskever, 2011)).
Locally connected networks have a low number of connections with a regular structure. This opens new perspectives for an efficient implementation using massively parallel operations. Such an implementation may allow for significantly larger networks while keeping the same computational costs.
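The efficiency argument above can be sketched with a sparse-matrix reservoir update. The sketch below uses SciPy's sparse routines and a random sparsity pattern as a stand-in for the actual local neighbourhood structure; the sizes and weight range are illustrative assumptions:

```python
import numpy as np
from scipy import sparse

n = 10_000  # reservoir size
k = 8       # connections per neuron, as in a local topology
rng = np.random.default_rng(1)

# sparse matrix with ~k nonzeros per row (random pattern as a
# placeholder for the actual grid neighbourhood structure)
rows = np.repeat(np.arange(n), k)
cols = rng.integers(0, n, size=n * k)
vals = rng.uniform(-0.5, 0.5, size=n * k)
W = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))

x = rng.standard_normal(n)
x_next = np.tanh(W @ x)  # one reservoir step in O(n*k) instead of O(n^2)
```

For n = 10,000 and k = 8, the sparse update touches roughly 80,000 weights per step instead of 100 million, which is what makes significantly larger networks feasible at the same computational cost.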
Acknowledgements.
This research was supported by Charles University GA UK project number 1578717 and SVV project number 260 453. Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures” (CESNET LM2015042), is greatly appreciated. We thank the authors of MultiNEAT (C++/Python), JIDT (Lizier, 2014) (Java), and SciPy (Python) software packages for sharing their hard work under permissive open source licences.
References
 Barančok and Farkaš (2014) Peter Barančok and Igor Farkaš. 2014. Memory Capacity of Input-Driven Echo State Networks at the Edge of Chaos. In International Conference on Artificial Neural Networks. Springer International Publishing, 41–48.
 Beggs (2008) John M Beggs. 2008. The Criticality Hypothesis: How Local Cortical Networks Might Optimize Information Processing. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 366, 1864 (2008), 329–343.
 Bertschinger and Natschläger (2004) Nils Bertschinger and Thomas Natschläger. 2004. Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks. Neural Computation 16, 7 (2004), 1413–1436.
 Boedecker et al. (2012) Joschka Boedecker, Oliver Obst, Joseph T. Lizier, N. Michael Mayer, and Minoru Asada. 2012. Information Processing in Echo State Networks at the Edge of Chaos. Theory in Biosciences 131, 3 (2012), 205–213.
 Gauci and Stanley (2007) Jason Gauci and Kenneth Stanley. 2007. Generating Large-Scale Neural Networks Through Discovering Geometric Regularities. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation. ACM, 997–1004.
 Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735–1780.
 Jaeger (2001) Herbert Jaeger. 2001. The “Echo State” Approach to Analysing and Training Recurrent Neural Networks. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report 148 (2001).
 Jaeger (2002) Herbert Jaeger. 2002. Short Term Memory in Echo State Networks. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report 152 (2002).
 Jaeger (2012) Herbert Jaeger. 2012. Long Short-Term Memory in Echo State Networks: Details of a Simulation Study. Technical Report No. 27. Jacobs University.
 Kauffman (1993) Stuart A. Kauffman. 1993. The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press.
 Langton (1990) Chris G. Langton. 1990. Computation at the Edge of Chaos: Phase Transitions and Emergent Computation. Physica D: Nonlinear Phenomena 42, 1–3 (1990), 12–37.
 Legenstein and Maass (2007) Robert Legenstein and Wolfgang Maass. 2007. Edge of Chaos and Prediction of Computational Performance for Neural Circuit Models. Neural Networks 20, 3 (2007), 323–334.
 Lizier (2014) Joseph T. Lizier. 2014. JIDT: An Information-Theoretic Toolkit for Studying the Dynamics of Complex Systems. Frontiers in Robotics and AI 1, 11 (2014).
 Lizier et al. (2008) Joseph T. Lizier, Mikhail Prokopenko, and Albert Y. Zomaya. 2008. Local Information Transfer as a Spatiotemporal Filter for Complex Systems. Physical Review E 77, 2 (2008), 026110.
 Lizier et al. (2014) Joseph T. Lizier, Mikhail Prokopenko, and Albert Y. Zomaya. 2014. A Framework for the Local Information Dynamics of Distributed Computation in Complex Systems. In Guided Self-Organization: Inception. Springer Berlin Heidelberg, 115–158.
 Martens and Sutskever (2011) James Martens and Ilya Sutskever. 2011. Learning Recurrent Neural Networks with Hessian-Free Optimization. In 28th International Conference on Machine Learning (ICML-11). 1033–1040.
 Pascanu et al. (2013) Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the Difficulty of Training Recurrent Neural Networks. In 30th International Conference on Machine Learning (ICML-13). 1310–1318.
 Schreiber (2000) Thomas Schreiber. 2000. Measuring Information Transfer. Physical Review Letters 85, 2 (2000), 461–464.
 Sprott (2003) Julien C. Sprott. 2003. Chaos and TimeSeries Analysis. Oxford University Press.
 Sprott (2015) Julien C. Sprott. 2015. Numerical Calculation of Largest Lyapunov Exponent. (2015). http://sprott.physics.wisc.edu/chaos/lyapexp.htm Accessed online: 2016-05-25.
 Squire et al. (2013) Larry Squire, Darwin Berg, Floyd E. Bloom, Sascha Du Lac, Anirvan Ghosh, and Nicholas C. Spitzer. 2013. Fundamental Neuroscience (4th ed.). Academic Press.
 Stanley and Miikkulainen (2002) Kenneth O. Stanley and Risto Miikkulainen. 2002. Evolving Neural Networks Through Augmenting Topologies. Evolutionary Computation 10, 2 (2002), 99–127.