Improving Delay Based Reservoir Computing via Eigenvalue Analysis

09/16/2020 ∙ by Felix Köster, et al. ∙ Berlin Institute of Technology (Technische Universität Berlin)

We analyze the reservoir computation capability of the Lang-Kobayashi system by comparing the numerically computed recall capabilities and the eigenvalue spectrum. We show that these two quantities are deeply connected, and thus the reservoir computing performance is predictable by analyzing the eigenvalue spectrum. Our results suggest that any dynamical system used as a reservoir can be analyzed in this way as long as the reservoir perturbations are sufficiently small. Optimal performance is found for a system with the eigenvalues having real parts close to zero and off-resonant imaginary parts.




I Introduction

Reservoir computing is a novel approach for time-dependent tasks in machine learning. First introduced by Jaeger [JAE01] and inspired by the human brain [MAA02], it utilizes the inherent computational capabilities of dynamical systems. Very recently, the universal approximation property has also been shown for a wide range of reservoir computers, which establishes the concept as a broadly applicable scheme [GON20].

Hardware setups have demonstrated the feasibility and the wide range of possible realizations [FER03, ANT16, DOC09], while theoretical and numerical analyses show interesting advancements [GAL18a, GAL19] and point to easily implementable realizations [ROE18a, GOL20]. Different applications have been demonstrated [BAU15, KEU17, SCA16, ARG17, AMI19, PAT18, PAT18a, CUN19]. Since speed is of the essence in computation, optoelectronic [LAR12, PAQ12] and optical setups [BRU13a, VIN15, NGU17, ROE18a, ROE20] are frequently studied, which additionally come with the benefit of low energy consumption.

A new and sophisticated approach to the reservoir computing scheme was introduced by Appeltant et al. in [APP11], where a single dynamical node under the influence of external feedback implements a time-multiplexed reservoir. The spatially extended network structure of classical reservoirs is no longer needed with this scheme, which reduces the complexity of reservoir hardware in exchange for processing speed. A schematic sketch is shown in Fig. 1. Realizations with a single delayed reservoir [ORT17a, DIO18, BRU18a, CHE19c, HOU18, SUG20] give a first glimpse of the potential of this idea for, e.g., time-series predictions [BUE17, KUR18], equalization tasks on nonlinearly distorted signals [ARG20], and fast word recognition [LAR17]. A general analysis, introduced by Dambre et al. [DAM12], was also used to quantify the general and task-independent computational capabilities of semiconductor lasers [HAR19]. For a general overview, we refer to [BRU19, SAN17a, TAN18a].

Considerable research has already been invested in developing a deeper understanding of reservoir computing systems; however, effective measures that allow one to predict the performance are still missing. In this paper we want to fill this gap by providing a scheme that predicts general trends of the performance using the eigenvalue spectra. As a reservoir, we chose a laser subjected to optical self-feedback. We use the Lang-Kobayashi system, which is an established model for a semiconductor laser with delayed external feedback. We calculate the total memory capacity as well as the linear and nonlinear contributions using the method derived in [DAM12] and compare the results with the computed eigenvalue spectrum of the system. We discover a clear connection between memory capacity and eigenvalue spectrum. In particular, the highest linear memory capacity corresponds to spectra where a large number of eigenvalues are close to criticality (with small negative real parts) and non-resonant (with imaginary parts not resonant with the input timescale).

The paper is structured as follows. First, we give an overview of the methods used for calculating the memory capacity and the eigenvalue spectrum in Sec. II. After that, we present our results and discuss the impact of the eigenvalues on the performance and on the different nonlinear recall contributions, first for a reservoir formed by a solitary laser and then for a laser with an external cavity.

II Methods

The reservoir computing scheme employs the idea of a dynamical reservoir, which projects input information into a high-dimensional phase space. The nonlinear response of the reservoir is then used by a linear readout to approximate a specific task depending on the input. Often the reservoir consists of many nodes with relatively simple dynamics (for example, a tanh-function [JAE01]), into which the input enters via a weight matrix. Afterward, the response is read out and linearly combined to generate an output. The idea is to minimize the Euclidean distance between the generated output and the target. This approach is particularly well suited for time-dependent tasks, because the dynamical system that forms the reservoir has intrinsic memory and thus acts as a memory kernel.

The modified approach introduced in [APP11] uses a single node with delay as a reservoir, in which the output dimensions are distributed over time. A mask is used to feed the input into the system, which produces a high-dimensional response. These responses are saved over time and used for the linear readout approximation. A sketch of the setup is shown in Fig. 1. In the following, we give a short overview of the quantities and notations used in this paper. We also refer to our previous works [KOE20a, STE20], where a detailed explanation is given of how the reservoir setup is operated and how the task-independent memory capacities are computed.

Fig. 1: Scheme of time-multiplexed reservoir computing with a laser (time multiplexing, virtual nodes, linear combination of readouts, input via the pump).

II-A Time-Multiplexed Reservoir Computing

Let us briefly recall the main ingredients of the time-multiplexed reservoir computing scheme [APP11, KOE20a, STE20]. An input vector u = (u_1, …, u_K) enters the system componentwise at times kT, k = 1, …, K. The time T between two inputs is called the clock cycle and describes the period length during which one input is applied to the system. Inside each interval of one clock cycle T, a T-periodic mask function g(t) is applied to the inputs (see Fig. 1). The mask is piecewise constant on N_V intervals, each of length θ = T/N_V, corresponding to N_V virtual nodes. The values of the mask function play the same role as the input weights in spatially extended reservoirs, with the difference that the input weights are now distributed over time.
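As an illustration, the time-multiplexed input construction described above can be sketched as follows (a minimal sketch; the mask values, the number of virtual nodes, and the clock cycle are hypothetical stand-ins, and the paper's actual simulations are in C):

```python
import numpy as np

def make_mask(n_virtual, rng):
    # Random piecewise-constant mask weights, one value per virtual node.
    return rng.uniform(-1.0, 1.0, size=n_virtual)

def masked_input(u, mask, t, T):
    """Value of the masked input g(t)*u_k at continuous time t: the input u_k
    active during the k-th clock cycle, weighted by the mask segment
    (virtual node) that t falls into."""
    k = int(t // T)              # index of the current clock cycle
    theta = T / len(mask)        # virtual node separation theta = T / N_V
    j = int((t % T) // theta)    # index of the active virtual node
    return mask[j] * u[k]

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=4)   # four inputs, one per clock cycle
mask = make_mask(5, rng)
# During the first clock cycle (T = 1.0) the same input u[0] is seen
# through all 5 mask values in turn:
values = [masked_input(u, mask, t, T=1.0) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

The same input is thus presented repeatedly with different weights, which is what replaces the input weight matrix of a spatially extended reservoir.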

The system responses are collected in the state matrix S. More specifically, the elements of the state matrix are s_{kj} = x((k − 1)T + jθ), with k = 1, …, K and j = 1, …, N_V, where x(t) is the state of the dynamical element of the reservoir at time t, e.g., a variable of the delay system in simulations, or the laser intensity in an experimental realization.

A linear combination of the state matrix columns is given by o = S w, where w is a vector of weights. Such a combination is trained to find a least-squares approximation to some target vector ŷ,

w = argmin_w ( ||S w − ŷ||² + β ||w||² ),

where ||·|| is the Euclidean norm and β is a Tikhonov regularization parameter. A solution to this problem is known to satisfy

w = (Sᵀ S + β I)⁻¹ Sᵀ ŷ

when Sᵀ S + β I is invertible. In the case of our Lang-Kobayashi model, since the physical system is intrinsically noisy, we used state noise regularization [JAE01, Jaeger2007] and set β = 0.
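The readout training can be sketched as follows (a minimal sketch of the Tikhonov-regularized normal equations; the state matrix here is a random stand-in, not a laser response):

```python
import numpy as np

def train_readout(S, y, lam=0.0):
    """Tikhonov-regularized least-squares readout:
    w = (S^T S + lam * I)^{-1} S^T y.
    lam = 0 recovers plain least squares, as used in the paper
    together with state noise regularization."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ y)

# Tiny illustration with a hypothetical 200 x 3 state matrix:
rng = np.random.default_rng(1)
S = rng.normal(size=(200, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = S @ w_true                  # noise-free target
w = train_readout(S, y)         # recovers w_true up to numerical error
```

For noisy state matrices, a small positive `lam` (or, equivalently, added state noise) keeps the normal equations well conditioned.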

To quantify the system's performance, we use the normalized root mean square error (NRMSE) between the approximation o and the target ŷ,

NRMSE = sqrt( (1/K) Σ_{k=1}^{K} (o_k − ŷ_k)² / var(ŷ) ),

where var(ŷ) is the variance of the target values.


II-B Memory Capacity

Dambre et al. have shown in [DAM12] that the computational capability of a reservoir system can be quantified via an orthonormal set of basis functions on a sequence of inputs. Here we give a recap of the quantities introduced in [KOE20a]. In particular, the capacity to fulfill a certain task is given by

C = 1 − NRMSE².   (3)

The capacity equals 1 if NRMSE = 0 and the reservoir computer computes the task perfectly; it equals 0 if the reservoir cannot compute the task at all, and it lies between 0 and 1 if the reservoir is partially capable of fulfilling the task. In Sec. App. A, we explain how Eq. (3) follows from the corresponding expression in [DAM12]. Further, following Dambre et al. [DAM12], we use finite products of normalized Legendre polynomials as a basis of the Hilbert space of all possible transformations (thus tasks with targets ŷ) on an input sequence u. As inputs into the system, we use uniformly distributed random numbers u_k, which are independently and identically drawn from [−1, 1]. This yields uncorrelated inputs and thus uncorrelated memory capacities. Feeding the input sequence of random numbers into the system yields a reservoir response S. Formally, the memory capacity (Eq. (3)) is defined for an infinitely long input sequence. To approximate it numerically, we use a long but finite sequence (see Sec. II-F).

In order to describe a task, the target vector ŷ is defined componentwise as

ŷ_k = Π_i P_{d_i}(u_{k−i}),   (4)

where {d_i} is a sequence of degrees such that the Legendre polynomial P_{d_i} of degree d_i is applied to the input u_{k−i} fed i steps into the past. The product of all such polynomials is used to generate the task (target vector ŷ). The collection of all tasks (4) for all possible degree sequences {d_i} spans the Hilbert space of all possible transformations [DAM12].
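A sketch of how such Legendre-product targets and the capacity of Eq. (3) can be computed (illustrative Python; the delay-to-degree assignment, sequence length, and unnormalized Legendre polynomials are simplifying choices of this sketch):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def P(d, x):
    """Legendre polynomial of degree d evaluated at x."""
    c = np.zeros(d + 1)
    c[d] = 1.0
    return legval(x, c)

def legendre_target(u, degrees):
    """Target y_k = prod over (delay i, degree d) of P_d(u_{k-i}).
    `degrees` maps delay steps into the past to polynomial degrees."""
    K = len(u)
    i_max = max(degrees)
    y = np.ones(K - i_max)
    for i, d in degrees.items():
        y *= P(d, u[i_max - i : K - i])
    return y

def capacity(o, y):
    """C = 1 - NRMSE^2 = 1 - mean((o - y)^2)/var(y), clipped to [0, 1]."""
    nrmse2 = np.mean((o - y) ** 2) / np.var(y)
    return max(0.0, 1.0 - nrmse2)

rng = np.random.default_rng(2)
u = rng.uniform(-1.0, 1.0, size=1000)
y = legendre_target(u, {1: 2})   # quadratic recall of u_{k-1}
```

A perfect readout (`o = y`) yields capacity 1, a constant readout yields capacity 0, matching the interpretation given above.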

Further, to define the linear and nonlinear memory capacities, one uses special tasks for which the sum of the degrees is constant,

Σ_i d_i = D.   (5)

Clearly, there are many such possible tasks, one for each sequence {d_i} with Σ_i d_i = D. The memory capacity of degree D is defined as the sum of the capacities computed using Eq. (3) over all tasks (5) of degree D:

MC_D = Σ_{{d_i}: Σ_i d_i = D} C[{d_i}].   (6)

The well-known linear memory capacity corresponds to D = 1. The total memory capacity is then given by the sum of the memory capacities of all degrees,

MC = Σ_{D=1}^{∞} MC_D.   (7)

It was shown in [DAM12] that MC is limited by the readout dimension, which equals the number of virtual nodes N_V. An intuitive explanation is the following. The linear readout of the reservoir computing scheme can be considered a linear combination of the columns of the state matrix S. Thus the number of dimensions this basis can approximate is given by the number of linearly independent readouts. If the system states are linearly independent, the readout can approximate at most N_V different dimensions, which in our case means N_V different tasks constructed from Eq. (4). A more rigorous explanation is given by Dambre et al. in [DAM12].

II-C NARMA10

In addition to the memory capacities, we evaluate the normalized root mean square error (NRMSE) of the NARMA10 task. NARMA10 is an often-used benchmark test that combines linear and nonlinear memory transformations. It is given by the iterative formula

A_{k+1} = 0.3 A_k + 0.05 A_k ( Σ_{i=0}^{9} A_{k−i} ) + 1.5 u_{k−9} u_k + 0.1.   (8)

Here, A_k is the iteratively generated target value and u_k is an independent and identically drawn uniformly distributed random number in [0, 0.5]. The reservoir is fed with the random numbers u_k and has to predict the value of A_{k+1}.
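The NARMA10 recursion can be generated as follows (a sketch using the common convention of inputs drawn uniformly from [0, 0.5]; sequence length and seed are arbitrary):

```python
import numpy as np

def narma10(u):
    """NARMA10 target sequence for input u, following the standard
    recursion of Eq. (8); the first ten targets remain zero because
    the recursion needs ten past values."""
    A = np.zeros(len(u))
    for k in range(9, len(u) - 1):
        A[k + 1] = (0.3 * A[k]
                    + 0.05 * A[k] * np.sum(A[k - 9 : k + 1])
                    + 1.5 * u[k - 9] * u[k]
                    + 0.1)
    return A

rng = np.random.default_rng(3)
u = rng.uniform(0.0, 0.5, size=500)
A = narma10(u)   # the reservoir is trained to predict A[k+1] from the u_k
```

Because the recursion mixes a linear autoregressive part, a product over ten past targets, and a product of two inputs nine steps apart, a good NARMA10 score requires both linear and nonlinear memory, which is why it complements the capacity measures above.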

II-D Lang-Kobayashi model

We use the Lang-Kobayashi laser as a reservoir. This model applies to Class B lasers with external feedback operating at low feedback strength, where Class B refers to the definition of laser classes from [ARE84]. The Lang-Kobayashi equations have been studied widely, successfully modeling semiconductor lasers [ALS96, HEI99b] that exhibit complex dynamics and bifurcation scenarios [ERN95a, ROT07, HEI99b]. The equations of motion are given by [LAN80b]


dE/dt = (1 + iα) N E + κ e^{iφ} E(t − τ) + √D ξ(t),   (9)

ε dN/dt = p + η g(t) u(t) − N − (1 + 2N) |E|².   (10)

The parameter scaling was chosen as in [YAN10], with a modification to allow for the information input. The system time is normalized to the photon lifetime. Here, E is the complex electric field, N is the charge carrier inversion, and ξ(t) describes spontaneous emission modeled by Gaussian white noise. p is a dimensionless pump rate, η is the input strength of the information fed into the system via electric injection, g(t) is the masking function, u(t) is the input, α is the amplitude-phase coupling, κ is the feedback strength, φ is the feedback phase, τ is the delay time, and √D is the noise amplitude. ε is the time scale ratio, i.e., if ε ≪ 1, the photons are much slower than the electrons, the laser is effectively described by the complex electric field equation alone, and Eq. (9) is thus a Class A laser equation with external feedback.

II-E Calculating the Eigenvalue Spectrum

The goal of this paper is to find a relation between the (nonlinear) memory recall capability and the eigenvalue spectrum. The latter can be computed with much less numerical effort and could then be used to predict good parameter ranges for our reservoir. To compute the eigenvalue spectrum, we use two methods: the first is an analytical approximation in the long-delay limit [LIC11], while the second relies on numerical computation with the DDE-biftool software package [ENG02, SIE14a, JAN10].

To begin with, we give a short overview of the first method from [LIC11], which provides an approximation of the spectrum of long-delay systems; in [YAN10], it was applied to the Lang-Kobayashi system. As delay-based reservoir computing mostly operates with a delay that is long compared to the timescales of the local dynamics, this is a valid approximation that provides a general tool for analyzing reservoirs of this type.

The characteristic equation for the eigenvalues λ is obtained through the linearization around a steady state, and it reads as

det( −λ I + A + B e^{−λτ} ) = 0,   (11)

with some constant matrices A and B, where I is the identity matrix. For large delay τ, its solutions can be decomposed into two parts: one part scales as λ = γ/τ + iω and is called the pseudocontinuous spectrum, while the other is a strongly unstable spectrum whose real parts do not scale with τ. The strongly unstable spectrum must be absent for reservoir computing applications since, otherwise, the reservoir's state is strongly unstable, and the echo state (or fading memory) property [JAE01] is lost to a large extent. Hence, we focus on the pseudocontinuous spectrum, which can be obtained by introducing the ansatz


λ = γ/τ + iω,   (12)

where γ and ω are two new real variables. Substituting Eq. (12) into (11), one obtains in the leading order

det( −iω I + A + B Y ) = 0,  with Y = e^{−γ} e^{−iωτ}.   (13)

Equation (13) is a polynomial with respect to Y. If Y_k(ω) are the solutions of this polynomial, then

γ_k(ω) = −ln |Y_k(ω)|,   (14)

and γ_k(ω)/τ are the rescaled real parts of the eigenvalues of the pseudocontinuous spectrum. More precisely, for large τ, the eigenvalues are approximated by the curves γ_k(ω)/τ + iω.
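To make the construction concrete, the following toy example evaluates the pseudocontinuous branch for a scalar delay-differential equation, x'(t) = a x(t) + b x(t − τ), rather than the Lang-Kobayashi system; the coefficients a and b are arbitrary illustrative choices:

```python
import math

def pseudo_gamma(omega, a, b):
    """Rescaled real part gamma(omega) of the pseudocontinuous spectrum for
    the scalar toy DDE x'(t) = a*x(t) + b*x(t - tau) (an illustration of
    Eqs. (12)-(14), not the Lang-Kobayashi system). Its characteristic
    equation -lambda + a + b*exp(-lambda*tau) = 0 with the ansatz
    lambda = gamma/tau + i*omega gives, in leading order,
    Y(omega) = (i*omega - a) / b  and  gamma(omega) = -ln|Y(omega)|."""
    Y = complex(-a, omega) / b
    return -math.log(abs(Y))

a, b = -1.0, 0.5
g0 = pseudo_gamma(0.0, a, b)   # branch maximum at omega = 0, equal to -ln 2
curve = [pseudo_gamma(w / 10.0, a, b) for w in range(-50, 51)]
# For |b| < |a| (with a < 0) the whole branch lies in the left half plane,
# i.e., the reservoir keeps the echo state property.
```

The branch γ(ω) = ln|b| − ln|iω − a| is maximal at ω = 0 and decays for larger |ω|, mirroring the shape of the pseudocontinuous spectra discussed below.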

In the case of the Lang-Kobayashi system, there are two solution branches γ_{1,2}(ω) for the real parts of the pseudocontinuous Lyapunov spectrum, given by Eqs. (15) and (16); see the derivation in [YAN10]. They depend on the constant intensity and the corresponding inversion at the external cavity mode (ECM). ECMs are solutions of the Lang-Kobayashi system of the form E(t) = E_0 e^{iω_c t}, N(t) = N_0 with constant E_0, N_0, and ω_c, which play the role of equilibria. Due to the rotational symmetry of the system, each of these solutions can be transformed into an equilibrium with the corresponding characteristic equation (11). The Lang-Kobayashi system possesses many ECMs; here we consider the most stable one [YAN10].

In order to approximate the imaginary parts of the pseudocontinuous spectrum, we consider the argument of (14) and obtain

ω_{k,m} τ = 2π m − arg Y_k(ω_{k,m}),   (17)

where ω_{k,m} is the imaginary part of the m-th eigenvalue on the k-th branch. For the purpose of this paper, we need an approximation of the eigenvalues around the origin. As one can show using (17), for large τ, the imaginary parts of these eigenvalues can be approximated as

ω_{k,m} ≈ (2π m − arg Y_k(0)) / τ,   (18)

as soon as |m| ≪ τ; see also [YAN05, YAN14a] for more detailed estimates. In the case of the Lang-Kobayashi system, the roots Y_k(0) are real; hence, we have either arg Y_k(0) = 0 or arg Y_k(0) = π. This leads to

ω_{k,m} ≈ π m' / τ,   (19)

with even m' for arg Y_k(0) = 0 and odd m' for arg Y_k(0) = π. Hence, all imaginary parts are integer multiples of π/τ. In particular, whenever the clock cycle T is commensurate with the delay, e.g., for T an integer multiple of τ, the product ω_{k,m} T is proportional to an integer number of π. This kind of resonance occurs for all considered eigenvalues (independent of k), and it plays an important role in the linear memory loss of the reservoir, which is discussed in Sec. III-B below.

The second method for computing the eigenvalue spectrum is based on DDE-biftool [ENG02, SIE14a], a path-continuation package for Matlab capable of computing the eigenvalues numerically. In our case, we compute the 100 eigenvalues with the largest real parts and compare them with the memory capacity results.

We also consider the case of no feedback, κ = 0. This yields a solitary semiconductor laser that can be tuned from being an effectively 1-dimensional problem (Class A-like, small ε) to a 2-dimensional problem (Class B-like, larger ε). We will use a linearization and a numerical evaluation of the eigenvalue problem. Even though the laser system is 3-dimensional, it possesses a rotational symmetry that allows the dimension to be reduced by one.

II-F Simulation description

Simulations have been performed in C with standard libraries, except for the linear algebra calculations, which were done via the C++ linear algebra library "Armadillo" [SAN16]. A 4th-order Runge-Kutta method was applied to numerically integrate the delay-differential equations (9) and (10), with a fixed integration step in time units of the photon lifetime. The noise strength was kept fixed in all simulations. After simulating the system without reservoir inputs to let transients decay, a buffer time of 100000 inputs was applied (this is excluded from the training process). In the training process, 250000 inputs were used to obtain sufficient statistics. Afterward, the memory capacities were calculated; a separate testing phase is not necessary for this. All possible combinations of the Legendre polynomials up to a maximum degree and up to 500 input steps into the past were considered. Capacities below a small threshold were excluded because of the finite statistics. For calculating the matrix inverse, the Moore-Penrose pseudoinverse from "Armadillo" was used. In the case of the NARMA10 task, 25000 inputs each were used for training and testing. For the piecewise-constant T-periodic masking function g(t), independent and identically distributed uniform random numbers were used.
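The integration scheme can be sketched as follows (a simplified scalar Runge-Kutta sketch in Python with the delayed state held constant within one step; the paper's production code is in C and integrates the full Eqs. (9)-(10)):

```python
import math

def rk4_dde(f, x0, tau, dt, n_steps):
    """Fixed-step 4th-order Runge-Kutta for x'(t) = f(x(t), x(t - tau)) with
    constant history x(t) = x0 for t <= 0. The delayed value is read from a
    ring buffer on the integration grid and held constant within one step --
    a simplification of schemes that also interpolate the history at the
    Runge-Kutta substages. Illustrative sketch, not the paper's code."""
    n_delay = int(round(tau / dt))
    buf = [x0] * (n_delay + 1)        # buf[0] = x(t - tau), buf[-1] = x(t)
    for _ in range(n_steps):
        x, xd = buf[-1], buf[0]
        k1 = f(x, xd)
        k2 = f(x + 0.5 * dt * k1, xd)
        k3 = f(x + 0.5 * dt * k2, xd)
        k4 = f(x + dt * k3, xd)
        buf.append(x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
        buf.pop(0)
    return buf[-1]

# Toy stable DDE x' = -x + 0.5*x(t - 1); the solution decays towards zero:
x_end = rk4_dde(lambda x, xd: -x + 0.5 * xd, x0=1.0, tau=1.0,
                dt=0.01, n_steps=2000)     # integrate up to t = 20
```

Choosing the integration step as an integer fraction of the delay keeps the delayed state exactly on the grid, which is the usual convention for fixed-step delay integrators.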

For all simulations, the input strength η was fixed to a small value, which guarantees nearly linear responses of the reservoir and, hence, the relevance of the eigenvalue analysis.

II-G Geometrical intuition

In this paper we use two quantities, the angular distance Ω and the distance reduction d, to approximate the memory capacity properties of the reservoir computer. For these two values we would like to give a geometrical intuition, shown in Fig. 2.

The first value, Ω = Im(λ) T, we call the relative angular distance between two inputs, where Im(λ) denotes the imaginary part of an eigenvalue λ. Here λ is a critical eigenvalue, i.e., one having its real part close to 0. Ω geometrically describes the angle between the two distance vectors a_1 and a_2 of the system's state at two instances in time separated by one clock cycle interval T. If this relative angular distance is a multiple of π, the responses tend to overlap, reducing the separability of the inputs and thus degrading the reservoir computing performance.

The second quantity, d = e^{Re(λ) T}, describes the distance reduction between two perturbed states, where Re(λ) denotes the real part of the eigenvalue λ. It characterizes the contraction of the system's state towards the new fixed point induced by a new reservoir input. To distinguish two responses for two different inputs, the distance between the two responses (see Fig. 2) must be large enough. On the other hand, if the reaction of the system is very fast, i.e., the eigenvalues are strongly negative, the system has a strong echo state property and thus low memory capacity for inputs lying more than a few (in the worst case even more than one) steps in the past. If the remaining information about the input fed n steps back degrades very quickly (strongly negative eigenvalues), the system's capability to recall it is lowered and at some point reaches the level of the system noise. The distance reduction d provides a good estimate for both of these properties.

Fig. 2: Sketch of the system response in phase space to a small input during one clock cycle T. The trajectory moves from a state x(t) to a new state x(t + T) (dotted black line). x* is the fixed point of the system induced by the new reservoir input. The red vectors a_1 and a_2 indicate the distances from this fixed point at the two instances in time. The distance reduction d describes the ratio of the magnitudes of the two vectors. The purple arrow describes the angular distance Ω covered in one clock cycle interval T. For simplicity, we excluded the trajectory responses for the different virtual nodes.

In this paper we will show that both quantities together pinpoint well-performing reservoir computing setups.
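Both quantities are cheap to evaluate once an eigenvalue is known; a minimal sketch (the eigenvalue and the clock cycle below are hypothetical illustrative values):

```python
import math

def angular_distance(eig, T):
    """Omega = Im(lambda) * T, the angle a perturbation along this
    eigendirection rotates during one clock cycle, reduced modulo 2*pi.
    Resonances (overlapping responses) occur at multiples of pi."""
    return (eig.imag * T) % (2.0 * math.pi)

def distance_reduction(eig, T):
    """d = exp(Re(lambda) * T), the factor by which a small perturbation
    along this eigendirection shrinks during one clock cycle."""
    return math.exp(eig.real * T)

lam = complex(-0.05, 3.1)            # hypothetical critical eigenvalue
T = 2.0
Omega = angular_distance(lam, T)     # 6.2 rad, just below 2*pi: near-resonant
d = distance_reduction(lam, T)       # exp(-0.1) ~ 0.905: slowly contracting
```

Averaging these two numbers over the leading eigenvalues yields the diagnostics used for the full Lang-Kobayashi system in Sec. III-B.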

Fig. 3: (a) Linear, quadratic, cubic, and total memory capacities plotted over the logarithmically scaled lifetime ratio ε. (b) The real and imaginary parts of the eigenvalues, showing the transition from a Class A to a Class B system. (c) The angular distance Ω between two inputs, taken modulo 2π. Results are shown for the solitary semiconductor laser system under a logarithmic scan of ε.

III Results

The following section is structured as follows. First, we discuss the Lang-Kobayashi laser without feedback (κ = 0), i.e., a Class B laser system, as a reservoir, in order to introduce the general concepts in a simple setting. Afterward, we activate the delayed feedback and consider the full Lang-Kobayashi system as a reservoir computer.

III-A Class B Reservoir

We first consider a solitary semiconductor laser as a reservoir. One has to think of the virtual nodes not as being located on a delay line, but rather as time-separated readouts of the system state that are used in a linear combination. We set κ = 0 in Eqs. (9)-(10) and use 10 virtual nodes (N_V = 10). For the considered parameter values and without input and noise (η = 0, D = 0), the system's solution converges to a single stable lasing equilibrium, for which we compute the two eigenvalues. Note that the threshold pump rate for the solitary laser lies at p = 0 in the chosen scaling, while it shifts with the feedback strength κ. The two eigenvalues are plotted in Fig. 3(b) as a function of ε, which gives, from left to right, the transition from a Class A to a Class B laser.

To compare the two eigenvalues with the recall capability of the laser, we plot the computed linear, quadratic, cubic, and total memory capacities in Fig. 3(a). The memory capacities do not change significantly for small ε, where the system corresponds to a Class A laser with an adiabatic approximation of the charge carriers. For these parameter values, as one can see from the real parts of the eigenvalues, one eigendirection is considerably faster than the other and can thus be ignored. For larger ε, the transition from a Class A to a Class B laser occurs, and the steady state becomes a focus. The additional degree of freedom of the charge carrier dynamics leads to an increase of the total memory capacity from about 5 to the theoretical maximum of 10.

Fig. 3(c) shows the angular distance Ω taken modulo 2π, which is based on the rotation of a small perturbation vector in the 2-dimensional phase space during the evolution over one clock cycle T (see Fig. 2). The discontinuities of Ω in Fig. 3(c) (indicated by vertical dashed purple lines in Fig. 3(a)) correspond to resonances, i.e., integer numbers of half-circle rotations. Comparing the memory capacity at these points in the Class B regime, one observes dips in the linear memory capacity and slight changes in the higher-order memories. This effect is pronounced if, at the same time, the real parts of the eigenvalues are close to 0. Since the degradation of the linear memory coincides with the discontinuities of Ω, it can be linked to an overlapping of the system's responses and to a decreasing linear separability of the output.

For a broader picture of the resonance effects, a 2-dimensional parameter scan was performed as a function of the timescale ratio ε and the clock cycle T (shown in Fig. 4). The linear, quadratic, cubic, and total memory capacities are color-coded in panels (a)-(d). Bright regions in (a)-(d) correspond to high memory capacities, dark blue regions to low memory capacities. The black dashed line shows the scan from Fig. 3. Purple solid lines show the parameter values where Ω is resonant. The influence of the angular distance is most prominent in Fig. 4(b), where dips are visible in the linear memory capacity. Its influence on the higher-order memory capacities is also detectable, but harder to describe, as both the quadratic and cubic memories either decrease or increase depending on the resonance line.
The solid red lines denote parameter values where the distance reduction d is constant for the two eigenvalues of the solitary laser system. The two red arrows indicate the direction in parameter space of decreasing Re(λ) and thus decreasing d. The memory capacities decrease together with d, which arises from the fact that a lower d corresponds to faster eigendirections and thus to a faster fading memory.
Combining the information about the two quantities Ω and d and comparing it with the memory capacity, we can pinpoint well-performing reservoir computing setups for the Class A and Class B laser systems. Namely, the linear memory capacity attains larger values in the absence of resonances and for values of d closer to 1. We now want to extend this knowledge to the case of a laser with external feedback.

Fig. 4: 2-dimensional parameter scan in the plane of clock cycle T and logarithmically scaled lifetime ratio ε, showing the total, linear, quadratic, and cubic memory capacities color-coded in panels (a)-(d). The purple and the two red solid lines show the parameter values where Ω is resonant and where d is constant, respectively, computed from the two eigenvalues of the Class B system shown in Fig. 3. The dashed black line indicates the parameter scan used in Fig. 3. Other parameters as in Fig. 3.

III-B Lang-Kobayashi system

Fig. 5: 2-dimensional parameter scan in the plane of clock cycle T and delay time τ. Color-coded are the (a) total, (b) linear, and (c) quadratic memory capacities. (d): Average angular distance ⟨Ω⟩ given by Eq. (20) for the first 100 eigenvalues. Parameter values where ⟨Ω⟩ is resonant are shown by solid purple lines in panels (a)-(c).

We now extend our results to the infinite-dimensional phase space of a semiconductor laser with delay, i.e., the Lang-Kobayashi system. In [PAQ12, ROE18a, ROE20, KOE20a, STE20] it was shown that resonances between the delay τ and the clock cycle T often decrease the memory capacity and thus the reservoir computing performance. Here we look at this phenomenon from another point of view, namely, as a resonance between T and the imaginary parts of the eigenvalues. We use the resonance property described in Sec. II-E: for certain resonant values of T, the product Im(λ_j) T is proportional to an integer number of π for all critical eigenvalues simultaneously. We computed the first 100 eigenvalues of the Lang-Kobayashi system using DDE-biftool. By superimposing all Ω_j = Im(λ_j) T, where j is the index of the j-th eigenvalue, we evaluate the resonance effects of the strongest eigendirections by computing the average angular distance

⟨Ω⟩ = (1/100) Σ_{j=1}^{100} ( Im(λ_j) T mod π ),   (20)

and compare the results with the linear, quadratic, and total memory capacities.

The comparison of the memory capacities and ⟨Ω⟩ is shown in Fig. 5, where a 2-dimensional parameter scan is plotted in the parameter plane of the delay time τ and the clock cycle T. Bright regions in (a)-(c) correspond to high memory capacities, dark blue regions to low memory capacities. Panel (d) shows ⟨Ω⟩ for the first 100 eigenvalues. Values close to 0 or π indicate parameters where all leading eigendirections possess resonant eigenvalues, i.e., they perform an integer number of half-circle rotations during one input time T. The solid purple lines in Fig. 5(a)-(c) indicate the resonant values of T. A match with the low memory capacities, especially for the linear memory, is clearly visible. Our results support the fact that the clock cycle T should be chosen off-resonant from the delay time τ; the eigenvalue analysis gives an additional explanation and intuition for why this is the case. Taking into account the resonance effect and our results from [KOE20a], we fix the delay time τ at an off-resonant value relative to T for all following simulations.

Fig. 6: Linear, quadratic, cubic, and total memory capacities are shown as orange, green, red, and blue lines as a function of the lifetime ratio ε. The average distance reduction ⟨d⟩ of the first 100 eigenvalues is plotted as a dashed black line. The increase of ⟨d⟩ coincides with the increase of the linear memory capacity.

As we have seen in Sec. III-A for a 2-dimensional reservoir, the reservoir performance decreases when the real part of the eigenvalues becomes strongly negative. In such a case, the reservoir ”forgets” the input too fast. Here we extend this idea to the case of the infinite-dimensional reservoir.

As long as the perturbation from the information fed into the system is small enough, one can think of all eigenvalues and their corresponding eigendirections as spanning the available phase space of the reservoir computer. Thus, a larger usable phase space volume can lead to a more promising reservoir computer. We introduce the average distance reduction

⟨d⟩ = (1/100) Σ_{j=1}^{100} e^{Re(λ_j) T},   (21)

which describes the average distance reduction of the 100 slowest eigendirections. Since only a finite number of complex eigenvalues lie to the right of any line parallel to the imaginary axis, all eigendirections except a finite number are strongly contracting, i.e., possess strongly negative real parts [HAL93]. This justifies considering only a finite number of eigenvalues in Eq. (21).

Figure 6 depicts the memory capacities and the average distance reduction ⟨d⟩ as a function of the timescale ratio ε, or, in other words, the evolution of the memory capacities along the transition from a Class A to a Class B laser with delayed feedback. Similarly to the case without feedback in Fig. 3, the memory capacity stays approximately constant for small ε and increases when the additional dimensions become available to the reservoir at larger ε. The increase of ⟨d⟩ coincides with the increase of the linear memory capacity. The higher orders show a similar trend but are, in general, more involved and should be investigated more deeply. Thus, knowledge of the eigenvalues provides a qualitative prediction of the linear memory capacity.

To give a broader overview, we perform a 2-dimensional parameter scan over the feedback strength κ and the pump p (shown in Fig. 7) and plot the linear, quadratic, and total memory capacities as a color code. Bright regions correspond to high memory capacities, dark regions to low memory capacities. Additionally, in Fig. 7(d), the average distance reduction ⟨d⟩ is color-coded within the same 2-dimensional parameter plane spanned by κ and p. Comparing the three memory capacity scans with ⟨d⟩, we see a close relationship between them. Thus, ⟨d⟩ is a very good indicator for choosing well-performing reservoir computers. This saves a great deal of computational effort, as the eigenvalues can be computed in a fraction of the time needed to compute the memory capacities.

Fig. 7: 2-dimensional parameter scan in the plane of feedback strength κ and pump p. Color-coded are the (a) linear, (b) quadratic, and (c) total memory capacities, and (d) the average distance reduction ⟨d⟩ for the first 100 eigenvalues. The two crosses indicate the parameter values used in Fig. 9. Remaining parameters as given in the text.

To illustrate possible configurations of eigenvalues and their connection to ⟨d⟩, we chose two different parameter setups in Fig. 8, (i) and (ii), with all other parameters fixed. The two parameter setups are marked as black crosses in Fig. 7 and correspond to operation points close to and well above the lasing threshold, respectively. The first parameter set (Fig. 8(i)) corresponds to an eigenvalue spectrum for an operation point close above threshold with a low power output. Here the laser possesses many eigenvalues with real parts close to 0; it thus has many slowly contracting eigendirections, which means ⟨d⟩ is closer to 1. The second parameter set (Fig. 8(ii)) corresponds to a laser operated high above threshold. This laser has fewer slowly contracting eigendirections, i.e., ⟨d⟩ is closer to 0. Computing ⟨d⟩ confirms this picture, yielding a considerably larger value for the first parameter set than for the second.

Fig. 8: Pseudocontinuous Lyapunov spectra for two different parameter sets, given by Eq. (15) (bright) and Eq. (16) (dark), plotted for the two operation points (i) and (ii) marked in Fig. 7. The chosen value of ε corresponds to a solitary laser operating between Class A and Class B. For pump values slightly above threshold (i), the Lyapunov spectrum has more eigenvalues with real parts close to 0.

Now we use the insights gained from the distance reduction and the angular distance, and test the reservoir computing performance for the two parameter sets from Fig. 8(i) and Fig. 8(ii), marked as black crosses in Fig. 7. The performance is quantified by evaluating both the memory capacity and the prediction error (NRMSE) for the NARMA10 task, shown in Fig. 9(b) and Fig. 9(a), respectively.

On the horizontal axis, we change the number of virtual nodes, i.e., we increase the number of readout dimensions, which should naively increase the performance of the reservoir. While doing so, we keep the distance between the virtual nodes fixed for five different cases, shown in black and red with decreasing brightness for the optimized (Fig. 8(i)) and non-optimized (Fig. 8(ii)) points, respectively. Increasing the virtual-node distance should reduce the linear dependence of the nodes, as the time between two responses is increased. This obviously depends on the reaction time of the system, which is in turn set by its eigenvalues. Thus the influence of increasing the node distance is pronounced for the spectrum with slowly contracting eigendirections (Fig. 8(i)) compared to the one with fast eigendirections (Fig. 8(ii)). Increasing the node distance also effectively increases the clock cycle and thus the delay time. We emphasize that this does not alter the general trend of (i) having many slowly contracting eigendirections compared to (ii).
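The bookkeeping behind this scan is simple enough to state as code. A minimal sketch (function names are our own; in time-multiplexed reservoir computing one clock cycle spans all virtual nodes, so the clock cycle grows linearly with either the node count or the node distance):

```python
def clock_cycle(n_virtual_nodes: int, theta: float) -> float:
    """Clock cycle T spanned by n_virtual_nodes virtual nodes
    separated by a node distance theta: T = n_virtual_nodes * theta."""
    return n_virtual_nodes * theta

def node_times(n_virtual_nodes: int, theta: float):
    """Read-out times of the virtual nodes within one clock cycle."""
    return [theta * (i + 1) for i in range(n_virtual_nodes)]
```

This makes the trade-off explicit: adding readout dimensions (more virtual nodes) or decorrelating them (larger theta) both lengthen the clock cycle, and hence lower the processing speed of the reservoir.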

The results indicate that even though the number of virtual nodes increases, the NARMA10 error for the case with the low distance reduction (ii) does not fall below a certain level. The case with the high distance reduction, our optimal case (i), on the other hand reaches very small errors, a factor of 3 better than the low-distance-reduction case (ii). We also emphasize that the simulation was performed with a high noise strength; simulating the system without noise, even lower NARMA10 errors (NRMSE) were reached. We conclude that a high distance reduction is very beneficial for the performance.
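The NARMA10 benchmark and the NRMSE figure of merit used above follow standard definitions; a self-contained sketch (the uniform input range [0, 0.5] is the common convention for NARMA10 and is assumed here):

```python
import numpy as np

def narma10(u):
    """Standard NARMA10 target series driven by input u (commonly u ~ U[0, 0.5]):
    y[t+1] = 0.3 y[t] + 0.05 y[t] * sum(y[t-9:t+1]) + 1.5 u[t-9] u[t] + 0.1."""
    y = np.zeros(len(u))
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

def nrmse(target, prediction):
    """Root-mean-square error normalized by the target's standard deviation;
    a trivial predictor outputting the target mean scores NRMSE = 1."""
    return np.sqrt(np.mean((target - prediction) ** 2)) / np.std(target)
```

Because the task mixes a 10-step memory requirement with a quadratic nonlinearity, it probes exactly the linear and nonlinear memory capacities scanned in Fig. 7.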

As a dashed black line in Fig. 9(a), we additionally show the minimum error reached by a linear regression without a reservoir. Every reservoir setup with results below this line performs better and can thus be considered an improvement on the NARMA10 task. We include this to emphasize the reduction of the NARMA10 error (NRMSE) achieved via the eigenvalue analysis: an improvement of about a factor of 4 is reached compared to the linear regression without a reservoir.

The total memory capacities for the two cases (i) and (ii) are shown in Fig. 9(b). We see the same trend: the memory capacity saturates for the case with low distance reduction, whereas the improved case with the highest distance reduction continues to increase its memory capacity for higher numbers of virtual nodes. The results suggest that operation points with high distance reduction (solid lines in Fig. 9) point to well-performing reservoir computers.

Fig. 9: Computation error NARMA10 NRMSE (a) and total memory capacity (b) for the two parameter values (dashed and solid lines; see also the crosses in Fig. 7). On the x-axis, the number of virtual nodes is shown. Different brightnesses correspond to different distances between the virtual nodes. The dashed black line shows the minimum error reached with a linear regression without a reservoir.

IV Conclusion

We have shown that the eigenvalue spectrum analysis of the Lang-Kobayashi system is capable of predicting good reservoir computing setups. Because of the available analytical and numerical tools for the description of the eigenvalue spectrum, such an analysis can be readily applied to other dynamical systems used as reservoirs with operating points close to an equilibrium.

As to the relation between the eigenvalue spectrum and the performance of delay-based reservoir computing, the central message of this paper is twofold. First, the eigenvalues must be off-resonant, where the resonance condition is given in terms of the imaginary parts of the eigenvalues; namely, the imaginary parts should stay away from the resonant values. Importantly, such resonances appear for all critical eigenvalues at almost the same parameter values, due to general properties of the spectra of delay systems with large delay [LIC11]. Therefore, the off-resonance condition plays an important role even when the reservoir's effective dimensionality is high.

The second conclusion is that, for optimal performance, the spectrum must be close to criticality. This closeness is measured by the real parts of the eigenvalue spectrum, which should be negative and close to zero. In this paper, we propose a measure for this closeness, given by Eq. (21).

Appendix A Memory capacity expression (3)

We show how the expression for the memory capacity from [DAM12] can be rewritten in the form of Eq. (3). Following Dambre et al. [DAM12], the capacity to approximate a target $\hat{y}$ is given by

$$C = \frac{\langle s\,\hat{y}\rangle^{T}\,\langle s s^{T}\rangle^{-1}\,\langle s\,\hat{y}\rangle}{\langle \hat{y}^{2}\rangle}.$$
Here $s_{ki}$ is the $i$-th readout of the system responses for the $k$-th input-target pair, $\langle\cdot\rangle$ is the average over all input-output pairs, and $\langle s s^{T}\rangle^{-1}$ is the inverse of the correlation matrix $\langle s s^{T}\rangle$. Writing out the averages over all $N$ input-output pairs yields

$$C = \frac{\sum_{i,j}\Big(\tfrac{1}{N}\sum_{k} s_{ki}\hat{y}_{k}\Big)\big(\langle s s^{T}\rangle^{-1}\big)_{ij}\Big(\tfrac{1}{N}\sum_{k} s_{kj}\hat{y}_{k}\Big)}{\langle\hat{y}^{2}\rangle}.$$
In the denominator, $\langle\hat{y}^{2}\rangle$ can be substituted with the squared norm of the target vector, $\lVert\hat{y}\rVert^{2}/N$. The first term, $s_{ki}\hat{y}_{k}$, is the $i$-th system response to the $k$-th input-output pair ($i$-th column and $k$-th row of the state matrix $S$) multiplied with the $k$-th target. Summed over all $N$ input-output pairs, this is the same as the $i$-th entry of the matrix product

$$\sum_{k=1}^{N} s_{ki}\,\hat{y}_{k} = \big(S^{T}\hat{y}\big)_{i}.$$
The same reasoning applies to the sum over $s_{kj}\hat{y}_{k}$, resulting in

$$\sum_{k=1}^{N} s_{kj}\,\hat{y}_{k} = \big(S^{T}\hat{y}\big)_{j}.$$
$\big(S^{T}\hat{y}\big)^{T}$ is just the transposed case of $S^{T}\hat{y}$, thus $\big(S^{T}\hat{y}\big)^{T} = \hat{y}^{T}S$. Summing over all responses $i$ and $j$, with all factors of $N$ cancelling (since $\langle s s^{T}\rangle^{-1} = N\,\big(S^{T}S\big)^{-1}$), is equivalent to the matrix product

$$C = \frac{\hat{y}^{T} S\,\big(S^{T}S\big)^{-1} S^{T}\hat{y}}{\lVert\hat{y}\rVert^{2}},$$

with which we have reached Eq. (3).
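The identity behind this derivation can be checked numerically: the matrix form of the capacity equals one minus the normalized minimal MSE of the optimal linear readout. A small sketch with synthetic data (the state matrix S and the target here are random placeholders, not reservoir responses):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_nodes = 200, 10                     # input-output pairs, readout dimensions
S = rng.standard_normal((N, n_nodes))    # k-th row: responses to the k-th input
y = rng.standard_normal(N)               # target vector

# Capacity in the matrix form of Eq. (3).
C_matrix = y @ S @ np.linalg.inv(S.T @ S) @ S.T @ y / (y @ y)

# Capacity as 1 - (minimal MSE of the linear readout) / <y^2>.
w, *_ = np.linalg.lstsq(S, y, rcond=None)
C_regression = 1.0 - np.mean((y - S @ w) ** 2) / np.mean(y ** 2)

assert np.isclose(C_matrix, C_regression)
assert 0.0 <= C_matrix <= 1.0
```

The agreement holds for any full-rank S, since both expressions describe the squared length of the orthogonal projection of the target onto the span of the readout dimensions, normalized by the target's squared length.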



The authors thank Florian Stelzer and Mirko Goldmann for fruitful discussions. This study was funded by the Deutsche Forschungsgemeinschaft (DFG) in the framework of SFB 910 and project 411803875.