I Introduction
One approach to forecasting the state of a dynamical system starts by using whatever knowledge and understanding is available about the mechanisms governing the dynamics to construct a mathematical model of the system. Following that, one can use measurements of the system state to estimate initial conditions for the model that can then be integrated forward in time to produce forecasts. We refer to this approach as knowledge-based prediction. The accuracy of knowledge-based prediction is limited by any errors in the mathematical model. Another approach that has recently proven effective is to use machine learning to construct a model purely from extensive past measurements of the system state evolution (training data). Because the latter approach typically makes little or no use of mechanistic understanding, the amount of training data and the computational resources required can be prohibitive, especially when the system to be predicted is large and complex. The purpose of this paper is to propose, describe, and test a general framework for combining a knowledge-based approach with a machine learning approach to build a hybrid prediction scheme with significantly enhanced potential for performance and feasibility of implementation as compared to either an approximate knowledge-based model acting alone or a machine learning model acting alone. The results of our tests of our proposed hybrid scheme suggest that it can have wide applicability for forecasting in many areas of science and technology. We note that hybrids of machine learning with other approaches have previously been applied to a variety of other tasks, but here we consider the general problem of forecasting a dynamical system with an imperfect knowledge-based model, the form of whose imperfections is unknown. Examples of such other tasks addressed by machine learning hybrids include network anomaly detection Shon and Moon (2007), credit rating Tsai and Chen (2010), and chemical process modeling Psichogios and Ungar (1992), among others.

Another view motivating our hybrid approach is that, when trying to predict the evolution of a system, one might intuitively expect the best results when making appropriate use of all the available information about the system. Here we think of both the (perhaps imperfect) physical model and the past measurement data as being two types of system information which we wish to simultaneously and efficiently utilize. The hybrid technique proposed in this paper does this.
To illustrate the hybrid scheme, we focus on a particular type of machine learning known as ‘reservoir computing’ Jaeger (2001); Maass, Natschläger, and Markram (2002); Lukoševičius and Jaeger (2009), which has been previously applied to the prediction of low-dimensional systems Jaeger and Haas (2004) and, more recently, to the prediction of large spatiotemporal chaotic systems Pathak et al. (2017, 2018). We emphasize that, while our illustration is for reservoir computing with a reservoir based on an artificial neural network, we view the results as a general test of the hybrid approach. As such, these results should be relevant to other versions of machine learning Goodfellow, Bengio, and Courville (2016) (such as Long Short-Term Memory networks Hochreiter and Schmidhuber (1997)), as well as reservoir computers in which the reservoir is implemented by various physical means (e.g., electro-optical schemes Larger et al. (2012, 2017); Antonik, Haelterman, and Massar (2017) or Field Programmable Gate Arrays Haynes et al. (2015)). A particularly dramatic example illustrating the effectiveness of the hybrid approach is shown in Figs. 7(d,e,f) in which, when acting alone, both the knowledge-based predictor and the reservoir machine learning predictor give fairly worthless results (prediction time of only a fraction of a Lyapunov time), but, when the same two systems are combined in the hybrid scheme, good predictions are obtained for a substantial number of Lyapunov times. (By a ‘Lyapunov time’ we mean the typical time required for an e-fold increase of the distance between two initially close chaotic orbits; see Secs. IV and V.)

The rest of this paper is organized as follows. In Sec. II, we provide an overview of our methods for prediction by using a knowledge-based model and for prediction by exclusively using a reservoir computing model (henceforth referred to as the reservoir-only model). We then describe the hybrid scheme that combines the knowledge-based model with the reservoir-only model. In Sec. III, we describe our specific implementation of the reservoir computing scheme and the proposed hybrid scheme using a recurrent-neural-network implementation of the reservoir computer. In Secs. IV and V, we demonstrate our hybrid prediction approach using two examples, namely, the low-dimensional Lorenz system Lorenz (1963) and the high-dimensional, spatiotemporally chaotic Kuramoto-Sivashinsky system Kuramoto and Tsuzuki (1976); Sivashinsky (1977).

II Prediction Methods
We consider a dynamical system for which there is available a time series of a set of measurable state-dependent quantities, which we represent as the M-dimensional vector u(t). As discussed earlier, we propose a hybrid scheme to make predictions of the dynamics of the system by combining an approximate knowledge-based prediction via an approximate model with a purely data-driven prediction scheme that uses machine learning. We will compare predictions made using our hybrid scheme with the predictions of the approximate knowledge-based model alone and predictions made by exclusively using the reservoir computing model.

II.1 Knowledge-Based Model
We obtain predictions from the approximate knowledge-based model acting alone by assuming that the knowledge-based model is capable of forecasting u(t) for t > 0 based on an initial condition u(0) and possibly recent values of u(t) for t < 0. For notational use in our hybrid scheme (Sec. II.3), we denote integration of the knowledge-based model forward in time by a time duration Δt as

u_K(t + Δt) = K[u(t)].   (1)
We emphasize that the knowledge-based one-step-ahead predictor K is imperfect and may have substantial unwanted error. In our test examples in Secs. IV and V, we consider prediction of continuous-time systems and take the prediction system time step Δt to be small compared to the typical time scale over which the continuous-time system changes. We note that while a single prediction time step (Δt) is small, we are interested in predicting for a large number of time steps.
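As a concrete illustration, the one-step-ahead map K of Eq. (1) can be realized by wrapping a single integration step of the (imperfect) model. The sketch below is a minimal Python/NumPy example; the oscillator model and its mistuned frequency are illustrative stand-ins, not the systems used in this paper.

```python
import numpy as np

def rk4_step(f, u, dt):
    # One fourth-order Runge-Kutta step of du/dt = f(u).
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical imperfect model: a harmonic oscillator whose frequency is
# slightly wrong relative to the "true" system (a stand-in for model error
# of unknown form).
omega_model = 1.05  # the "true" system would have omega = 1.0

def model_rhs(u):
    x, v = u
    return np.array([v, -omega_model**2 * x])

def K(u, dt=0.01):
    # Knowledge-based one-step-ahead predictor, Eq. (1): integrate the
    # imperfect model forward by one time step dt.
    return rk4_step(model_rhs, u, dt)

u0 = np.array([1.0, 0.0])
u1 = K(u0)
```

Iterating K on its own output yields the knowledge-based forecast; its error relative to the true system grows with the mistuning.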
II.2 Reservoir-Only Model
For the machine learning approach, we assume knowledge of u(t) for times t from −T to 0. This data will be used to train the machine learning model for the purpose of making predictions of u(t) for t > 0. In particular, we use a reservoir computer, described as follows.
A reservoir computer (Fig. 1) is constructed with an artificial high-dimensional dynamical system, known as the reservoir, whose state is represented by the D_r-dimensional vector r(t), D_r ≫ M. We note that ideally the forecasting accuracy of a reservoir-only prediction model increases with D_r, but that D_r is typically limited by computational cost considerations. The reservoir is coupled to an input through an Input-to-Reservoir coupling which maps the M-dimensional input vector, u(t), at time t, to each of the reservoir state variables. The output is defined through a Reservoir-to-Output coupling W_out(r(t), p), where p is a large set of adjustable parameters. In the task of prediction of state variables of dynamical systems, the reservoir computer is used in two different configurations. One of the configurations we call the ‘training’ phase, and the other one we call the ‘prediction’ phase. In the training phase, the reservoir is configured according to Fig. 1 with the switch in the position labeled ‘Training’. In this phase, the reservoir evolves from t to t + Δt according to the equation

r(t + Δt) = G_R[r(t), u(t)],   (2)

where the nonlinear function G_R, including the (usually linear) input coupling it contains, depends on the choice of the reservoir implementation. Next, we make a particular choice of the parameters p such that the output function satisfies

W_out(r(t), p) ≈ u(t)

for −T < t ≤ 0. We achieve this by minimizing the error between W_out(r(t), p) and u(t) for −T < t ≤ 0 using a suitable error metric and optimization algorithm on the adjustable parameter vector p.
In the prediction phase, for t > 0, the switch is placed in the position labeled ‘Prediction’ indicated in Fig. 1. The reservoir now evolves autonomously with a feedback loop according to the equation

r(t + Δt) = G_R[r(t), W_out(r(t), p)],   (3)

where ũ_R(t) = W_out(r(t), p) is taken as the prediction from this reservoir-only approach. It has been shown Jaeger and Haas (2004) that this procedure can successfully generate a time series ũ_R(t) that approximates the true state u(t) for t > 0. Thus ũ_R(t) is our reservoir-based prediction of the evolution of u(t). If, as assumed henceforth, the dynamical system being predicted is chaotic, the exponential divergence of initial conditions in the dynamical system implies that any prediction scheme will only be able to yield an accurate prediction for a limited amount of time.
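The two configurations of Eqs. (2) and (3) can be sketched end to end in Python/NumPy: drive the reservoir open-loop with known data and fit a linear output (training), then close the feedback loop (prediction). The tanh-network reservoir anticipates the specific implementation of Sec. III; the toy sine signal, sizes, and hyperparameters are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
Dr, dt = 200, 0.02                                 # reservoir size, time step
A = rng.uniform(-1, 1, (Dr, Dr)) * (rng.random((Dr, Dr)) < 0.05)
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))    # spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, (Dr, 1))

def G_R(r, u):
    # Reservoir evolution, Eq. (2), in the tanh-network form of Sec. III.
    return np.tanh(A @ r + W_in @ u)

# Training phase: drive the reservoir with the known scalar signal
# u(t) = sin(t) and record the reservoir states.
T = 3000
us = np.sin(np.arange(T) * dt)[:, None]
r = np.zeros(Dr)
states = np.zeros((T, Dr))
for i in range(T):
    states[i] = r
    r = G_R(r, us[i])

# Choose the output parameters so that W_out r(t) ~ u(t) (ridge regression).
beta = 1e-6
R, Y = states[100:], us[100:]                      # discard initial transient
W_out = np.linalg.solve(R.T @ R + beta * np.eye(Dr), R.T @ Y).T

# Prediction phase, Eq. (3): the output is fed back as the next input.
preds = []
for _ in range(200):
    u_pred = W_out @ r
    preds.append(u_pred[0])
    r = G_R(r, u_pred)
preds = np.array(preds)
```

The closed-loop run continues the signal autonomously; for a chaotic input, its accuracy would degrade after a finite valid time, as discussed in Sec. III.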
II.3 Hybrid Scheme
The hybrid approach we propose combines both the knowledge-based model and the reservoir-only model. Our hybrid approach is outlined in the schematic diagram shown in Fig. 2.
As in the reservoir-only model, the hybrid scheme has two phases, the training phase and the prediction phase. In the training phase (with the switch in the position labeled ‘Training’ in Fig. 2), the training data u(t) from −T to 0 is fed into both the knowledge-based predictor and the reservoir. At each time t, the output of the knowledge-based predictor is the one-step-ahead prediction K[u(t)]. The reservoir evolves according to the equation

r(t + Δt) = G_H[r(t), K[u(t)], u(t)]   (4)

for −T ≤ t ≤ 0, where the (usually linear) coupling of the reservoir network with its inputs, in this case u(t) and K[u(t)], is contained in G_H. As earlier, we modify a set of adjustable parameters p in a predefined output function so that

W_out(K[u(t)], r(t + Δt), p) ≈ u(t + Δt)   (5)

for −T ≤ t ≤ 0, which is achieved by minimizing the error between the right-hand side and the left-hand side of Eq. (5), as discussed earlier (Sec. II.2) for the reservoir-only approach. Note that both the knowledge-based model and the reservoir feed into the output layer (Eq. (5) and Fig. 2), so that the training can be thought of as optimally deciding how to weight the information from the knowledge-based and reservoir components.
For the prediction phase (the switch is placed in the position labeled ‘Prediction’ in Fig. 2), the feedback loop is closed, allowing the system to evolve autonomously. The dynamics of the system will then be given by

ũ_H(t + Δt) = W_out(K[ũ_H(t)], r(t + Δt), p),   (6)

where r(t + Δt) = G_H[r(t), K[ũ_H(t)], ũ_H(t)], and ũ_H(t + Δt) is the prediction of the hybrid system.
III Implementation
In this section we provide details of our specific implementation and discuss the prediction performance metrics we use to assess and compare the various prediction schemes. Our implementation of the reservoir computer closely follows Ref. Jaeger and Haas (2004). Note that, in the reservoir training, no knowledge of the dynamics and details of the reservoir system is used (this contrasts with other machine learning techniques Goodfellow, Bengio, and Courville (2016)): only the training data is used (u(t), r(t), and, in the case of the hybrid, K[u(t)]). This feature implies that reservoir computers, as well as the reservoir-based hybrid, are insensitive to the specific reservoir implementation. In this paper, our illustrative implementation of the reservoir computer uses an artificial neural network for the realization of the reservoir. We mention, however, that alternative implementation strategies such as utilizing nonlinear optical devices Larger et al. (2012); Antonik, Haelterman, and Massar (2017); Larger et al. (2017) and Field Programmable Gate Arrays Haynes et al. (2015) can also be used to construct the reservoir component of our hybrid scheme (Fig. 2) and offer potential advantages, particularly with respect to speed.
III.1 Reservoir-Only and Hybrid Implementations
Here we consider that the high-dimensional reservoir is implemented by a large, low-degree Erdős-Rényi network of D_r nonlinear, neuron-like units in which the network is described by an adjacency matrix A (we stress that the following implementations are somewhat arbitrary, and are intended as illustrating typical results that might be expected). The network is constructed to have an average degree denoted by ⟨d⟩, and the nonzero elements of A, representing the edge weights in the network, are initially chosen independently from the uniform distribution over the interval [−1, 1]. All the edge weights in the network are then uniformly scaled via multiplication of the adjacency matrix by a constant factor to set the largest magnitude eigenvalue of the matrix to a quantity ρ, which is called the ‘spectral radius’ of A. The state of the reservoir, given by the vector r(t), consists of the components r_j for 1 ≤ j ≤ D_r, where r_j(t) denotes the scalar state of the jth node in the network. When evaluating prediction based purely on a reservoir system alone, the reservoir is coupled to the M-dimensional input through a D_r × M matrix W_in, such that in Eq. (2) the input coupling is W_in u(t), and each row of the matrix W_in has exactly one randomly chosen nonzero element. Each nonzero element of the matrix is independently chosen from the uniform distribution on the interval [−σ, σ]. We choose the hyperbolic tangent function for the form of the nonlinearity at the nodes, so that the specific training phase equation for our reservoir setup corresponding to Eq. (2) is

r(t + Δt) = tanh[A r(t) + W_in u(t)],   (7)

where the hyperbolic tangent applied on a vector is defined as the vector whose components are the hyperbolic tangent function acting on each element of the argument vector individually.
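The adjacency-matrix construction just described can be sketched as follows; the sizes, average degree, and spectral radius value here are illustrative, not the values used in the experiments.

```python
import numpy as np

def make_reservoir_matrix(Dr, avg_degree, rho, seed=0):
    # Erdős-Rényi adjacency matrix A: each directed edge is present with
    # probability avg_degree / Dr, nonzero weights are drawn uniformly
    # from [-1, 1], and the whole matrix is then uniformly rescaled so
    # that its spectral radius (largest eigenvalue magnitude) equals rho.
    rng = np.random.default_rng(seed)
    mask = rng.random((Dr, Dr)) < avg_degree / Dr
    A = rng.uniform(-1.0, 1.0, (Dr, Dr)) * mask
    A *= rho / np.max(np.abs(np.linalg.eigvals(A)))
    return A

A = make_reservoir_matrix(Dr=300, avg_degree=6, rho=0.4)
```

Because eigenvalues scale linearly with the matrix, the final rescaling sets the spectral radius exactly (up to floating-point error).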
We choose the form of the output function to be W_out(r, p) = W_out r̃, in which the output parameters (previously symbolically represented by p) will henceforth be taken to be the elements of the matrix W_out, and the vector r̃ is defined such that r̃_j equals r_j for odd j, and equals r_j² for even j (it was empirically found that this choice of r̃ works well for our examples in both Sec. IV and Sec. V; see also Lu et al. (2017); Pathak et al. (2018)). We run the reservoir for −T ≤ t ≤ 0 with the switch in Fig. 1 in the ‘Training’ position. We then minimize Σ ‖W_out r̃(t) − u(t)‖² + β‖W_out‖² with respect to the elements of W_out, where ‖W_out‖² now denotes the sum of the squares of the elements of W_out. Since W_out r̃(t) depends linearly on the elements of W_out, this minimization is a standard linear regression problem, and we use Tikhonov regularized linear regression Tikhonov, Arsenin, and John (1977). We denote the regularization parameter in the regression by β and employ a small positive value of β to prevent overfitting of the training data.

Once the output parameters (the matrix elements of W_out) are determined, we run the system in the configuration depicted in Fig. 1 with the switch in the ‘Prediction’ position according to the equations
ũ_R(t) = W_out r̃(t),   (8)

r(t + Δt) = tanh[A r(t) + W_in ũ_R(t)],   (9)

corresponding to Eq. (3). Here ũ_R(t) denotes the prediction of u(t) for t > 0 made by the reservoir-only model.
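The Tikhonov-regularized determination of W_out reduces to ridge regression with a closed-form solution; a sketch follows, together with the vector r̃ with alternate components squared (which half of the components is squared is a convention assumed here). The synthetic check at the end is only a sanity test of the fit, not data from the paper.

```python
import numpy as np

def rtilde(r):
    # The nonlinear readout vector: a copy of r with every second
    # component squared (the choice of which half is a convention).
    out = np.array(r, dtype=float, copy=True)
    out[..., 1::2] = out[..., 1::2] ** 2
    return out

def fit_W_out(R, Y, beta):
    # Tikhonov-regularized linear regression: minimize
    #   sum_t ||W_out rtilde(t) - u(t)||^2 + beta ||W_out||^2,
    # where R is the (T, D_r) matrix whose rows are rtilde(t) and Y is
    # the (T, M) matrix whose rows are the targets u(t).
    D = R.shape[1]
    return np.linalg.solve(R.T @ R + beta * np.eye(D), R.T @ Y).T

# Sanity check on a synthetic linear problem: with a tiny beta, the fit
# should recover the matrix that generated the targets.
rng = np.random.default_rng(3)
R = rng.standard_normal((500, 20))
W_true = rng.standard_normal((4, 20))
W_fit = fit_W_out(R, R @ W_true.T, beta=1e-10)
```

A small positive beta trades a slight fitting bias for robustness against overfitting, as noted in the text.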
Next, we describe the implementation of the hybrid prediction scheme. The reservoir component of our hybrid scheme is implemented in the same fashion as in the reservoir-only model given above. In the training phase for −T ≤ t ≤ 0, when the switch in Fig. 2 is in the ‘Training’ position, the specific form of Eq. (4) used is given by

r(t + Δt) = tanh[A r(t) + W_in [K[u(t)]; u(t)]],   (10)

where [K[u(t)]; u(t)] denotes the vertical concatenation of K[u(t)] and u(t). As earlier, we choose the matrix W_in (which is now D_r × 2M dimensional) to have exactly one nonzero element in each row. The nonzero elements are independently chosen from the uniform distribution on the interval [−σ, σ]. Each nonzero element can be interpreted to correspond to a connection to a particular reservoir node. These nonzero elements are randomly chosen such that a fraction of the reservoir nodes are connected exclusively to the raw input u(t) and the remaining fraction are connected exclusively to the output of the model-based predictor K[u(t)].
Similar to the reservoir-only case, we choose the form of the output function to be

W_out(K[u(t)], r(t + Δt), p) = W_out [K[u(t)]; r̃(t + Δt)],   (11)

where, as in the reservoir-only case, the matrix W_out now plays the role of the parameter set p, and [K[u(t)]; r̃(t + Δt)] again denotes vertical concatenation. Again, as in the reservoir-only case, W_out is determined via Tikhonov regularized regression.
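A sketch of the hybrid reservoir update and output features (Eqs. (10) and (11)): part of the reservoir nodes listen to the raw input and the rest to the knowledge-based prediction, and the output layer sees the knowledge-based prediction stacked with r̃. The placeholder model K, the sizes, the even split, and the input scaling are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
Dr, M = 100, 3
A = rng.uniform(-1, 1, (Dr, Dr)) * (rng.random((Dr, Dr)) < 0.06)
A *= 0.4 / np.max(np.abs(np.linalg.eigvals(A)))

# W_in is Dr x 2M with one nonzero per row: each reservoir node connects
# either to the knowledge-based prediction K[u] (first M columns) or to
# the raw input u (last M columns); here an even split is assumed.
W_in = np.zeros((Dr, 2 * M))
split = Dr // 2
for i in range(Dr):
    col = rng.integers(0, M) if i < split else M + rng.integers(0, M)
    W_in[i, col] = rng.uniform(-0.5, 0.5)

def K(u):
    # Placeholder imperfect model: identity plus a small drift, standing
    # in for one step of the knowledge-based integrator.
    return u + 0.01 * np.array([u[1], -u[0], 0.0])

def hybrid_step(r, u):
    # Eq. (10): reservoir driven by the concatenation [K[u(t)]; u(t)].
    return np.tanh(A @ r + W_in @ np.concatenate([K(u), u]))

def hybrid_features(r_next, u):
    # Feature vector for the output layer, Eq. (11): K[u(t)] stacked with
    # rtilde(t + dt) (alternate components of r squared).
    rt = r_next.copy()
    rt[1::2] = rt[1::2] ** 2
    return np.concatenate([K(u), rt])

r = np.zeros(Dr)
u = np.array([1.0, 0.0, 0.5])
r_next = hybrid_step(r, u)
phi = hybrid_features(r_next, u)
```

Training then fits a single matrix W_out to map phi to the next state, so the regression itself decides how to weight the model-based and reservoir information.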
III.2 Training Reusability
In the prediction phase, t > 0, chaos combined with a small initial condition error and imperfect reproduction of the true system dynamics by the prediction method leads to a roughly exponential increase of the prediction error as the prediction time t increases. Eventually, the prediction error becomes unacceptably large. By choosing a value for the largest acceptable prediction error, one can define a “valid time” t_v for a particular trial. As our examples in the following sections show, t_v is typically much less than the necessary duration T of the training data required for either reservoir-only prediction or for prediction by our hybrid scheme. However, it is important to point out that the reservoir and hybrid schemes have the property of training reusability. That is, once the output parameters p (or W_out) are obtained using the training data in −T ≤ t ≤ 0, the same output parameters can be used over and over again for subsequent predictions, without retraining. For example, say that we now desire another prediction starting at some later time t_0 > 0. In order to do this, the reservoir system (Fig. 1) or the hybrid system (Fig. 2), with the predetermined output parameters, is first run with the switch in the ‘Training’ position for a short time interval preceding t_0. This is done in order to resynchronize the reservoir to the dynamics of the true system, so that the prediction system output at time t_0 is brought very close to the true value u(t_0) of the process to be predicted (i.e., the reservoir state is resynchronized to the dynamics of the chaotic process that is to be predicted). Then, at time t_0, the switch (Figs. 1 and 2) is moved to the ‘Prediction’ position, and the output ũ_R or ũ_H is taken as the prediction for t > t_0. We find that with the output parameters predetermined, the time required for resynchronization turns out to be very small compared to t_v, which is in turn small compared to the training time T.
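Training reusability can be expressed as a small driver that resynchronizes open-loop before predicting closed-loop. The one-dimensional contracting "reservoir" in the demonstration below is only a stand-in to show the control flow; all names and values are illustrative.

```python
import numpy as np

def predict_from(step_open, step_closed, readout, u_recent, r0, n_steps):
    # Make a new prediction with already-trained components, without
    # retraining: (1) resynchronize by driving the reservoir open-loop
    # with a short window of recent measurements (switch in the
    # 'Training' position), then (2) run the closed feedback loop
    # (switch in the 'Prediction' position). step_open(r, u) and
    # step_closed(r) advance the reservoir; readout(r) is the trained
    # output map.
    r = r0
    for u in u_recent:
        r = step_open(r, u)
    preds = []
    for _ in range(n_steps):
        preds.append(readout(r))
        r = step_closed(r)
    return np.array(preds)

# Toy demonstration: a contracting scalar "reservoir" locks onto a
# constant signal during the short resynchronization window.
step_open = lambda r, u: 0.5 * r + 0.5 * u
readout = lambda r: r
step_closed = lambda r: step_open(r, readout(r))
preds = predict_from(step_open, step_closed, readout,
                     u_recent=np.full(20, 2.0), r0=0.0, n_steps=5)
```

The contraction factor plays the role of the reservoir's fading memory: the influence of the arbitrary initial state r0 decays geometrically during resynchronization.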
III.3 Assessments of Prediction Methods
We wish to compare the effectiveness of the different prediction schemes (knowledge-based, reservoir-only, or hybrid). As previously mentioned, for each independent prediction, we quantify the duration of accurate prediction with the corresponding “valid time”, denoted t_v, defined as the elapsed time before the normalized error E(t) first exceeds some value f, 0 < f < 1, E(t_v) = f, where

E(t) = ‖u(t) − ũ(t)‖ / ⟨‖u(t)‖²⟩^(1/2),   (14)

and the symbol ũ(t) now stands for the prediction [ũ_K(t), ũ_R(t), or ũ_H(t), as obtained by the knowledge-based, reservoir-based, or hybrid prediction method, respectively].
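The valid-time computation from Eq. (14) is straightforward to implement; in the sketch below the threshold f is passed in explicitly, since the particular value used in the experiments is not recoverable from the text here.

```python
import numpy as np

def valid_time(u_true, u_pred, dt, f):
    # Valid time t_v: the elapsed time before the normalized error E(t)
    # of Eq. (14) first exceeds the threshold f (0 < f < 1). u_true and
    # u_pred are (T, M) arrays sampled every dt; if E(t) never exceeds
    # f, the full duration is returned.
    norm = np.sqrt(np.mean(np.sum(u_true ** 2, axis=1)))  # <||u||^2>^(1/2)
    E = np.linalg.norm(u_true - u_pred, axis=1) / norm
    exceeded = np.nonzero(E > f)[0]
    return (exceeded[0] if exceeded.size else len(E)) * dt
```

The normalization by the time-averaged magnitude of u makes E(t) dimensionless, so the same threshold is meaningful across systems of different scale.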
In what follows we use a fixed value of the threshold f. We test all three methods on 20 disjoint time intervals in a long run of the true dynamical system. For each prediction method, we evaluate the valid time over many independent prediction trials. Further, for the reservoir-only prediction and the hybrid schemes, we use 32 different random realizations of A and W_in, for each of which we separately determine the training output parameters W_out; then we predict on each of the 20 time intervals for each such random realization, taking advantage of training reusability (Sec. III.2). Thus, there are a total of 640 different trials for the reservoir-only and hybrid system methods, and 20 trials for the knowledge-based method. We use the median valid time across all such trials as a measure of the quality of prediction of the corresponding scheme, and the span between the first and third quartiles of the t_v values as a measure of variation in this metric of the prediction quality.

IV Lorenz System
The Lorenz system Lorenz (1963) is described by the equations

dx/dt = -a x + a y,
dy/dt = b x - y - x z,   (15)
dz/dt = -c z + x y.
For our “true” dynamical system, we use standard parameter values of a, b, and c for which the system is chaotic, and we generate a long trajectory of Eq. (15). For our knowledge-based predictor, we use an ‘imperfect’ version of the Lorenz equations to represent an approximate, imperfect model that might be encountered in a real-life situation. Our imperfect model differs from the true Lorenz system given in Eq. (15) only via a change of the system parameter b in Eq. (15) to b(1 + ε). The error parameter ε is thus a dimensionless quantification of the discrepancy between our knowledge-based predictor and the ‘true’ Lorenz system. We emphasize that, although we simulate model error by a shift of the parameter b, we view this to represent a general model error of unknown form. This is reflected by the fact that our reservoir and hybrid methods do not incorporate knowledge that the system error in our experiments results from an imperfect parameter value of a system with Lorenz form.
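The true-versus-imperfect-model setup can be sketched as follows. The classic Lorenz parameter values, the perturbed-parameter choice, and the value of ε here are illustrative assumptions; the values used in the paper's runs are not recoverable from this text.

```python
import numpy as np

def lorenz_rhs(u, a=10.0, b=28.0, c=8.0 / 3.0):
    # Lorenz (1963) equations, Eq. (15), with the classic chaotic
    # parameter values assumed as defaults.
    x, y, z = u
    return np.array([a * (y - x), b * x - y - x * z, x * y - c * z])

def rk4(f, u, dt):
    # One fourth-order Runge-Kutta step.
    k1 = f(u); k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

eps = 0.05                                    # model error parameter (illustrative)
true_rhs = lorenz_rhs
model_rhs = lambda u: lorenz_rhs(u, b=28.0 * (1 + eps))  # imperfect model

# Integrate both systems from the same initial condition; chaos amplifies
# the small parameter discrepancy into a macroscopic separation.
dt, n = 0.01, 1000
u_true = u_model = np.array([1.0, 1.0, 1.0])
for _ in range(n):
    u_true = rk4(true_rhs, u_true, dt)
    u_model = rk4(model_rhs, u_model, dt)
```

Running the imperfect model open-loop in this way is exactly the knowledge-based predictor acting alone; its valid time shrinks as ε grows.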
Next, for the reservoir computing component of the hybrid scheme, we construct a network-based reservoir as discussed in Sec. II.2 for various reservoir sizes D_r and with the parameters listed in Table 1.
Table 1. Reservoir and training parameters for the Lorenz system example.
Figure 3 shows an illustrative example of one prediction trial using the hybrid method. The horizontal axis is the time in units of the Lyapunov time λ_max⁻¹, where λ_max denotes the largest Lyapunov exponent of the system, Eq. (15). The vertical dashed lines in Fig. 3 indicate the valid time t_v (Sec. III.3) at which E(t) (Eq. (14)) first reaches the value f. The valid time determination for this example is illustrated in Fig. 4. Notice that we get low prediction error for about 10 Lyapunov times.
The red upper curve in Fig. 5 shows the dependence on reservoir size D_r of results for the median valid time (in units of Lyapunov time, λ_max t_v) of the predictions from a hybrid scheme using a reservoir system combined with our imperfect model with a fixed error parameter ε. The error bars span the first and third quartiles of our trials, which are generated as described in Sec. III.3. The black middle curve in Fig. 5 shows the corresponding results for predictions using the reservoir-only model. The blue lower curve in Fig. 5 shows the result for prediction using only the imperfect knowledge-based model (since this result does not depend on D_r, the blue curve is horizontal and the error bars are the same at each value of D_r). Note that, even though the knowledge-based prediction alone is very bad, when used in the hybrid, it results in a large prediction improvement relative to the reservoir-only prediction. Moreover, this improvement is seen for all values of the reservoir sizes tested. Note also that the valid time for the hybrid with a small reservoir is comparable to the valid time for a reservoir-only scheme with a much larger reservoir. This suggests that our hybrid method can substantially reduce reservoir computational expense even with a knowledge-based model that has low predictive power on its own.
Fig. 6 shows the dependence of prediction performance on the model error ε with the reservoir size held fixed. For the wide range of the error ε we have tested, the hybrid performance is much better than either its knowledge-based component alone or its reservoir-only component. Figures 5 and 6, taken together, suggest the potential robustness of the utility of the hybrid approach.
V Kuramoto-Sivashinsky Equations
In this example, we test how well our hybrid method, using an inaccurate knowledge-based model combined with a relatively small reservoir, can predict systems that exhibit high-dimensional spatiotemporal chaos. Specifically, we use simulated data from the one-dimensional Kuramoto-Sivashinsky (KS) equation for y(x, t),

∂y/∂t = -y ∂y/∂x - ∂²y/∂x² - ∂⁴y/∂x⁴.   (16)
Our simulation calculates y(x, t) on a uniformly spaced grid with spatially periodic boundary conditions such that y(x, t) = y(x + L, t), with a periodicity length L, a grid of Q points (giving an intergrid spacing Δx = L/Q), and a sampling time Δt. For the parameters used we found that the maximum Lyapunov exponent, Λ_max, is positive, indicating that this system is chaotic. We define a vector of y(x, t) values at each grid point as the input to each of our predictors:

u(t) = [y(Δx, t), y(2Δx, t), …, y(QΔx, t)]ᵀ.   (17)
For our approximate knowledge-based predictor, we use the same simulation method as for the original Kuramoto-Sivashinsky equations, with an error parameter ε added to the coefficient of the second derivative term as follows:

∂y/∂t = -y ∂y/∂x - (1 + ε) ∂²y/∂x² - ∂⁴y/∂x⁴.   (18)
For sufficiently small ε, Eq. (18) corresponds to a very accurate knowledge-based model of the true KS system, which becomes less and less accurate as ε is increased.
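A pseudospectral sketch of the true KS system, Eq. (16), and its perturbed version, Eq. (18), sharing one stepping routine. The first-order Lawson-Euler integrator, the 2/3-rule dealiasing, and all numerical parameters below are assumptions of this sketch; the text does not specify the paper's integrator or grid.

```python
import numpy as np

def ks_step(y, dt, L_dom, eps=0.0):
    # One pseudospectral Lawson-Euler step of
    #   y_t = -y y_x - (1 + eps) y_xx - y_xxxx
    # on a periodic domain of length L_dom. eps = 0 gives the true
    # system, Eq. (16); eps != 0 gives the imperfect knowledge-based
    # model, Eq. (18).
    Q = y.size
    k = 2 * np.pi * np.fft.fftfreq(Q, d=L_dom / Q)    # angular wavenumbers
    lin = (1 + eps) * k ** 2 - k ** 4                 # linear operator (Fourier)
    dealias = np.abs(k) < 2.0 / 3.0 * np.abs(k).max() # 2/3-rule mask
    nonlin = -0.5j * k * np.fft.fft(y * y) * dealias  # -y y_x = -(y^2)_x / 2
    yh = np.exp(lin * dt) * (np.fft.fft(y) + dt * nonlin)
    return np.real(np.fft.ifft(yh))

# Evolve the true system and the perturbed model from the same state;
# domain length, grid size, step, and eps are illustrative.
rng = np.random.default_rng(2)
L_dom, Q, dt = 35.0, 64, 0.02
y_true = y_model = 0.1 * rng.standard_normal(Q)
for _ in range(2000):
    y_true = ks_step(y_true, dt, L_dom, eps=0.0)
    y_model = ks_step(y_model, dt, L_dom, eps=0.1)
```

Exact treatment of the stiff linear term keeps the step stable despite the fourth-derivative dissipation, while the nonlinear term is advanced explicitly.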
Illustrations of our main result are shown in Figs. 7 and 8, where we use the parameters in Table 2. In the top panel of Fig. 7, we plot a computed solution of Eq. (16), which we regard as the true dynamics of a system to be predicted; the spatial coordinate x is plotted vertically, the time in Lyapunov units (Λ_max t) is plotted horizontally, and the value of y(x, t) is color coded, with the most positive and most negative values indicated by red and blue, respectively. Below this top panel are six panels labeled (a-f) in which the color-coded quantity is the prediction error of different predictions of y(x, t). In panels (a), (b), and (c), we consider a case with a relatively small model error and a relatively large reservoir, for which both the knowledge-based model (panel (a)) and the reservoir-only predictor (panel (b)) are fairly accurate; panel (c) shows the hybrid prediction error. In panels (d), (e), and (f), we consider a different case, with a larger model error and a smaller reservoir, for which both the knowledge-based model (panel (d)) and the reservoir-only predictor (panel (e)) are relatively inaccurate; panel (f) shows the hybrid prediction error. In our color coding, low prediction error corresponds to the green color. The vertical solid lines denote the valid times for this run.
Table 2. Reservoir and training parameters for the Kuramoto-Sivashinsky example.
We note from Figs. 7(a,b,c) that, even when the knowledge-based model prediction is valid for about as long as the reservoir-only prediction, our hybrid scheme can significantly outperform both of its components. Additionally, as in our Lorenz system example (Fig. 6), we see from Figs. 7(d,e,f) that, in the parameter regimes where the KS reservoir-only model and knowledge-based model both show very poor performance, the hybrid of these low-performing methods can still predict for a longer time than a much more computationally expensive reservoir-only implementation (Fig. 7(b)).
This latter remarkable result is reinforced by Fig. 8(a), which shows that even for very large model error ε, such that the knowledge-based model is totally ineffective, the hybrid of these two methods is able to predict for a significant amount of time using a relatively small reservoir. This implies that a nonviable model can be made viable via the addition of a reservoir component of modest size. Further, Figs. 8(b,c) show that even if one has a model that can outperform the reservoir prediction, as is the case for small ε for most reservoir sizes, one can still benefit from a reservoir using our hybrid technique.
VI Conclusions
In this paper we present a method for the prediction of chaotic dynamical systems that hybridizes reservoir computing and knowledge-based prediction. Our main results are:

Our hybrid technique consistently outperforms its component reservoir-only or knowledge-based model prediction methods in the duration of its ability to accurately predict, for both the Lorenz system and the spatiotemporally chaotic Kuramoto-Sivashinsky equations.

Our hybrid technique robustly yields improved performance even when the reservoir-only predictor and the knowledge-based model are so flawed that they do not make accurate predictions on their own.

Even when the knowledge-based model used in the hybrid is significantly flawed, the hybrid technique can, at small reservoir sizes, make predictions comparable to those made by much larger reservoir-only models, which can be used to save computational resources.

Both the hybrid scheme and the reservoir-only model have the property of “training reusability” (Sec. III.2), meaning that once trained, they can make any number of subsequent predictions (without retraining each time) by preceding each such prediction with a short run in the training configuration (see Figs. 1 and 2) in order to resynchronize the reservoir dynamics with the dynamics to be predicted.
VII Acknowledgment
This work was supported by ARO (W911NF-12-1-0101), NSF (PHY-1461089), and DARPA.
References
 Shon and Moon (2007) T. Shon and J. Moon, “A hybrid machine learning approach to network anomaly detection,” Information Sciences 177, 3799–3821 (2007).
 Tsai and Chen (2010) C.-F. Tsai and M.-L. Chen, “Credit rating by hybrid machine learning techniques,” Applied Soft Computing 10, 374–380 (2010).
 Psichogios and Ungar (1992) D. C. Psichogios and L. H. Ungar, “A hybrid neural network-first principles approach to process modeling,” AIChE Journal 38, 1499–1511 (1992).
 Jaeger (2001) H. Jaeger, “The ‘echo state’ approach to analysing and training recurrent neural networks, with an erratum note,” Bonn, Germany: German National Research Center for Information Technology GMD Technical Report 148, 13 (2001).
 Maass, Natschläger, and Markram (2002) W. Maass, T. Natschläger, and H. Markram, “Real-time computing without stable states: A new framework for neural computation based on perturbations,” Neural Computation 14, 2531–2560 (2002).
 Lukoševičius and Jaeger (2009) M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Computer Science Review 3, 127–149 (2009).
 Jaeger and Haas (2004) H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science 304, 78–80 (2004).
 Pathak et al. (2017) J. Pathak, Z. Lu, B. R. Hunt, M. Girvan, and E. Ott, “Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data,” Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 121102 (2017).
 Pathak et al. (2018) J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, “Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach,” Phys. Rev. Lett. 120, 024102 (2018).
 Goodfellow, Bengio, and Courville (2016) I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).
 Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation 9, 1735–1780 (1997).
 Larger et al. (2012) L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutiérrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Optics Express 20, 3241–3249 (2012).
 Larger et al. (2017) L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay-based architecture: Million words per second classification,” Physical Review X 7, 011015 (2017).
 Antonik, Haelterman, and Massar (2017) P. Antonik, M. Haelterman, and S. Massar, “Brain-inspired photonic signal processor for generating periodic patterns and emulating chaotic systems,” Physical Review Applied 7, 054014 (2017).
 Haynes et al. (2015) N. D. Haynes, M. C. Soriano, D. P. Rosin, I. Fischer, and D. J. Gauthier, “Reservoir computing with a single time-delay autonomous Boolean node,” Physical Review E 91, 020801 (2015).
 Lorenz (1963) E. N. Lorenz, “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences 20, 130–141 (1963).
 Kuramoto and Tsuzuki (1976) Y. Kuramoto and T. Tsuzuki, “Persistent propagation of concentration waves in dissipative media far from thermal equilibrium,” Progress of Theoretical Physics 55, 356–369 (1976).
 Sivashinsky (1977) G. Sivashinsky, “Nonlinear analysis of hydrodynamic instability in laminar flames—I. Derivation of basic equations,” Acta Astronautica 4, 1177–1206 (1977).
 Lu et al. (2017) Z. Lu, J. Pathak, B. Hunt, M. Girvan, R. Brockett, and E. Ott, “Reservoir observers: Model-free inference of unmeasured variables in chaotic systems,” Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 041102 (2017).
 Tikhonov, Arsenin, and John (1977) A. N. Tikhonov, V. I. Arsenin, and F. John, Solutions of Ill-Posed Problems, Vol. 14 (Winston, Washington, DC, 1977).