The computation of invariant manifolds of dynamical systems is central to a series of system-level tasks, particularly bifurcation analysis and control. For example, the detection of the stable manifolds of saddle points allows the identification of the boundary between different basins of attraction, while the intersection of stable and unstable manifolds most often leads to complex dynamical behaviour such as chaotic dynamics Yagasaki1994; Feudel2005. Their computation is also central to the control of nonlinear systems and especially the control of chaos Ott1990; Schiff1994; Yagasaki1994; Flakamp2017. However, their computation is not trivial: even for relatively simple low-dimensional ODEs, their analytical derivation is most of the time an overwhelmingly difficult task. Thus, one has to resort to numerical approximation. This task is not easy either; at the beginning of the '90s only one-dimensional global invariant manifolds of vector fields could be computed. Guckenheimer & Worfolk Guckenheimer1993 proposed an algorithm for converging on the stable manifold of saddles based on geodesics emanating from the saddle, iteratively rescaling the radial part of the vector field on the submanifold. Johnson et al. (1997) Johnson1997
introduced a numerical scheme to reconstruct two-dimensional stable and unstable manifolds of saddles. The proposed method starts with the creation of a ring of points on the local linear eigenspace and successively creates circles of points that are then connected by a triangular mesh. The appropriate points are selected through time integration so that the velocity of the vector field is similar in an arc-length sense for all trajectories. Krauskopf & Osinga (1999) Krauskopf1999
developed a numerical method based on geodesics; the manifold is evolved iteratively by hyperplanes perpendicular to a previously detected geodesic circle. Krauskopf et al. (2005) KRAUSKOPF2005 addressed a numerical method for the approximation of two-dimensional stable and unstable manifolds which incorporates the solution of a boundary value problem; the method performs a continuation of a family of trajectories possessing the same arc-length. For a survey of methods for the numerical computation of stable and unstable manifolds see also Krauskopf et al. (2005) KRAUSKOPF2005. In the above methods, the stable manifold is computed as the unstable manifold of the inverse map, i.e. by following the flow of the vector field backward in time England2004. Thus, explicit knowledge of the vector field and its inverse is required, which, however, is not always available. England et al. (2004) England2004 presented an algorithm for computing one-dimensional stable manifolds of planar maps when an explicit expression for the inverse map is not available and/or the map is not invertible. Triandaf et al. (2003) Triandaf2003 proposed a procedure for approximating stable and unstable manifolds given only experimental data, based on time-delay embeddings of a properly selected set of initial conditions.
Another approach for computing invariant manifolds, the so-called parametrization method, was introduced by Cabre et al. Cab1; Cab2; Cabr3. This is a numerically-assisted approach based on functional-analysis tools for deriving analytical expressions of the semi-local invariant manifolds. It involves the expansion of the invariant manifold as a series and the construction of a system of homological equations for the coefficients of the series. Based on this approach, Haro et al. (2016) Haro2016 addressed a numerical approach for the computation of the coefficients of high-order power-series expansions of parametrizations of two-dimensional invariant manifolds. Breden et al. (2016) Breden2016 employed the parametrization method to compute stable and unstable manifolds of vector fields. The implementation of the method assumes that the vector field is explicitly available in a closed form.
However, for many complex systems of contemporary interest, the equations that can adequately describe the dynamics at the macroscopic-continuum scale are not explicitly available in the form of ODEs or PDEs in a closed form. Take for example the case where the laws that govern the dynamics of the interactions between the units that constitute the system are known in the form of e.g. molecular dynamics, Brownian dynamics, agent-based modeling, Monte Carlo etc., but a “good” macroscopic description is not available in a closed form. For this kind of problem, the lack of a closed-form macroscopic description constitutes a stumbling block in our ability to systematically analyse, design and control the emergent dynamics. Two approaches are traditionally used to study the emergent behaviour of such microscopic dynamical models. On the one hand, there is simple temporal simulation: an ensemble of many initial conditions is set up; a large enough number of ensemble realizations is created for each initial condition; some of the parameters of the model may have to be varied; and finally the statistics of the detailed dynamics of the system are monitored for a long time to investigate the coarse-grained behaviour. However, this “simple” temporal simulation is most of the time inappropriate for the systematic bifurcation analysis, optimization and control of the emergent behaviour. On the other hand, there is the statistical-mechanics-assisted approach, where one tries to analytically find closures, i.e. relations for the moments of the detailed microscopic distribution that allow the derivation of evolution equations at the macroscopic/emergent level. For example, for Monte Carlo simulations (these processes are typically Markovian) a Master Equation can be derived, from which evolution equations are obtained for a few moments of the underlying probability distribution.
However, these equations usually involve higher-order moments whose evolution dynamics are, in turn, functions of even higher-order moments. This leads to an infinite hierarchy of evolution equations. Thus, at some level, the higher-order moments have to be expressed as functions of the lower-order ones in order to close the system of equations. However, the assumptions that underlie these “closures” introduce certain qualitative and quantitative biases in the analysis of the “actual” system as represented by the best available microscopic simulator (see for example Reppas2012 for a comparative analysis of various closures for a microscopic model and a discussion of the biases that are introduced).
The Equation-free approach Makeev2002; Kevrekidis2003; Siettos2003; Kevrekidis2004, a multiscale numerically-assisted framework, establishes the link between traditional continuum numerical analysis and microscopic/stochastic simulation of complex/multiscale systems. The Equation-free approach allows the systematic numerical analysis of the coarse-grained macroscopic dynamics, bypassing the derivation of “closures” in an explicit analytical form. The method identifies “on demand” the quantities required for performing numerical analysis at the continuum level, such as coarse-grained Jacobians and Hessians; these quantities are obtained by appropriately initialized runs of the microscopic simulators, which are treated as black-box maps. Regarding the computation of coarse-grained invariant manifolds, Gear and Kevrekidis GearKevrekidis introduced a method for converging on the coarse-grained slow manifolds of legacy simulators by requiring that the change in the “fast” variables (i.e. the variables that are quickly “slaved” to the variables that parametrize the slow manifold) is zero. In another paper, Gear et al. GearKaper computed coarse-grained slow manifolds by restricting the derivatives of the “fast” variables to zero. Zagaris et al. (2009) Zagaris2009 performed a systematic analysis of the accuracy and convergence of Equation-free projection to the slow manifold.
Here, we present a new numerical method for the computation of coarse-grained stable and unstable manifolds of saddle equilibria/stationary states of microscopic dynamical simulators (and, in general, of discrete-time black-box maps). Our method is based on the Equation-free framework. The approximation of the semi-local coarse-grained stable and unstable manifolds is achieved by a truncated polynomial expansion; the coefficients of the series are computed with the Newton-Raphson method applied to a coarse-grained map of the microscopic simulator. Thus, the proposed numerical method involves a three-step procedure including the Equation-free: (a) detection of the coarse-grained saddle, (b) computation of the coarse-grained Jacobian at the saddle and of the corresponding eigenmodes, (c) identification of the polynomial coefficients of the semi-local coarse-grained stable and unstable manifolds; this step involves (i) the numerical construction of a black-box coarse-grained map for the coefficients of the polynomial series, and (ii) the iterative estimation of the polynomial coefficients by applying Newton's method to the constructed coarse-grained map. The method is illustrated through two examples whose stable and unstable manifolds are also approximated analytically through the parametrization method, for assessing the efficiency of the proposed numerical method. The first example is a simple toy discrete-time map and the second is a Gillespie Monte Carlo realization of a simple catalytic reaction scheme describing the dynamics of CO oxidation on catalytic surfaces.
2 Computation of Stable & Unstable Manifolds of Saddles for Discrete-time Models
We will first present how to approximate the stable and unstable manifolds of a saddle point for discrete-time systems whose equations are given in an explicit form. Then, we will show how one can approximate the stable and unstable manifolds when the equations are not given in an explicit form. The latter case includes large-scale black-box simulators as well as microscopic/stochastic multiscale models.
Let us consider the discrete-time model given by:
where is a smooth multivariable, vector-valued function having as initial condition.
Regarding the computation of the stable and unstable manifolds of the saddle point of the above discrete-time system, we will prove the following theorem:
Let us denote by ) the saddle fixed point of the discrete-time model (1) which satisfies . Let us also assume that the Jacobian is diagonalizable. Let be the matrix whose columns are the eigenvectors of that correspond to the eigenvalues lying inside the unit circle, and be the matrix whose columns are the eigenvectors of that correspond to the eigenvalues lying outside the unit circle. Let us also define and by the transformation , where . Then the fixed point has:
(A1) a -dimensional local stable manifold tangent to the subspace spanned by the columns of at the origin defined by:
where is a function which satisfies and , ; ) is the -th component of .
(A2) a -dimensional local unstable manifold tangent to the subspace spanned by the columns of at the origin defined by:
where is a function which satisfies and , ; ) is the -th component of .
(B1) On the stable manifold the following system of functional equations holds:
(B2) On the unstable manifold the following system of functional equations holds:
In the above, is the (block) diagonal matrix containing the eigenvalues with and is the (block) diagonal matrix containing the eigenvalues with ; and are and vector-valued functions, respectively, obtained by
where corresponds to the -vector-valued nonlinear function:
containing all the non-linear terms of (i.e. everything except the linearization around the saddle), satisfying: .
By taking , the model given by Eq. (1) reads:
By assuming that the vector field:
is differentiable on an open ball around and the right-hand-side of Eq. (8) around can be written as:
where is the Jacobian evaluated at and contains all the higher order non-linear terms of . For example if is expanded in a Taylor expansion around the saddle point then:
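In generic notation (the symbols here are our own choices for illustration: \(\mathbf{x}^*\) denotes the saddle, \(\mathbf{A}\) the Jacobian and \(\mathbf{N}\) the nonlinear remainder), this splitting reads:

```latex
\mathbf{F}(\mathbf{x}) \;=\; \mathbf{F}(\mathbf{x}^*)
  \;+\; \mathbf{A}\,(\mathbf{x}-\mathbf{x}^*)
  \;+\; \mathbf{N}(\mathbf{x}-\mathbf{x}^*),
\qquad
\mathbf{A} \;=\; D\mathbf{F}(\mathbf{x}^*),\quad
\mathbf{N}(\mathbf{0}) = \mathbf{0},\quad
D\mathbf{N}(\mathbf{0}) = \mathbf{0},
```

i.e. the remainder vanishes at the saddle together with its first derivatives, so it carries only the higher-order terms.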
Since the Jacobian computed at the saddle is diagonalizable, there exists an invertible matrix such that
If all eigenvalues are real then is a diagonal matrix:
is the matrix whose columns are the eigenvectors of the system's Jacobian . If the Jacobian has a complex pair of eigenvalues , is a block diagonal matrix of the form
In that case, for the pair of complex eigenvectors the two corresponding columns of are assembled by the real and imaginary parts of the complex eigenvector .
On a saddle, of these eigenvectors correspond to eigenvalues with and of these eigenvectors correspond to eigenvalues with . Let us rearrange the columns of so that the matrix of eigenvalues can be written in a block form as:
where is the (block) diagonal matrix containing the eigenvalues with and is the (block) diagonal matrix containing the eigenvalues with .
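The rearrangement described above can be sketched numerically (in Python with NumPy; the 3-D Jacobian below is a hypothetical example of ours with real, distinct eigenvalues only — for a complex pair one would assemble the real and imaginary parts of the eigenvector as described above):

```python
import numpy as np

# Hypothetical 3-D Jacobian at a saddle: two eigenvalues inside the unit
# circle (0.5, 0.2) and one outside (2.0).
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.2]])

lam, W = np.linalg.eig(A)
inside = np.abs(lam) < 1.0              # eigenvalues inside the unit circle

Ws, Wu = W[:, inside], W[:, ~inside]    # stable / unstable eigenvector blocks
Lam_s = np.diag(lam[inside])
Lam_u = np.diag(lam[~inside])

# Rearranged eigenvector matrix and block-diagonal eigenvalue matrix
W_sorted = np.hstack([Ws, Wu])
Lam = np.block([
    [Lam_s, np.zeros((Lam_s.shape[0], Lam_u.shape[1]))],
    [np.zeros((Lam_u.shape[0], Lam_s.shape[1])), Lam_u],
])
```

By construction the rearranged matrices still satisfy the eigen-decomposition, with the stable block listed first.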
Let us now use the transformation
Hence, Eq. (21) can be written as:
where and are the sub-matrices of dimensions and , whose columns contain the eigenvectors corresponding to the eigenvalues inside and outside the unit disc, respectively. Note that and are uncoupled with respect to the linear terms.
Thus, Eq. (22) can then be re-written as:
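Schematically, and in notation of our own choosing consistent with the definitions above (\(\boldsymbol{\Lambda}_s, \boldsymbol{\Lambda}_u\) the stable/unstable eigenvalue blocks, \(\mathbf{f}, \mathbf{g}\) the nonlinear parts), the transformed system has the block form:

```latex
\mathbf{u}_{k+1} \;=\; \boldsymbol{\Lambda}_s\,\mathbf{u}_k + \mathbf{f}(\mathbf{u}_k,\mathbf{v}_k),
\qquad
\mathbf{v}_{k+1} \;=\; \boldsymbol{\Lambda}_u\,\mathbf{v}_k + \mathbf{g}(\mathbf{u}_k,\mathbf{v}_k),
```

with the linear parts of the two blocks uncoupled, as noted above.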
Hence, according to the stable manifold theorem (see e.g. Kuznetsov2004; Wiggins2003) (A1) and (A2) hold true. Trajectories starting on approach the origin as , i.e.:
Trajectories starting on approach the origin as , i.e.:
In a similar manner, it can be shown that the equation given in (B2) holds true on the unstable manifold. ∎
2.1 Parametrization of the Stable and Unstable Manifolds with Truncated Polynomials
Since, by Theorem 1, the stable and unstable manifolds are smooth non-linear functions of and , respectively, according to the Stone-Weierstrass theorem Rudin they can be approximated to any accuracy around by a sequence of polynomial functions of and , respectively.
For example, for the stable manifold (and similarly for the unstable manifold), we can write:
where , are polynomials (e.g. Chebyshev polynomials) of degree .
Truncating the series at degree we get the truncated polynomial approximation:
A simple choice would be to take as polynomials the powers of . In that case, Eq. (32) becomes:
For example if , , the above expression reads:
The existence of a local analytic solution for nonlinear functional equations of the form of Eq. (32) is guaranteed by the following theorem (see also Kazantzis2002):
Smajdor1968 Consider the following system of nonlinear functional equations:
where is an unknown function. Then if:
are analytic functions such that and
The function admits a formal power series solution.
The fixed point that satisfies is a hyperbolic point, i.e. none of the eigenvalues of the Jacobian is on the unit circle.
Then, the above system of nonlinear functional equations admits a unique solution in the form of formal power series which satisfy .
where are nonlinear functions of :
Note that, in general, both the left-hand side and the right-hand side of Eq. (36) contain terms of order higher than , due to Eq. (30) and the nonlinearities in .
By equating on both sides of Eq. (36) the terms up to an order with respect to , we get the following coupled system of nonlinear equations with respect to the polynomial coefficients :
The above system constitutes an (in general) nonlinear system of unknowns and equations, which can be solved iteratively, e.g. using the Newton-Raphson method.
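As a minimal sketch of this procedure (the planar map, the collocation points and the degree of the ansatz below are our own illustrative choices, not taken from the paper's examples), the following Python code estimates the coefficients of a truncated polynomial parametrization of a stable manifold by Newton iterations with a finite-difference Jacobian on the invariance residual:

```python
import numpy as np

# Hypothetical planar map with a saddle at the origin:
#   x' = 0.5*x,   y' = 2.0*y + x**2
# On the stable manifold y = h(x), invariance requires h(0.5*x) = 2.0*h(x) + x**2.
def F(x, y):
    return 0.5 * x, 2.0 * y + x**2

def h(x, c):
    # truncated polynomial ansatz: h(x) = c[0]*x**2 + c[1]*x**3
    return c[0] * x**2 + c[1] * x**3

xs = np.array([0.1, 0.2, 0.3])          # collocation points near the saddle

def residual(c):
    x1, y1 = F(xs, h(xs, c))            # image of points lying on the ansatz
    return h(x1, c) - y1                # zero iff the ansatz is invariant

# Newton iteration on the coefficients, with a finite-difference Jacobian
c, eps = np.zeros(2), 1e-7
for _ in range(20):
    r = residual(c)
    J = np.empty((xs.size, c.size))
    for j in range(c.size):
        dc = np.zeros(c.size)
        dc[j] = eps
        J[:, j] = (residual(c + dc) - r) / eps
    c = c - np.linalg.lstsq(J, r, rcond=None)[0]
    if np.linalg.norm(residual(c)) < 1e-12:
        break
```

For this toy map the invariance condition h(x/2) = 2*h(x) + x**2 gives the exact coefficients c[0] = -4/7 and c[1] = 0, which the iteration recovers.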
For example, let us consider the following discrete dynamical system:
The above system can be written as:
The stable manifold of the system given by Eq. (39) is given by .
Let us choose a power series expansion up to order two (i.e. ) of the stable manifold around the fixed point . Hence an approximation of the stable manifold is given by:
Here, . Hence, from Eq. (4) we get:
Thus, from Eq. (41) we have:
By equating the coefficients of the corresponding power series up to order two, we get the following system of equations:
From the above system we get:
Thus a parametrization of the stable manifold around the saddle point is given by:
The unstable manifold of the system given by Eq. 39 is the trivial , .
Let us again choose a power series expansion up to order two (i.e. ) of the unstable manifold around the fixed point . Hence an approximation of the unstable manifold is given by:
Hence, from Eq. (5) we get:
For the above system of equations it can be easily verified that the unstable manifold is the one with , . ∎
3 Numerical Approximation of the Stable Manifolds of Microscopic-Stochastic Multiscale and Black-Box Simulators
Let us assume that due to the complexity of the underlying dynamics evolving across temporal and spatial scales, explicit model equations (such as the ones given by Eq. (1)) for the macroscopic (emergent) level are not available in a closed form. Under this hypothesis, we cannot follow the procedure for the analytical approximation of the invariant manifolds as one needs to explicitly know the operator (i.e. and in Eq. 6).
Thus, when explicit macroscopic equations are not available in a closed form, but a microscopic dynamical simulator is available, the approximation of the invariant manifolds at the macroscopic (the coarse-grained) level requires (a) the bridging of the micro and macro scale, and (b) the numerical approximation of the coarse-grained manifolds. In what follows, we address a new multiscale numerical method for the numerical approximation of the invariant manifolds based on the Equation-Free framework.
Thus, let us assume that we have a microscopic (such as Brownian dynamics, Monte Carlo, molecular dynamics, agent-based) computational model that, given a microscopic/detailed distribution of states
at time , will report the values of the evolved microscopic/detailed distribution after a time horizon :
is the microscopic time-evolution operator and is the vector of the system parameters.
A basic assumption underlying the Equation-free numerical framework is that after some time the emergent coarse-grained dynamics are governed by a few variables, say, . Usually these “few” observables are the first few moments of the underlying microscopic distribution. This implies that there is a slow coarse-grained manifold that can be parametrized by . The assumption of the existence of a slow coarse-grained manifold asserts that the higher-order moments, say, , of the microscopic distribution become, relatively fast over time, functionals of the lower-order moments described by the vector . This dependence can be described in moment space as a singularly perturbed system of the form:
where is a sufficiently small number. Under the above description and assumptions the following Theorem can be proved.
Theorem 3 (Fenichel’s Theorem Fenichel1979).
Let us assume that the functions , in an open set around a hyperbolic fixed point. Then the dynamics of the system given by Eq. (54) can be reduced to:
on a smooth manifold defined by:
The manifold is diffeomorphic and close to the manifold defined for . Moreover, the manifold is locally invariant under the dynamics given by Eq. (54).
defines the “slow” manifold on which the dynamics of the system evolve after a short (in the macroscopic scale) time horizon.
Under this perspective and under the assumptions of the Fenichel’s theorem Fenichel1979 let us define the coarse-grained map:
where is a smooth multivariable, vector-valued function having as initial condition and .
The above coarse-grained map which describes the system dynamics on the slow coarse-grained manifold can be obtained by finding that relates the higher order moments of the microscopic distribution to the lower order moments .
The Equation-free approach, through the concept of the coarse timestepper, bypasses the need to extract such a relation analytically, which in most cases is an “overwhelmingly” difficult task and can introduce modelling biases (see the critical discussion in Reppas2012). The Equation-free approach provides such relations numerically, “on demand”: relatively short calls of the detailed simulator provide this closure (refer to Kevrekidis2003; Kevrekidis2004; Siettos2003 for more detailed discussions). Briefly, the coarse timestepper consists of the following basic steps:
Given the set of the macroscopic variables at time :
(a) Prescribe the coarse-grained initial conditions .
(b) Transform them through a lifting operator to consistent microscopic distributions .
(c) Evolve these distributions in time using the microscopic/detailed simulator for a short macroscopic time to get . The choice of is associated with the (estimated) spectral gap of the linearization of the unavailable closed macroscopic equations.
(d) Obtain again the values of the coarse-grained variables using a restriction operator : .
The above steps constitute the black-box coarse timestepper that, given an initial coarse-grained state of the system at time , will report the result of the integration of the microscopic rules after a given time-horizon (at time ), i.e. .
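The lift-evolve-restrict loop above can be sketched as follows (a hypothetical stand-in of ours: the "microscopic simulator" is a noisy Euler discretization of dx/dt = x - x**3 over an ensemble of particles, the coarse variable is the first moment, and all names and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def lift(U, n_particles=20000):
    # (b) lifting: coarse mean U -> an ensemble of consistent microscopic states
    return U + 0.01 * rng.standard_normal(n_particles)

def micro_step(X, dt=0.01):
    # stand-in "microscopic simulator": noisy Euler step of dx/dt = x - x**3
    return X + dt * (X - X**3) + 0.01 * np.sqrt(dt) * rng.standard_normal(X.size)

def restrict(X):
    # (d) restriction: microscopic ensemble -> coarse observable (first moment)
    return X.mean()

def coarse_timestepper(U0, n_steps=10):
    X = lift(U0)                 # (b) lift the coarse initial condition
    for _ in range(n_steps):     # (c) evolve for a short macroscopic horizon
        X = micro_step(X)
    return restrict(X)           # (d) restrict back to the coarse variable

U1 = coarse_timestepper(0.5)     # one application of the coarse map
```

For this stand-in, the ensemble mean started at 0.5 drifts toward the attractor at 1, so one coarse step returns a value slightly above 0.5.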
Now one can “wrap” numerical methods around the coarse timestepper (given by Eq. (57)), such as the Newton-Raphson method (for low-order systems), to converge to coarse-grained fixed points and investigate their stability. For large-scale systems one can also employ matrix-free methods such as Newton-GMRES Kelley1995 to find the coarse-grained fixed points, and Arnoldi iterative algorithms Saad2011 to estimate the dominant eigenvalues of the coarse linearization that dictate the stability of the coarse-grained fixed points of the unavailable macroscopic evolution equations.
The coarse-grained Jacobian can be computed by appropriately perturbing the coarse-grained initial conditions fed to the coarse timestepper (3). For low to medium dimensions the column of the Jacobian matrix can be evaluated numerically as
where is the unit vector with one at the component and zero in all other components.
Then one can solve the eigenvalue problem
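The finite-difference estimation of the coarse-grained Jacobian and the subsequent eigenvalue computation can be sketched as follows (the "coarse timestepper" here is a hypothetical stand-in, a known linear map chosen by us so that the estimated Jacobian can be checked against it):

```python
import numpy as np

# Stand-in for the black-box coarse timestepper: a known linear map, chosen
# only so the finite-difference Jacobian can be verified against it.
A_true = np.array([[1.2, 0.3],
                   [0.0, 0.5]])

def coarse_timestepper(U):
    return A_true @ U

def coarse_jacobian(Phi, U0, eps=1e-6):
    # j-th column of the Jacobian: (Phi(U0 + eps*e_j) - Phi(U0)) / eps,
    # where e_j is the unit vector with one at the j-th component
    n = U0.size
    J = np.empty((n, n))
    base = Phi(U0)
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        J[:, j] = (Phi(U0 + eps * e) - base) / eps
    return J

J = coarse_jacobian(coarse_timestepper, np.zeros(2))
eigvals = np.linalg.eigvals(J)   # dominant eigenvalues dictate coarse stability
```

Each column requires one extra, perturbed call of the coarse timestepper, which is the "on demand" character of the approach: the Jacobian is never written down analytically.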