Numerical computation of periodic orbits and isochrons for state-dependent delay perturbation of an ODE in the plane

05/12/2020 ∙ by Joan Gimeno et al. ∙ University of Barcelona ∙ Georgia Institute of Technology

We present algorithms and their implementation to compute limit cycles and their isochrons for state-dependent delay equations (SDDE's) which are perturbed from a planar differential equation with a limit cycle. Note that the space of solutions of an SDDE is infinite dimensional. We compute a two parameter family of solutions of the SDDE which converge to the solutions of the ODE as the perturbation goes to zero in a neighborhood of the limit cycle. The method we use formulates functional equations among periodic functions (or functions converging exponentially to periodic ones). The functional equations express that the functions solve the SDDE. Therefore, rather than evolving initial data and finding solutions of a certain shape, we consider spaces of functions with the desired shape and require that they are solutions. The mathematical theory of these invariance equations is developed in a companion paper, which establishes "a posteriori" theorems: if there is a sufficiently approximate solution (with respect to some explicit condition numbers), then there is a true solution close to the approximate one. Since the numerical methods produce an approximate solution and provide estimates of the condition numbers, we can make sure that the numerical solutions we consider approximate true solutions. In this paper, we choose a systematic way to approximate functions by a finite set of numbers (Fourier-Taylor series) and develop a toolkit of algorithms that implement the operators – notably composition – that enter into the theory. We also present several implementation details and report the results of running the algorithms in some representative cases.




1. Introduction

Many phenomena in nature and technology are described by limit cycles and by now there is an extensive mathematical theory of them [Min62, AVK87].

These limit cycles often arise in feedback loops between effects that pump and remove energy in ways that depend on the state of the system. When the feedback happens instantaneously, these phenomena are modeled by an ordinary differential equation (ODE). Nevertheless, in many real phenomena, the feedback takes time to start acting. In such cases, the appropriate models are delay differential equations (DDE's) in which the time derivative of a state is given by an expression which involves the state at a previous time. Such delays are documented to be important in several areas of science and technology (e.g. in electrodynamics, population dynamics, neuroscience, circuits, manufacturing, etc.; see [HKWW06] for a relatively recent survey documenting many areas where DDE's are important models).

Note that, from the mathematical point of view, adding even a small delay term to the ODE model is a very singular perturbation since the nature of the problem changes drastically. Notably, the natural phase spaces in delay equations are infinite dimensional (there is some discussion about which ones are the most natural) rather than the finite dimensional phase spaces of ODE's.

One would heuristically expect that, if the delay term is a small quantity, there are solutions of the delay problem that resemble the solutions of the unperturbed ODE. Due to the singular perturbation nature of the problem, justifying this intuitive idea is a nontrivial mathematical task. Of course, besides the finite dimensional set of solutions that resemble the solutions of the ODE, one expects many other solutions, which may be very different.


The recent rigorous paper [YGdlL20] describes a formalism to study the effect of introducing a delay in an equation in the plane with a limit cycle. The paper [YGdlL20] shows that, in some appropriate sense, the solutions of the ordinary differential equation persist. The method in [YGdlL20] is constructive since it is based on showing that the iterations of an explicit operator converge.

The goal of this paper is to present algorithms and implementation details for the mathematical arguments developed in [YGdlL20]. One also expects that the solutions we compute – which resemble the solutions of the ODE – capture the full dynamics of the SDDE in the sense that the solutions of the SDDE in a neighborhood converge very fast to this finite dimensional family of solutions.

The algorithms consist in specifying discretizations for all the functional analysis steps in [YGdlL20]. We do not present rigorous estimates on the effects of the discretizations (they are in principle applications of standard estimates), but we present an analysis of running times. We have implemented the algorithms and report the results of running them on some representative examples. In our examples, one can indeed obtain very accurate solutions in a few minutes on a standard laptop today. Thanks to the a posteriori theorems in [YGdlL20], we can guarantee that these solutions correspond to true solutions of the problem.

We recall that the method of [YGdlL20] consists in bypassing the evolution and formulating the existence of periodic orbits (and of the solutions converging to them) as solutions of invariance equations in a class of functions with periodicity. Furthermore, the paper [YGdlL20] establishes an a posteriori theorem which states that, given a sufficiently approximate solution of the invariance equation, there is a true solution close to it. To be more precise, an approximate solution is sufficiently approximate if the error is smaller than an explicit expression involving several properties of the approximate solution (commonly called condition numbers).

The numerical methods developed and run here produce an approximate solution and obtain estimates on the condition numbers. So, we can be quite confident that the solutions produced by our numerical methods correspond to true solutions.

Remark 1.1.

Although the results in [YGdlL20] have been detailed for the case of SDDE’s, they also apply without major modifications to advanced or even mixed differential equations.

Remark 1.2.

The paper [YGdlL20] includes a theory for different regularities of the differential equation, but in this numerical paper, we will only formulate results for analytic systems. Since we will specify which derivatives appear in the calculations, it is clear that the algorithms also apply to problems with finite differentiability.

Remark 1.3.

One of the consequences of the approach in [YGdlL20] is that one can easily obtain smooth dependence on parameters for the solutions. Note that if one studies the periodic orbits as fixed points of an evolution operator, one needs to study the smooth dependence of the solutions on the initial data and on parameters, which is a delicate question for general solutions. See [HVL93, Chapter 3.7].

Remark 1.4.

The a posteriori results justify the approximate solution independently of the method by which it has been produced.

Besides the numerical approximations, it is also customary in applied mathematics to produce approximate solutions using formal asymptotic expansions. For the problem at hand, the paper [CCdlL19] develops formal asymptotic expansions of the periodic solutions in powers of the term in the delay.

The expansions in [CCdlL19] are readily computable with the methods presented here. They can be taken as starting points for the fixed point method in [YGdlL20]. Moreover, the a posteriori results of [YGdlL20] show that these expansions are asymptotic in a very strong sense.

Remark 1.5.

The paper [YGdlL20] also includes some local uniqueness statements of the solutions (under the condition that the solutions have a certain shape). Hence the numerically computed approximate solutions of the invariance equation identify a unique solution, which is unambiguous. This uniqueness is crucial to compare different numerical runs as well as to obtain smooth dependence on parameters.

The uniqueness in [YGdlL20] is somewhat subtle. The limit cycle is unique as well as the Taylor expansions of isochrons (their parameterizations are unique once we fix origins of coordinates and scales). On the other hand, the full isochrons are unique only when one specifies a cut-off. Similar effects happen in the study of center manifolds [Sij85].

From the numerical point of view, we only compute the limit cycle and a finite Taylor expansion of the isochrons. The error of the remainder of the Taylor expansion is indeed very small (much smaller than other sources of numerical error, which are already small).

1.1. Organization of the paper

The paper is organized in an increasing level of detail, trying to guide the reader from the general steps of the algorithms to their more specialized and hardest steps.

First of all, we give in section §2 an overview of the method developed in [YGdlL20]. In particular, we first introduce the unperturbed problem in §2.1 in order to move to the perturbed problem in §2.2. This will lead to the explicit expression of the invariance equation in §2.3 and the periodicity and normalization conditions in §2.6 and §2.7.

Our results start from the unperturbed case in [HdlL13]; we summarize in §3 the steps and add practical comments for numerically computing a parameterization in the unperturbed case.

The algorithms that allow us to solve the invariance equation introduced in §2.3 are fully detailed in section §4.

The numerical composition of periodic mappings as well as its computational complexity needs special care. Hence, section §5 explains in detail such a process in a Fourier representation.

Finally, section §6 reports the results of some numerical experiments.

2. Overview of the problem and the method

2.1. The parameterization method for limit cycles and their isochrons in ODE’s

Our starting point is the main result in [HdlL13], which we recall informally (omitting precisions on regularity, domains of definition, etc).

Given an analytic ordinary differential equation (ODE) in the plane

dx/dt = X₀(x),  x ∈ R², (1)

with a (stable) limit cycle, there is an analytic local diffeomorphism W, in particular a local change of variables, defined from T × (−β, β) to R² (with T = R/Z the 1-dimensional torus), a frequency ω₀ and a rate λ₀ such that

X₀(W(θ, s)) = ω₀ ∂_θ W(θ, s) + λ₀ s ∂_s W(θ, s). (2)

Hence, if θ(t) and s(t) satisfy the very simple ODE

dθ/dt = ω₀,  ds/dt = λ₀ s, (3)

then

x(t) = W(θ(t), s(t))

is a solution of (1) in a neighborhood of the limit cycle.

Therefore, the paper [HdlL13] trades finding all the solutions near the limit cycle of (1) for finding W, ω₀ and λ₀ solving (2). The paper [HdlL13] also develops efficient algorithms for the study of (2), the so-called invariance equation.

The key idea of the formalism in [YGdlL20] consists in accommodating the delay by just changing the equation (2). We will obtain a modified functional equation, which involves non-local terms that reflect the delay in time. This equation was treated in [YGdlL20]. Hence, we will produce a two dimensional family of solutions of the delay problem which resemble the solutions of the unperturbed problem (1).

The solutions we construct are analogues for the SDDE of the limit cycle as well as the solutions that converge to the limit cycle exponentially fast (notice that for the simple ODE, these are all the solutions with initial data in a neighborhood of the limit cycle).

The set I_θ = { W(θ, s) : |s| < β } is called in the biology literature the "isochron" of phase θ because the orbit of a point in I_θ converges to the limit cycle with phase θ. See [Win75].

Remark 2.1.

The theory of normally hyperbolic manifolds shows that the isochrons are the same as the stable manifolds of points [Guc75] (see also [CL04] for generalizations beyond normal hyperbolicity).

Therefore, in the ODE case, isochrons and stable manifolds can be used interchangeably. In the SDDE case, however, the stable manifolds are infinite dimensional objects. The solutions we construct are finite dimensional families. To avoid confusion with the stable manifolds in [HVL93], we prefer to keep the name isochrons for the solutions we construct. Thus, the isochrons we construct are subsets of the (infinite dimensional) manifolds constructed in [HVL93]. As a matter of fact, they are slow manifolds: they correspond to the least stable eigenvalues. One expects that, in applications, the isochrons will be the most observable solutions since they correspond to the modes that decrease the slowest, so that any solution will converge to the isochron much faster than the isochron converges to the limit cycle (in analogy with what happens in an ODE at a stable node).

2.2. The perturbed problem

We consider now a perturbation of (1) of the form

dx/dt (t) = X₀(x(t)) + ε X₁( x(t), x(t − r(x(t))) ), (4)

where 0 < ε ≪ 1, X₁ is a smooth perturbation, and the delay function r is positive and as smooth as we need, hence bounded on compact sets.

The equation (4) is a state-dependent delay differential equation (SDDE) for x. For typographical reasons we will often denote the delayed evaluation x(t − r(x(t))) simply by x(t − r).

2.3. The invariance equation in the perturbed problem

Let R be the universal cover of the 1-dimensional torus T = R/Z and let us consider the map W, the frequency ω₀ and the rate λ₀ which solve (2). They correspond to the case ε = 0 in (4).

In analogy with the ODE case, we want to find a map K, with periodicity in the first variable, and numbers ω and λ such that for all t and all (θ₀, s₀) in the domain,

x(t) = W( K(θ₀ + ωt, s₀ e^{λt}) ) (5)

is a solution of (4).

The mapping K gives us a parameterization of the limit cycle with its isochrons via W ∘ K. That is, the limit cycle will be represented by the set { W(K(θ, 0)) : θ ∈ T } and the isochron associated to the angle θ in T will be { W(K(θ, s)) : s ∈ J }, where J denotes a region of validity in s of the solution (5).

Note that, heuristically (and it is also shown in [YGdlL20]), K is close to the identity map and ω and λ are close to the values ω₀ and λ₀ of the unperturbed case. Hence, we will produce a two-dimensional family of solutions of the delayed equation (4) which resemble the solutions of the ODE.

Remark 2.2.

Since the phase space of the delay equation is infinite dimensional, we expect that there are many more solutions of (4).

Based on the theory of [HVL93, Chapter 10], we know that the periodic orbit in the phase space of the SDDE is locally unique, hence the solution we produce has to agree with the periodic solution produced in [HVL93, Theorem 4.1].

The theory of delay equations [HVL93, Chapter 10] (especially Theorem 3.2) produces stable (or strong stable) manifolds in the (infinite dimensional) phase space. The stable manifolds produced in [HVL93] are infinite dimensional. Note that, since the evolution operators are compact, most of the eigenvalues of the evolution are very small, in particular smaller than the λ we will select later, so that the solutions in the stable manifold converge to the space of solutions produced here in a very fast way.

Imposing that the tuple (K, ω, λ) is such that (5) is a solution of (4), and knowing that the tuple (W, ω₀, λ₀) is also a solution of (2), i.e. of the case ε = 0 in (4), we obtain

DW(K(θ, s)) [ ω ∂_θ K(θ, s) + λ s ∂_s K(θ, s) − (ω₀, λ₀ K²(θ, s))ᵀ ] = ε X₁( W(K(θ, s)), W(K̃(θ, s)) ), (6)

where K² refers to the second component of K and K̃ denotes the evaluation at the delayed arguments,

K̃(θ, s) = K( θ − ω r(W(K(θ, s))), s e^{−λ r(W(K(θ, s)))} ).

Now, since W is a local diffeomorphism, it also acts as a change of variables. In particular, we can premultiply (6) by [DW(K(θ, s))]^{−1} to get the functional equation whose unknowns are K, ω and λ. That functional equation will be called the invariance equation,

ω ∂_θ K(θ, s) + λ s ∂_s K(θ, s) = (ω₀, λ₀ K²(θ, s))ᵀ + ε [DW(K(θ, s))]^{−1} X₁( W(K(θ, s)), W(K̃(θ, s)) ), (7)

where we use the shorthand K̃ defined above.
The equation (7) will be the center of our attention. Let us start by making some preliminary remarks on it.

We have ignored the precise definition of the domain of the function K. We need the range of the delayed arguments to be contained in the domain of K. Note also that it is not clear that the domain of the RHS can match the domain of the LHS of (7). As it turns out, this will not matter much for our treatment provided ε is small enough (see [YGdlL20] for a detailed discussion).

The equation (7) is underdetermined. That means, if K, ω and λ solve equation (7), then K_{σ,b}, ω and λ also solve the same equation with

K_{σ,b}(θ, s) = K(θ + σ, b s). (8)

The parameters σ and b correspond respectively to choosing a different origin in the angle coordinate and a different scale of the parameter s.

Even if all these solutions in (8) are mathematically equivalent, we anticipate that choosing a different b can change the reliability of the numerical algorithms.

In [YGdlL20], it is shown that the solutions in the family (8) are locally unique. That is, all the solutions of the invariance equation (7) are included in (8). In equations (12) and (13) we introduce normalization conditions that specify the parameters in (8). This is useful for numerics since it allows us to easily compare solutions obtained in different runs with different discretization sizes.

Remark 2.3.

From the point of view of analysis, one of the main difficulties of the equation (7) is that it involves the unknown function composed with itself (hence the operator is not really differentiable). Also, the delayed term does not have the same domain as the unknown. We refer to [YGdlL20] for a deeper discussion of the domains of the composition.

Similar problems appear in the treatment of center manifolds [LI73, Car81] and indeed, in [YGdlL20] there are only results for finitely differentiable solutions, and the solutions obtained may depend on the cut-offs and extensions taken to solve the problem. (On the other hand, the coefficients of the expansion in powers of s are unique and do not depend on cut-offs and extensions.)

Based on the experience with center manifolds, we believe that indeed, the solutions could only be finitely differentiable and that there are different solutions of the invariance equation (depending on the extensions considered).

Remark 2.4.

In the language of ergodic theory, for those familiar with it, the results of [YGdlL20] can be described as saying that there is a factor in the (infinite dimensional) phase space of the SDDE which is a two dimensional flow with dynamics close to the dynamics of the ODE.

In this paper, we will compute numerical approximations of the map giving the semiconjugacy as well as the new dynamics of such a factor.

2.4. Format of solution for the invariance equation (7)

It is shown in [YGdlL20] that, for small ε, one can construct smooth solutions of (7) of the form

K(θ, s) = Σ_{n=0}^{N} K_n(θ) s^n + K_{>N}(θ, s), (9)

where the coefficients K_n depend only on θ and K_{>N}(θ, s) = O(s^{N+1}) for some N ≥ 1.

As we will see in more detail later on, if one substitutes (9) into (7) and matches powers of s, one gets a finite set of recursive equations for the coefficients K_n of the expansion (and for ω, λ). We will deal with these equations in detail later. Note that this will require a discretization of the K_n, which are only functions of the angle θ.

2.5. The equations for the terms of the expansion of K

Assume for the moment that ω and λ have already been computed. Then we can substitute the expansion (9) of K in powers of s into the invariance equation (7). Matching the coefficients of the powers of s on both sides, we obtain a hierarchy of equations for K_n, n = 0, …, N.

The equations for K_n involve just K_0, …, K_n. Hence, they can be studied recursively. In [YGdlL20] it is shown that, if we know K_0, …, K_{n−1}, it is possible to find K_n in a unique way and, hence, we can proceed to solve the equations recursively. In this paper, we show that there are precise algorithms to compute these recursions. We also report results of an implementation in some cases.

Note that the K_n in (9) are functions only of θ. The function K_{>N} depends both on θ and s but vanishes at high order in s and does not enter in the equations for the K_n.

As frequently happens in perturbative expansions, the low order equations are special. The equation for K_0 – which gives the periodic solution that continues the limit cycle – also determines ω. The equation for K_1 also determines λ. The equations for K_n, n ≥ 2, are all similar and involve solving the same operator (with different right-hand sides).

In this paper, we will only consider the computation of the K_n, n = 0, …, N. The term K_{>N} is estimated in [YGdlL20]; it is not only high order in s but also actually small in rather large balls.

Note that even if the K_n are unique (up to the parameters in (8)), K_{>N} depends on properties of the extension considered. This is, of course, very reminiscent of what happens in the theory of center manifolds [Sij85]. For numerical studies of expansions of center manifolds we refer to [BK98, Jor99, PR06] and for detailed estimates of the truncation to [CR12]. The numerical considerations about the effect of the truncation apply with minor changes to our case.

2.6. Periodicity conditions

From the point of view of implementation in computers, it is convenient to think of the function K (and its delayed evaluation) in (7), which involves angle variables (and which ranges on angles), as a real function with boundary conditions (in mathematical language this is described as taking lifts). Hence, we take

K¹(θ + 1, s) = K¹(θ, s) + 1,  K²(θ + 1, s) = K²(θ, s), (10)

where K¹ and K² denote the two components of K.

Notice that we are normalizing the angles to run between 0 and 1. Very often, the angles are taken to run in [0, 2π).

The periodicity conditions in (10) indicate that the second component of K is periodic in θ (it describes a radial coordinate) while the first component increases by 1 when θ increases by 1 (it describes an angle). So the circle described by increasing θ makes the angle in the first coordinate go around, so that it is a non-contractible circle in the angle.

For the expansion of K in powers of s as in (9), the periodicity conditions amount to:

K₀¹(θ + 1) = K₀¹(θ) + 1,  K₀²(θ + 1) = K₀²(θ),  K_n(θ + 1) = K_n(θ) for n ≥ 1. (11)


In the numerical analysis, there are many well-known ways to discretize periodic functions. We will use Fourier series, but there are also other alternatives such as periodic splines.

In general, for functions u with u(θ + 1) = u(θ) + 1, we define ũ(θ) = u(θ) − θ, which is a periodic function, i.e. ũ(θ + 1) = ũ(θ) for all θ. Then we will discretize ũ and rewrite the functional equations so that this periodic part is the only unknown.
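As an illustration, the reduction to a periodic unknown can be sketched as follows (Python; the function names are ours, not from the paper's implementation):

```python
import numpy as np

def periodic_part(u_vals, theta):
    """Given samples of u satisfying u(theta + 1) = u(theta) + 1,
    return samples of the periodic function u~(theta) = u(theta) - theta."""
    return u_vals - theta

def restore(tilde_u_vals, theta):
    """Inverse operation: recover u from its periodic part."""
    return tilde_u_vals + theta

# Example: u(theta) = theta + 0.1 sin(2 pi theta) satisfies the lift condition.
theta = np.linspace(0.0, 1.0, 64, endpoint=False)
u = theta + 0.1 * np.sin(2 * np.pi * theta)
tu = periodic_part(u, theta)
```

The periodic part tu can then be safely represented by a Fourier series, and the linear part is restored only when evaluating the lift.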

2.7. Normalization of the solutions

As indicated in the discussion, the invariance equation has two obvious sources of indeterminacy: one is the choice of the origin of the variable θ (the σ in (8)) and the other is the choice of the scale of the variable s (the b in (8)). In [YGdlL20] it is shown that these are the only indeterminacies and that, once we fix them, we can get any other solution by applying (8).

A convenient way to fix the origin of θ is to require

∫₀¹ [ K¹(θ, 0) − K̄¹(θ, 0) ] dθ = a, (12)

where K̄ is an initial approximation and a is a real number, typically close to 0. This normalization is easy to compute and is rather sensitive since, when we move in the family (8), the derivative of the left hand side with respect to the shift σ is a positive number.

The normalization of the origin of coordinates has no numerical consequences except for the possibility of comparing the solutions of different runs. The solutions corresponding to different normalizations have very similar properties. The numerical Algorithm 4.1 in its step 5 leads to a small drift in the normalization in each iteration, but it is guaranteed to converge to one of the solutions in (8).

The second normalization is just a choice of the eigenvector of an operator. We have found it convenient to take

‖K₁‖ = ρ, (13)

with ρ a real number.

We anticipate that changing the value of ρ is equivalent to changing K(θ, s) into K(θ, b s), where b is commonly named the scaling factor.

All the choices of ρ are mathematically equivalent – they amount to setting the scale of the parameter s. The choice of this normalization, however, affects the numerical accuracy dramatically. Notice that if we change K(θ, s) into K(θ, b s), the coefficients K_n in (9) change into K_n b^n. So, different choices of ρ may lead the Taylor coefficients to be very large or very small, which makes the computations with them very susceptible to round-off error. It is numerically advantageous to choose the scale in such a way that the Taylor coefficients have comparable sizes.

In practice, we run the calculations twice: a preliminary run whose only purpose is to compute an approximation of the scale that makes the coefficients remain more or less of the same size, and then a more definitive calculation. That last run is more numerically reliable.
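The preliminary rescaling pass can be sketched as follows (Python; the least-squares fit of the coefficient growth is our own heuristic, not the paper's prescription): estimate b from the norms ‖K_n‖ and replace K_n by K_n b^n.

```python
import numpy as np

def choose_scaling(coeff_norms):
    """Estimate a scaling factor b so that ||K_n|| * b**n stays roughly
    constant in n: fit log||K_n|| ~ n * slope and take b = e^{-slope}."""
    n = np.arange(len(coeff_norms))
    slope = np.polyfit(n, np.log(np.asarray(coeff_norms)), 1)[0]
    return float(np.exp(-slope))

def rescale(coeffs, b):
    """Replace each Taylor coefficient K_n by K_n * b**n
    (i.e. reparameterize s -> b s as in the text)."""
    return [c * b**n for n, c in enumerate(coeffs)]
```

For instance, coefficient norms growing like 2^n give b = 1/2, after which the rescaled coefficients all have norm of order one.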

Remark 2.5.

In standard implementations of the Newton method for fixed points of a functional, say T, the fact that the space of solutions is two dimensional leads the derivative DT − Id to have a two dimensional kernel, and hence to not be invertible.

In our case, we will develop a very explicit and fast algorithm that produces an approximate linear right inverse. This linear right inverse leads to convergence to an element of the family (8).

3. Computation of W, ω₀ and λ₀ – unperturbed case

For completeness, we quote Algorithm 4.4 in [HdlL13], adding some practical comments. That algorithm allows us to numerically compute W, ω₀ and λ₀ in (7). We note that the algorithm has quadratic convergence, as was proved in [HdlL13].

Algorithm 3.1.

Quasi-Newton method

  1. Input: approximations of W, ω and λ, and a scaling factor b.

  2. Output: W, ω and λ solving the invariance equation.

  3. .

  4. Solve and denote .

  5. and .

  6. and .

  7. Solve imposing

  8. Solve imposing

  9. .

  10. Update W, ω and λ.

  11. Iterate steps 3–10 until convergence in W, ω and λ. Then undo the scaling.

Algorithm 3.1 requires some practical considerations:

  1. It is clear that the most delicate steps of the above algorithm are 5 and 6, which are often called cohomology equations. These steps involve solving PDE's, whereas the others are much simpler. Indeed, the discretizations used are dictated by the desire to solve these equations in an efficient way.

  2. Initial guess. W(·, 0) will be a parameterization of the periodic orbit of the ODE with frequency ω. It can be obtained, for instance, by a Poincaré section method, continuation from integrable systems or Lindstedt series. An approximation for λ and ∂_s W(·, 0) can be obtained by solving the variational equation along the periodic orbit.

    Hence, if (μ, v) is the eigenpair of the monodromy matrix with |μ| < 1, then λ = (log μ)/T, where T is the period, and v approximates ∂_s W(0, 0).

  3. Stopping criteria. As in any Newton method, a possible condition to stop the iteration is when either the error of the invariance equation or the size of the correction is smaller than a given tolerance.

    Note that the a posteriori theorems in [HdlL13] give a criterion of smallness on the error depending on properties of the function . If these criteria are satisfied, one can ensure that there is a true solution close to the numerical one.

  4. Uniqueness. Note that in the steps 5 and 6, which involve solving the cohomology equations, the solutions are determined only up to adding constants in the zero or first order terms. We have adopted the conventions (14), (15). These conventions make the solution operator linear (which matches well the standard theory of Nash-Moser methods since it is easy to estimate the norm of the solutions).

    As is shown in [HdlL13], the algorithm converges quadratically fast to a solution, but since the problem is underdetermined, we have to be careful when comparing solutions with different discretizations. In [HdlL13] there is a discussion of the uniqueness, but for our purposes in this paper, any of the solutions will work. The uniqueness of the solutions considered in this paper is discussed in section §2.7.

  5. Convergence. It has been proved in [HdlL13] that, even though the method is only a quasi-Newton method, it still has quadratic convergence.

    Note that it is remarkable that we can implement a Newton-like method without having to store – much less invert – any large matrix. Note also that we can get a Newton method even if the derivative of the operator in the fixed point equation has eigenvalue 1. See Remark 2.5.
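The initial guess of item 2 can be illustrated as follows (a Python sketch with an illustrative planar field of our own choosing, not an example from the paper): for dx/dt = A x (1 − x² − y²) − y, dy/dt = A y (1 − x² − y²) + x, the circle r = 1 is a limit cycle of period 2π, and integrating the variational equation over one period yields the monodromy matrix, whose nontrivial multiplier gives the rate λ = −2A.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 0.25  # illustrative contraction parameter; the exact rate is lambda = -2A

def field(t, u):
    """State (x, y) together with the 2x2 variational matrix M,
    integrated as dM/dt = DX(x, y) M."""
    x, y = u[0], u[1]
    M = u[2:].reshape(2, 2)
    r2 = x * x + y * y
    f = [A * x * (1 - r2) - y, A * y * (1 - r2) + x]
    DX = np.array([[A * (1 - r2) - 2 * A * x * x, -2 * A * x * y - 1],
                   [-2 * A * x * y + 1, A * (1 - r2) - 2 * A * y * y]])
    return np.concatenate([f, (DX @ M).ravel()])

T = 2 * np.pi  # period of the limit cycle r = 1 for this field
u0 = np.concatenate([[1.0, 0.0], np.eye(2).ravel()])
sol = solve_ivp(field, (0.0, T), u0, rtol=1e-10, atol=1e-12)
monodromy = sol.y[2:, -1].reshape(2, 2)
multipliers = np.sort(np.abs(np.linalg.eigvals(monodromy)))
lam = np.log(multipliers[0]) / T  # Floquet exponent (about -0.5 here)
```

The trivial multiplier 1 corresponds to the direction of the flow; the other multiplier e^{λT} provides the rate, and its eigenvector the initial direction of the isochron.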

3.1. Fourier discretization of periodic functions

As was mentioned before, the key step of Algorithm 3.1 is to solve the equations in steps 5 and 6. Their numerical resolution is particularly efficient when the functions are discretized in Fourier-Taylor series. This will be the only discretization we consider in this paper, and we discuss it in depth.

Remark 3.2.

Even if we will not use it in this paper, we remark that in [HdlL13, §4.3.1] there are two methods to solve them. One assumes a Fourier representation in terms of the angle and the other uses integral expressions which can be evaluated efficiently in several discretizations of the functions (e.g. splines). The spline representation could be preferable to the Fourier-Taylor one in some regimes where the limit cycles are bursting.

Recall that a function f : R → R is called periodic (of period 1) when f(θ + 1) = f(θ) for all θ.

To get a computer representation of a periodic function f, we can either take a mesh in [0, 1), i.e. θ_j = j/N, and store the values of f at these points, f_j = f(θ_j) with j = 0, …, N−1, or we can take advantage of the periodicity and represent it in a trigonometric basis.

The Discrete Fourier Transform (DFT), and also its inverse, allows us to switch between the two representations above. If we fix a mesh of N points uniformly distributed in [0, 1), i.e. θ_j = j/N, the DFT is:

f̂_k = (1/N) Σ_{j=0}^{N−1} f_j e^{−2πikj/N},  k = 0, …, N−1,

so that

f_j = Σ_{k=0}^{N−1} f̂_k e^{2πikj/N}, (16)

or equivalently

f(θ) ≈ Σ_{k=0}^{N−1} f̂_k e^{2πikθ}. (17)

In the case of a real valued function, f̂_0 is real and the complex numbers f̂_k satisfy Hermitian symmetry, i.e. f̂_{N−k} = conj(f̂_k) (denoting by conj(·) the complex conjugate), which implies that f̂_{N/2} is real when N is even. Then, we define real numbers a_k = 2 Re f̂_k and b_k = −2 Im f̂_k for k = 1, …, ⌈N/2⌉ − 1, where ⌈·⌉ denotes the ceiling function, together with a_0 = f̂_0 and, when N is even, a_{N/2} = f̂_{N/2}.

Thus, f can be approximated by

f(θ) ≈ a_0 + Σ_{k=1}^{⌈N/2⌉−1} ( a_k cos(2πkθ) + b_k sin(2πkθ) ) + a_{N/2} cos(πNθ), (18)

where the coefficient a_{N/2} only appears when N is even and is related to the notion of aliasing in signal theory.

Therefore (18) is equivalent to (16) but, rather than the 2N real numbers in the N complex coefficients, only half of them are needed.

Henceforth, all real periodic functions can be represented in a computer by an array of length N whose values are either the values of f on a grid or the Fourier coefficients. These two representations are, for all practical purposes, equivalent since there is a well known algorithm, the Fast Fourier Transform (FFT), which allows one to go from one to the other in O(N log N) operations. The FFT has very efficient implementations, so that the theoretical estimates on time are realistic (we use fftw3 [FJ05], which optimizes the use of the hardware).
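The switch between mesh values and real Fourier coefficients can be sketched as follows (Python with NumPy's real FFT rather than fftw3; the convention a_k = 2 Re f̂_k, b_k = −2 Im f̂_k is assumed, and the function names are ours):

```python
import numpy as np

def to_coeffs(f_vals):
    """Mesh values on theta_j = j/N  ->  real coefficients (a, b)."""
    N = len(f_vals)
    c = np.fft.rfft(f_vals) / N      # c[k] = hat f_k for k = 0, ..., N//2
    a = 2.0 * c.real
    a[0] = c[0].real                 # a_0 is not doubled
    if N % 2 == 0:
        a[-1] = c[-1].real           # aliased coefficient a_{N/2}
    b = -2.0 * c.imag
    return a, b

def to_values(a, b, N):
    """Inverse: rebuild the N mesh values from (a, b)."""
    c = (a - 1j * b) / 2.0
    c[0] = a[0]
    if N % 2 == 0:
        c[-1] = a[-1]
    return np.fft.irfft(c * N, n=N)
```

Both directions cost one FFT, i.e. O(N log N) operations.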

We can also think of functions of two variables where one variable is periodic and the other is a real variable. In the numerical implementations, the real variable s will be discretized as a polynomial. Thus f can be thought of as a function of θ taking values in polynomials of a fixed degree. Hence, a function of two variables with periodicity as above will be discretized by a two-dimensional array. The meaning could be that it is a polynomial in s for each value of θ in a mesh, or that it is a polynomial in s whose coefficients are Fourier coefficients. Alternatively, we could think of f as a polynomial in s taking values in a space of periodic functions.

This mixed representation of Fourier series in one variable and power series in another variable is often called a Fourier-Taylor series and has been used in celestial mechanics for a long time, dating back to [BG69] or earlier. We note that modern computer languages allow one to overload the arithmetic operations among different types in a simple way.
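The overloading remark can be illustrated with a toy Fourier-Taylor class (Python; a minimal sketch of our own, storing for each power of s the values of the coefficient function on a mesh in θ; truncation to a fixed degree is omitted for brevity):

```python
import numpy as np

class FourierTaylor:
    """Polynomial in s whose coefficients are periodic functions of theta,
    each stored by its values on a uniform mesh (an illustrative toy class)."""
    def __init__(self, coeffs):
        self.coeffs = [np.asarray(c, dtype=float) for c in coeffs]

    def __add__(self, other):
        n = max(len(self.coeffs), len(other.coeffs))
        out = [np.zeros_like(self.coeffs[0]) for _ in range(n)]
        for i, c in enumerate(self.coeffs):
            out[i] = out[i] + c
        for i, c in enumerate(other.coeffs):
            out[i] = out[i] + c
        return FourierTaylor(out)

    def __mul__(self, other):
        # Cauchy product in s; pointwise product of mesh values in theta.
        n = len(self.coeffs) + len(other.coeffs) - 1
        out = [np.zeros_like(self.coeffs[0]) for _ in range(n)]
        for i, a in enumerate(self.coeffs):
            for j, b in enumerate(other.coeffs):
                out[i + j] = out[i + j] + a * b
        return FourierTaylor(out)
```

With + and * overloaded, expressions such as q = p * p + p can be written exactly as in the mathematics.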

It is important to note that all the operations in Algorithm 3.1 are fast either in the Fourier representation or in the representation by values on a mesh. For example, the product of two functions or the composition on the left with a known function are fast in the representation by values on a mesh. More importantly for us, as we will see, the solution of cohomology equations is fast in the Fourier representation. On the other hand, there are steps of Algorithm 3.1, such as addition, that are fast in both representations.

Similar considerations on the efficiency of the steps will apply to the algorithms needed to solve our problem. The main novelty of the algorithms in this paper compared with those of [HdlL13] is that we will need to compose some of the unknown functions (in [HdlL13] the unknowns are only composed on the left with a known function). The algorithms we use to deal with composition will be presented in section §5. The composition operator will be the most delicate numerical aspect, which was to be expected, since it was also the most delicate step in the analysis in [YGdlL20]. The composition operator is analytically subtle. A study giving examples that show the results are sharp is [dlLO99]. See also [AZ90].

Remark 3.3.

Fourier series are extremely efficient for smooth functions which do not have very pronounced spikes. For rather smooth functions – a situation that appears often in practice – Fourier-Taylor series seem to work better than other methods.

It should be noted, however, that in several models of interest in electronics and neuroscience, the solutions move slowly during a large fraction of the period, but there is a fast movement for a short time (bursting). In these situations, the Fourier scheme has the disadvantage that the coefficients decrease slowly and that the discretization method does not allow one to put more effort into describing the solutions during the times when they are indeed changing fast. Hence, the Fourier methods become impractical when the limit cycles are bursting. In such cases, one can use other methods of discretization. In this paper, we will not discuss alternative numerical methods, but note that the theoretical estimates of [YGdlL20] remain valid independently of the method of discretization. We hope to come back to implementing the toolkit of operations of this paper in other discretizations.

Remark 3.4.

One of the confusing practical aspects of the actual implementation is that the coefficients of the Fourier arrays are often stored in a complicated order, so as to optimize the operations and the memory access during the FFT.

For example, concerning the coefficients in (18), in fftw3 the plan fftw_plan_r2r_1d uses the following order of the Fourier coefficients in a real array.

where the index is taken into consideration if and only if is even. Another standard order in other packages is just in sequential order or if is odd.
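As an illustration of such reorderings, the following Python sketch (the function name is ours; consult the fftw3 documentation for the authoritative description) converts the coefficients produced by numpy's rfft into FFTW's "halfcomplex" layout r_0, r_1, ..., r_{n/2}, i_{(n+1)/2-1}, ..., i_1:

```python
import numpy as np

def to_halfcomplex(x):
    """Reorder the output of numpy's rfft into FFTW's r2r "halfcomplex"
    layout: r_0, r_1, ..., r_{n/2}, i_{(n+1)//2 - 1}, ..., i_1.
    (The imaginary part at k = n/2 is omitted when n is even, since it
    vanishes for real input.)"""
    n = len(x)
    c = np.fft.rfft(x)                 # unnormalized DFT, k = 0 .. n//2
    real = c.real                      # r_0 .. r_{n//2}
    imag = c.imag[1:(n + 1) // 2]      # i_1 .. i_{(n+1)//2 - 1}
    return np.concatenate([real, imag[::-1]])
```

Both numpy's rfft and FFTW's R2HC use the same unnormalized forward DFT convention, so only the storage order differs.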

To measure errors and sizes of functions represented by Fourier series, we have found it useful to work with weighted norms involving the Fourier coefficients.

where, again, the term for only appears if is even.

The smoothness of a function can be measured by the speed of decay of its Fourier coefficients, and indeed the above norms give useful regularity classes that have been studied by harmonic analysts.
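For concreteness, one common choice of weighted norm – an assumption on our part, since the exact weights used in the text are not reproduced here – is the sum of |f̂_k| e^{2πρ|k|} over all modes k, which is finite when f is analytic on a strip of width ρ. A Python sketch:

```python
import numpy as np

def weighted_norm(values, rho):
    """One common weighted norm (an assumed choice, not necessarily the one
    used in the text): ||f||_rho = sum_k |f_hat_k| * exp(2*pi*rho*|k|),
    finite when f extends analytically to a strip of width rho."""
    n = len(values)
    fhat = np.fft.fft(values) / n                 # Fourier coefficients
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))      # integer wave numbers |k|
    return float(np.sum(np.abs(fhat) * np.exp(2 * np.pi * rho * k)))
```

Increasing rho weights the high modes more heavily, so the norm grows with rho and diverges once rho exceeds the width of analyticity.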

Remark 3.5.

The relation of the above regularity classes with the most common ones is not straightforward, as is well known by harmonic analysts [Ste70].

The Riemann–Lebesgue Lemma tells us that if $f$ is continuous and periodic, its Fourier coefficients $\hat f_k$ tend to zero as $|k| \to \infty$, and, in general, if $f$ is $r$ times differentiable, then $|k|^r \hat f_k$ tends to zero. In particular, $|\hat f_k| \le C |k|^{-r}$ for some constant $C$.

In the other direction, from $|\hat f_k| \le C |k|^{-r}$ we cannot deduce that $f$ is $r$ times differentiable.

One has to use more complicated methods. In [dlLP02] it was found that a practical method can be based on the Littlewood–Paley theorem (see [Ste70]), which states that a function is in the $\alpha$-Hölder space with $0 < \alpha < 1$ if and only if, for each , there is a constant such that for all .

The above criterion is easy to implement if one has the Fourier coefficients, as is the case in our algorithms.
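A crude sketch of the idea (illustrative only; the precise criterion used in [dlLP02] is more careful) is to take suprema of the Fourier coefficients over dyadic blocks and read the Hölder exponent off their decay rate:

```python
import numpy as np

def dyadic_blocks(values):
    """Sup of |f_hat_k| over the dyadic blocks 2^j <= |k| < 2^{j+1}.

    Crude sketch of the Littlewood-Paley idea: for an alpha-Holder function
    the block sizes decay roughly like 2^{-j*alpha}, so the slope of
    log2(block) versus j estimates the regularity.  Illustrative only."""
    n = len(values)
    fhat = np.abs(np.fft.fft(values)) / n
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))
    blocks = []
    j = 0
    while 2 ** (j + 1) <= n // 2:
        mask = (k >= 2 ** j) & (k < 2 ** (j + 1))
        blocks.append(fhat[mask].max())
        j += 1
    return np.array(blocks)
```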

3.2. Solutions of the cohomology equations in Fourier representation

In the Fourier representation we can solve the cohomological equations in steps 5 and 6 of Algorithm 3.1.

Proposition 3.6 (Fourier version, [HdlL13]).

Let .

  • If , then has solution and

    for all real . Imposing , then .

  • If , then has solution and

    for all real . Imposing , then .

The paper [HdlL13] also presents a solution in terms of integrals. Those integral formulas for the solution are independent of the discretization and work for discretizations such as Fourier series, splines and collocation methods. Indeed, the integral formulas are very efficient for discretizations by splines or collocation methods. In this paper we will not use them, since we discretize functions in Fourier series and, for this discretization, the methods described in Proposition 3.6 are more efficient.
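For concreteness, the following Python sketch solves the model cohomology equations u′ = η (with zero average) and u′ − λu = η mode by mode; the model forms are an assumption on our part, since the equations in steps 5 and 6 of Algorithm 3.1 may carry additional frequency factors:

```python
import numpy as np

def solve_cohomology(eta, lam=0.0):
    """Solve u' - lam*u = eta for a periodic u, mode by mode in Fourier space.

    Model forms only (an assumption, not the paper's exact equations).
    For lam = 0 the average of eta must vanish and we fix u by imposing
    <u> = 0; for lam != 0 every Fourier mode divides by (i*k - lam),
    which never vanishes."""
    n = len(eta)
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wave numbers
    eta_hat = np.fft.fft(eta)
    denom = 1j * k - lam
    if lam == 0.0:
        assert abs(eta_hat[0]) / n < 1e-10, "average of eta must vanish"
        eta_hat[0] = 0.0                          # normalization <u> = 0
        denom[0] = 1.0                            # avoid division by zero
    return np.fft.ifft(eta_hat / denom).real
```

Note that the whole solve costs two FFTs plus a diagonal division, which is the "fast in the Fourier representation" claim made above.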

3.3. Treatment of step 2 in Algorithm 3.1

To solve the linear system in step 2 of Algorithm 3.1, we can use Lemma 3.7, whose proof is a direct power matching.

Lemma 3.7.

Consider the equation for given by Let where are given. More explicitly:

Then, the coefficients are obtained recursively by solving

which can be done provided that is invertible and that one knows how to multiply and add periodic functions of .

We also recall that composition on the left of a polynomial with an exponential, trigonometric functions, powers, logarithms (or any function that satisfies a simple differential equation) can be done very efficiently using algorithms that are reviewed in [HCF16] and go back to [Knu81].

We present here the case of the exponential, which will be used later in Algorithm 4.2.

If $P$ is a given polynomial – or a power series – with coefficients $P_k$, we see that $E = \exp(P)$ satisfies the differential equation $E' = P' E$.

Equating like powers on both sides leads to $E_0 = e^{P_0}$ and the recursion $E_k = \frac{1}{k} \sum_{j=1}^{k} j\, P_j\, E_{k-j}$ for $k \ge 1$.

Note that this can also be done if the coefficients of the polynomial are periodic functions of (or polynomials in other variables). In modern languages supporting overloading of arithmetic operators, all this can be done in an automatic manner.

Note that if the polynomial has degree $d$, the computation up to degree $N$ takes $O(Nd)$ multiplications of the coefficients.
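The recursion above can be sketched in a few lines of Python (our illustration; a production implementation would let the coefficients themselves be periodic functions, as noted above):

```python
import math

def exp_series(P, N):
    """Coefficients E_0, ..., E_N of E = exp(P) for a truncated power series P,
    via the recursion E_0 = exp(P_0), k*E_k = sum_{j=1}^{k} j*P_j*E_{k-j},
    which follows from E' = P'E."""
    E = [0.0] * (N + 1)
    E[0] = math.exp(P[0])
    for k in range(1, N + 1):
        s = 0.0
        for j in range(1, k + 1):
            Pj = P[j] if j < len(P) else 0.0      # P may have low degree
            s += j * Pj * E[k - j]
        E[k] = s / k
    return E
```

For example, exp_series([0.0, 1.0], N) reproduces the Taylor coefficients 1/k! of exp(x).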

4. Computation of – perturbed case

The main result in the paper [YGdlL20] states that if the perturbation in (7) is small enough, and a periodicity condition like (12) and a normalization like (13) are imposed, then there exists a unique tuple verifying (7), (12) and (13).

The formulation of that result in [YGdlL20] is done in an a posteriori format, which ensures the existence of a true solution once a sufficiently approximate solution is provided as an initial guess for the iterative scheme.

Moreover, it also gives the Lipschitz dependence of the solution on parameters, which allows one to consider a continuation approach.

We refer to [YGdlL20] for a precise formulation of the result involving choices of norms to measure the error in the approximate solutions.

4.1. Fixed point approach

We compute all the coefficients of the truncated expression in (9) order by order. The zeroth and first orders require special attention, due to the fact that the values and are obtained from equation (7) by matching coefficients of and , respectively. The condition that allows one to obtain comes from the periodicity condition (11). The mapping is not a periodic function, but we can use it to obtain a periodic one, defined by . The condition for is given by the normalization condition (13). As in the unperturbed case, we are allowed to use a scaling factor. The use of such a scaling factor allows us to set the value of in (13) equal to .

Algorithm 4.1 sketches the fixed-point procedure to obtain and , whose periodicity condition is ensured in step (5). In this case the initial guess will be (the value for ) for and for , since is close to the identity.

Algorithm 4.1 ( case).

Let .

  1. Input: , , , , and .

  2. Output: and .

  3. and .

  4. .

  5. Solve . Let .

  6. and .

  7. Solve imposing .

  8. Solve .

  9. Iterate (2) until convergence in and .

Algorithm 4.2 sketches the steps to compute and for . The initial guesses are for , for and for . In either case, it is required to solve a linear system of the form in Lemma 3.7, as well as a cohomological equation similar to those of the unperturbed case.

Algorithm 4.2 ( case and case with ).

Let .

  1. Input: , , , , , , , for , and .

  2. Output: either and or .

  3. .

  4. If , and .

  5. .

  6. .

  7. . Let .

  8. If , then .

  9. Solve .

  10. Solve .

  11. Iterate (2) until convergence. Then undo the scaling .

Both Algorithms 4.1 and 4.2 have non-trivial parts, such as the effective computation of , the numerical composition of with and also with (see §5), the effective computation of step 4 in Algorithm 4.2, the stopping criterion (see §4.1.1), and the choice of the scaling factor (see §4.1.2). On the other hand, there are steps where we can use the same methods as in the unperturbed case, such as the solution of linear systems like step 3 of Algorithm 4.2 via Lemma 3.7, or the solution of the cohomological equations via Proposition 3.6.

4.1.1. Stopping criterion

Algorithms 4.1 and 4.2 require stopping the iterations when prescribed tolerances have been reached. Alternatively, one can stop when the invariance equation is satisfied up to a given tolerance.

4.1.2. Scaling factor

As in the unperturbed case, if is a solution, then will be a solution too for any and . A difference with the case is that now and are required to be well-defined. That means the second components of and must lie in . Stronger conditions are

In the iterative scheme of Algorithm 4.2, these series become finite sums, and a condition on the value is given by the upper bound , where is the value such that and, similarly, is the value verifying . Notice that the solutions and exist because , and the polynomials are strictly positive for .

5. Numerical composition of periodic maps

The goal of this section is to discuss in detail how we can numerically compute the compositions and , having only a numerical representation (or approximation) of and , in Algorithms 4.1 and 4.2.

There are a variety of methods that can be employed to numerically obtain the composition of a periodic mapping with another (or the same) mapping. Some of these methods depend strongly on the representation of the periodic mapping, and others depend only on specific parts of the algorithm.

We start the discussion with the most general methods and proceed to those that depend strongly on the numerical representation. One expects that the more general methods will have higher computational complexity or will be less accurate.

Before starting to discuss the algorithms, it is important to stress again that for functions of two variables, there are two complementary ways of looking at them. We can think of them as functions that, given , produce a polynomial in – this polynomial-valued function will be periodic in – or we can think of them as polynomials in taking values in spaces of periodic functions (of the variable ). Of course, the periodic functions that appear in either interpretation can be discretized either by their values on a grid of points or by the Fourier transform.

Each of these – equivalent! – interpretations will be useful in some algorithms. In the second interpretation, we can “overload” algorithms for standard polynomials to work with polynomials whose coefficients are periodic functions (in particular, Horner schemes). In the first interpretation, we can easily parallelize algorithms for polynomials over each of the values of , using the grid discretization of periodic functions.
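For instance, a Horner evaluation overloads transparently when each coefficient is stored as its values on a mesh; with numpy arrays the loop below (an illustration of ours, with hypothetical names) is automatically parallel over the grid:

```python
import numpy as np

def horner_periodic(coeffs, s):
    """Evaluate P(theta, s) = sum_k c_k(theta) * s**k by Horner's scheme.

    coeffs: list of 1-D arrays, each c_k sampled on the same theta mesh.
    The arithmetic broadcasts over the mesh, so the evaluation runs in
    parallel over all grid points at once."""
    result = np.zeros_like(coeffs[0])
    for c in reversed(coeffs):
        result = result * s + c
    return result
```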

Possibly the hardest part of Algorithms 4.1 and 4.2 is the composition of with and of with . Due to step 4 of Algorithm 4.2, the composition should be done so that the output is still a polynomial in with coefficients that are periodic functions of .

In our implementation, we use the Automatic Differentiation (AD) approach [HCF16, GW08].

If is a function of two variables taking values in , then


which can be evaluated with polynomial products and polynomial sums using the Horner scheme, once we have computed .
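A minimal sketch of this evaluation (our own illustration, with scalar coefficients for simplicity), assuming the scaled derivatives have already been computed and that the polynomial argument has zero constant term so that the truncation is consistent:

```python
def poly_mul(a, b, N):
    """Product of two polynomials (coefficient lists), truncated at order N."""
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a[:N + 1]):
        for j, bj in enumerate(b[:N + 1 - i]):
            c[i + j] += ai * bj
    return c

def taylor_compose(derivs, u, N):
    """Coefficients of f(x0 + u(s)) up to order N by a Horner scheme.

    derivs[m] = f^(m)(x0)/m!  (scaled derivatives, assumed precomputed);
    u = coefficients of the polynomial u(s), with u[0] = 0 so that the
    truncation at order N is consistent."""
    out = [0.0] * (N + 1)
    out[0] = derivs[-1]
    for d in reversed(derivs[:-1]):
        out = poly_mul(out, u, N)                 # multiply by u(s) ...
        out[0] += d                               # ... and add next derivative
    return out
```

In the setting of this paper, the coefficients would be periodic functions rather than scalars, and poly_mul would use the mesh representation for the products.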

The problem of composing a periodic function with a periodic polynomial in – to produce a polynomial in taking values in the space of periodic functions – is what we consider now.

The most general method considers a periodic function, the one in (19), and a polynomial of a fixed order whose coefficients are periodic functions of , which we consider discretized by their values on a grid.

We want to compute the polynomial up to order . Assume that for

are given as input and that they have been previously computed at a bounded computational cost. The chain rule gives us a procedure to compute the coefficients of


Indeed, one can build a table, whose entries are polynomials in , like in Table 1 following the generation rule in Figure 1.

Figure 1. Generation rule for Table 1 entries

The inputs of Table 1 are for and . Then the entries with are given by


Thus, the coefficients of are for .