
# Magnus integrators for linear and quasilinear delay differential equations

A procedure to numerically integrate non-autonomous linear delay differential equations is presented. It is based on the use of a spectral discretization of the delayed part to transform the original problem into a matrix linear ordinary differential equation which is subsequently solved with numerical integrators obtained from the Magnus expansion. The algorithm can be used in the periodic case to get both accurate approximations of the characteristic multipliers and the solution itself. In addition, it can be extended to deal with certain quasilinear delay equations.


## 1 Introduction

We are primarily interested in the numerical integration of a system of linear delay differential equations (DDEs) with a single discrete time delay of the form

$$\frac{dx}{dt}=A(t)\,x(t)+B(t)\,x(t-\tau),\qquad t\ge 0, \tag{1}$$
$$x(t)=\phi(t),\qquad -\tau\le t\le 0,$$

where $A(t)$ and $B(t)$ are, in general, time-dependent $d\times d$ matrices and $\phi$ is the initial function. Two cases of special interest arise: when $A$ and $B$ are constant, and when they are periodic with period $T$. Our purpose is to present an algorithm that allows one to analyze the stability of the problem in the periodic case and also to get approximations to the solution of (1) for arbitrary times in the general (not necessarily periodic) time-dependent case. The formalism can be readily generalized to problems with several distinct delays $\tau_1,\dots,\tau_k$,

$$\frac{dx}{dt}=A(t)\,x(t)+\sum_{j=1}^{k}B_j(t)\,x(t-\tau_j),\qquad t\ge 0, \tag{2}$$
$$x(t)=\phi(t),\qquad -\tau\le t\le 0.$$

Moreover, the procedure is also extended to quasilinear delay equations of the form

$$\frac{dx}{dt}=A\big(x(t-\tau)\big)\,x(t),\qquad t\ge 0, \tag{3}$$
$$x(t)=\phi(t),\qquad -\tau\le t\le 0.$$

Equations (1) and (2) appear frequently in applications, either as a model of some physical problem or as a tool to analyze its stability when the time evolution of the unknown variable depends not only on the actual state but also on its past values (see e.g. [10, 20] and references therein). On the other hand, equation (3) has been used to describe SIR-type epidemic models taking into account the latent period, i.e., the time when an individual is infected but is not infective [14].

Given the relevance of problems (1)-(3), many numerical procedures have been designed over the years for obtaining approximate solutions (see [5] and references therein). Among them, the approximation technique consisting in first converting the DDE (1), when $A$ and $B$ are constant, into an abstract Cauchy problem [4, 22] is particularly appealing: essentially, it allows one to discretize the corresponding operator and then solve numerically the resulting system of ordinary differential equations (ODEs) by standard methods. This procedure has been used to analyze the stability of linear DDEs, both autonomous [10] and explicitly time-dependent [11], in combination with Chebyshev spectral collocation methods. It is called “continuous time approximation” in [11, 24] and, in particular, provides spectral accuracy in the determination of the characteristic roots of the system.

We pursue here the same strategy and combine it with the application of numerical integrators based on the Magnus expansion to carry out the time integration of the resulting non-autonomous system of ODEs. We show that this procedure provides more accurate approximations than those reported in [10] for the determination of the characteristic multipliers of (2) with the same number of discretization points. Even in the quasilinear case (3) it leads to higher-order approximations to the solution than previous schemes also based on the Magnus expansion [14], while still preserving its qualitative properties.

The plan of the paper is as follows. In section 2 we briefly summarize the continuous time approximation technique for dealing with linear DDEs, whereas the main features of the Magnus expansion and some numerical integrators based on it are reviewed in section 3. The time integration algorithm is illustrated on several numerical examples in section 4, and the technique is extended in section 5 to quasilinear problems, in particular to an epidemic model with delay. Finally, section 6 contains some concluding remarks.

## 2 Continuous time approximation

### 2.1 Linear DDEs as abstract Cauchy problems

Let us denote by $X=C([-\tau,0],\mathbb{R}^d)$ the state space of continuous functions mapping $[-\tau,0]$ into $\mathbb{R}^d$, which is a Banach space with the supremum norm [10]. If $x_t\in X$ is the state at time $t$, defined as [18]

$$x_t(\theta)\equiv x(t+\theta),\qquad \theta\in[-\tau,0], \tag{4}$$

then the linear equation (1) in the autonomous case can be written as

$$\dot{x}=L\,x_t,\qquad t\in\mathbb{R}, \tag{5}$$
$$x_0=\phi\in X,$$

where it is assumed that the initial time is $t_0=0$, the dot denotes the right-hand derivative and the operator $L$ acts on $x_t$ as

$$L\,x_t=A\,x(t)+B\,x(t-\tau). \tag{6}$$

In that case, for every $\phi\in X$, there exists a unique solution of (5) on $[-\tau,\infty)$, denoted by $x(t;\phi)$ [10]:

$$x(t;\phi)=\begin{cases}\phi(0)+\displaystyle\int_0^t L\,x_s\,ds, & t\ge 0,\\[4pt] \phi(t), & t\in[-\tau,0].\end{cases}$$

It is then possible to apply the theory of one-parameter strongly continuous semigroups (also called $C_0$-semigroups) in this setting. More specifically, it has been shown that the family of operators $\{T(t)\}_{t\ge 0}$ associating to the initial function $\phi$ the state at time $t$, i.e., $T(t)\phi=x_t(\cdot;\phi)$, defines a $C_0$-semigroup of linear and bounded operators on $X$ whose infinitesimal generator is the linear unbounded operator $\mathcal{A}$ given by [16]

$$\mathcal{D}(\mathcal{A})=\{x\in X:\ \dot{x}\in X \ \text{ and } \ \dot{x}(0)=A\,x(0)+B\,x(-\tau)\}, \qquad \mathcal{A}x=\dot{x},\quad x\in\mathcal{D}(\mathcal{A}). \tag{7}$$

In this way, eq. (5) can be restated as the linear abstract Cauchy problem

$$\dot{U}(t)=\mathcal{A}\,U(t),\quad t\ge 0, \qquad U(0)=\phi, \tag{8}$$

on the Banach space $X$, where $U(t)=x_t$ and the function $t\mapsto U(t)$ is the unique solution [5].

Typically, problem (5) (or (8)) cannot be solved analytically, so that one has to find a way to get approximations. One possibility consists in discretizing the operator $\mathcal{A}$. This can be done by introducing a mesh $\Omega_N=\{\theta_j,\ j=0,\dots,N\}$ on the interval $[-\tau,0]$, with $\theta_0=0$ and $\theta_N=-\tau$. Then, the finite dimensional space $X_N=\mathbb{R}^{d(N+1)}$ is taken as the discretization of $X$. An element $U_N\in X_N$ can be seen as a function defined on the mesh, with its $j$-th block $(U_N)_j\in\mathbb{R}^d$, $j=0,\dots,N$, being the value at $\theta_j$. As a simple example, let us consider the equispaced mesh $\theta_j=-jh$, with $h=\tau/N$. Then, the $d(N+1)$-dimensional linear system of ODEs

$$\dot{U}_N(t)=A_N\,U_N(t),\quad t\ge 0, \qquad U_N(0)=\phi_N, \tag{9}$$

with $\phi_N=\big(\phi(\theta_0),\dots,\phi(\theta_N)\big)^T$ and

$$A_N=\begin{pmatrix} A\ \ 0_d\ \cdots\ 0_d\ \ B\\[2pt] D_N\otimes I_d \end{pmatrix}$$

approximates the original system (8). Here $D_N$ is the matrix corresponding to the particular finite difference scheme one chooses for the discretization, $I_d$ is the $d\times d$ identity matrix, and $\otimes$ denotes the Kronecker (tensor) product [4, 5, 22]. For instance, if a first-order forward difference approximation is used, then

$$D_N=\frac{1}{h}\begin{pmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{pmatrix}.$$

A much more accurate description can be achieved by considering instead a pseudospectral differentiation method based on Chebyshev collocation points. In this approach one takes the Chebyshev points

$$t_j=\cos\frac{j\pi}{N},\qquad j=0,\dots,N,$$

on the interval $[-1,1]$ and the corresponding shifted points $\theta_j=\frac{\tau}{2}(t_j-1)$ as the mesh in $[-\tau,0]$. Then,

$$\phi_N=\big(\phi(\theta_0),\phi(\theta_1),\dots,\phi(\theta_N)\big)^T\in\mathbb{R}^{d(N+1)} \tag{10}$$

and the matrix $A_N$ is obtained as follows. First, one considers the standard spectral differentiation matrix $D$ for the Chebyshev collocation points. The entries of $D$ can be found, e.g., in [25, p. 53]. Then one forms the matrix $\frac{2}{\tau}\,D\otimes I_d$, and finally $A_N$ is obtained from it by replacing its first $d$ rows by zeros and then replacing the left upper corner by $A$ and the right upper corner by $B$. In other words,

$$A_N=\begin{pmatrix} A\ \ 0_d\ \cdots\ 0_d\ \ B\\[2pt] \frac{2}{\tau}\,[D\otimes I_d]_{(d+1:d(N+1))} \end{pmatrix}. \tag{11}$$

Here $[\,\cdot\,]_{(d+1:d(N+1))}$ denotes the submatrix obtained by taking the rows from $d+1$ to $d(N+1)$, whereas the factor $2/\tau$ accounts for the rescaling from the interval $[-1,1]$ to $[-\tau,0]$ [11]. In particular, if $d=1$ and $N=2$, one has

$$A_N=\begin{pmatrix} a & 0 & b\\[2pt] \tfrac{1}{\tau} & 0 & -\tfrac{1}{\tau}\\[2pt] -\tfrac{1}{\tau} & \tfrac{4}{\tau} & -\tfrac{3}{\tau} \end{pmatrix},$$

where now $a$ and $b$ are scalars. In this way, one ends up with the $d(N+1)$-dimensional initial value problem defined by (9), with coefficient matrix (11) and initial condition (10). By solving this system in the interval $[0,\tau]$, one constructs a vector $U_N(\tau)$ whose entries constitute approximations to the solution of the DDE (5) at times $\tau+\theta_j$, namely $U_N(\tau)\approx\big(x(\tau+\theta_0),\dots,x(\tau+\theta_N)\big)^T$. Thus, the first $d$ components of $U_N(\tau)$ provide the solution at $\tau+\theta_0=\tau$, the next $d$ components approximate the solution at $\tau+\theta_1$, etc.

If $N$ is sufficiently large, by applying this procedure one gets in fact spectral accuracy. This is observed, in particular, when computing the eigenvalues of the approximate matrix $A_N$: as shown in [10], the eigenvalues of $A_N$ converge to the eigenvalues of the operator $\mathcal{A}$ faster than $O(N^{-k})$ for any $k$. Of course, the number of elements in the spectrum of $\mathcal{A}$ that are approximated by elements of the spectrum of $A_N$ increases with the value of $N$, and the eigenvalues which are closest to the origin are better approximated. Of course, the solution itself can be obtained by evaluating $U_N(t)=\mathrm{e}^{t A_N}\phi_N$.
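As an illustration of this construction, the following minimal sketch builds the Chebyshev differentiation matrix and assembles $A_N$ as in (11). It assumes NumPy/SciPy are available; the autonomous test DDE $\dot{x}(t)=\mathrm{e}\,x(t-1)$ with $\phi(t)=\mathrm{e}^{t}$ (so that $x(t)=\mathrm{e}^{t}$ is smooth) is our own illustrative choice, not an example from the paper:

```python
import numpy as np
from scipy.linalg import expm

def cheb(N):
    """Chebyshev differentiation matrix D and nodes t_j = cos(j*pi/N), cf. [25, p. 53]."""
    j = np.arange(N + 1)
    t = np.cos(np.pi * j / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** j
    D = np.outer(c, 1.0 / c) / (t[:, None] - t[None, :] + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), t   # "negative sum" trick for the diagonal

def build_AN(A, B, tau, N):
    """Assemble the matrix A_N of eq. (11) from the d x d blocks A and B."""
    d = A.shape[0]
    D, _ = cheb(N)
    AN = (2.0 / tau) * np.kron(D, np.eye(d))
    AN[:d, :] = 0.0        # zero out the first d rows ...
    AN[:d, :d] = A         # ... place A in the left upper corner
    AN[:d, -d:] = B        # ... and B in the right upper corner
    return AN

# Autonomous test DDE: x'(t) = e * x(t-1), phi(t) = exp(t)  =>  x(t) = exp(t).
tau, N = 1.0, 20
_, t = cheb(N)
theta = 0.5 * tau * (t - 1.0)          # shifted Chebyshev nodes in [-tau, 0]
AN = build_AN(np.array([[0.0]]), np.array([[np.e]]), tau, N)
phiN = np.exp(theta)
U = expm(tau * AN) @ phiN              # exact Magnus solution in the autonomous case
```

Here `U[0]` approximates $x(\tau)=\mathrm{e}$ and `U[-1]` approximates $x(0)=1$; for this analytic solution the error is dominated by the spectral discretization and is very small already for moderate $N$.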

If one is interested in obtaining approximate solutions of the DDE (5) for $t>\tau$, the procedure consists in taking successive intervals $[\tau,2\tau]$, $[2\tau,3\tau]$, etc. and proceeding as in the method of steps, taking as initial condition the approximation obtained in the previous interval. Thus, to get approximations in the $n$-th interval $[(n-1)\tau,n\tau]$, $n\ge 2$, we have to solve (9) with the initial condition obtained by integrating on the previous, $(n-1)$-th, interval. If, on the other hand, we want to get approximations at a particular time within an interval, an interpolation can be carried out based on the values obtained at the Chebyshev points in that interval.

### 2.2 Linear periodic DDEs

Whereas in the autonomous case the infinitesimal generator approach allows one to get accurate approximations to the eigenvalues and the solution of (1), and even to obtain rigorous convergence estimates, the situation is far more complicated when the matrices $A$ and $B$ in (1) depend explicitly on time. One could analogously try to describe the time evolution of the linear DDE through a Cauchy problem defined for an abstract ODE, but in that case one has to introduce a 2-parameter family of operators, called an evolution family, as solutions of abstract Cauchy problems of the form [13, Chapter 3]

$$\dot{y}(t)=\mathcal{A}(t)\,y(t),\qquad y(s)=z\in\mathcal{D}(\mathcal{A}(s)),\quad t\ge s, \tag{12}$$

where the domain $\mathcal{D}(\mathcal{A}(t))$ of the operator $\mathcal{A}(t)$ is assumed to be dense in $X$. Specifically, a family $\{U(t,s)\}_{t\ge s}$ of linear and bounded operators on $X$ is called an evolution family if

• $U(t,t)=I$ and $U(t,s)=U(t,r)\,U(r,s)$ for all $t\ge r\ge s$; and

• for each $z\in X$, the function $(t,s)\mapsto U(t,s)z$ is continuous for $t\ge s$.

The abstract Cauchy problem (12) is called well-posed if there exists an evolution family that solves (12), i.e., if for each $s$ there exists a dense subset $Y_s\subset\mathcal{D}(\mathcal{A}(s))$ such that, for each $z\in Y_s$, the function $t\mapsto U(t,s)z$, for $t\ge s$, is differentiable and (12) holds [13, 16].

It has been shown that the non-autonomous Cauchy problem (12) is well-posed if and only if there exists a unique evolution family solving (12) [16, 23]. If the well-posedness of the problem has been established (which in some cases is far from trivial [13, 16, 15]), then the unique solution has the form

$$y(t)=U(t,s)\,z \qquad \text{for } t\ge s.$$

In the particular case of a periodic problem with period $T$,

$$\dot{x}=L(t)\,x_t,\quad t\ge s, \qquad x_s=\phi, \tag{13}$$

the evolution operator verifies in addition that $U(t+T,s+T)=U(t,s)$ for all $t\ge s$, and it is the so-called monodromy operator $V\equiv U(T,0)$ that is the central object of study to determine the stability of the system. It can be shown that the spectrum of $V$ is an at most countable compact set of $\mathbb{C}$ with zero as the only possible accumulation point. Moreover, any nonzero element of the spectrum belongs to the point spectrum, and is called a characteristic multiplier of eq. (1) [18]. In particular, the zero solution of (13) is uniformly asymptotically stable if and only if all the characteristic multipliers $\mu$ satisfy $|\mu|<1$ [10, 18]. In any event, the solution (corresponding to $s=0$ in (1)) has to be approximated at certain values of $t$.

Whereas in [10] a pseudospectral collocation method is applied directly to discretize the monodromy operator and to compute approximations to the characteristic multipliers of (1), in this paper we proceed formally as in the case of autonomous problems. In other words, we take $N+1$ Chebyshev nodes in the interval $[-\tau,0]$, replace $X$ by the finite dimensional space $X_N=\mathbb{R}^{d(N+1)}$ and the abstract Cauchy problem by

$$\dot{U}_N(t)=A_N(t)\,U_N(t),\quad t\ge 0, \qquad U_N(0)=\phi_N, \tag{14}$$

where now

$$A_N(t)=\begin{pmatrix} A(t)\ \ 0_d\ \cdots\ 0_d\ \ B(t)\\[2pt] \frac{2}{\tau}\,[D\otimes I_d]_{(d+1:d(N+1))} \end{pmatrix} \tag{15}$$

is a periodic matrix of period $T$ and the initial vector $\phi_N$ is the discretization of the function $\phi$ at the shifted Chebyshev points,

$$\phi_N=\big(\phi(\theta_0),\phi(\theta_1),\dots,\phi(\theta_N)\big)^T\in\mathbb{R}^{d(N+1)}. \tag{16}$$

This problem is then numerically integrated to get approximations to the solution over the interval $[0,T]$, specifically at the times determined by the Chebyshev points; depending on the value of $N$, we get approximations at more or fewer points. Once the approximation at $t=T$ is obtained, the discretized monodromy operator is available, so that one readily determines the first characteristic multipliers of eq. (1) [11].

## 3 Numerical integration of linear DDEs

### 3.1 Numerical integrators based on the Magnus expansion

One is then confronted with the numerical time integration of the non-autonomous initial value problem (14)-(16) defined in the finite dimensional space $X_N=\mathbb{R}^{d(N+1)}$. This can of course be done by applying several types of numerical integrators, such as Runge–Kutta or multistep methods. Among them, numerical methods based on the Magnus expansion are particularly appropriate for linear systems, resulting in very efficient schemes that, in addition, preserve important qualitative properties of the continuous system [21]. It makes sense, then, to try to combine the spectral accuracy provided by the Chebyshev collocation points with Magnus integrators to get accurate approximations.

Magnus’ approach to solve the general linear differential equation

$$\dot{y}(t)=\hat{A}(t)\,y(t),\qquad y(0)=y_0, \tag{17}$$

where $\hat{A}(t)$ is a square matrix and $y_0$ is the initial condition, consists in expressing the solution as an exponential, $y(t)=\exp(\Omega(t))\,y_0$, and determining the equation satisfied by $\Omega(t)$. Specifically, it can be shown that

$$\frac{d\Omega}{dt}=\sum_{k=0}^{\infty}\frac{B_k}{k!}\,\mathrm{ad}_{\Omega}^{k}\big(\hat{A}(t)\big),\qquad \Omega(0)=0, \tag{18}$$

where $B_k$ are the Bernoulli numbers and $\mathrm{ad}_{\Omega}^{0}\hat{A}=\hat{A}$, $\mathrm{ad}_{\Omega}^{k+1}\hat{A}=[\Omega,\mathrm{ad}_{\Omega}^{k}\hat{A}]$, with $[\Omega,C]\equiv\Omega C-C\Omega$. Equation (18) is then solved by applying Picard fixed point iteration after integration. This results in an infinite series for $\Omega$,

$$\Omega(t)=\sum_{k=1}^{\infty}\Omega_k(t),\qquad \text{with } \Omega_k(0)=0, \tag{19}$$

whose terms are increasingly complex expressions involving time-ordered integrals of nested commutators of $\hat{A}$ evaluated at different times [6, 1]. In particular,

$$\Omega_1(t)=\int_0^t \hat{A}(t_1)\,dt_1,\qquad \Omega_2(t)=-\frac{1}{2}\int_0^t dt_1\int_0^{t_1} dt_2\,\big[\hat{A}(t_2),\hat{A}(t_1)\big]. \tag{20}$$

The expansion is guaranteed to converge at least for $t<t_c$ such that

$$\int_0^{t_c}\big\|\hat{A}(s)\big\|_2\,ds<\pi.$$

By appropriately truncating the series and approximating the integrals by suitable quadratures, it is then possible to design numerical integration schemes for approximating the solution of eq. (17) in a given interval $[0,t_f]$ [6, 21]. As usual, the integration interval is divided into $M$ steps such that (19) converges in each subinterval $[t_k,t_{k+1}]$, $k=0,1,\dots,M-1$, with $t_0=0$, $t_M=t_f$ and length $h=t_f/M$. Then, an approximation to the solution of (17) of order $2p$ is achieved by computing

$$y(t_{k+1})\approx y_{k+1}=\exp\big(\Omega^{[2p]}(h)\big)\,y_k,\qquad k=0,1,\dots,M-1, \tag{21}$$

where $\Omega^{[2p]}(h)$ denotes a conveniently truncated and discretized version of the series (19). Since the construction procedure is detailed in references [7, 6], here we only collect specific integration schemes of order 2, 4 and 6.

#### Order 2.

A 2nd-order scheme is attained by approximating $\Omega(h)\approx\Omega_1(h)$ with the midpoint quadrature rule. It reads

$$y_{k+1}=\exp\big(h\,\hat{A}(t_k+h/2)\big)\,y_k,\qquad k=0,\dots,M-1, \tag{22}$$

and is also known as the exponential midpoint rule.
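In code, one step of (22) is a single matrix exponential. A minimal sketch (Python, assuming NumPy/SciPy; the scalar test problem $\dot{y}=\cos(t)\,y$, with exact solution $\mathrm{e}^{\sin t}$, is a hypothetical check of our own, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

def magnus2(Ahat, y0, t0, tf, M):
    """Exponential midpoint rule (22): y_{k+1} = exp(h*Ahat(t_k + h/2)) y_k."""
    h = (tf - t0) / M
    y = np.array(y0, dtype=float)
    for k in range(M):
        y = expm(h * Ahat(t0 + k * h + h / 2)) @ y
    return y

# Scalar test: y' = cos(t) y, y(0) = 1  =>  y(t) = exp(sin t).
y = magnus2(lambda t: np.array([[np.cos(t)]]), [1.0], 0.0, 1.0, 100)
err = abs(y[0] - np.exp(np.sin(1.0)))
```

For this smooth scalar problem the quadrature error of the midpoint rule is the only error source, so `err` decays as $O(h^2)$.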

#### Order 4.

If we take the two-point Gauss–Legendre quadrature rule and compute

$$A_1=\hat{A}\Big(t_k+\Big(\tfrac{1}{2}-\tfrac{\sqrt{3}}{6}\Big)h\Big),\qquad A_2=\hat{A}\Big(t_k+\Big(\tfrac{1}{2}+\tfrac{\sqrt{3}}{6}\Big)h\Big),$$

then the scheme

$$\Omega^{[4]}(h)=\frac{h}{2}(A_1+A_2)-\frac{h^2\sqrt{3}}{12}\,[A_1,A_2],\qquad y_{k+1}=\exp\big(\Omega^{[4]}(h)\big)\,y_k \tag{23}$$

renders an integration method of order 4 for equation (17).
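One step of (23) can be sketched as follows (Python, NumPy/SciPy assumed). The rotation-type test $\hat{A}(t)=t\,J$ with $J$ skew-symmetric is our own choice: there the commutator vanishes and the two-point Gauss quadrature integrates $t$ exactly, so the scheme reproduces the exact solution up to roundoff:

```python
import numpy as np
from scipy.linalg import expm

def magnus4_step(Ahat, t, y, h):
    """One step of the 4th-order Magnus scheme (23)."""
    c = np.sqrt(3.0) / 6.0
    A1 = Ahat(t + (0.5 - c) * h)
    A2 = Ahat(t + (0.5 + c) * h)
    Om = 0.5 * h * (A1 + A2) - (h**2 * np.sqrt(3.0) / 12.0) * (A1 @ A2 - A2 @ A1)
    return expm(Om) @ y

# Test: y' = t*J*y, J = [[0,1],[-1,0]]  =>  y(1) = exp(J/2) y(0), a rotation by 1/2.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
y, t, h = np.array([1.0, 0.0]), 0.0, 1.0 / 20
for k in range(20):
    y = magnus4_step(lambda s: s * J, t, y, h)
    t += h
err = np.linalg.norm(y - np.array([np.cos(0.5), -np.sin(0.5)]))
```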

#### Order 6.

By evaluating the matrix $\hat{A}$ at the Gauss–Legendre collocation points

$$A_1=\hat{A}(t_k+c_1h),\qquad A_2=\hat{A}(t_k+c_2h),\qquad A_3=\hat{A}(t_k+c_3h),$$

with $c_1=\tfrac{1}{2}-\tfrac{\sqrt{15}}{10}$, $c_2=\tfrac{1}{2}$, $c_3=\tfrac{1}{2}+\tfrac{\sqrt{15}}{10}$, we form the quantities

$$\alpha_1=hA_2,\qquad \alpha_2=\frac{\sqrt{15}\,h}{3}(A_3-A_1),\qquad \alpha_3=\frac{10\,h}{3}(A_3-2A_2+A_1),$$

and finally

$$\begin{aligned} C_1&=[\alpha_1,\alpha_2],\\ C_2&=-\tfrac{1}{60}\,[\alpha_1,\,2\alpha_3+C_1],\\ \Omega^{[6]}(h)&=\alpha_1+\tfrac{1}{12}\alpha_3+\tfrac{1}{240}\,\big[-20\alpha_1-\alpha_3+C_1,\ \alpha_2+C_2\big],\\ y_{k+1}&=\exp\big(\Omega^{[6]}(h)\big)\,y_k. \end{aligned} \tag{24}$$

We thus end up with an integrator of order 6 requiring only the evaluation of $\hat{A}$ at three different times and the computation of 3 commutators per step.

These schemes have been shown to be more efficient than standard integrators such as Runge–Kutta methods, even though the evaluation of a matrix exponential is required at each step [7]. In this respect, we note that several algorithms have recently been proposed to reduce the computational cost of the matrix exponential [2], especially in the context of exponential integrators [3]. These will be incorporated into our procedure.

### 3.2 Integration algorithm

We now have all the tools required to formulate a practical algorithm for the numerical integration of the linear DDE (1) in the interval $[0,\tau]$, for generic (sufficiently smooth) matrices $A(t)$, $B(t)$ and an initial function $\phi(t)$. It can be summarized as follows:

1. Choose an integer $N$ and form the spectral differentiation matrix $D$ associated with the $N+1$ Chebyshev collocation points on the interval $[-1,1]$.

2. Construct the matrix $A_N(t)$ of eq. (15).

3. Form the $d(N+1)$-dimensional vector $\phi_N$, whose elements are the values of the initial function $\phi$ at the shifted Chebyshev collocation points on the interval $[-\tau,0]$.

4. Choose an integer $M$ and the step size $h=\tau/M$, and construct the numerical solution $\widetilde{U}_N(Mh)$ of the initial value problem (14)-(16) at the final time $\tau=Mh$ by applying one of the numerical integrators of section 3.1 obtained from the Magnus expansion.

5. The vector $\widetilde{U}_N(Mh)$ provides an approximation to $U_N(\tau)$ of order of convergence $2p$, i.e.,

$$\widetilde{U}_N(Mh)=U_N(\tau)+O(h^{2p+1}),\qquad p=1,2,3,$$

and thus also approximates the solution of eq. (1) at times $\tau+\theta_j$, $j=0,\dots,N$.

As pointed out before, if the solution has to be computed at times $t>\tau$, the same process can be applied in any interval $[(n-1)\tau,n\tau]$, $n=2,3,\dots$, taking as initial condition the approximation at the end of the previous interval. The matrix $A_N(t)$ has to be formed again, of course, but the only difference with respect to the previous interval lies in the time-dependent part (i.e., in $A(t)$ and $B(t)$).

The procedure thus has three main ingredients: (a) the original DDE is reformulated as an abstract Cauchy problem in the Banach space $X$; (b) this problem is then discretized by a pseudospectral method, leading to an initial value problem defined in the finite-dimensional space $X_N$; and (c) finally, a Magnus integrator is applied to solve numerically the resulting finite-dimensional non-autonomous ordinary differential equation. In this sense, the algorithm combines the advantages of both spectral and Magnus methods.

Notice, in particular, that the procedure depends critically on the parameter $N$ (related to the discretization of the delayed part), on $M$ (leading to the step size $h=\tau/M$ in the time integration), and finally also on the order $2p$ of accuracy of the Magnus integrator. These parameters can be chosen according to the required accuracy, and are clearly related. The situation closely resembles what happens when a Magnus integrator is used to integrate in time a partial differential equation previously discretized in space: one has to adjust the time step to provide an accuracy consistent with the scheme used for the space discretization.

In this respect it is worth noticing that the Magnus expansion provides the exact solution of the initial value problem (14)-(16) in the autonomous case (i.e., when the matrices $A$ and $B$ in (1) are constant). This feature can be used to choose the number of collocation points leading to the required accuracy in the general case simply by freezing $A(t)$ and $B(t)$ at one particular time. Once a particular $N$ has been selected, the number of subdivisions $M$ and the order of the Magnus integrator can be fixed according to some specified tolerance. The order and the step size can even be changed from one interval to the next [8].

If the number $N$ of nodes of the spectral discretization is sufficiently large, the algorithm has the usual properties exhibited by standard Magnus integrators applied to non-autonomous linear differential equations concerning stability, convergence and error propagation [6, 7, 21].

In the periodic case, one can use this procedure to compute the fundamental matrix of system (14) at time $t=T$ and its eigenvalues, which provide approximations to the first characteristic multipliers of the DDE (1). As in the autonomous case, the eigenvalues of largest absolute value are better approximated.

It is worth remarking that, whereas the previous algorithm only deals with one delay, it can be easily generalized to the case of multiple discrete delays, i.e., to solve numerically equation (2), by applying any of the alternatives proposed in [11]: either by fitting a single Chebyshev polynomial through the entire delay interval, or by considering a different Chebyshev polynomial of appropriate degree in each subinterval between consecutive delays.

## 4 Numerical examples

Next we illustrate the previous algorithm in practice on several examples, both for computing the first characteristic multipliers of the problem and also for getting numerical approximations to the exact solution. For convenience, we denote the previous Magnus integrators of order 2, 4 and 6 as M2, M4 and M6, respectively.

#### Example 1: a scalar periodic equation.

As a first illustration we take the admittedly simple non-autonomous one-dimensional equation with $2\pi$-periodic coefficients and delay $\tau=\pi/2$,

$$\dot{x}(t)=\cos(t)\,x(t)-\mathrm{e}^{\sin t+\cos t}\,x\big(t-\tfrac{\pi}{2}\big). \tag{25}$$

It is taken from [10] and arises when linearizing the nonlinear autonomous DDE

$$\dot{z}(t)=-\log\big(z\big(t-\tfrac{\pi}{2}\big)\big)\,z(t) \tag{26}$$

around its periodic solution $z^*(t)=\mathrm{e}^{\sin t}$. It is a simple exercise to check that $x(t)=\cos(t)\,\mathrm{e}^{\sin t}$ is indeed a solution of (25), so that $\mu_1=1$ is a characteristic multiplier [10]. We can therefore check the accuracy of the algorithm by computing both the spectrum of the monodromy matrix and the numerical approximation to the exact solution by integrating eq. (25) with initial function $\phi(t)=\cos(t)\,\mathrm{e}^{\sin t}$, $-\pi/2\le t\le 0$.

The first test is devoted to checking the accuracy in the determination of the characteristic multiplier $\mu_1=1$. This is done by computing the monodromy matrix through the numerical integration of the matrix system

$$\dot{Y}=A_N(t)\,Y,\qquad Y(0)=I$$

associated with (14)-(15) until the final time $t=T=2\pi$ with the previous Magnus integrators, and then by obtaining its eigenvalues $\mu$. We then compute the difference between the first eigenvalue and $\mu_1=1$ as a function of $M$ (the number of subdivisions in each interval) for different values of $N$ (the number of collocation points). In this way, we obtain the results collected in Figure 1, with $N=10$ (left diagram) and $N=20$ (right panel). The order of approximation of M2 and M4 is clearly visible in the figure, whereas the higher accuracy of M6 is only visible for sufficiently large $M$ and $N$.

Notice that the main limiting factor for accuracy in the left panel is the small number of collocation points: doubling the value of $N$ from 10 to 20 allows us to decrease the error by more than 5 orders of magnitude. In this respect, it is worth mentioning that with the procedure proposed in [10] a considerably larger number of collocation points is required to render errors similar to those achieved here with $N=20$.

In our second experiment we check the accuracy in solving problem (25) for long times by computing approximations to the exact solution. Specifically, we integrate until a much larger final time $t_f$ with M2, M4 and M6 for different values of $M$ and compute the mean error of the solution in the last interval $[t_f-\tau,\,t_f]$,

$$E=\frac{1}{N+1}\sum_{j=0}^{N}\big|x(t_j)-(U_N)_j\big|. \tag{27}$$

Here $t_j$ are the collocation times in the last interval, $x(t_j)$ is the exact solution and $(U_N)_j$ are the approximations obtained with each integrator. The results are displayed in Figure 2 for $N=10$ (left) and $N=20$ (right) collocation points, with the same notation as in Figure 1. Here again, the higher order of M6 is clearly visible, and the quality of the approximation does not degrade even when long time integrations are considered. Notice the close similarity between the results exhibited in Figures 1 and 2.
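For reproducibility, the whole pipeline of section 3.2 applied to eq. (25) over one delay interval can be sketched as follows (Python, NumPy/SciPy assumed; the values of $N$ and $M$ and the tolerances are our own illustrative choices, not those of the figures). The comparison uses the exact solution $x(t)=\cos(t)\,\mathrm{e}^{\sin t}$:

```python
import numpy as np
from scipy.linalg import expm

def cheb(N):
    """Chebyshev differentiation matrix and nodes cos(j*pi/N), cf. [25, p. 53]."""
    j = np.arange(N + 1)
    t = np.cos(np.pi * j / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** j
    D = np.outer(c, 1.0 / c) / (t[:, None] - t[None, :] + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), t

tau, N = np.pi / 2, 16
D, tch = cheb(N)
theta = 0.5 * tau * (tch - 1.0)              # shifted Chebyshev nodes in [-tau, 0]
low = (2.0 / tau) * D[1:, :]                 # lower block of A_N (here d = 1)

def AN(t):
    """A_N(t) of eq. (15) for the scalar equation (25)."""
    top = np.zeros(N + 1)
    top[0] = np.cos(t)                               # A(t)
    top[-1] = -np.exp(np.sin(t) + np.cos(t))         # B(t)
    return np.vstack([top, low])

xex = lambda t: np.cos(t) * np.exp(np.sin(t))        # exact solution of (25)

def solve(M):
    """Integrate (14) over [0, tau] with the 4th-order Magnus scheme (23)."""
    h, c = tau / M, np.sqrt(3.0) / 6.0
    U = xex(theta)                                   # phi_N: exact history
    for k in range(M):
        t = k * h
        A1, A2 = AN(t + (0.5 - c) * h), AN(t + (0.5 + c) * h)
        Om = 0.5 * h * (A1 + A2) - (h**2 * np.sqrt(3.0) / 12.0) * (A1 @ A2 - A2 @ A1)
        U = expm(Om) @ U
    return np.max(np.abs(U - xex(tau + theta)))      # error at t = tau

err16, err64 = solve(16), solve(64)
```

Refining the time step reduces the error until the spectral discretization error of the delayed part is reached.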

#### Example 2: delayed Mathieu equation.

The DDE we consider next is

$$\ddot{x}(t)+(\delta+\varepsilon\cos t)\,x(t)=b\,x(t-\tau), \tag{28}$$

where $\delta$, $\varepsilon$ and $b$ are real parameters. The case in which the time delay $\tau$ is equal to the principal period $T=2\pi$ has been thoroughly studied in the literature, especially with respect to its stability [20], and so we also fix $\tau=2\pi$ here. Equation (28) includes both the effect of the time delay and the presence of parametric forcing, and appears in relevant mechanical engineering problems (see [20] and references therein).

Equation (28) is transformed into a system of the form (1) with $x=(x,\dot{x})^T$ and matrices

$$A(t)=\begin{pmatrix} 0 & 1\\ -(\delta+\varepsilon\cos t) & 0 \end{pmatrix},\qquad B=\begin{pmatrix} 0 & 0\\ b & 0 \end{pmatrix},$$

to which the previous algorithm can be readily applied. In our first test we fix particular values of the parameters $\delta$, $\varepsilon$ and $b$. By using the Floquet technique proposed in [19], it can be seen that one of the characteristic multipliers is then given by

$$\mu_{\mathrm{ex}}=0.22751840350292177638239482513+1.417175174215530683457881875737\,i$$

with 30 digits of accuracy. This is taken as the reference value for comparison with our procedure. As in the previous example, we compute the corresponding monodromy matrix with the Magnus integrators M2, M4 and M6 for different values of the discretization parameters $N$ and $M$, and then determine its first eigenvalue and the corresponding error for fixed $N$ and several values of $M$. The results are depicted in Figure 3 (in a log-log scale) for two values of $N$ (left and right panels) as a function of $M$. Again, the order of each scheme is clearly visible, with M6 providing the most accurate approximations for all the values of $M$ considered. Here also the number of collocation points determines the maximal accuracy one can achieve. For comparison, the numerical technique proposed in [10] (based on the direct discretization of the evolution operator) requires a considerably larger number of collocation points to reach round-off accuracy. Notice that, even with very large time steps, there are no stability issues in the results.

To check how the errors in the numerical solution evolve with time, for our next experiment we fix an initial function and integrate the initial value problem (14)-(16) until a final time $t_f$ with fixed values of both discretization parameters $N$ and $M$. As a measure of the error we compute the quantity

$$E_i=\frac{1}{N+1}\sum_{j=0}^{N-1}\big|x(t_j)-(U_N)_{2j+1}\big|,\qquad \text{with } t_j\in[i\tau,(i+1)\tau],$$

in each subinterval, where the exact solution is taken as the output of the function NDSolve of Mathematica with a very stringent tolerance. Notice that, since $d=2$, the odd components of the vector $U_N$ provide the approximations to the exact solution $x$ of (28) at the collocation points, whereas the even components approximate $\dot{x}$.

We observe that initially the errors achieved by M4 and M6 are similar to those committed by M2. This feature can be explained by the fact that the initial function does not satisfy the differential equation. After several subintervals, however, the accuracy of the higher-order schemes actually improves, and remains so for the whole integration interval considered.

As a final test, we illustrate how our numerical algorithm can also be used to determine with high accuracy the stability of a system modeled by equation (28), and in particular the stability boundaries in parameter space, such as those presented in the stability chart of [20, Fig. 2.10]. To this end we fix the values of $\delta$ and $\varepsilon$. It can then be shown that there exists a value of $b$ for which the dominant multiplier lies exactly on the stability boundary. The spectral collocation method of [10] with a moderate value of $N$ provides an estimate of this multiplier which is not correct up to machine accuracy, as can be seen either by increasing $N$ with the same algorithm or by applying the technique of [19]. For this value of $b$, the algorithm proposed here with the 6th-order Magnus integrator provides the characteristic multiplier with a considerably smaller error than the procedure of [10] with the same number of collocation points $N$.

## 5 Extension to quasilinear problems

The algorithm we have presented in the previous section is mainly addressed to linear DDEs, both autonomous and explicitly time-dependent. It turns out, however, that the same procedure can also be formally extended to the quasilinear delay equation (3), which we write again here for convenience:

$$\frac{dx}{dt}=A\big(x(t-\tau)\big)\,x(t),\qquad t\ge 0, \tag{29}$$
$$x(t)=\phi(t),\qquad -\tau\le t\le 0.$$

This problem can be formally stated as

$$\frac{d}{dt}\tilde{u}(t)=Q(t)\,\tilde{u}(t),\quad t\ge 0,\qquad \tilde{u}(0)=\phi(0),$$

in terms of the function

$$\tilde{u}(t)\equiv\begin{cases}\phi(t), & t\in[-\tau,0],\\ x(t), & t\in[0,\infty),\end{cases}$$

and the operator $Q(t)\equiv A\big(\tilde{u}(t-\tau)\big)$ for $t\ge 0$ [15]. By judiciously approximating $Q(t)$ and the integrals involved in the Magnus expansion, it is possible to devise a 2nd-order integrator with good preservation properties [14]. The analysis has been generalized in [15] to more general classes of problems where $Q$ depends in a more involved way on the history of the solution. Achieving higher orders of convergence with this procedure, however, is problematic, and in fact is left as an open problem in [15].

In the following, we show how the same procedure we have applied in the linear case can also be generalized to this setting, leading to higher-order numerical integrators. As before, we discretize the initial function at the $N+1$ Chebyshev collocation points in the interval $[-\tau,0]$ and replace the abstract Cauchy problem by the $d(N+1)$-dimensional autonomous system

$$\dot{U}_N(t)=A_N(U_N)\,U_N(t),\quad t\ge 0, \qquad U_N(0)=\phi_N, \tag{30}$$

where

$$A_N(U_N)=\begin{pmatrix} A\big((U_N)_{N+1}\big)\ \ 0_d\ \cdots\ 0_d\ \ 0_d\\[2pt] \frac{2}{\tau}\,[D\otimes I_d]_{(d+1:d(N+1))} \end{pmatrix}. \tag{31}$$

In other words, we replace $A(t)$ in (15) by the new $A\big((U_N)_{N+1}\big)$, and $B(t)$ by the zero matrix. Observe that, since the matrix $A$ in (29) only depends on $x(t-\tau)$, the dependence on $U_N$ of the matrix $A_N$ in the discretized system (30) comes only through the last $d$ components of the vector $U_N$, corresponding to the $N$-th collocation point $\theta_N=-\tau$. This is the meaning of the notation $(U_N)_{N+1}$ in (31).

Next, the nonlinear initial value problem (30)-(31) is solved with exponential integrators based on a generalized Magnus expansion. In fact, it is shown in [12] how methods up to order four in this class can be obtained for nonlinear equations of the form

$$\dot{y}=\hat{A}(t,y)\,y,\qquad y(0)=y_0. \tag{32}$$

As in the linear case, the starting point is to represent the solution in the form $y(t)=\exp(\Omega(t))\,y_0$. Then, $\Omega(t)$ can be obtained by Picard's iteration as

$$\Omega^{[0]}(t)=0,\qquad \Omega^{[m]}(t)=\int_0^t \sum_{k=0}^{\infty}\frac{B_k}{k!}\,\mathrm{ad}_{\Omega^{[m-1]}(s)}^{k}\,\hat{A}\big(s,\mathrm{e}^{\Omega^{[m-1]}(s)}y_0\big)\,ds,\quad m\ge 1, \tag{33}$$

so that $\Omega^{[m]}(t)\to\Omega(t)$ for sufficiently small values of $t$. If $m$ iterations are carried out in (33), then

$$\Omega^{[m]}(t)=\Omega(t)+O(t^{m+1}),$$

so that, by applying appropriate quadratures, it is then possible to get consistent approximations for the first iterates $\Omega^{[m]}$. In particular, one can construct the following schemes of order 2 and 3, whereas the explicit algorithm for the method of order 4 can be found in [12, 17]. For simplicity, we only consider the autonomous case $\hat{A}(y)$ in (32).

#### Order 2.

Applying the trapezoidal rule to $\Omega^{[2]}$, one gets

$$\begin{aligned} u&=h\,\hat{A}(y_k),\\ v&=\tfrac{1}{2}\big(u+h\,\hat{A}(\mathrm{e}^{u}y_k)\big),\\ y_{k+1}&=\mathrm{e}^{v}\,y_k. \end{aligned} \tag{34}$$

#### Order 3.

Now one has to approximate the integrals appearing in $\Omega^{[1]}$, $\Omega^{[2]}$ and $\Omega^{[3]}$. The minimum number of evaluations of $\hat{A}$ and commutators is achieved by the following sequence:

$$\begin{aligned} Q_1&=h\,\hat{A}(y_k),\\ Q_2&=h\,\hat{A}\big(\mathrm{e}^{\frac{1}{2}Q_1}y_k\big)-Q_1,\\ u_1&=\tfrac{1}{2}Q_1+\tfrac{1}{4}Q_2,\\ u_2&=Q_1+Q_2,\\ Q_3&=-u_2+h\,\hat{A}\big(\mathrm{e}^{u_1}y_k\big),\\ Q_4&=-u_2-Q_2+h\,\hat{A}\big(\mathrm{e}^{u_2}y_k\big),\\ u_3&=u_2+\tfrac{2}{3}Q_3+\tfrac{1}{6}Q_4-\tfrac{1}{6}[Q_1,Q_2],\\ y_{k+1}&=\mathrm{e}^{u_3}\,y_k. \end{aligned} \tag{35}$$

Notice that these Magnus integrators require more computational effort than the corresponding schemes for the linear case.
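Both schemes can be coded in a few lines. A minimal sketch follows (Python, NumPy/SciPy assumed); the logistic-type test $\dot{y}=-y^2$, i.e. $\hat{A}(y)=(-y)$ in dimension one, with exact solution $y(t)=y_0/(1+y_0 t)$, is a hypothetical check of our own:

```python
import numpy as np
from scipy.linalg import expm

def mag2_step(Ahat, y, h):
    """Order-2 scheme (34) for y' = Ahat(y) y."""
    u = h * Ahat(y)
    v = 0.5 * (u + h * Ahat(expm(u) @ y))
    return expm(v) @ y

def mag3_step(Ahat, y, h):
    """Order-3 scheme (35) for y' = Ahat(y) y."""
    Q1 = h * Ahat(y)
    Q2 = h * Ahat(expm(0.5 * Q1) @ y) - Q1
    u1 = 0.5 * Q1 + 0.25 * Q2
    u2 = Q1 + Q2
    Q3 = -u2 + h * Ahat(expm(u1) @ y)
    Q4 = -u2 - Q2 + h * Ahat(expm(u2) @ y)
    u3 = u2 + (2/3) * Q3 + (1/6) * Q4 - (1/6) * (Q1 @ Q2 - Q2 @ Q1)
    return expm(u3) @ y

# Test on y' = -y^2 (scalar), y(0) = 1, exact y(1) = 1/2.
Ahat = lambda y: np.array([[-y[0]]])
M, h = 50, 1.0 / 50
y2 = y3 = np.array([1.0])
for k in range(M):
    y2 = mag2_step(Ahat, y2, h)
    y3 = mag3_step(Ahat, y3, h)
err2, err3 = abs(y2[0] - 0.5), abs(y3[0] - 0.5)
```

For a scalar problem the commutator in (35) vanishes, but the scheme is written here in full matrix form so that it applies verbatim to the discretized system (30).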

In the following we check the whole procedure on two additional numerical examples. As before, the different integrators are denoted by M2, M3 and M4, respectively.

#### Example 3: a scalar nonlinear equation.

The scalar equation (26) constitutes a particularly simple example of a quasilinear DDE of type (3) where the previous algorithm can be checked, since an exact solution is known explicitly, namely $z(t)=\mathrm{e}^{\sin t}$. To do that, we integrate the resulting $(N+1)$-dimensional system (30)-(31), obtained from a discretization based on Chebyshev collocation points (since $d=1$ here), until a given final time. As in Example 1, we compute the mean error in the last interval as a function of $M$ for two values of $N$ to check that the Magnus integrators indeed provide the prescribed order. In this way we get Figure 5. Notice that, whereas the order of M2 is clearly visible, the accuracy of M4 only manifests itself for a sufficiently large number of Chebyshev points and very small step sizes. Otherwise, the results provided by M3 and M4 are quite similar.

#### Example 4: a SIR model with delay.

In reference [14], a 2nd-order numerical integrator also based on the Magnus expansion is proposed and tested on a widely used delayed SIR model (see [14] and references therein), namely

$$\begin{aligned} \dot{S}(t)&=-\frac{\beta\,S(t)\,I(t-\tau)}{1+\alpha\,I(t-\tau)},\\[2pt] \dot{I}(t)&=\frac{\beta\,S(t)\,I(t-\tau)}{1+\alpha\,I(t-\tau)}-\gamma\,I(t),\\[2pt] \dot{R}(t)&=\gamma\,I(t). \end{aligned} \tag{36}$$

It describes an epidemic model where the variation in the numbers of susceptible, $S(t)$, and infected, $I(t)$, individuals depends not only on the actual values of $S$ and $I$, respectively, but also on how many infected individuals they interacted with a latent period before, i.e., at $t-\tau$. In (36), $\beta$ and $\gamma$ denote the infection and recovery rates, respectively, whereas $\alpha=0$ if there is only a bilinear incidence rate, and $\alpha>0$ if the model incorporates a saturated incidence rate [14].

Notice that system (36) can be expressed as eq. (3) with $x=(S,I,R)^T$ and

$$A\big(x(t-\tau)\big)=\begin{pmatrix} -q\big(I(t-\tau)\big) & 0 & 0\\ q\big(I(t-\tau)\big) & -\gamma & 0\\ 0 & \gamma & 0 \end{pmatrix}, \tag{37}$$

where $q(I)=\dfrac{\beta I}{1+\alpha I}$. As in [14], we take the same values of the rates $\beta$, $\gamma$, $\alpha$, the latent period $\tau$ and the initial values $S_0$, $I_0$, $R_0$, together with the initial function

$$\phi(t)=I_0-\tfrac{1}{2}\,t,$$

for which the effect of the latent period is most evident. Then we integrate numerically with our algorithm until a final time $t_f$ and compute the relative error

$$\varepsilon_r=\frac{\|x(t_f)-x_n(t_f)\|}{\|x(t_f)\|},$$

where the ‘exact’ solution $x(t_f)$ is obtained with the Matlab built-in function dde23 with a very stringent relative tolerance, and $x_n$ refers to the numerical solution computed with the Magnus integrators (appropriately extracted from the whole vector $U_N$). In view of the close similarity of the results achieved by M3 and M4 in the previous example, only M2 and M3 are tested here. The results we achieve are shown in Figure 6.

Here again the order of each integrator is clearly visible, with M3 leading to high accuracy already for moderate values of $M$. Remarkably, the errors remain small even with the largest possible value of the time step, namely $h=\tau$.

Finally, in Figure 7 we show the relative error as a function of time when the integration is carried out on a longer interval with the same values for the parameters, the initial function

$$\phi(t)=I_0+\tfrac{1}{2}\,t, \tag{38}$$

and fixed values of the number of collocation points $N$ and of $M$. In this case, no secular growth in the error is observed, with M3 providing the more accurate results.

#### Preservation of properties.

Since model (36) deals with a population, it is clear that $S(t)$, $I(t)$ and $R(t)$ are non-negative for all $t$, and moreover the total population $S+I+R$ is constant. This feature is connected with the special structure the corresponding matrix $A$ possesses in this case: in fact, $A$ in (37) is a graph Laplacian, characterized by the following properties [9]:

• its elements verify $a_{ij}\ge 0$ for $i\ne j$ and $a_{ii}\le 0$ for all $i$;

• $\sum_{i=1}^{d} a_{ij}=0$ for $j=1,\dots,d$.
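For the SIR matrix (37) these properties are immediate to verify numerically. A minimal sketch (Python/NumPy; the values of $\beta$, $\gamma$, $\alpha$ and of $I$ are arbitrary illustrative choices, not the ones used in the experiments):

```python
import numpy as np

def A_sir(I, beta=0.3, gamma=0.1, alpha=0.02):
    """Matrix (37) of the delayed SIR model, with q(I) = beta*I/(1 + alpha*I)."""
    q = beta * I / (1.0 + alpha * I)
    return np.array([[-q,     0.0,  0.0],
                     [ q,  -gamma,  0.0],
                     [0.0,  gamma,  0.0]])

A = A_sir(5.0)
col_sums = A.sum(axis=0)            # graph-Laplacian property: columns sum to zero
offdiag = A - np.diag(np.diag(A))   # off-diagonal entries are non-negative
```

Zero column sums encode conservation of the total population, while the sign pattern is what guarantees non-negativity of the components of the flow.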

It is then natural to analyze whether the corresponding approximations obtained by our numerical algorithm also verify these properties.

One can hardly expect that the matrix $A_N$ in (30) inherits the special structure $A$ may have in general, due to the presence of the differentiation matrix $D$. One should take into consideration, however, the following key observations:

• the first $d$ rows of