
# A second-order Magnus-type integrator for evolution equations with delay

We rewrite abstract delay equations to nonautonomous abstract Cauchy problems allowing us to introduce a Magnus-type integrator for the former. We prove the second-order convergence of the obtained Magnus-type integrator. We also show that if the differential operators involved admit a common invariant set for their generated semigroups, then the Magnus-type integrator will respect this invariant set as well, allowing for much weaker assumptions to obtain the desired convergence. As an illustrative example we consider a space-dependent epidemic model with latent period and diffusion.


## 1 Introduction

The aim of this paper is to adapt the Magnus integrator for nonautonomous homogeneous problems to a wide class of nonautonomous delay problems. We are interested in problems of the form

 ˙u(t) = Q(F(u_t)) u(t), t ≥ 0,  u(s) = φ(s), s ∈ [−δ, 0], (QDEφ)

where X is a Banach space, u_t denotes the δ-history of the solution at time t ≥ 0, defined by u_t(s) := u(t+s) for s ∈ [−δ, 0] (i.e., the initial condition could also be written as u_0 = φ), where Q(w) is a possibly unbounded operator on X for each w, F is bounded, and F(ξ) actually only depends on the restriction ξ|_[−δ,−ε] for some fixed ε ∈ (0, δ]. The exact assumptions on the operators and functions involved will be detailed later.
Note that the case of bounded operators Q(w) corresponds to a quasilinear delay equation with bounded operators, while unbounded Q(w) leads to the unbounded case.

In [15], Magnus set out to solve the nonautonomous homogeneous problem ˙Y(t) = A(t)Y(t) (t ≥ 0), Y(0) = Y₀, where Y(t) and A(t) are bounded linear operators on appropriate spaces. In the case when all of the operators A(t) commute, the solution takes the simple form Y(t) = e^{∫₀ᵗ A(s) ds} Y₀. In the general, noncommutative case, however, one has to correct the exponent, and the exact solution is given by the Magnus series expansion, involving integrals of commutators. The first term of this expansion corresponds to the commutative solution, and is a good approximant leading to a second-order numerical method using the midpoint rule to approximate the integral in the exponent:

 Ŷ(τ)_{n+1} = e^{τ A((n+1/2)τ)} Ŷ(τ)_n,  n ∈ ℕ (1)

where τ > 0 is an arbitrary time step and Ŷ(τ)_n denotes the numerical approximation to Y(nτ) with initial value Ŷ(τ)_0 = Y₀.
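As a quick sanity check, formula (1) can be tried on a scalar toy problem where the exact solution is available in closed form. The sketch below is our own illustration, not taken from the paper: it applies the midpoint-exponential step to y′(t) = sin(t)·y(t), y(0) = 1, whose exact solution is exp(1 − cos t), and observes the error shrinking by a factor of about four when the step is halved, as expected for a second-order method.

```python
import numpy as np

def magnus_midpoint(a, y0, T, n_steps):
    """Simplest Magnus integrator (1) in the scalar case:
    y_{n+1} = exp(tau * a((n + 1/2) * tau)) * y_n."""
    tau = T / n_steps
    y = y0
    for n in range(n_steps):
        y = np.exp(tau * a((n + 0.5) * tau)) * y
    return y

# Toy problem y'(t) = sin(t) * y(t), y(0) = 1; exact solution exp(1 - cos t) at t = 1.
exact = np.exp(1.0 - np.cos(1.0))
errs = [abs(magnus_midpoint(np.sin, 1.0, 1.0, n) - exact) for n in (10, 20, 40)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]  # ~4 for a second-order method
```

In the scalar case the operators commute, so the only error is the midpoint quadrature in the exponent; the noncommutative correction terms of the Magnus series vanish.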
Convergence of the classical Magnus integrators, such as (1), has been widely studied in the literature for nonautonomous problems without delay. For finite dimensional spaces, Blanes et al. analysed the Magnus expansion, being the basis of the Magnus integrators, in [3] and [4]. Moan and Niesen gave a condition on the Magnus expansion’s convergence in [16]. Casas and Iserles investigated the expansion in case of nonlinear equations in [5]. Csomós derived a Magnus-type integrator (see (3) below) for delay equations and showed its second-order convergence and positivity preserving property in [6].
For infinite dimensional undelayed problems with inhomogeneity, González et al. derived a numerical method by using the Magnus expansion, and showed its second-order convergence in [9] for sufficiently smooth solutions, in addition to (lower order) error bounds measured in the domain. Bátkai and Sikolya proved the first-order (operator) norm convergence of the same method under very mild conditions for homogeneous problems in [2].

Delay equations describe processes where the time evolution of the unknown function depends not only on the actual state of the system but also on its past values. Delay differential equations find more and more frequent application in the modelling of scientific, financial, or even social phenomena, since a delay term often appears naturally in such processes. One may here think of the role of the latent period when modelling the spread of epidemic diseases, the pregnancy period in population models, or the reaction time in any social or financial model. In our problem (QDE), the time parameters δ and ε delimit the time window into the past that influences the present dynamics.

Since the exact solution of problem (QDE) is difficult or even impossible to compute analytically, one needs to find a way to approximate it. To this end, we first reformulate the delay problem as a nonautonomous problem, and then present a novel Magnus-type integrator based on Magnus integrators introduced for nonautonomous problems.

For the reformulation as a nonautonomous problem, we first define the function ũ: [−δ, ∞) → X as

 ũ(t) := φ(t) for t ∈ [−δ, 0],  ũ(t) := u(t) for t ∈ [0, ∞),

and the operators

 A(t) := Q(F(ũ_t)) for all t ≥ 0.

Then the function ũ satisfies the problem

 ˙ũ(t) = A(t) ũ(t), t ≥ 0,  ũ(t) = φ(t), t ∈ [−δ, 0]. (2)

Despite the operators A(t) depending on the unknown function via ũ, the minimum delay ε within the term stemming from F allows problem (2) to be treated as a nonautonomous Cauchy problem a posteriori. Indeed, for t ∈ [0, ε], the operator A(t) is actually determined, as it only depends on the known restriction ũ|_[−δ,0] = φ. Hence solving the problem iteratively on the sub-intervals [kε, (k+1)ε], k ∈ ℕ, yields an explicit nonautonomous Cauchy problem for each time segment considered, provided well-posedness is maintained along the way. Hence, we will consider problem (2) essentially as a formally nonautonomous problem on the whole time interval. The solution can then be approximated by the method derived by Magnus in [15] for such problems. This method, however, uses the values A((n+1/2)τ), which have to be themselves approximated, involving an appropriate second-order approximation of F based on a grid that is compatible with the one used for u.

To this end, one defines an arbitrary N ∈ ℕ, takes the time step τ := δ/N, and denotes the approximate value of u(nτ) by u(τ)_n for all n ∈ ℕ. The Magnus-type integrator, which we will derive in detail in Section 3, then takes the form

 u(τ)_{n+1/2} := φ((n+1/2)τ − δ) for n = 0, 1, …, N−1,  u(τ)_{n+1/2} := e^{(τ/2) Q(∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(u(τ)_{n+ℓ−2N}))} u(τ)_{n−N} for n ≥ N, (3)
 u(τ)_{n+1} := e^{τ Q(∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(u(τ)_{n+ℓ+1/2}))} u(τ)_n

for n ∈ ℕ, where possible negative integer indices refer to the corresponding values of the history function, i.e., u(τ)_{−m} := φ(−mτ) for m = 0, 1, …, N; the half-indexed terms are auxiliary values (essentially approximating u((n+1/2)τ − δ) for an application of the midpoint rule), and the expressions κ_{ℓ,τ} and F_{ℓ,τ} stem from the approximation of F.

Under appropriate smoothness and uniform exponential bounds on the generators involved, we will prove the second-order convergence of the Magnus-type integrator (3) when applied to the abstract delay equation (QDE) on Banach spaces. Moreover, we show that the method inherits the invariance properties of the generators (e.g. positivity).

The paper is organised as follows. In Section 2 we introduce the abstract setting of evolution equations like (2). In Section 3 we recall the original Magnus integrators, summarise the main results from the literature regarding its application to nonautonomous Cauchy problems (2), and then describe how we adapt this method to our case where we have no a priori knowledge of the operators on the time interval .
Section 4 contains our main result, Theorem 23 on the Magnus-type integrator’s second-order convergence when applied to the quasilinear delay equation (QDE). Some of the assumptions needed for this convergence (or even the existence of the solution to the delay equation) – especially a uniform exponential bound for the semigroups generated by the operators Q(w) – are typically not naturally achievable for all w ∈ X. However, many problems admit invariants and have qualitative preservation features (e.g., positivity of the solutions), and we shall show that our Magnus-type integrator naturally exploits such invariants, allowing us to restrict our assumptions to some smaller, invariant closed subset W of X that we assume our initial history runs within. This will allow us to prove in parallel that the Magnus-type integrator and the exact solution both exist for a positive amount of time (namely, ε), with the former never leaving the invariant set. That in turn will imply the second-order convergence on this time interval, implying that the solution itself stays in the invariant set as well. Iterating these arguments, we obtain our results for any compact time interval.
In Section 5 we use a space-dependent epidemic model to illustrate the power of invariants (in this case both total population size and positivity) in ensuring second-order convergence.

## 2 Autonomous and nonautonomous evolution equations

In this section we introduce the notions necessary to understand the Magnus integrator and our approach to the error bounds. Throughout we shall assume that X is a Banach space. Our main references are Engel and Nagel [8] and Nickel [17].

###### Definition 1.

A family (T(t))_{t≥0} of bounded linear operators on X is said to be a strongly continuous semigroup generated by the linear, closed, and densely defined operator A if the following holds:

1. T(0) = I, the identity operator on X,

2. T(t+s) = T(t)T(s) for all t, s ≥ 0,

3. the function t ↦ T(t)x is continuous for all x ∈ X,

4. there exists Ax = lim_{h↓0} (T(h)x − x)/h for all x ∈ D(A).

This strongly continuous semigroup describes the solution to the abstract Cauchy problem

 ˙u(t) = Au(t), t ≥ 0,  u(0) = x (ACP)x

in the sense u(t) = T(t)x. It is known (see e.g. [8, Prop. I.5.5]) that such semigroups are exponentially bounded, i.e., there exist M ≥ 1 and ω ∈ ℝ such that ‖T(t)‖ ≤ M e^{ωt} for all t ≥ 0.
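In finite dimensions the semigroup is simply the matrix exponential T(t) = e^{tA}, and both the semigroup law and an exponential bound can be checked directly. The sketch below is our own illustration (the matrix A and the use of the logarithmic 2-norm as the exponent ω are illustrative choices, not from the paper): it verifies T(t+s) = T(t)T(s) and the bound ‖T(t)‖₂ ≤ e^{ωt} with ω the largest eigenvalue of (A + Aᵀ)/2.

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via Taylor series with scaling and squaring
    (adequate for the small matrices used here)."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))))
    A = M / 2 ** s
    E = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

A = np.array([[-1.0, 2.0], [0.0, -3.0]])  # illustrative generator; T(t) = e^{tA}
t, s = 0.3, 0.5

# Semigroup law: T(t + s) = T(t) T(s)
law_defect = np.linalg.norm(expm((t + s) * A) - expm(t * A) @ expm(s * A))

# Exponential bound ||T(t)||_2 <= e^{omega t}, omega = logarithmic 2-norm of A
omega = max(np.linalg.eigvalsh((A + A.T) / 2.0))
bound_ok = np.linalg.norm(expm(t * A), 2) <= np.exp(omega * t) + 1e-12
```

The logarithmic-norm bound gives M = 1 in the estimate above; general semigroups may require M > 1.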

In many processes, however, one cannot assume the generator in (ACP) to be constant in time, leading to so-called nonautonomous Cauchy problems, or (NCP) for short. Let A(t) be a linear operator on X for every t ≥ 0. Furthermore, let s ≥ 0 and x ∈ X be given. Then we consider the following nonautonomous problem for the differentiable unknown function u: [s, ∞) → X:

 ˙u(t) = A(t)u(t), t ≥ s,  u(s) = x. (NCP)s,x

The following definitions are based on [8, Section VI.9].

###### Definition 2.

A continuous function u: [s, ∞) → X is called a (classical) solution of (NCP)s,x if u is continuously differentiable, u(t) ∈ D(A(t)) for all t ≥ s, u(s) = x, and ˙u(t) = A(t)u(t) for all t ≥ s.

###### Definition 3.

For a family (A(t))_{t≥0} of linear operators on the Banach space X, the nonautonomous Cauchy problem (NCP) is called well-posed with regularity subspaces (Y_s)_{s≥0} if the following holds.

1. Existence: For all s ≥ 0 the subspace

 Y_s := {y ∈ X : there exists a solution û_s(⋅, y) for (NCP)s,y} ⊂ D(A(s))

is dense in X.

2. Uniqueness: For every y ∈ Y_s the solution û_s(⋅, y) is unique.

3. Continuous dependence: The solution depends continuously on s and y, i.e., if s_n → s and y_n → y with y_n ∈ Y_{s_n}, y ∈ Y_s, then ū_{s_n}(t; y_n) → ū_s(t; y) uniformly for t in compact subsets of ℝ, where

 ū_r(t; y) := û_r(t, y) if r ≤ t,  ū_r(t; y) := y if r > t.

If, in addition, there exist constants M ≥ 1 and ω ∈ ℝ such that

 ‖û_s(t, y)‖ ≤ M e^{ω(t−s)} ‖y‖

for all t ≥ s and y ∈ Y_s, then (NCP) is called well-posed with exponentially bounded solutions.

###### Definition 4.

A family (U(t, s))_{t≥s} of linear, bounded operators on a Banach space X is called an (exponentially bounded) evolution family if

1. U(s, s) = I and U(t, r)U(r, s) = U(t, s) for all t ≥ r ≥ s,

2. the map (t, s) ↦ U(t, s) is strongly continuous,

3. ‖U(t, s)‖ ≤ M e^{ω(t−s)} for some M ≥ 1, ω ∈ ℝ, and all t ≥ s.

An evolution family is said to be contractive if we can choose M = 1 and ω = 0, and quasi-contractive if we can choose M = 1 for some ω ∈ ℝ.

###### Definition 5.

An evolution family (U(t, s))_{t≥s} is called an evolution family solving (NCP), if for every s ≥ 0 the regularity subspace

 Y_s := {y ∈ X : [s, ∞) ∋ t ↦ U(t, s)y solves (NCP)s,y}

is dense in X.

We have the following result connecting the well-posedness of (NCP) to the existence of a unique evolution family solving it.

###### Theorem 6 ([17, Prop. 2.5]).

Let X be a Banach space and (A(t))_{t≥0} a family of linear operators on X. Then the nonautonomous Cauchy problem (NCP) is well-posed if and only if there exists a unique evolution family solving (NCP).

As mentioned in the Introduction, our goal is to rephrase (QDE) as a nonautonomous Cauchy problem on the time interval [0, ∞). Of great help in establishing well-posedness is the following consequence of a result by Kato ([12, Thm. 4]).

###### Theorem 7.

Let I ⊆ ℝ be a closed interval, and (A(t))_{t∈I} a family of generators of contraction semigroups on the Banach space X with common domain D = D(A(t)) for all t ∈ I, such that t ↦ A(t)x is continuously differentiable for all x ∈ D. Then the nonautonomous Cauchy problem is well-posed with regularity subspaces Y_s = D (s ∈ I) and admits a contractive evolution family (U(t, s))_{t≥s}. In particular, for any s ∈ I and x ∈ D the initial condition u(s) = x leads to a unique (classical) solution u with u(t) ∈ D for all t ≥ s.

###### Remark 8.

The contractivity of the evolution family is actually part of the earlier Theorem 2 in the same paper.
Also, note that Kato’s result talks of evolution families and well-posedness on a possibly bounded closed interval I instead of [0, ∞), and it is not immediately clear how the two can be connected. Let I = [a, b] with a < b. Since each A(t) (t ∈ I) is a generator, it makes sense to extend the family to ℝ by setting A(t) := A(a) when t < a and A(t) := A(b) when t > b. Then the evolution family corresponding to the extended interval can obviously be restricted to I with all the properties preserved. But also the evolution family corresponding to the restricted problem has a natural extension to ℝ as follows.
Denote by (S_a(t))_{t≥0} the contraction semigroup generated by A(a), and by (S_b(t))_{t≥0} the contraction semigroup generated by A(b). If s < a, then let U(t, s) := U(t, a)S_a(a−s), and if t > b, then let U(t, s) := S_b(t−b)U(b, s). This defines the evolution family for all t ≥ s in a way compatible with Definition 4 (the index set [0, ∞) may have to be replaced with ℝ).

An easy rescaling argument yields that the same holds if instead of generators of contraction semigroups we consider a family such that the generated semigroups are uniformly quasi-contractive.

###### Corollary 9.

Let I ⊆ ℝ be a closed interval, and (A(t))_{t∈I} a family of generators of uniformly quasi-contractive semigroups on the Banach space X with common domain D = D(A(t)) for all t ∈ I, such that t ↦ A(t)x is continuously differentiable for all x ∈ D. Then the nonautonomous Cauchy problem is well-posed with regularity subspaces Y_s = D (s ∈ I) and admits a quasi-contractive evolution family (U(t, s))_{t≥s}. In particular, for any s ∈ I and x ∈ D the initial condition u(s) = x leads to a unique (classical) solution u with u(t) ∈ D for all t ≥ s.

Hence, the unique solution to (NCP)s,x has the form

 ˆu(t)=U(t,s)x (4)

for all t ≥ s. Our aim is to approximate the solution to problem (QDE), rewritten as the nonautonomous problem (2), at certain time levels. To do this we introduce the Magnus-type integrator in the next section.

## 3 Magnus-type integrator

We saw in the Introduction that the delay equation (QDE) could formally be written as the nonautonomous abstract Cauchy problem with the solution-dependent operator A(t) = Q(F(ũ_t)). Whilst well-posed autonomous abstract Cauchy problems have their solutions given through a one-parameter strongly continuous semigroup, problem (NCP)s,x – if well-posed and x is in the regularity subspace – has its solution given through a unique two-parameter evolution family (cf. Theorem 6). Since the exact form of the evolution family is usually unknown, we need to approximate the solution in (4). To this end we define an arbitrary N ∈ ℕ and take the time step τ := δ/N. Then the approximate value of u at the time levels nτ, n ∈ ℕ, is denoted by u(τ)_n.

In the case of a finite dimensional space X, the evolution family is the exponential of a bounded operator: U(t, s) = e^{Ω(t,s)} for t ≥ s. Magnus showed in [15] that the operator Ω(t, s) can be expressed by the integral of an infinite series, called the Magnus series. Casas and Iserles showed in [5] that appropriate truncations of the series lead to convergent approximations. The further approximation of the integral terms yields the Magnus-type integrators. More precisely, by the cocycle property of the evolution family we have

 û((n+1)τ) = U((n+1)τ, s)x = U((n+1)τ, nτ) U(nτ, s)x = U((n+1)τ, nτ) û(nτ)

for all n ∈ ℕ. By approximating U((n+1)τ, nτ) by an exponential as in [5], and then the integral in the exponent by the midpoint quadrature rule, we arrive at the formula of the simplest Magnus integrator

 û(τ)_{n+1} = e^{τ A((n+1/2)τ)} û(τ)_n (5)

for all n ∈ ℕ with û(τ)_0 = x.

In the case of an infinite dimensional Banach space X, we consider formally the same formula (5), where the exponential refers to the strongly continuous semigroup generated by the corresponding operator (cf. Definition 1).

Since the Magnus integrator (5) gives only an approximation to the exact solution at time nτ for all n ∈ ℕ, it is necessary to show that the approximate value converges to the exact value as the time step τ tends to zero or, equivalently, as N tends to infinity. As is usual, we will want to show that our numerical scheme yields a good approximation on any compact time interval [0, T].

###### Definition 10.

Let u denote the exact solution to problem (QDE). Its approximation u(τ)_n (or, equivalently, the corresponding numerical method) is called convergent of order p > 0 if there exists τ₀ > 0 such that ‖u(nτ) − u(τ)_n‖ ≤ C τ^p holds for all τ ∈ (0, τ₀] and n ∈ ℕ with nτ ∈ [0, T], where the constant C is independent of n and τ but may depend on T.

We note that we defined the convergence for the sequence of time steps τ = δ/N, N ∈ ℕ, in order to simplify our proofs; this could be done for all sequences of time steps tending to zero by introducing a more complicated formalism. Since one has an initial history function (or a set of data) on the time interval [−δ, 0], it is natural to choose a time step that is compatible with it.
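In computations, the order p of Definition 10 is typically estimated by halving the time step and taking base-2 logarithms of consecutive error ratios. A minimal sketch, using made-up illustrative error values for a second-order method:

```python
import numpy as np

# Hypothetical global errors of a second-order method at step sizes tau, tau/2, tau/4
errors = [4.0e-3, 1.01e-3, 2.52e-4]

# Observed order between consecutive halvings: p ~ log2(err(tau) / err(tau/2))
observed = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
```

For a method of order p the observed values approach p as the step size decreases.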

In what follows we present two results from the literature about the Magnus integrator (5) for nonautonomous problems, since we will use them in our analysis.

###### Theorem 11 ([9, Thm. 2]).

Let X and D be Banach spaces with D densely embedded in X. We suppose that the closed linear operator A(t): D → X is uniformly sectorial for t ∈ [0, T]. Moreover, we assume that the graph norm of A(t) and the norm of D are equivalent. We also assume that A(⋅) is Lipschitz continuous, and in particular there then exists a constant L_A such that

 ‖A(t) − A(s)‖_{L(D,X)} ≤ L_A |t − s|

holds for all s, t ∈ [0, T]. Moreover, we introduce the notations

 g_n(t) := (A(t) − A((n+1/2)τ)) u(t), t ∈ [nτ, (n+1)τ], (6)
 ‖g_n‖_{X,∞} := max{‖g_n(t)‖_X : t ∈ [nτ, (n+1)τ]},
 ‖g‖_{X,∞} := max{‖g_n‖_{X,∞} : n ∈ ℕ, (n+1)τ ∈ [0, T]},

and corresponding notations will also be used with D instead of X. Then the Magnus integrator (5) applied to problem (NCP) is convergent of second order, that is, there exists a constant C, being independent of n and τ, such that the following estimate holds for the global error:

 ‖u(nτ) − û(τ)_n‖ ≤ C τ² (‖g′‖_{D,∞} + ‖g″‖_{X,∞}) (7)

for all nτ ∈ [0, T], provided that the quantities on the right-hand side are well-defined.

The following theorem is essentially the quasi-contractive, continuously differentiable special case of the consistency result from the proof of [2, Thm. 3.2], more specifically inequality (5) therein. It hinges on the fact that Corollary 9 implies that the well-posedness, stability and local Hölder continuity conditions of that theorem are automatically satisfied.

###### Theorem 12.

We consider the problem (NCP) on the Banach space X and suppose the following.

1. There exists an ω ∈ ℝ such that ‖e^{tA(s)}‖ ≤ e^{ωt} for all t ≥ 0 and s ∈ [a, b]. Further, D(A(s)) = D for all s ∈ [a, b], where each A(s) is the generator of a strongly continuous semigroup.

2. The map s ↦ A(s) is continuously differentiable as a map from [a, b] to L(D, X).

Then for all s, h with a ≤ s ≤ s + h ≤ b we have the following estimate:

 ‖U(s+h, s) − e^{h A(s+h/2)}‖ ≤ L_{a,b} e^{ch} h², (8)

where L_{a,b} is the Lipschitz constant of A(⋅) on [a, b].

To illustrate the process of obtaining our Magnus-type integrator, we shall first present a special case that already exhibits some of the new ideas involved, and then we indicate what further changes are needed to accommodate the general case.

###### Example 13.

In this example we shall focus on the special case F(ξ) = ξ(−δ), i.e., when the delay is concentrated at a specific point of the history function. Our equation (QDE) then takes the form

 ˙u(t) = Q(u(t − δ)) u(t), t ≥ 0,  u(s) = φ(s), s ∈ [−δ, 0]. (QDE’φ)

In this particular case, we have A(t) = Q(ũ(t − δ)) in (2). Recall that ũ(t − δ) = φ(t − δ) for t ∈ [0, δ]; however, its value is unknown for t > δ. Thus, we need to approximate A((n+1/2)τ) in the Magnus integrator (5) to obtain a working method. Now the natural idea would be to use the appropriate approximate value of u at time (n+1/2)τ − δ; however, this time falls exactly between two points of our time grid, and the value has to be obtained via further approximation. We therefore introduce the corresponding term as an auxiliary value, obtained via another Magnus step with half time step. In full detail, we have the following.

###### Definition.

Let N ∈ ℕ be an arbitrary integer and τ := δ/N. Then the Magnus-type integrator for point-delay, which yields an approximation to the solution of the point-delay equation (QDE’) at the time levels nτ, is given as follows. We introduce the notation u(τ)_{−m} := φ(−mτ) for m = 0, 1, …, N. Then for n ∈ ℕ, we have the recursion

 u(τ)_{n+1/2} := φ((n+1/2)τ − δ) for n = 0, 1, …, N−1,  u(τ)_{n+1/2} := e^{(τ/2) Q(u(τ)_{n−2N})} u(τ)_{n−N} for n ≥ N, (9)
 u(τ)_{n+1} := e^{τ Q(u(τ)_{n+1/2})} u(τ)_n.

Note that since they are essentially auxiliary values, the way we indexed the terms u(τ)_{n+1/2} is not indicative of the time layer they correspond to (which would actually be (n+1/2)τ − δ). Rather, the indices reflect the natural order in which one would execute the algorithm, i.e.,

 …, u(τ)_n; u(τ)_{n+1/2}, u(τ)_{n+1}; u(τ)_{n+3/2}, u(τ)_{n+2}; u(τ)_{n+5/2}, …,

despite u(τ)_{n+1/2} being computable already at earlier stages.
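The recursion (9) is straightforward to run in the scalar case, where Q(w) acts as multiplication by a number q(w). The sketch below is our own toy example (a hypothetical nonlinearity q and history φ, chosen for illustration only); it checks second-order self-convergence against a fine-grid reference computed with the same scheme.

```python
import numpy as np

def point_delay_magnus(q, phi, delta, N, T):
    """Magnus-type integrator (9) for u'(t) = Q(u(t - delta)) u(t),
    scalar case: Q(w) acts as multiplication by q(w); tau = delta / N."""
    tau = delta / N
    steps = int(round(T / tau))
    u = {n: phi(n * tau) for n in range(-N, 1)}  # u_{-N}, ..., u_0 from the history
    for n in range(steps):
        if n < N:                                # (n+1/2)tau - delta still in [-delta, 0]
            half = phi((n + 0.5) * tau - delta)
        else:                                    # auxiliary half Magnus step
            half = np.exp(0.5 * tau * q(u[n - 2 * N])) * u[n - N]
        u[n + 1] = np.exp(tau * q(half)) * u[n]
    return u[steps]

q = lambda w: -w                  # hypothetical nonlinearity
phi = lambda s: 1.0 + 0.5 * s     # hypothetical initial history on [-delta, 0]
delta, T = 0.5, 1.0
ref = point_delay_magnus(q, phi, delta, 640, T)
errs = [abs(point_delay_magnus(q, phi, delta, N, T) - ref) for N in (10, 20, 40)]
```

Halving τ should divide the error by roughly four; note that the breaking points of the solution (multiples of δ) fall on grid points, which is what the delay-compatible choice τ = δ/N guarantees.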

In the general case of the quasilinear delay evolution equation (QDE), we have A(t) = Q(F(ũ_t)) in (2). If we want to apply the Magnus integrator (5), we need to be able to approximate F(ũ_{(n+1/2)τ}). Even for the history ũ_{(n+1/2)τ}, all we have available is an approximation to some of its values, corresponding to the time levels in our grid. Thus we first need a discretisation of F itself using only discrete values of ũ, and then use the approximation of those values in the final form of our method. The simplest approach is to make the discretisation of F compatible with the original time grid, and use the same auxiliary Magnus step as in the previous Example 13.

To this end we introduce the approximation of F in the following form

 F(ξ) ≈ ∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(ξ(−δ + ℓτ)) (10)

for appropriate weights κ_{ℓ,τ} and functions F_{ℓ,τ} having properties to be detailed in Section 4. We now rewrite (QDE) as a nonautonomous problem (2) with A(t) = Q(F(ũ_t)), apply the Magnus integrator (5), and approximate F as in (10) to obtain for all n ∈ ℕ:

 u((n+1)τ) ≈ e^{τ A((n+1/2)τ)} u(nτ) = e^{τ Q(F(u_{(n+1/2)τ}))} u(nτ) (11)
 ≈ e^{τ Q(∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(u_{(n+1/2)τ}(−δ + ℓτ)))} u(nτ) = e^{τ Q(∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(u((n+1/2)τ − δ + ℓτ)))} u(nτ),

where we used the definition of the history function in the last step. We next approximate the intermediate values u((n+ℓ)τ − δ + τ/2) using another Magnus step with time step τ/2, but now by taking the left-rectangle rule when approximating the integral in the exponent, cf. [5]:

 u((n+ℓ)τ − δ + τ/2) ≈ e^{(τ/2) A((n+ℓ)τ − δ)} u((n+ℓ)τ − δ) = e^{(τ/2) Q(F(u_{(n+ℓ)τ−δ}))} u((n+ℓ)τ − δ) (12)
 ≈ e^{(τ/2) Q(∑_{k=0}^{⌊(δ−ε)/τ⌋} κ_{k,τ} F_{k,τ}(u((n+ℓ)τ − δ − δ + kτ)))} u((n+ℓ)τ − δ)

for all n, ℓ with (n+ℓ)τ ≥ δ. Observe that formula (12) can be reindexed by using m = n + ℓ instead of n and ℓ separately. However, since u is not defined for negative times, whenever (n+ℓ)τ < δ, the values at the corresponding intermediate time levels should be obtained from the initial history function instead. By combining (11) and (12), we thus obtain the following definition.

###### Definition 14.

Let N ∈ ℕ be an arbitrary integer and τ := δ/N. Then the Magnus-type integrator, which yields an approximation to the solution of the delay equation (QDE) at the time levels nτ, is given as follows. As before, we use the notation u(τ)_{−m} := φ(−mτ) for m = 0, 1, …, N. Then for n ∈ ℕ, we have the recursion

 u(τ)_{n+1/2} := φ((n+1/2)τ − δ) for n = 0, 1, …, N−1,  u(τ)_{n+1/2} := e^{(τ/2) Q(∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(u(τ)_{n+ℓ−2N}))} u(τ)_{n−N} for n ≥ N, (13)
 u(τ)_{n+1} := e^{τ Q(∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(u(τ)_{n+ℓ+1/2}))} u(τ)_n,

where the exact properties of the weights κ_{ℓ,τ} and of the functions F_{ℓ,τ} will be presented in Section 4.

###### Remark 15.

To fit Example 13 in this general formulation, set ε = δ (so that ⌊(δ−ε)/τ⌋ = 0) and κ_{0,τ} = 1, with F_{0,τ} the identity for all τ.

###### Example 16.

Let us consider a delay term where a fixed delay time interval uniformly governs the dynamics. More specifically, we shall take a closer look at the delay when

 F(ξ) = (2/δ) ∫_{−δ}^{−δ/2} ξ(s) ds,

i.e., we have

 ˙u(t) = Q((2/δ) ∫_{−δ}^{−δ/2} u(t+s) ds) u(t) with δ > 0

in (QDE).

The first thing to note is that whenever N is divisible by 2, the endpoint −δ/2 of the integral also falls on the discretisation grid, significantly simplifying things, and the odd N’s have to be treated slightly differently to fit the general framework. Alternatively, we could simply adapt the general scheme to this special situation by only considering even values for N, but we shall detail the odd case nevertheless.
In contrast to the point-interaction delay in Example 13, where we could directly substitute the computed values into the delay term, we here have an integral that itself has to be numerically approximated using some appropriate quadrature. Taking into consideration the order of the error that we need to achieve for the recursive inequalities to result in a second-order Magnus-type integrator, the error of the quadrature has to be of the magnitude of τ².

So for even N, we use the composite trapezoidal rule with nodes −δ + ℓτ for ℓ = 0, 1, …, N/2 as

 (2/δ) ∫_{−δ}^{−δ/2} u(t+s) ds ≈ (1/N) (u(t−δ) + 2 ∑_{ℓ=1}^{N/2−1} u(t−δ+ℓτ) + u(t−δ/2)).

The Magnus-type integrator thus has the form, for n ∈ ℕ:

 u(τ)_{n+1/2} := φ((n+1/2)τ − δ) for n = 0, …, N−1,  u(τ)_{n+1/2} := e^{(τ/2) Q((1/N)(u(τ)_{n−2N} + u(τ)_{n−2N+N/2} + 2 ∑_{ℓ=1}^{N/2−1} u(τ)_{n−2N+ℓ}))} u(τ)_{n−N} for n ≥ N,
 u(τ)_{n+1} := e^{τ Q((1/N)(u(τ)_{n+1/2} + u(τ)_{n+N/2+1/2} + 2 ∑_{ℓ=1}^{N/2−1} u(τ)_{n+ℓ+1/2}))} u(τ)_n.

It has the form (13) with weights κ_{0,τ} = κ_{N/2,τ} = 1/N and κ_{ℓ,τ} = 2/N for ℓ = 1, …, N/2−1, where each F_{ℓ,τ} is the identity. We remark that the weights sum up to 1 (cf. (14) later on).
For odd values of N, the point −δ/2 is not part of the grid, so we have to use a truncated quadrature based on the grid points −δ + ℓτ, ℓ = 0, …, ⌊(δ−ε)/τ⌋ = (N−1)/2, instead, again with error of order τ² and with accordingly modified weights κ_{ℓ,τ}.
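The even-N scheme can be exercised numerically. The sketch below is our own scalar toy example (hypothetical q and φ, with Q(w) acting as multiplication by q(w)); it implements the general recursion (13) with the trapezoidal weights above, checks that the weights sum to 1, and checks second-order self-convergence against a fine-grid reference.

```python
import numpy as np

def trapezoid_weights(N):
    """Composite trapezoidal weights kappa_{l,tau} for even N; they sum to 1."""
    kappa = np.full(N // 2 + 1, 2.0 / N)
    kappa[0] = kappa[-1] = 1.0 / N
    return kappa

def distributed_delay_magnus(q, phi, delta, N, T):
    """Magnus-type integrator (13) for F(xi) = (2/delta) * integral of xi over
    [-delta, -delta/2], scalar case: Q(w) acts as multiplication by q(w)."""
    tau = delta / N
    steps, N2 = int(round(T / tau)), N // 2
    kappa = trapezoid_weights(N)
    u = {n: phi(n * tau) for n in range(-N, 1)}  # history values
    half = {}                                    # half-indexed auxiliary values

    def get_half(m):  # u_{m+1/2}, approximating u((m+1/2)tau - delta)
        if m not in half:
            if m < N:
                half[m] = phi((m + 0.5) * tau - delta)
            else:
                avg = sum(kappa[k] * u[m - 2 * N + k] for k in range(N2 + 1))
                half[m] = np.exp(0.5 * tau * q(avg)) * u[m - N]
        return half[m]

    for n in range(steps):
        avg = sum(kappa[l] * get_half(n + l) for l in range(N2 + 1))
        u[n + 1] = np.exp(tau * q(avg)) * u[n]
    return u[steps]

q = lambda w: -w                  # hypothetical nonlinearity
phi = lambda s: 1.0 + 0.5 * s     # hypothetical initial history on [-delta, 0]
delta, T = 0.5, 1.0
ref = distributed_delay_magnus(q, phi, delta, 320, T)
errs = [abs(distributed_delay_magnus(q, phi, delta, N, T) - ref) for N in (10, 20, 40)]
```

Note that the auxiliary value u_{(n+ℓ)+1/2} needed in step n only involves u-values up to index n − N/2, so the half-indexed terms are always computable before they are used, as remarked after Definition 14's point-delay analogue.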

It is reasonable to wonder what happens if the delays above are not given in the convenient form where we are looking at a convex combination/normalised integral, for instance, if we were dealing with a delay of the form Q(∫_{−δ}^{−δ/2} u(t+s) ds). Actually, this is not really an issue, as we may then define Q̃(w) := Q(δw/2) and thereby revert to the convex combination case detailed above. This is only a question of convenience, related to the interpretation of the invariance of the set W. After rescaling, W should stay invariant under the semigroups generated by the operators Q̃(w) for all w ∈ W, whereas without rescaling the invariance is under the ones generated by Q(w) with w ∈ (δ/2)W.

For finite dimensional spaces X, Csomós showed in [6] that the Magnus-type integrator (9) is convergent of second order. Our present aim is to generalise this result to any Banach space X and to general delay terms F for the Magnus-type integrator (13).

## 4 Convergence

This section contains our main result regarding the second-order convergence of the Magnus-type integrator (13) applied to the quasilinear delay equation (QDE). We have already seen how problem (QDE) can formally be written as the nonautonomous problem (2). Hence, Definition 10 of the convergent approximation remains valid also for the Magnus-type integrator (13) applied to the delay problem (QDE). From now on we use the notations v̇_−(0) and v̇_+(0) for the left and right derivatives of a function v at zero, respectively.
We will need the following list of assumptions.

###### Assumptions.

Let X be a Banach space, W ⊆ X a closed subset, and D ⊆ X a dense subspace.

1. We have Q(w) = A + B(w) for all w ∈ W, where D(A) = D, A generates a contraction semigroup on X, B(w) ∈ L(X), and the operators B(w) (w ∈ W) are all dissipative. We shall also assume contractivity rather than mere quasi-contractivity, which is no real added restriction as we may simply replace Q(w) by Q(w) − ωI and u(t) by e^{−ωt} u(t) for some ω ∈ ℝ.

2. The function Q is continuously differentiable on the set W.

3. The function F is twice continuously differentiable on the set C([−δ, 0], X).

4. For any w ∈ W the operator B(w) leaves D invariant, and is bounded and dissipative on D, where D is equipped with the graph norm of A. In addition, B is continuously differentiable on W.

5. The operator A is a sectorial operator generating an (analytic) contraction semigroup with sector Σ_θ for some θ ∈ (0, π/2], and the operators B(w) (w ∈ W) are all such that for any ϑ ∈ [−θ, θ] the operator e^{iϑ} B(w) is dissipative.

6. with whenever .

7. with whenever .

8. The functions F_{ℓ,τ} and weights κ_{ℓ,τ} (ℓ = 0, 1, …, ⌊(δ−ε)/τ⌋) are such that there exists a constant L independent of τ such that

 ∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} |κ_{ℓ,τ}| ≤ L (14)

and the function

 F_τ(ξ) := ∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(ξ(−δ + ℓτ)) (15)

satisfies

 ‖F(ξ) − F_τ(ξ)‖_X ≤ C ‖ξ‖_{C²([−δ,0],X)} τ² (16)

for some constant C independent of τ and ξ. In addition, there exists a constant L′ such that F and F_{ℓ,τ} are Lipschitz with constant L′ for all τ and ℓ.

9. The set W satisfies

 [v ∈ C([−δ, 0], X) and v([−δ, −ε]) ⊂ W] ⟹ F(v) ∈ W

and

 ∑_{ℓ=0}^{⌊(δ−ε)/τ⌋} κ_{ℓ,τ} F_{ℓ,τ}(W) ⊂ W (17)

for all τ.

10. The initial history function φ is in C²([−δ, 0], X), satisfies φ(t) ∈ W for all t ∈ [−δ, 0], and obeys the boundary conditions

 φ̇_−(0) = Q(F(φ)) φ(0), (18)
 φ̈_−(0) = (Q̃′(F(φ)) F′(φ) φ̇) φ(0) + Q(F(φ))² φ(0). (19)
###### Remark 17.

The condition (17) is automatically satisfied when is convex, and and for all and .

The following results will show that under appropriate smoothness assumptions on F, Q, and the initial history function φ, the solution itself will exhibit similar smoothness properties. Also, we shall show that Theorem 11 is applicable to our setting.

The next two results show that our Assumptions imply that the semigroups generated by the operators Q(w) (w ∈ W) are uniformly quasi-contractive, and uniformly sectorial as well.

###### Lemma 18.

Under Assumption (a) each operator Q(w) (w ∈ W) with domain D is the generator of a contraction semigroup on X. In particular, the semigroups generated by Q(w) are uniformly quasi-contractive, that is, ‖e^{tQ(w)}‖ ≤ 1 for all t ≥ 0 and w ∈ W.

###### Proof.

The operators B(w) are by assumption dissipative and bounded, hence relatively A-bounded with A-bound 0. Thus the claims follow directly from [8, Thm. III.2.7] and the usual rescaling argument. ∎

###### Lemma 19.

Under Assumptions (a) and (d) each operator Q(w) (w ∈ W) with domain D is the generator of an analytic contraction semigroup on X. In particular, the uniform sectoriality required in Theorem 11 is satisfied by the family (Q(w))_{w∈W}.

###### Proof.

The assumptions imply that for any w ∈ W the operator A generates a contraction semigroup and B(w) is dissipative with A-bound 0, hence each Q(w) = A + B(w) generates a contraction semigroup as well by [8, Thm. III.2.7]. To see that the semigroup generated by Q(w) is analytic on X, we use the semigroup property and the fact that [8, Thm. III.2.14] implies that Q(w) generates an analytic semigroup on some sector