
# The Galerkin analysis for the random periodic solution of semilinear stochastic evolution equations

In this paper, we study numerical methods for approximating the random periodic solution of semilinear stochastic evolution equations. The main challenge lies in proving convergence over an infinite time horizon while simulating infinite-dimensional objects. We propose a Galerkin-type exponential integrator scheme and establish the convergence rate of its strong error to the mild solution.


## 1 Introduction

The random periodic solution is a relatively new concept for characterizing random periodicity in the long-run behaviour of certain stochastic systems. On its first appearance in [21], the authors gave the definition of random periodic solutions of random dynamical systems and showed the existence of such periodic solutions for a perfect cocycle on a cylinder. This was followed by another seminal paper [9], where the authors not only defined random periodic solutions for semiflows but also provided a general framework for their existence. Namely, instead of following the traditional geometric method of establishing the Poincaré mapping, a new analytical method based on coupled infinite-horizon forward-backward integral equations was introduced. This pioneering study prompted a series of works, including the existence of random periodic solutions to stochastic partial differential equations (SPDEs) [4], the existence of anticipating random periodic solutions [6, 7], periodic measures [8], etc.

Let us recall the definition of the random periodic solution for stochastic semi-flows given in [9]. Let X be a separable Banach space. Denote by (Ω, F, P, (θ_s)_{s∈R}) a metric dynamical system, where θ_s : Ω → Ω is assumed to be measurably invertible for all s ∈ R. Denote Δ := {(t, s) ∈ R² : s ≤ t}. Consider a stochastic semi-flow u : Δ × Ω × X → X, which satisfies the following standard condition:

 u(t,r,ω)=u(t,s,ω)∘u(s,r,ω),  for all r≤s≤t, r,s,t∈R, for a.e. ω∈Ω. (1)

We do not assume the map u(t, s, ·, ω) : X → X to be invertible for s < t.

###### Definition 1.1.

A random periodic path of period τ of the semi-flow u is an F-measurable map y : R × Ω → X such that

 u(t, s, y(s, ω), ω) = y(t, ω),  ∀ t ≥ s,
 y(s + τ, ω) = y(s, θ_τ ω),  ∀ s ∈ R, (2)

for any ω ∈ Ω.

Note that Definition 1.1 covers both the deterministic periodic path and the random fixed point (cf. [1]), also known as a stationary point, as special cases. In general, random periodic solutions cannot be found explicitly. For the dissipative system generated by an SDE with a global Lipschitz condition, the convergence of a forward Euler-Maruyama method and of a modified Milstein method to the random periodic solution has been investigated in [5]. For SDEs with a monotone drift condition, one benefits from a flexible choice of stepsize by applying an implicit method instead [19]. Each of these numerical schemes admits its own random periodic solution, which approximates the random periodic solution of the targeted SDE as the stepsize decreases. The main challenge lies in proving convergence over an infinite time horizon. In this paper, we consider approximating the random periodic trajectory of SPDEs, where we encounter the additional obstacle of simulating infinite-dimensional objects. For this, we employ the spectral Galerkin method (cf. [13]) for spatial dimension reduction and construct a discrete exponential integrator scheme based on the spatial discretization. Galerkin-type methods have been used intensively to simulate solutions of parabolic SPDEs over finite time horizons [10, 11, 12, 14, 15, 17], and they have recently been applied to approximate stationary distributions of SPDEs [2]. For the error analysis of both strong and weak approximations of semilinear stochastic evolution equations (SEEs) through Galerkin approximation, we refer the reader to the monograph [16].

Let H and U be two separable Hilbert spaces. For given t_0 < T we denote by (Ω, F, (F_t)_{t∈R}, P) a filtered probability space satisfying the usual conditions. By W we denote a U-valued Wiener process on it, with associated covariance operator Q, which is not necessarily assumed to be of finite trace. Denote by L₂⁰ the set of all Hilbert-Schmidt operators from Q^{1/2}(U) to H.

Our goal is to study and approximate the random periodic mild solution to SEEs of the form

 dX_t^{t_0} = [−A X_t^{t_0} + f(t, X_t^{t_0})] dt + g(t) dW(t), for t ∈ (t_0, T],  X_{t_0}^{t_0} = ξ. (3)

Throughout the paper, we impose the following essential assumptions.

###### Assumption 1.1.

The linear operator A : dom(A) ⊂ H → H is densely defined, self-adjoint, and positive definite with compact inverse.

Assumption 1.1 implies the existence of a positive, increasing sequence (λ_n)_{n∈N} with λ_n → ∞ as n → ∞, and of an orthonormal basis (e_n)_{n∈N} of H such that A e_n = λ_n e_n for every n ∈ N. Indeed we have that

 dom(A) := {x ∈ H : ∑_{n=1}^∞ λ_n² (x, e_n)² < ∞}.

In addition, it also follows from Assumption 1.1 that −A is the infinitesimal generator of an analytic semigroup (S(t))_{t∈[0,∞)} of contractions. More precisely, the family (S(t))_{t∈[0,∞)} enjoys the properties

 S(0) = Id ∈ L(H),  S(s + t) = S(s) ∘ S(t) = S(t) ∘ S(s), for all s, t ∈ [0, ∞),

and

 sup_{t∈[0,∞)} ‖S(t)‖_{L(H)} ≤ 1. (4)
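These semigroup identities are easy to check numerically on a finite spectral truncation. A minimal sketch, assuming a hypothetical spectrum λ_i = i² (as for the Dirichlet Laplacian on an interval), where S(t) acts diagonally on the eigenbasis:

```python
import numpy as np

# Hypothetical spectrum lambda_i = i^2 (as for the Dirichlet Laplacian on (0, pi)).
lam = np.array([float(i**2) for i in range(1, 6)])

def S(t):
    """Action of the analytic semigroup S(t) = exp(-tA) on the first 5 eigenmodes."""
    return np.diag(np.exp(-t * lam))

# Semigroup property: S(s + t) = S(s)S(t) = S(t)S(s).
s, t = 0.3, 0.7
assert np.allclose(S(s + t), S(s) @ S(t))
assert np.allclose(S(s) @ S(t), S(t) @ S(s))

# Contraction bound (4): the operator norm of S(t) never exceeds 1 for t >= 0.
assert all(np.linalg.norm(S(u), ord=2) <= 1.0 + 1e-12 for u in (0.0, 0.1, 1.0, 10.0))
```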

Further, let us introduce fractional powers of A, which are used to measure the (spatial) regularity of the mild solution (7). For any r ∈ [0, ∞) we define the operator A^{r/2} by

 A^{r/2} x := ∑_{j=1}^∞ λ_j^{r/2} (x, e_j) e_j, for all x ∈ dom(A^{r/2}). (5)

Then, by setting Ḣ^r := dom(A^{r/2}), we obtain a family of separable Hilbert spaces. Note that Ḣ^s is a subspace of Ḣ^r for s ≥ r.
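On a finite spectral truncation, the fractional power (5) is again diagonal in the eigenbasis. A small sketch with the same hypothetical spectrum, checking A⁰ = Id, A^{1/2}A^{1/2} = A, and the growth of the Ḣ^r-norms:

```python
import numpy as np

# Hypothetical eigenvalues of A; vectors are coordinates w.r.t. the eigenbasis (e_j).
lam = np.array([float(j**2) for j in range(1, 6)])

def frac_power(r, x):
    """A^{r/2} x = sum_j lam_j^{r/2} (x, e_j) e_j, eq. (5), on a 5-mode truncation."""
    return (lam ** (r / 2.0)) * x

x = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])

assert np.allclose(frac_power(0.0, x), x)                         # A^0 = Id
assert np.allclose(frac_power(1.0, frac_power(1.0, x)), lam * x)  # A^{1/2} A^{1/2} = A
# Nesting of the spaces dot-H^r: with lam_j >= 1 the r-norms increase with r.
norms = [np.linalg.norm(frac_power(r, x)) for r in (0.0, 1.0, 2.0)]
assert norms[0] <= norms[1] <= norms[2]
```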

###### Assumption 1.2.

The initial value ξ is F_{t_0}-measurable and satisfies ξ ∈ L²(Ω; H). Denote by C_ξ a constant such that E[‖ξ‖²] ≤ C_ξ².

###### Assumption 1.3.

The mapping f : R × H → H is continuous and periodic in time with period τ. Moreover, there exists a constant C_f > 0 such that

 ‖f(t, u₁) − f(t, u₂)‖ ≤ C_f ‖u₁ − u₂‖,
 ‖f(t₁, u) − f(t₂, u)‖ ≤ C_f (1 + ‖u‖) |t₁ − t₂|^{1/2},
 ⟨f(t, u₁) − f(t, u₂), u₁ − u₂⟩ ≤ −C_f ‖u₁ − u₂‖²

for all t, t₁, t₂ ∈ R and u, u₁, u₂ ∈ H.

From Assumption 1.3 we directly deduce a linear growth bound of the form

 ‖f(t, u)‖ ≤ Ĉ_f (1 + ‖u‖), for all t ∈ R, u ∈ H, (6)

where Ĉ_f := C_f + max_{t∈[0,τ]} ‖f(t, 0)‖ < ∞.

###### Assumption 1.4.

The mapping g : R → L₂⁰ is continuous and periodic in time with period τ. In addition, ‖g(t)‖_{L₂⁰} ≤ C_g for all t ∈ R.

Under these assumptions the SEE (3) admits a unique mild solution, which is uniquely determined by the variation-of-constants formula (cf. [3])

 X_t^{t_0}(ξ) = S(t − t_0) ξ + ∫_{t_0}^t S(t − s) f(s, X_s^{t_0}) ds + ∫_{t_0}^t S(t − s) g(s) dW(s), (7)

which holds P-almost surely for all t ∈ [t_0, T].
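The variation-of-constants formula can be sanity-checked in a scalar caricature. The sketch below (all coefficients hypothetical: A = a > 0, f ≡ 0, constant scalar g) compares the mild-solution evaluation with a direct Euler-Maruyama integration driven by the same Brownian increments:

```python
import numpy as np

# Scalar caricature of the mild solution formula (7): with A = a > 0, f = 0 and
# constant g, X_T = e^{-a(T - t0)} xi + g * int_{t0}^{T} e^{-a(T - s)} dW(s).
rng = np.random.default_rng(2)
a, g, xi, t0, T, N = 2.0, 0.3, 1.0, 0.0, 1.0, 50000
h = (T - t0) / N
dW = rng.normal(0.0, np.sqrt(h), size=N)

x = xi                                 # direct Euler-Maruyama integration
for j in range(N):
    x += -a * x * h + g * dW[j]

s = t0 + h * np.arange(N)              # left endpoints of the partition
mild = np.exp(-a * (T - t0)) * xi + g * np.sum(np.exp(-a * (T - s)) * dW)

# both discretizations of the same path agree up to the discretization error
assert abs(x - mild) < 1e-2
```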

### 1.1 The pull-back

To ensure the existence of the random periodic solution, we need some additional assumptions on the Wiener process and on f and g.

###### Assumption 1.6.

There exists a standard P-preserving ergodic Wiener shift (θ_t)_{t∈R} on (Ω, F, P) such that, for t, s ∈ R,

 P ∘ (θ_t W(s))^{−1} = P ∘ (W(t + s) − W(s))^{−1}.

Denote by X_t^{−kτ}(ξ) the solution starting from time −kτ. The uniform boundedness of E[‖X_t^{−kτ}(ξ)‖²] can be guaranteed under Assumptions 1.1 to 1.5. Further, under Assumptions 1.1 to 1.6, it is easy to show that, as k → ∞, the pull-back X_t^{−kτ}(ξ) has a unique limit X*_t in L²(Ω; H), and X*_t is the random periodic solution of SEE (3), satisfying

 X*_t = ∫_{−∞}^t S(t − s) f(s, X*_s) ds + ∫_{−∞}^t S(t − s) g(s) dW(s). (8)

More details can be found in Section 3. Besides, the continuity of X*_t is characterized in Section 3 for the error analysis in Section 4, with one additional assumption imposed:

###### Assumption 1.7.

There exists a such that .

### 1.2 The Galerkin approximation

Next, we formulate the assumptions on the spatial discretization. To this end, define finite-dimensional subspaces H_n := span{e₁, …, e_n} of H, spanned by the first n eigenfunctions of the basis, and let P_n : H → H_n be the orthogonal projection. Note that ‖P_n x‖ ≤ ‖x‖ for any x ∈ H. By doing this, we are able to further introduce the notations A_n := A P_n, f_n := P_n f, and g_n := P_n g. Then the Galerkin approximation to (3) can be formulated as follows:

 dX_t^{n,t_0} = [−A_n X_t^{n,t_0} + f_n(t, X_t^{n,t_0})] dt + g_n(t) dW(t), for t ∈ (t_0, T],  X_{t_0}^{n,t_0} = P_n ξ. (9)

Applying the spectral Galerkin method results in a system of finite-dimensional stochastic differential equations. Note that, for x ∈ H_n, we have A_n x = A x.

###### Remark 1.

It is easy to see that there exists an isometry between H_n and R^n. A simple calculation leads to the existence of a unique strong solution to (9). The uniform boundedness of the second moment as well as the existence of the random periodic solution to (9) are simple consequences of the corresponding properties of the solution to (3).
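Under this identification with R^n, the projection P_n becomes truncation of the coefficient vector. A minimal sketch of the two defining properties of the orthogonal projection used throughout (contraction and idempotence), in eigenbasis coordinates:

```python
import numpy as np

def P_n(x, n):
    """Orthogonal projection onto H_n = span{e_1, ..., e_n}: in eigenbasis
    coordinates, keep the first n Fourier coefficients and zero out the rest."""
    out = np.zeros_like(x)
    out[:n] = x[:n]
    return out

x = 1.0 / (1.0 + np.arange(8.0))

assert np.linalg.norm(P_n(x, 4)) <= np.linalg.norm(x)   # P_n is a contraction
assert np.allclose(P_n(P_n(x, 4), 4), P_n(x, 4))        # P_n is idempotent
assert np.allclose(P_n(x, 8), x)                        # P_n -> Id as n grows
```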

Let us fix an equidistant partition with stepsize h. Note that the grid points stretch along the real line because we are dealing with an infinite time horizon problem. Then, to simulate the solution to (9) starting at −kτ, the discrete exponential integrator scheme on H_n is given by the recursion

 X̂_{−kτ+(j+1)h}^{n,−kτ} = S(h)(X̂_{−kτ+jh}^{n,−kτ} + h f_n(−kτ + jh, X̂_{−kτ+jh}^{n,−kτ}) + g_n(−kτ + jh) ΔW_j), (10)

for all j ∈ N₀, where ΔW_j := W(−kτ + (j+1)h) − W(−kτ + jh) and the initial value is X̂_{−kτ}^{n,−kτ} = P_n ξ. Moreover, if we define

 X̄_t^{n,−kτ} = X̂_{−kτ+jh}^{n,−kτ}  and  Λ(t) = −kτ + jh

for t ∈ [−kτ + jh, −kτ + (j+1)h), it follows that the continuous version of (10) is

 X̂_t^{n,−kτ} = S(t + kτ) P_n ξ + ∫_{−kτ}^t S(t − Λ(s)) f_n(Λ(s), X̄_s^{n,−kτ}) ds + ∫_{−kτ}^t S(t − Λ(s)) g_n(Λ(s)) dW(s), (11)

with differential form

 dX̂_t^{n,−kτ} = [−A_n X̂_t^{n,−kτ} + S(t − Λ(t)) f_n(Λ(t), X̄_t^{n,−kτ})] dt + S(t − Λ(t)) g_n(Λ(t)) dW(t). (12)
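Since the recursion (10) is only partially legible in this copy, the following is a hedged sketch of an exponential-Euler-type recursion of the kind described, with a hypothetical spectrum and hypothetical τ-periodic coefficients f_n and g_n written in eigenbasis coordinates; it runs one pull-back period starting from −τ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, tau = 8, 0.01, 2 * np.pi
lam = np.array([float((i + 1)**2) for i in range(n)])   # hypothetical spectrum of A_n
S_h = np.exp(-lam * h)                                  # diagonal action of S(h)

def f_n(t, x):
    # hypothetical Lipschitz, tau-periodic drift in eigenbasis coordinates
    return -np.tanh(x) + np.cos(t) / (1.0 + np.arange(n))

def g_n(t):
    # hypothetical tau-periodic diagonal noise coefficient
    return 0.1 * (1.0 + 0.5 * np.sin(t)) / (1.0 + np.arange(n))

def step(t, x):
    """One exponential-Euler step of the type in (10):
    X_{j+1} = S(h) (X_j + h f_n(t_j, X_j) + g_n(t_j) dW_j)."""
    dW = rng.normal(0.0, np.sqrt(h), size=n)
    return S_h * (x + h * f_n(t, x) + g_n(t) * dW)

# one pull-back period, started at -tau (k = 1)
x, t = np.zeros(n), -tau
for _ in range(int(tau / h)):
    x = step(t, x)
    t += h

assert np.all(np.isfinite(x))
```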

In Section 4, we show the uniform boundedness of the scheme by imposing another assumption on f and g:

###### Assumption 1.8.

and .

We conclude the random periodicity of the spatio-temporal discrete scheme (11) in Theorem 4.1 and determine a strong order of convergence to the mild solution for (11) in Theorem 4.2. Compared to the convergence in the SDE cases in [5, 19], the SEE case requires the approximation trajectory to start from an initial condition ξ ∈ L²(Ω; Ḣ^r) with r ∈ (0, 1), rather than from an arbitrary starting point in L²(Ω; H); this guarantees the continuity of the limiting path. An interesting observation is that the order of convergence depends directly on the space in which the initial point lives, i.e., on r. As the convergence of the pull-back to the random periodic path is independent of the initial condition, we end the paper with Corollary 4.1, which determines a strong, but not optimal, order for approximating the random periodic solution.

## 2 Preliminaries

In this section we present a few useful mathematical tools for later use.

###### Proposition 2.1.

Under the condition on the infinitesimal generator in Assumption 1.1 for the semigroup (S(t))_{t≥0}, the following properties hold:

1. For any ν ∈ (0, 1], there exists a constant C₁(ν) > 0 such that

 ‖A^{−ν}(S(t) − Id)‖_{L(H)} ≤ C₁(ν) t^ν for t ≥ 0. (13)

In addition,

 ‖A^{−ν}‖_{L(H)} ≤ λ₁^{−ν}. (14)
2. For any μ > 0, there exists a constant C₂(μ) > 0 such that

 ‖A^μ S(t)‖_{L(H)} ≤ C₂(μ) t^{−μ} for t > 0. (15)
3. For the orthogonal projection P_n, it holds that

 ‖S(t)(Id − P_n)‖_{L(H)} ≤ e^{−λ_{n+1} t}, for t > 0. (16)
###### Proof.

The proof of the first two inequalities can be found in [18]. For the last one, note that for any x ∈ H we have the decomposition x = ∑_{i=1}^∞ (x, e_i) e_i. Clearly S(t)(Id − P_n) is a linear operator from H to H. Its induced norm (indeed we consider its square for convenience) is therefore

 ‖S(t)(Id − P_n)‖²_{L(H)} = sup_{x∈H, ‖x‖=1} ‖S(t)(Id − P_n)x‖² = sup_{x∈H, ‖x‖=1} ∑_{i=n+1}^∞ e^{−2λ_i t}(x, e_i)² ≤ e^{−2λ_{n+1} t} sup_{x∈H, ‖x‖=1} ∑_{i=1}^∞ (x, e_i)² = e^{−2λ_{n+1} t}.
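The computation above can be illustrated numerically: on a truncation, S(t)(Id − P_n) annihilates the first n modes, and its norm is the largest surviving factor e^{−λ_{n+1}t}. A sketch with a hypothetical spectrum:

```python
import numpy as np

lam = np.array([float(i**2) for i in range(1, 21)])  # hypothetical spectrum

def proj_error_norm(t, n):
    """Operator norm of S(t)(Id - P_n) on a 20-mode truncation: only the modes
    i > n survive, so the norm equals the largest factor exp(-lam_i t), i > n."""
    return float(np.exp(-lam[n:] * t).max())

# The bound (16): ||S(t)(Id - P_n)|| <= exp(-lam_{n+1} t), here with equality.
for t in (0.1, 1.0):
    for n in (1, 5, 10):
        assert proj_error_norm(t, n) <= np.exp(-lam[n] * t) + 1e-15
```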

As one of the main tools, the Gamma function is used:

 Γ(ν) := ∫_0^∞ x^{ν−1} e^{−x} dx < ∞  for ν > 0. (17)
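A quick numerical sanity check of (17), comparing a midpoint-rule quadrature on a truncated domain against the standard library's `math.gamma`:

```python
import math

# Numerical check of (17): Gamma(nu) = int_0^inf x^{nu-1} e^{-x} dx,
# via a midpoint rule on the truncated domain [0, 50].
def gamma_quad(nu, upper=50.0, steps=100000):
    dx = upper / steps
    return sum(((i + 0.5) * dx) ** (nu - 1) * math.exp(-(i + 0.5) * dx) * dx
               for i in range(steps))

for nu in (1.0, 2.5, 4.0):
    assert abs(gamma_quad(nu) - math.gamma(nu)) < 1e-4
```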

## 3 Existence and uniqueness of random periodic solution

In the following, we show the boundedness of the solution to SEE (3) and characterize its dependence on the initial condition. The proofs closely follow Lemma 3.1 and Lemma 3.2 in [19].

###### Lemma 3.1.

For SEE (3) with the given initial condition ξ satisfying Assumptions 1.1 to 1.5, we have

 sup_{k∈N} sup_{t>−kτ} E[‖X_t^{−kτ}(ξ)‖²] < ∞. (18)

If, in addition, ξ ∈ L²(Ω; Ḣ^r) for some r ∈ (0, 1), the mild solution introduced in (7) is well defined in L²(Ω; Ḣ^r) for any t > −kτ and k ∈ N.

###### Proof.

The first assertion follows from Lemma 3.1 in [19]. It remains to justify the second assertion by bounding each term of (7) in L²(Ω; Ḣ^r) with some constant independent of t and k. For the first term on the right-hand side of (7), we have

 E[‖A^{r/2} S(t + kτ) ξ‖²] = E[‖S(t + kτ) A^{r/2} ξ‖²] ≤ E[‖A^{r/2} ξ‖²].

To bound the second term on the right-hand side of (7), we apply the linear growth of f in (6), Assumption 1.7, Proposition 2.1 and (18), and take θ ∈ (0, 1) as follows:

 E[‖A^{r/2} ∫_{−kτ}^t S(t − s) f(s, X_s^{−kτ}) ds‖²]
 ≤ Ĉ_f² (1 + sup_{k∈N} sup_{s≥−kτ} E[‖X_s^{−kτ}‖²]) (∫_{−kτ}^t ‖A^{r/2} S(θ(t − s))‖ ‖S((1 − θ)(t − s))‖ ds)²
 = Ĉ_f² C₂(r/2)² (1 + sup_{k∈N} sup_{s≥−kτ} E[‖X_s^{−kτ}‖²]) α^{r−2} Γ(1 − r/2)²/4,

where we make use of the definition of the Gamma function (17). It remains to estimate the last term of (7). To achieve this, we apply the Itô isometry, Assumption 1.7 and Proposition 2.1 together with the Gamma-function technique above:

 E[‖A^{r/2} ∫_{−kτ}^t S(t − s) g(s) dW(s)‖²] ≤ C_g² ∫_{−kτ}^t ‖A^{r/2} S(t − s)‖² ds ≤ C_g² (2α)^{r−1} Γ(1 − r)/2.

###### Lemma 3.2.

Assume Assumptions 1.1 to 1.5. Denote by X_{t̃}^{−kτ} and Y_{t̃}^{−kτ} two solutions of SPDE (3) with different initial values ξ and η. Then for every ϵ > 0 and t̃ ∈ R, there exists a K ∈ N such that

 E[‖X_{t̃}^{−kτ} − Y_{t̃}^{−kτ}‖²] < ϵ (19)

whenever k ≥ K.

With Lemma 3.1, Lemma 3.2 and Assumption 1.6, the existence and uniqueness of the random periodic solution to (3) can be shown by following a similar argument to the proof of Theorem 2.4 in [5].

###### Theorem 3.1.

Under Assumptions 1.1 to 1.6, there exists a unique random periodic solution X*_t ∈ L²(Ω; H) such that the solution of (3) satisfies

 lim_{k→∞} E[‖X_t^{−kτ}(ξ) − X*_t‖²] = 0. (20)

Note that Theorem 3.1 shows that the convergence holds regardless of the initial condition ξ; that is, X_t^{−kτ}(ξ) will converge to the unique random periodic solution no matter where it starts. This observation is crucial in that one may choose a starting point with preferred properties, for instance the continuity shown in Lemma 3.3.
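The independence of the pull-back limit from the initial condition can be seen in a scalar toy model. In the sketch below (hypothetical SDE dX = (−X + cos t) dt + σ dW with period τ = 2π), two solutions driven by the same noise but started from different points at time −kτ contract toward each other, so the value at time 0 becomes independent of the start:

```python
import numpy as np

# Scalar toy analogue of the pull-back in Theorem 3.1: two solutions of the
# hypothetical SDE dX = (-X + cos t) dt + sigma dW, driven by the SAME noise
# but started from different points at -k*tau, contract exponentially.
rng = np.random.default_rng(1)
tau, h, sigma = 2 * np.pi, 1e-3, 0.5

def pullback_gap(k, xi, eta):
    x, y, t = xi, eta, -k * tau
    for _ in range(int(k * tau / h)):
        dW = rng.normal(0.0, np.sqrt(h))
        x += (-x + np.cos(t)) * h + sigma * dW
        y += (-y + np.cos(t)) * h + sigma * dW
        t += h
    return abs(x - y)

gaps = [pullback_gap(k, 5.0, -5.0) for k in (1, 2, 3)]
assert gaps[0] > gaps[1] > gaps[2]   # gap decays roughly like exp(-k*tau)
```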

###### Lemma 3.3.

Recall that Λ(t) = −kτ + jh for t ∈ [−kτ + jh, −kτ + (j+1)h), for any fixed k. For SEE (3) with the given initial condition ξ ∈ L²(Ω; Ḣ^r) for some r ∈ (0, 1), and satisfying Assumptions 1.1 to 1.5 and Assumption 1.7, it holds that for any ν₁ ∈ (0, r/2], there exists a constant C_X(ν₁, r), depending on ν₁ and r, such that

 sup_{k∈N} sup_{t≥−kτ} E[‖X_t^{−kτ} − X_{Λ(t)}^{−kτ}‖²] ≤ C_X(ν₁, r) h^{2ν₁}.
###### Proof.

One can easily deduce the following expression from the mild form (7):

 X_t^{−kτ}(ξ) − X_{Λ(t)}^{−kτ}(ξ) = (S(t − Λ(t)) − Id) S(Λ(t) + kτ) ξ
 + ∫_{Λ(t)}^t S(t − s) f(s, X_s^{−kτ}) ds + ∫_{−kτ}^{Λ(t)} (S(t − Λ(t)) − Id) S(Λ(t) − s) f(s, X_s^{−kτ}) ds
 + ∫_{Λ(t)}^t S(t − s) g(s) dW(s) + ∫_{−kτ}^{Λ(t)} (S(t − Λ(t)) − Id) S(Λ(t) − s) g(s) dW(s). (21)

To get the final assertion, we estimate each term on the right-hand side of (21). For the first term, we have that

 E[‖(S(t − Λ(t)) − Id) S(Λ(t) + kτ) ξ‖²]
 = E[‖A^{−ν₁}(S(t − Λ(t)) − Id) A^{−(r/2−ν₁)} S(Λ(t) + kτ) A^{r/2} ξ‖²]
 ≤ ‖A^{−ν₁}(S(t − Λ(t)) − Id)‖²_{L(H)} ‖A^{−(r/2−ν₁)}‖²_{L(H)} ‖S(Λ(t) + kτ)‖²_{L(H)} E[‖A^{r/2} ξ‖²],

where Proposition 2.1 is applied in the last line. For the second term of (21), by making use of the linear growth of f and Hölder's inequality, we easily get

 E[‖∫_{Λ(t)}^t S(t − s) f(s, X_s^{−kτ}) ds‖²] ≤ 2Ĉ_f² h² (1 + sup_{s>−kτ} E[‖X_s^{−kτ}‖²]).

Similarly, for the fourth term of (21), through the Itô isometry we have that

 E[‖∫_{Λ(t)}^t S(t − s) g(s) dW(s)‖²] = ∫_{Λ(t)}^t ‖S(t − s) g(s)‖²_{L₂⁰} ds ≤ 2σ² h.

For the third term of (21), applying Assumption 1.1, Proposition 2.1, and a change of variables yields the following estimate:

 E[‖∫_{−kτ}^{Λ(t)} (S(t − Λ(t)) − Id) S(Λ(t) − s) f(s, X_s^{−kτ}) ds‖²]
 = E[‖∫_{−kτ}^{Λ(t)} A^{−ν₁}(S(t − Λ(t)) − Id) A^{ν₁} S(Λ(t) − s) f(s, X_s^{−kτ}) ds‖²]
 ≤ C₁(ν₁)² h^{2ν₁} ∫_{−kτ}^{Λ(t)} ‖A^{ν₁} S(Λ(t) − s)‖_{L(H)} ds ∫_{−kτ}^{Λ(t)} ‖A^{ν₁} S(Λ(t) − s)‖_{L(H)} E[‖f(s, X_s^{−kτ})‖²] ds
 ≤ Ĉ_f² (1 + sup_{k∈N} sup_{s>−kτ} E[‖X_s^{−kτ}‖²]) C₁(ν₁)² h^{2ν₁} (∫_0^{Λ(t)+kτ} ‖A^{ν₁} S(θs) S((1 − θ)s)‖_{L(H)} ds)²
 ≤ Ĉ_f² (1 + sup_{k∈N} sup_{s>−kτ} E[‖X_s^{−kτ}‖²]) C₁(ν₁)² h^{2ν₁} C₂(ν₁)² (∫_0^{Λ(t)+kτ} (θs)^{−ν₁} e^{−α(1−θ)s} ds)²
 ≤ Ĉ_f² (1 + sup_{k∈N} sup_{s>−kτ} E[‖X_s^{−kτ}‖²]) C₁(ν₁)² h^{2ν₁} C₂(ν₁)² α^{2(ν₁−1)} Γ(1 − ν₁)²/4, (22)

where we changed variables to deduce the integral in the fourth line and applied the Gamma function (17) to get the last line.

For the last term of (21), using the Itô isometry and the definition of the Gamma function, we have that

 (23)

The continuity of the true solution established in Lemma 3.3 plays an important role in the later analysis.

## 4 The random periodic solution of Galerkin numerical approximation

This section is devoted to the existence and uniqueness of the random periodic solution of the Galerkin-type spatio-temporal discretization defined in (11), and to its convergence to the random periodic solution of our target SPDE (3).

###### Lemma 4.1.

Under Assumptions 1.1 to 1.4, for the continuous version of the numerical scheme defined in (11) with stepsize h, there exists a constant C_n, which depends on λ_n, Ĉ_f and C_g, such that

 E[‖X̂_t^{n,−kτ} − X̄_t^{n,−kτ}‖²] ≤ C_n h (1 + E[‖X̄_{Λ(t)}^{n,−kτ}‖²]). (24)
###### Proof.

From (12) we get that

 X̂_t^{n,−kτ} − X̄_t^{n,−kτ} = (S(t − Λ(t)) − Id) X̄_t^{n,−kτ} + ∫_{Λ(t)}^t S(t − Λ(s)) f_n(Λ(s), X̄_s^{n,−kτ}) ds + ∫_{Λ(t)}^t S(t − Λ(s)) g_n(Λ(s)) dW(s). (25)

For the first term on the right-hand side, we have that

 E[‖(S(t − Λ(t)) − Id) X̄_t^{n,−kτ}‖²] = E[‖∑_{i=1}^n (e^{−λ_i(t−Λ(t))} − 1)(e_i, X̄_t^{n,−kτ}) e_i‖²]
 ≤ (e^{−λ_n(t−Λ(t))} − 1)² E[‖X̄_t^{n,−kτ}‖²] ≤ λ_n² h² E[‖X̄_{Λ(t)}^{n,−kτ}‖²], (26)

where we use the fact that |e^{−x} − 1| ≤ x for x ≥ 0 to derive the last inequality.

For the second term on the right-hand side of (25), we have that

 E[‖∫_{Λ(t)}^t S(t − Λ(s)) f_n(Λ(s), X̄_s^{n,−kτ}) ds‖²]
 ≤ ∫_{Λ(t)}^t ‖S(t − Λ(s))‖²_{L(H)} ds ∫_{Λ(t)}^t E[‖f_n(Λ(s), X̄_s^{n,−kτ})‖²] ds
 ≤ h² Ĉ_f² (1 + E[‖X̄_{Λ(t)}^{n,−kτ}‖²]),

where we apply Hölder's inequality to deduce the second line and make use of the linear growth of f to get the last line.

For the last term on the right-hand side of (25), through the Itô isometry, Assumption 1.1 and Assumption 1.4, we have that

 E[‖∫_{Λ(t)}^t S(t − Λ(s)) g_n(Λ(s)) dW(s)‖²] = ∫_{Λ(t)}^t E[‖S(t − Λ(s)) g_n(Λ(s))‖²_{L₂⁰}] ds ≤ h C_g².

###### Lemma 4.2.

Under Assumptions 1.1 to 1.4 and Assumption 1.8, consider the solution of SEE (3) with the initial condition ξ and its numerical approximation from (11) with the stepsize h satisfying

 (5Ĉ_f √λ_n (1 + C_n h) + 2C_f √C_n) √h ≤ 2C_f. (27)

Then we have

 sup_{k∈N} sup_{t>−kτ} E[‖X̂_t^{n,−kτ}(ξ)‖²] ≤ C_ξ² + 2(Ĉ_f C_f + Ĉ_f + C_g²)/(2λ₁ − Ĉ_f). (28)

If, in addition, ξ ∈ L²(Ω; Ḣ^r) for some r ∈ (0, 1), the numerical solution introduced in (11) is well defined in L²(Ω; Ḣ^r) for any t > −kτ and k ∈ N.

###### Proof.

Applying the Itô formula to e^{2λt}‖X̂_t^{n,−kτ}(ξ)‖², where we consider the differential form (12), and taking the expectation yield

 e^{2λt} E[‖X̂_t^{n,−kτ}(ξ)‖²] = e^{−2λkτ} E[‖ξ‖²] + 2λ ∫_{−kτ}^t e^{2λs} E[‖X̂_s^{n,−kτ}‖²] ds − 2 ∫_{−kτ}^t e^{2λs} E(X̂_s^{n,−kτ}, A X̂_s^{n,−kτ}) ds + 2 ∫_{−kτ}^t e^{2λs} E(X̂_s^{n,−kτ}, S(s − Λ(s)) f_n(Λ(s), X̄_s^{n,−kτ})) ds + 2 ∫_{−kτ}^t e^{2λs} ‖S(s − Λ(s)) g_n(