On quadratic variations of the fractional-white wave equation

This paper studies the behaviour of quadratic variations of a stochastic wave equation driven by a noise that is white in space and fractional in time. Complementing the analysis of quadratic variations in the space component carried out by M. Khalil and C. A. Tudor (2018) and by R. Shevchenko, M. Slaoui and C. A. Tudor (2020), it focuses on the time component of the solution process. For different values of the Hurst parameter, a central and a noncentral limit theorem are proved, allowing the construction of consistent parameter estimators and their comparison to the findings in the space-dependent case. Finally, rectangular quadratic variations in the white-noise case are studied and a central limit theorem is demonstrated.


1 Introduction

Statistical inference for stochastic partial differential equations (SPDEs) is an important and rapidly advancing branch of mathematical statistics. Usually under the framework of a Brownian field driving the equations, new areas of application are emerging (see e.g. [12] or [1]) and new methods are being developed for estimating the drift and volatility parameters in various settings (see [5] for an extensive survey of the development of the subject in recent years).

One of the classical ideas for parameter estimation consists in considering so-called empirical power variations, that is, sums over increments of the solution process (either in the time or in the space component) raised to some power, see for instance [3] or [4]. In particular, in a recent work [7] an in-depth study of quadratic variations for solutions of parabolic SPDEs is conducted on a space-time grid.

In this context the development of stochastic calculus with respect to the fractional Brownian motion has led naturally to statistical inference for SPDEs driven by fractional noise either in the time or in the space component. Many authors have already investigated this topic over the last few decades (see, for example, [2] and [6]). For such equations the method of power variations can also be used in order to estimate the corresponding Hurst parameter of the driving noise analogously to the classical results for fractional Brownian motion and many associated processes (see the monograph [14] for numerous examples).

In this paper we consider the stochastic wave equation with zero initial conditions driven by a noise that is fractional in time and white in space. From the point of view of applications, a solution to such an equation describes the motion of a randomly perturbed string. This equation and its properties have been described, for instance, in [14] and [2]. In the paper [9] the authors study the behaviour of quadratic variations in the space coordinate for the Hurst parameter varying from 1/2 to 3/4, and in [13] the case H ∈ (3/4, 1) is considered. In both works the authors derive and analyse estimators for H.

The papers [9] and [13] have served as the starting point for the present manuscript. We study the behaviour of quadratic variations in the time component of the wave equation solution. More precisely, if u = {u(t,x), t ≥ 0, x ∈ ℝ} denotes the solution to the wave equation with fractional-white noise, we consider, for a fixed x, the sequence of the centred (empirical) quadratic variations defined by

 V_N = (1/N) Σ_{i=0}^{N−1} [ (u((i+1)/N, x) − u(i/N, x))² − E(u((i+1)/N, x) − u(i/N, x))² ]. (1)

We retrieve the standard threshold 3/4 for processes in the fractional Brownian context and prove for the sequence a (quantitative) central limit theorem for the Hurst parameter between 1/2 and 3/4 as well as a noncentral limit theorem for H above 3/4, although the limiting object is different from the one obtained in [13] for space-dependent quadratic variations. Using these results and assuming that the mild solution is observed at discrete times i/N, i = 0, …, N, at a fixed space location x, we construct an estimator of the parameter H from the observations u(i/N, x), i = 0, …, N. Based on the behaviour of the sequence (1), we prove that the estimator for H is strongly consistent and asymptotically normal. Subsequently, we briefly compare this estimator to its space-dependent analogue from [9]. Furthermore, we introduce drift and volatility parameters into the equation and propose strongly consistent and asymptotically normal estimators for those. Finally, in the simpler scenario of Brownian noise (that is, for H = 1/2) we consider rectangular, i.e. joint space-time, quadratic variations and prove a quantitative central limit theorem in this case. This allows us to construct a drift parameter estimator based on space-time observations and assess its asymptotic properties.

Methodologically, the results in this paper boil down to a meticulous analysis of the covariance structure of the solution to the wave equation (which is of independent interest) as well as to the application of classical techniques from the Malliavin-Stein toolkit, such as the celebrated fourth moment theorem or the study of cumulants, in order to demonstrate convergence in distribution.

The paper is structured as follows. In Section 2 we briefly describe the setting and in Section 3 we study the covariance structure of the solution process in time. In Section 4 the main theorems are proved, namely a central limit theorem for H ∈ [1/2, 3/4) and a noncentral limit theorem for H ∈ (3/4, 1). Sections 5 and 6 deal with estimation questions for different settings related to the wave equation. Finally, in Section 7 several results are collected concerning rectangular quadratic variations in the simple case H = 1/2. The paper ends with a concise appendix containing basic results and definitions from Malliavin calculus.

2 Preliminaries

In this section we introduce the fractional-white wave equation and its solution and present the basic definitions used in our work.

The object of our study will be the solution to the following stochastic wave equation,

 ∂²u/∂t² (t,x) = ∂²u/∂x² (t,x) + ∂²W^H/∂t∂x (t,x), t > 0, x ∈ ℝ,
 u(0,x) = 0, ∂u/∂t (0,x) = 0,

where the driving noise is a fractional-white Gaussian noise, that is, W^H = {W^H(t,x), t ≥ 0, x ∈ ℝ} is a real-valued centred Gaussian field over a given complete filtered probability space (Ω, F, (F_t)_{t≥0}, P), with covariance function given by

 E(W^H(t,x)W^H(s,y)) = R_H(t,s) min(x,y), ∀ x,y ∈ ℝ, (2)

where R_H is the covariance of the fractional Brownian motion,

 R_H(t,s) = (1/2)(t^{2H} + s^{2H} − |t−s|^{2H}), s,t ≥ 0.
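In the boundary case H = 1/2 the temporal part of the covariance (2) reduces to the Brownian one, so that W^{1/2} is a white noise in both variables; a one-line check of this standard fact:

```latex
R_{1/2}(t,s) \;=\; \tfrac12\bigl(t + s - |t-s|\bigr) \;=\; \min(t,s), \qquad s,t \ge 0,
```

hence E(W^{1/2}(t,x)W^{1/2}(s,y)) = min(t,s) min(x,y), the covariance of a Brownian sheet.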

We will assume throughout this work H ∈ [1/2, 1).

The solution of the wave equation is understood in the mild sense, that is, it is defined as the square-integrable centred field given by

 u(t,x) = ∫_0^t ∫_ℝ G_1(t−s, x−y) W^H(ds, dy), t ≥ 0, x ∈ ℝ, (3)

where G_1 is the fundamental solution to the wave equation and the integral in (3) is a Wiener integral with respect to the Gaussian process W^H; we have simply

 G_1(t,x) = (1/2) 1_{|x| < t}.

In the course of the paper we use the symbol ∼ to denote asymptotic equality (i.e. the ratio of the two sides tends to one), the symbol ≍ to denote asymptotic equality up to a constant, and the symbol ≲ to denote that the left-hand side is asymptotically less than or equal to the right-hand side up to a constant (i.e. the ratio is asymptotically bounded by a constant).

3 The temporal covariance structure

The main step in understanding the behaviour of a Gaussian process is determining its covariance structure, which is calculated in this section.

Theorem 1

The solution process u(t,x) for a fixed x ∈ ℝ has the covariance structure

 (4/H) E[u(t,x)u(s,x)] = (1/(H(2H+1))) (t^{2H+1} + s^{2H+1}) − (2/(2H)) t (t−s)^{2H} + (2/(2H+1)) (t−s)^{2H+1} for t ≥ s.

Proof:   The proof for H = 1/2 is given in Lemma 1 in Section 7 of this article. For H ∈ (1/2, 1) recall first that

 u(t,x) = ∫_0^t ∫_ℝ G_1(t−r, x−y) W^H(dr, dy) = (1/2) ∫_0^t ∫_ℝ 1_{|x−y| < t−r} W^H(dr, dy).

Using the isometry property we have, with α_H = H(2H−1),

 E[u(t,x)u(s,x)] = α_H ∫_0^t ∫_0^s ∫_ℝ G_1(t−u, x−y) G_1(s−v, x−y) |u−v|^{2H−2} dy dv du
 = (α_H/4) ∫_0^t ∫_0^s ∫_ℝ 1_{|x−y| < t−u} 1_{|x−y| < s−v} |u−v|^{2H−2} dy dv du.

By direct computation we obtain

 ∫_ℝ 1_{|x−y| < t−u} 1_{|x−y| < s−v} dy = 2 min(t−u, s−v).

Consequently,

 (4/α_H) E[u(t,x)u(s,x)] = ∫_0^t ∫_0^s 2 min(t−u, s−v) |u−v|^{2H−2} dv du
 = ∫_0^t ∫_0^s 2 min(u, v) |t−u−s+v|^{2H−2} dv du
 = ∫_0^s ∫_0^s 2 min(u, v) |t−u−s+v|^{2H−2} dv du + ∫_s^t ∫_0^s 2 min(u, v) |t−u−s+v|^{2H−2} dv du
 = ∫_0^s ∫_0^u 2v |t−u−s+v|^{2H−2} dv du + ∫_0^s ∫_u^s 2u |t−u−s+v|^{2H−2} dv du + ∫_s^t ∫_0^s 2v |t−u−s+v|^{2H−2} dv du
 = I_1 + I_2 + I_3.

Let us assume t ≥ 2s and analyse the three summands separately.

 I_1 = ∫_0^s 2v ∫_v^s |t−u−s+v|^{2H−2} du dv
 = ∫_0^s 2v ∫_{s−t+v}^{2s−t} |u−v|^{2H−2} du dv
 = −(1/(2H−1)) ∫_0^s 2v ( (v−2s+t)^{2H−1} − (t−s)^{2H−1} ) dv
 = −(1/(2H−1)) ∫_0^s 2v (v−2s+t)^{2H−1} dv + (1/(2H−1)) (t−s)^{2H−1} ∫_0^s 2v dv
 = −(1/(2H−1)) ∫_{t−2s}^{t−s} 2 (v+2s−t) v^{2H−1} dv + (1/(2H−1)) (t−s)^{2H−1} s²
 = (1/(2H−1)) ( (t−s)^{2H−1} s² − ∫_{t−2s}^{t−s} 2 (v+2s−t) v^{2H−1} dv ).

For the second summand we obtain

 I_2 = ∫_0^s 2u ∫_u^s |t−u−s+v|^{2H−2} dv du
 = ∫_0^s 2u ∫_{t−s+u}^t |u−v|^{2H−2} dv du
 = (1/(2H−1)) ∫_0^s 2u ( (t−u)^{2H−1} − (t−s)^{2H−1} ) du
 = (1/(2H−1)) ∫_{t−s}^t 2 (t−u) u^{2H−1} du − (1/(2H−1)) (t−s)^{2H−1} s²
 = (1/(2H−1)) ( (2/(2H)) t (t^{2H} − (t−s)^{2H}) − (2/(2H+1)) (t^{2H+1} − (t−s)^{2H+1}) − (t−s)^{2H−1} s² ).

Finally, for the third summand we have

 I_3 = ∫_0^s 2v ∫_s^t |t−u−s+v|^{2H−2} du dv
 = ∫_0^s 2v ∫_{2s−t}^s |u−v|^{2H−2} du dv
 = ∫_0^s 2v ( ∫_{2s−t}^v (v−u)^{2H−2} du + ∫_v^s (u−v)^{2H−2} du ) dv
 = (1/(2H−1)) ∫_0^s 2v ( (v−2s+t)^{2H−1} + (s−v)^{2H−1} ) dv
 = (1/(2H−1)) ( ∫_{t−2s}^{t−s} 2 (v+2s−t) v^{2H−1} dv + ∫_0^s 2 (s−v) v^{2H−1} dv )
 = (1/(2H−1)) ( ∫_{t−2s}^{t−s} 2 (v+2s−t) v^{2H−1} dv + (2/(2H)) s^{2H+1} − (2/(2H+1)) s^{2H+1} ).

Adding up the summands we obtain the result.
Let us turn to the case s ≤ t ≤ 2s. The first summand is

 I_1 = ∫_0^s 2v ∫_{s−t+v}^{2s−t} |u−v|^{2H−2} du dv
 = ∫_0^{2s−t} 2v ∫_{s−t+v}^{2s−t} |u−v|^{2H−2} du dv + ∫_{2s−t}^s 2v ∫_{s−t+v}^{2s−t} (v−u)^{2H−2} du dv
 = (1/(2H−1)) ( ∫_0^{2s−t} 2v (t−s)^{2H−1} dv + ∫_0^{2s−t} 2v (2s−t−v)^{2H−1} dv
 − ∫_{2s−t}^s 2v (v−2s+t)^{2H−1} dv + ∫_{2s−t}^s 2v (t−s)^{2H−1} dv )
 = (1/(2H−1)) ( ∫_0^s 2v dv (t−s)^{2H−1} + ∫_0^{2s−t} 2v (2s−t−v)^{2H−1} dv − ∫_{2s−t}^s 2v (v−2s+t)^{2H−1} dv )
 = (1/(2H−1)) ( (t−s)^{2H−1} s² + ∫_0^{2s−t} 2v (2s−t−v)^{2H−1} dv − ∫_{2s−t}^s 2v (v−2s+t)^{2H−1} dv ).

The second summand is the same as above, and for the third summand we obtain

 I_3 = ∫_0^s 2v ∫_{2s−t}^s |u−v|^{2H−2} du dv
 = ∫_0^{2s−t} 2v ∫_{2s−t}^s (u−v)^{2H−2} du dv + ∫_{2s−t}^s 2v ∫_{2s−t}^v (v−u)^{2H−2} du dv + ∫_{2s−t}^s 2v ∫_v^s (u−v)^{2H−2} du dv
 = (1/(2H−1)) ( ∫_0^{2s−t} 2v (s−v)^{2H−1} dv − ∫_0^{2s−t} 2v (2s−t−v)^{2H−1} dv
 + ∫_{2s−t}^s 2v (v−2s+t)^{2H−1} dv + ∫_{2s−t}^s 2v (s−v)^{2H−1} dv )
 = (1/(2H−1)) ( ∫_0^s 2 (s−v) v^{2H−1} dv − ∫_0^{2s−t} 2v (2s−t−v)^{2H−1} dv + ∫_{2s−t}^s 2v (v−2s+t)^{2H−1} dv )
 = (1/(2H−1)) ( (2/(2H)) s^{2H+1} − (2/(2H+1)) s^{2H+1} − ∫_0^{2s−t} 2v (2s−t−v)^{2H−1} dv + ∫_{2s−t}^s 2v (v−2s+t)^{2H−1} dv ).

Summing up I_1, I_2 and I_3 yields the same result.

There are several remarks to be made concerning this result. First, the covariance is independent of the space variable. Moreover, since the solution is Gaussian, it follows directly from the covariance formula that it is a self-similar process in time (of order H + 1/2). It can also be concluded from the formula that the process has a version with continuous paths of Hölder index below H, since

 E[(u(t,x) − u(s,x))²] ≲ |t−s|^{2H} for t,s ≥ 0,

and by Gaussianity

 E[(u(t,x) − u(s,x))^{2m}] ≲ |t−s|^{2Hm} for t,s ≥ 0,

for every m ∈ ℕ. The statement now follows by Kolmogorov's continuity criterion.
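As an independent sanity check (our own addition, not part of the article), the closed-form covariance of Theorem 1 can be compared with a direct midpoint-rule discretisation of the double integral (4/α_H) E[u(t,x)u(s,x)] = ∫_0^t ∫_0^s 2 min(t−u, s−v) |u−v|^{2H−2} dv du appearing in the proof; the helper names below are ours.

```python
def cov_closed(t, s, H):
    # Covariance E[u(t,x)u(s,x)] from Theorem 1 (assumes t >= s).
    F = ((t**(2*H + 1) + s**(2*H + 1)) / (H*(2*H + 1))
         - t*(t - s)**(2*H)/H
         + 2*(t - s)**(2*H + 1)/(2*H + 1))
    return H*F/4

def cov_numeric(t, s, H, n=400):
    # Midpoint rule for (alpha_H/4) int_0^t int_0^s 2 min(t-u, s-v) |u-v|^(2H-2) dv du.
    a_H = H*(2*H - 1)
    hu, hv = t/n, s/n
    total = 0.0
    for i in range(n):
        u = (i + 0.5)*hu
        for j in range(n):
            v = (j + 0.5)*hv
            total += 2*min(t - u, s - v)*abs(u - v)**(2*H - 2)
    return a_H/4*total*hu*hv

for H in (0.85, 0.95):                      # mild singularity keeps the quadrature accurate
    for t, s in ((1.0, 0.3), (1.0, 0.8)):   # covers both t >= 2s and s <= t <= 2s
        closed, numeric = cov_closed(t, s, H), cov_numeric(t, s, H)
        assert abs(closed - numeric) < 0.05*closed
```

The two case distinctions of the proof (t ≥ 2s and s ≤ t ≤ 2s) are exercised by the two choices of (t, s).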

The next statements are concerned with the asymptotics of the covariance.

Remark 1

In particular, we obtain for the covariance of the increments:

 (4/H) E[(u(i/N, x) − u((i−1)/N, x))(u(j/N, x) − u((j−1)/N, x))]
 = (1/N^{2H+1}) [ (2/(2H)) ( i(i−j+1)^{2H} − i(i−j)^{2H} + (i−1)(i−j−1)^{2H} − (i−1)(i−j)^{2H} )
 − (2/(2H+1)) ( (i−j−1)^{2H+1} − 2(i−j)^{2H+1} + (i−j+1)^{2H+1} ) ]

if i > j, and

 (4/H) E[(u((i+1)/N, x) − u(i/N, x))²] = 2 (1/N^{2H+1}) ( i/H + 1/(H(2H+1)) ).
Corollary 1

Note that for i > j we can write the covariance function as follows:

 E[(u(i/N, x) − u((i−1)/N, x))(u(j/N, x) − u((j−1)/N, x))] = (H/4) (1/N^{2H+1}) ( ψ1(i−j) + i ψ2(i−j) ),

where

 ψ1(k) = (2/(2H)) ( k^{2H} − (k−1)^{2H} ) − (2/(2H+1)) ( (k+1)^{2H+1} − 2k^{2H+1} + (k−1)^{2H+1} )

and

 ψ2(k) = (2/(2H)) ( (k+1)^{2H} − 2k^{2H} + (k−1)^{2H} )

with the following asymptotics for large k:

 ψ1(k) ∼ 2(1−2H) k^{2H−1}, ψ2(k) ∼ 2(2H−1) k^{2H−2}.

These expressions are obtained using the binomial expansion applied to (k±1)^{2H} and (k±1)^{2H+1}.
We obtain, moreover, for large i−j, using the same asymptotics:

 ψ1(i−j) + i ψ2(i−j) ∼ (2H−1)(2j−1)(i−j)^{2H−2}.
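Corollary 1 and the asymptotics above lend themselves to a direct numerical check: the right-hand side can be compared with the double difference of the covariance of Theorem 1. This is a sketch we added for illustration; the helper names are ours.

```python
H, N = 0.65, 200

def cov(t, s):
    # Covariance of Theorem 1, extended symmetrically to all t, s >= 0.
    t, s = max(t, s), min(t, s)
    F = ((t**(2*H + 1) + s**(2*H + 1))/(H*(2*H + 1))
         - t*(t - s)**(2*H)/H + 2*(t - s)**(2*H + 1)/(2*H + 1))
    return H*F/4

def psi1(k):
    return ((k**(2*H) - (k - 1)**(2*H))/H
            - 2*((k + 1)**(2*H + 1) - 2*k**(2*H + 1) + (k - 1)**(2*H + 1))/(2*H + 1))

def psi2(k):
    return ((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))/H

# Corollary 1: E[(u(i/N)-u((i-1)/N))(u(j/N)-u((j-1)/N))] = (H/4) N^-(2H+1) (psi1(i-j) + i psi2(i-j))
for i, j in ((10, 3), (50, 20), (150, 40)):
    lhs = (cov(i/N, j/N) - cov(i/N, (j - 1)/N)
           - cov((i - 1)/N, j/N) + cov((i - 1)/N, (j - 1)/N))
    rhs = H/4*N**(-(2*H + 1))*(psi1(i - j) + i*psi2(i - j))
    assert abs(lhs - rhs) <= 1e-8*abs(rhs)

# Large-lag asymptotics: psi1(i-j) + i psi2(i-j) ~ (2H-1)(2j-1)(i-j)^(2H-2)
i, j = 150, 40
assert abs((psi1(i - j) + i*psi2(i - j)) / ((2*H - 1)*(2*j - 1)*(i - j)**(2*H - 2)) - 1) < 0.05
```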

4 The temporal quadratic variations

For the solution of the wave equation we define its quadratic variation in time,

 V_N = (1/N) Σ_{i=0}^{N−1} [ (u((i+1)/N, x) − u(i/N, x))² − E(u((i+1)/N, x) − u(i/N, x))² ].

For simplicity let us denote u_i := u(i/N, x) for some fixed x ∈ ℝ.

4.1 Renormalization of V_N

Proposition 1

As N tends to infinity, we have asymptotically E[V_N²] ∼ c₁ N^{−4H−1} for H ∈ [1/2, 3/4) and E[V_N²] ∼ c₂ N^{−4} for H ∈ (3/4, 1), up to some constants c₁, c₂ > 0 made exact in the proof.

Proof:   We have by reordering the sum and putting together the non-diagonal summands that appear twice

 E[V_N²] = (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} 2 E[(u_{i+1} − u_i)(u_{j+1} − u_j)]²
 = (4/N²) Σ_{j=0}^{N−1} Σ_{i=j+1}^{N−1} E[(u_{i+1} − u_i)(u_{j+1} − u_j)]² + (2/N²) Σ_{i=0}^{N−1} E[(u_{i+1} − u_i)²]² =: v_{N,1} + v_{N,2},

using that E[(X² − EX²)(Y² − EY²)] = 2 E[XY]² for jointly Gaussian centred X, Y.

The non-diagonal summands with i − j smaller than a certain constant do not influence the leading-order asymptotics and can therefore be ignored in the asymptotics up to constants. We obtain

 v_{N,1} ∼ (4/N²) (H²/16) (1/N^{4H+2}) Σ_{j=0}^{N−1} Σ_{i=j+1}^{N−1} ( 2(2H−1) j (i−j)^{2H−2} )²
 = (16(2H−1)²/N²) (H²/16) (1/N^{4H+2}) Σ_{j=0}^{N−1} j² Σ_{k=1}^{N−j−1} k^{4H−4}.

If H ∈ (3/4, 1), k^{4H−4} is not summable and v_{N,1} is asymptotically equal to

 ( (2H−1)² H² / ((4H−3) N^{4H+4}) ) Σ_{j=0}^{N−1} j² (N−j−1)^{4H−3}
 = ( (2H−1)² H² / ((4H−3) N^{4H+4}) ) Σ_{j=0}^{N−1} (N−j−1)² j^{4H−3}
 ∼ ( 2(2H−1)² H² / ((4H−3)(4H−2)(4H−1)(4H)) ) N^{−4}
 = ( H(2H−1) / (4(4H−1)(4H−3)) ) N^{−4}.

If H ∈ [1/2, 3/4), k^{4H−4} is summable. To obtain the precise constant we recall that

 v_{N,1} = (4/N²) (H²/16) (1/N^{4H+2}) Σ_{i=0}^{N−1} Σ_{j=0}^{i−1} ( ψ1(i−j) + i ψ2(i−j) )²
 = (4/N²) (H²/16) (1/N^{4H+2}) Σ_{i=0}^{N−1} Σ_{j=0}^{i−1} ( ψ1(i−j)² + 2i ψ2(i−j) ψ1(i−j) + i² ψ2(i−j)² ).

One can easily see with Corollary 1 that the first two summands are of order N^{4H} while the third one is of order N³ and therefore dominates the other two for H < 3/4. Therefore, we have

 lim_{N→∞} N^{4H+1} v_{N,1} = lim_{N→∞} N^{4H+1} (4/N²) (H²/16) (1/N^{4H+2}) Σ_{i=0}^{N−1} i² Σ_{j=0}^{i−1} ψ2(i−j)²
 = lim_{N→∞} (H²/4) N^{−3} Σ_{j=1}^{N} ψ2(j)² Σ_{i=j}^{N} i² = (H²/12) Σ_{j=1}^{∞} ψ2(j)²,

where the series converges, since ψ2(j)² ∼ 4(2H−1)² j^{4H−4} is summable for H < 3/4.

Finally, for the diagonal we calculate

 v_{N,2} = (2/N²) Σ_{i=0}^{N−1} ( (H/2) (1/N^{2H+1}) ( i/H + 1/(H(2H+1)) ) )²
 = (H²/2) (1/N^{4H+4}) Σ_{i=0}^{N−1} ( i/H + 1/(H(2H+1)) )²
 ∼ (H²/2) (1/N^{4H+4}) Σ_{i=0}^{N−1} (i/H)² ∼ (1/6) N^{−4H−1}.

For H ∈ (3/4, 1) the term v_{N,2} is of smaller order than v_{N,1}, and the claim follows with nonzero limiting constants.
More precisely, for H ∈ [1/2, 3/4) we obtain

 lim_{N→∞} N^{4H+1} E[V_N²] = (H²/12) Σ_{j=1}^{∞} ψ2(j)² + 1/6 = (H²/24) Σ_{j=−∞}^{∞} ψ2(|j|)²

(with the convention ψ2(0) = 2/H).

For H ∈ (3/4, 1) we can write

 N⁴ E(V_N²) = (H²/(4N^{4H})) Σ_{i=0}^{N−1} Σ_{j=0}^{i−1} ψ1(i−j)² + (H²/(2N^{4H})) Σ_{i=0}^{N−1} Σ_{j=0}^{i−1} i ψ1(i−j) ψ2(i−j)
 + (H²/(4N^{4H})) Σ_{i=0}^{N−1} Σ_{j=0}^{i−1} i² ψ2(i−j)² + N⁴ v_{N,2},

and, since N⁴ v_{N,2} = O(N^{3−4H}) vanishes in the limit,

 lim_{N→∞} N⁴ E(V_N²) = lim_{N→∞} ( (H²/(4N^{4H})) Σ_{j=0}^{N−1} Σ_{i=j+1}^{N−1} ψ1(j)² + (H²/(2N^{4H})) Σ_{j=0}^{N−1} ψ1(j) ψ2(j) Σ_{i=j+1}^{N−1} i
 + (H²/(4N^{4H})) Σ_{j=0}^{N−1} ψ2(j)² Σ_{i=j+1}^{N−1} i² ),

which defines the normalising constant.
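To illustrate Proposition 1 numerically in the regime H ∈ [1/2, 3/4), one can compute E[V_N²] exactly from the covariance of Theorem 1 and compare N^{4H+1} E[V_N²] with the limiting constant (H²/24) Σ_{j∈ℤ} ψ2(|j|)². The snippet below is our own illustration; finite-N corrections of order N^{4H−3} motivate the loose tolerance.

```python
H, N = 0.55, 500

def cov(t, s):
    # Covariance of Theorem 1, extended symmetrically.
    t, s = max(t, s), min(t, s)
    F = ((t**(2*H + 1) + s**(2*H + 1))/(H*(2*H + 1))
         - t*(t - s)**(2*H)/H + 2*(t - s)**(2*H + 1)/(2*H + 1))
    return H*F/4

def psi2(k):
    # psi2(0) = 2/H with the convention (-1)^(2H) := |-1|^(2H)
    return 2.0/H if k == 0 else ((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))/H

# E[V_N^2] = (1/N^2) sum_{i,j} 2 E[D_i D_j]^2 with D_i = u((i+1)/N, x) - u(i/N, x)
C = [[cov(i/N, j/N) for j in range(N + 1)] for i in range(N + 1)]
ev2 = 0.0
for i in range(N):
    for j in range(N):
        d = C[i + 1][j + 1] - C[i + 1][j] - C[i][j + 1] + C[i][j]
        ev2 += 2*d*d
ev2 /= N*N

limit = H**2/24*(psi2(0)**2 + 2*sum(psi2(k)**2 for k in range(1, 5000)))
ratio = N**(4*H + 1)*ev2/limit
assert abs(ratio - 1) < 0.1
```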

Remark 2

Note that with the notation from [9], namely

 φ_H(k) = (1/2) ( (k+1)^{2H} − 2k^{2H} + (k−1)^{2H} ),

the precise limiting constant for H ∈ [1/2, 3/4) equals

 σ² = (1/6) Σ_{k∈ℤ} φ_H(k)².
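The constant σ² coincides with the limit obtained above, since ψ2(k) = (2/H) φ_H(k); a quick numerical confirmation of this consistency (series truncated, with the k = 0 terms read as φ_H(0) = 1 and ψ2(0) = 2/H):

```python
H = 0.65
phi = lambda k: ((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))/2   # phi_H(k), k >= 1
psi2 = lambda k: ((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))/H  # psi_2(k), k >= 1

sigma2 = (1.0 + 2*sum(phi(k)**2 for k in range(1, 2000)))/6
limit = H**2/24*((2.0/H)**2 + 2*sum(psi2(k)**2 for k in range(1, 2000)))
assert abs(sigma2 - limit) < 1e-12
```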

Now we know which normalisation is needed to prove limit theorems. We consider the normalised sequence F_N := V_N (E[V_N²])^{−1/2}. Note that, as N → ∞, V_N itself tends to zero in L²(Ω).

4.2 Central limit theorem and rate of convergence

To establish the central limit theorem of the quadratic variations, we will use tools from the Malliavin-Stein framework. A short introduction of the necessary terminology and classical identities can be found in the Appendix. The principal statement necessary for the proof of the theorem is Theorem 5.2.6 in [10], which is a version of the fourth moment theorem. For convenience of the reader we recall it in the following.

Theorem 2

Fix q ≥ 1. Let (G_N)_{N≥1}, with G_N = I_q(g_N), g_N ∈ H^{⊙q}, be a sequence of random variables belonging to the

qth Wiener chaos such that

 E(G_N²) → s² > 0 as N → ∞.

Then G_N converges in law to N(0, s²) if and only if

 ‖DG_N‖²_H → q s² in L²(Ω) as N → ∞.

Furthermore,

 d(G_N; N(0, s²)) ≤ C √( Var( (1/q) ‖DG_N‖²_H ) ),

where d is either the Kolmogorov distance, the distance in total variation or the Wasserstein distance.

From now on, fix x ∈ ℝ and denote by H the Hilbert space associated to the Gaussian solution process u(·, x). This Hilbert space is defined as the closure of the set of indicator functions with respect to the inner product

 ⟨1_{[0,t]}, 1_{[0,s]}⟩_H = E(u(t,x) u(s,x)), for the fixed x ∈ ℝ.

Denoting by I_q the qth multiple integral with respect to u, we can write

 V_N = I_2( (1/N) Σ_{i=0}^{N−1} 1_{[i/N, (i+1)/N]}^{⊗2} )

using the product rule (7). Now we can formulate the central limit theorem for F_N, the normalised version of V_N.

Theorem 3

For H ∈ [1/2, 3/4) the sequence F_N converges in law to N(0, 1) as N tends to infinity. Moreover,

 d(F_N, N(0,1)) ≲ N^{−1/2} if H ∈ [1/2, 5/8),
 d(F_N, N(0,1)) ≲ N^{−1/2} log(N)^{3/2} if H = 5/8,
 d(F_N, N(0,1)) ≲ N^{4H−3} if H ∈ (5/8, 3/4).