## 1 Introduction

Statistical inference for stochastic partial differential equations (SPDEs) is an important and rapidly advancing branch of mathematical statistics. Mostly in the framework of equations driven by a Brownian field, new areas of application are emerging (see e.g. [12] or [1]) and new methods are being developed for estimating drift and volatility parameters in various settings (see [5] for an extensive survey of recent developments). One of the classical ideas for parameter estimation is to consider so-called empirical power variations, that is, sums of increments of the solution process (in either the time or the space component) raised to some power; see for instance [3] or [4]. In particular, the recent work [7] conducts an in-depth study of quadratic variations for solutions of parabolic SPDEs on a space-time grid.

In this context, the development of stochastic calculus with respect to fractional Brownian motion has naturally led to statistical inference for SPDEs driven by fractional noise in either the time or the space component. Many authors have investigated this topic over the last few decades (see, for example, [2] and [6]). For such equations the method of power variations can also be used to estimate the Hurst parameter of the driving noise, analogously to the classical results for fractional Brownian motion and many related processes (see the monograph [14] for numerous examples).

In this paper we consider the stochastic wave equation with zero boundary conditions driven by a noise that is fractional in time and white in space. From the point of view of applications, the solution to such an equation describes the motion of a randomly perturbed string. This equation and its properties have been described, for instance, in [14] and [2]. In [9] the authors study the behaviour of quadratic variations in the space coordinate for the Hurst parameter varying from to , while in [13] the case is considered. In both works the authors derive and analyse estimators for .

The papers [9] and [13] have served as the starting point for the present manuscript. We study the behaviour of quadratic variations in the time component of the solution to the wave equation. More precisely, if denotes the solution to the wave equation with fractional-white noise, we consider the sequence of centred (empirical) quadratic variations defined by

(1)

We retrieve a standard threshold known from the fractional Brownian context and prove for the sequence a (quantitative) central limit theorem for the Hurst parameter between and , as well as a noncentral limit theorem for above , although the limiting object differs from the one obtained in [13] for space-dependent quadratic variations. Using these results and assuming that the mild solution is observed at discrete times at a fixed space location , we construct an estimator of the parameter H from the observations for . Based on the behaviour of the sequence (1), we prove that this estimator is strongly consistent and asymptotically normal. Subsequently, we briefly compare it to its space-dependent analogue from [9]. Furthermore, we introduce drift and volatility parameters into the equation and propose strongly consistent and asymptotically normal estimators for them. Finally, in the simpler scenario of Brownian noise (that is, for ) we consider rectangular, i.e. joint space-time, quadratic variations and prove a quantitative central limit theorem in this case as well. This allows us to construct a drift parameter estimator based on space-time observations and to assess its asymptotic properties.
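The quadratic-variation approach to estimating the Hurst parameter can be sketched numerically on a plain fractional Brownian motion, whose increments show the scaling that drives estimators of this type: the expected quadratic variation at mesh 1/N behaves like N^(1-2H), so comparing two dyadic meshes recovers H. The names and the two-scale log-ratio construction below are our own illustrative choices, not the paper's exact estimator built from the sequence (1).

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm_cholesky(n, H):
    """Sample fractional Brownian motion at t_i = i/n via Cholesky factorisation."""
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    return np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)

def hurst_qv(path):
    """Two-scale quadratic-variation estimate of H."""
    v_fine = np.sum(np.diff(path)**2)         # mesh 1/n
    v_coarse = np.sum(np.diff(path[::2])**2)  # mesh 2/n
    # For fBm, E[V_coarse] / E[V_fine] = 2^(2H-1); solve for H:
    return 0.5 * (1.0 + np.log2(v_coarse / v_fine))

H_true = 0.3
est = np.mean([hurst_qv(fbm_cholesky(2**10, H_true)) for _ in range(20)])
```

For fBm, quadratic-variation estimators of this kind are asymptotically normal when H < 3/4, which is the classical threshold alluded to above.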

Methodologically, the results in this paper boil down to a meticulous analysis of the covariance structure of the solution to the wave equation (which is of independent interest), as well as to the application of classical techniques from the Malliavin-Stein toolkit, such as the celebrated fourth moment theorem and the study of cumulants, in order to establish convergence in distribution.

The paper is structured as follows. In Section 2 we briefly describe the setting and in Section 3 we study the covariance structure of the solution process in time. In Section 4 the main theorems are proved, namely a central limit theorem for and a noncentral limit theorem for . Sections 5 and 6 deal with estimation questions for different settings related to the wave equation. Finally, in Section 7 several results are collected concerning rectangular quadratic variations in the simple case . The paper ends with a concise appendix containing basic results and definitions from Malliavin calculus.

## 2 Preliminaries

In this section we introduce the fractional-white wave equation and its solution, and present the basic definitions used in our work.

The object of our study will be the solution to the following stochastic wave equation

where is a fractional-white Gaussian noise, defined as a real-valued centred Gaussian field , over a given complete filtered probability space , with covariance function given by

(2)

where is the covariance function of the fractional Brownian motion.
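Presumably the covariance referred to here is the standard fBm covariance R_H(t,s) = (t^(2H) + s^(2H) - |t-s|^(2H))/2; the following sketch (the function name is ours) records it together with two sanity checks.

```python
import numpy as np

def R(H, t, s):
    """Covariance of fractional Brownian motion with Hurst index H."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

# Sanity checks: Var(B^H_t) = t^(2H), and H = 1/2 recovers Brownian motion.
var_ok = np.isclose(R(0.7, 2.0, 2.0), 2.0**1.4)
bm_ok = np.isclose(R(0.5, 3.0, 5.0), min(3.0, 5.0))
```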

We will assume throughout this work

The solution of the equation is understood in the mild sense, that is, it is defined as the square-integrable centred field given by

(3)

where is the fundamental solution to the wave equation and the integral in (3) is a Wiener integral with respect to the Gaussian process , that is, we have simply

(4)

Throughout the paper we use the symbol to denote asymptotic equality (i.e. the ratio of the two sides tends to one), the symbol to denote asymptotic equality up to a constant, and the symbol to denote that the left-hand side is asymptotically less than or equal to the right-hand side up to a constant (i.e. the ratio is asymptotically bounded by a constant).

## 3 The temporal covariance structure

The key to understanding the behaviour of a Gaussian process is its covariance structure, which we compute in this section.

###### Theorem 1

For the solution process for a fixed has the covariance structure

Proof: The proof for is given in Lemma 1 in Section 7 of this article. For recall first that

Using the isometry property we have with

By direct computation we obtain

Consequently,

Let us assume and analyse the three summands separately.

For the second summand we obtain

Finally, for the third summand we have

Adding up the summands we obtain the result.

Let us turn to the case . The first summand is

The second summand is the same as above, and for the third summand we obtain

Summing up , and yields the same result.

There are several remarks to be made concerning this result. First, the covariance does not depend on the space variable. Moreover, since the solution is Gaussian, it follows directly from the covariance formula that it is self-similar in time. One can also conclude from the formula that the process has a version with continuous paths of any Hölder index below , since

and by Gaussianity

for . The statement now follows by Kolmogorov’s continuity criterion.
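The Kolmogorov step can be spelled out in generic form (the concrete Hölder exponent depends on the covariance formula above): for a centred Gaussian process, with $c_k = \mathbb{E}[Z^{2k}]$ for $Z \sim \mathcal{N}(0,1)$,

$$\mathbb{E}\,|X_t - X_s|^{2k} \;=\; c_k \,\bigl(\mathbb{E}\,|X_t - X_s|^{2}\bigr)^{k},$$

so a bound $\mathbb{E}|X_t - X_s|^{2} \le C\,|t-s|^{2\gamma}$ yields $\mathbb{E}|X_t - X_s|^{2k} \le C_k\,|t-s|^{2k\gamma}$ for every $k$, and Kolmogorov's continuity criterion then gives a version with Hölder continuous paths of any index strictly below $\gamma - \tfrac{1}{2k} \to \gamma$.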

The next statements concern the asymptotics of the covariance.

###### Remark 1

In particular, we obtain for the covariance of the increments:

if and
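For comparison, the analogous quantity for plain fBm is the increment autocovariance rho_H(k) = (|k+1|^(2H) + |k-1|^(2H) - 2|k|^(2H))/2, which satisfies rho_H(k) ~ H(2H-1) k^(2H-2) as k grows, for H different from 1/2. The snippet below (our own illustration, not the paper's formula) checks this asymptotic numerically.

```python
import numpy as np

def rho(H, k):
    """Autocovariance of unit-step fBm increments at lag k."""
    k = np.asarray(k, dtype=float)
    return 0.5 * (np.abs(k + 1)**(2 * H) + np.abs(k - 1)**(2 * H)
                  - 2 * np.abs(k)**(2 * H))

# For H != 1/2: rho(H, k) ~ H(2H-1) k^(2H-2) as k -> infinity.
H, k = 0.8, 1.0e4
ratio = float(rho(H, k)) / (H * (2 * H - 1) * k**(2 * H - 2))
```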

###### Corollary 1

Note that for we can write the covariance function as follows:

where

and

with the following asymptotics for large :

These expressions are obtained using the binomial expansion applied to , , and .

Moreover, using the same asymptotics, we obtain for large :

## 4 The temporal quadratic variations

For the solution of the wave equation we define its quadratic variation in time,

For simplicity let us denote for some fixed .

### 4.1 Renormalization of

###### Proposition 1

As tends to infinity, we have asymptotically for and for , up to constants that are made explicit in the proof.

Proof: By reordering the sum and grouping the non-diagonal summands, each of which appears twice, we have

The non-diagonal summands with less than a certain constant are at most of order and can therefore be ignored in the asymptotics up to constants. We obtain

If , is not summable and is asymptotically equal to

If , is summable. To obtain the precise constant we recall that

One can easily see with Corollary 1 that the first two summands are of order while the third one is of order and dominates the other two. Therefore, we have

which is summable.

Finally, for the diagonal we calculate

For the term is slower than , and the claim follows with nonzero limiting constants.

More precisely, for we obtain

For we can write

which defines the normalising constant.
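The summability dichotomy underlying this proof mirrors the classical fBm picture: since the squared increment autocovariance decays like k^(4H-4), its series converges exactly when H < 3/4. A small numerical sketch (using the fBm increment autocovariance as a stand-in for the covariance studied here):

```python
import numpy as np

def rho(H, k):
    """Autocovariance of unit-step fBm increments at lag k >= 1."""
    return 0.5 * ((k + 1)**(2 * H) + (k - 1)**(2 * H) - 2 * k**(2 * H))

def partial_sum(H, n):
    """Partial sum of rho(k)^2 up to lag n."""
    k = np.arange(1, n + 1, dtype=float)
    return float(np.sum(rho(H, k)**2))

# rho(k)^2 is of order k^(4H-4): summable iff H < 3/4.
s_sub = [partial_sum(0.6, n) for n in (10**3, 10**6)]    # essentially converged
s_super = [partial_sum(0.9, n) for n in (10**3, 10**6)]  # keeps growing
```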

###### Remark 2

Now we know which normalisation is needed to prove limit theorems. We consider . For we have pointwise.

### 4.2 Central limit theorem and rate of convergence

To establish the central limit theorem for the quadratic variations, we will use tools from the Malliavin-Stein framework. A short introduction to the necessary terminology and classical identities can be found in the Appendix. The principal statement needed in the proof is Theorem 5.2.6 in [10], a version of the fourth moment theorem. For the convenience of the reader we recall it in the following.

###### Theorem 2

Fix . Let , with , be a sequence of random variables belonging to the th Wiener chaos such that

Then converges in law to if and only if

Furthermore,

where stands for either the Kolmogorov distance, the total variation distance or the Wasserstein distance.
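The theorem can be illustrated numerically: centred quadratic variations of fBm increments live in the second Wiener chaos, and for H < 3/4 the Breuer-Major theorem gives a Gaussian limit with variance 2 times the sum of the squared increment autocovariances. The following Monte Carlo sketch (an illustration on fBm, not on the solution process itself) checks that the normalised statistic is approximately Gaussian, via the fourth-moment condition E[F^4] close to 3 (E[F^2])^2.

```python
import numpy as np

rng = np.random.default_rng(1)
n, H = 512, 0.6  # H < 3/4: the central-limit regime for fBm variations

# Covariance matrix of unit-step fBm increments, factored once.
k = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
cov = 0.5 * ((k + 1)**(2 * H) + np.abs(k - 1)**(2 * H) - 2 * k**(2 * H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

def centred_qv():
    """sqrt(n)-normalised centred quadratic variation: an element of the 2nd chaos."""
    x = L @ rng.standard_normal(n)
    return np.sum(x**2 - 1.0) / np.sqrt(n)

sample = np.array([centred_qv() for _ in range(1000)])
mean, var = sample.mean(), sample.var()
# A fourth moment close to 3 * var^2 indicates approximate Gaussianity.
kurt_ratio = np.mean(sample**4) / (3.0 * var**2)
```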

From now on, fix and denote by the Hilbert space associated to the Gaussian solution process . This Hilbert space is defined as the closure of the set of indicator functions with respect to the inner product

Denoting by the th multiple integral with respect to , we can write

using the product rule (7). Now we can formulate the central limit theorem for , the normalised version of .

###### Theorem 3

For the sequence converges in law to as tends to infinity. Moreover,
