1. The Model
1.1. General Form of the Equation
Throughout this work we fix a final time T > 0. Let H be a Hilbert space with inner product ⟨·,·⟩. Let A be a negative definite self-adjoint operator on H with compact resolvent and domain D(A). We write H^s := D((−A)^{s/2}) for s ∈ ℝ. The general model we are interested in is given by the following equation in H:
(3) dX_t = (θ A X_t + F(t, X_t)) dt + B dW_t
together with an initial condition X_0. Here, F is a (possibly nonlinear) measurable operator, W is a cylindrical Wiener process on H, and B is of Hilbert–Schmidt type. As we need weak solutions only, the stochastic basis as well as the cylindrical Wiener process need not be determined in advance. The number θ is the unknown parameter to be estimated.
For simplicity, we restrict ourselves to the case where B is diagonal in the eigenbasis of A introduced below, i.e. B e_k = σ_k e_k. For later use, we introduce some notation. Let (e_k)_{k∈ℕ} be an ONB of eigenvectors of A such that the corresponding eigenvalues (taking into account multiplicity) are ordered increasingly, i.e. −A e_k = λ_k e_k with 0 < λ_1 ≤ λ_2 ≤ …. For N ∈ ℕ, the projection onto the span of e_1, …, e_N is called P_N. The Sobolev norms on the spaces H^s will be denoted by ||·||_s. The following Poincaré-type inequalities hold for s ≤ s': ||P_N x||_{s'} ≤ λ_N^{(s'−s)/2} ||P_N x||_s and ||(I − P_N) x||_s ≤ λ_{N+1}^{(s−s')/2} ||(I − P_N) x||_{s'}. For our analysis, the regularity spaces
(4) 
will be crucial. Let s ≥ 0. We say that (3) has a weak solution (more precisely, weak in the probabilistic sense as well as in the sense of usual PDE theory) in H^s on [0, T] if there is a stochastic basis together with a cylindrical Wiener process W on H and an adapted process X such that
(5) ⟨X_t, φ⟩ = ⟨X_0, φ⟩ + ∫_0^t (θ⟨X_r, Aφ⟩ + ⟨F(r, X_r), φ⟩) dr + ⟨B W_t, φ⟩ for all φ ∈ D(A),
a.s. for t ∈ [0, T]. We say that X "is" a weak solution to (3) if a stochastic basis and a cylindrical Wiener process can be found such that (5) holds. We need the following class of assumptions, parametrized by s ≥ 0:

(A_s) The observed process X is a weak solution to (3) on [0, T] lying in the regularity space from (4) a.s.
Sufficient conditions for the existence of solutions to (3) can be derived, e.g., with the help of [14] or [20]. For N ∈ ℕ, the projected process X^N := P_N X satisfies
(6) dX_t^N = (θ A X_t^N + P_N F(t, X_t)) dt + P_N B dW_t.
Throughout this work we assume that the eigenvalues of −A have polynomial growth, i.e. there exist β, c > 0 such that
(7) λ_k ≍ c k^β.
In particular, λ_N → ∞ as N → ∞. Here, a_k ≍ b_k denotes asymptotic equivalence of two sequences of positive numbers (a_k), (b_k), in the sense that a_k/b_k → 1. Similarly, a_k ≲ b_k means a_k ≤ C b_k for a constant C > 0 independent of k.
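To make (7) concrete in the simplest setting: for the Dirichlet Laplacian on the unit interval the eigenvalues are known in closed form, λ_k = (πk)², so (7) holds with β = 2 and c = π². The following minimal check is purely illustrative; the closed-form eigenvalues are standard, and all numerical choices are ours:

```python
import math

# Eigenvalues of the 1D Dirichlet Laplacian on (0, 1): lambda_k = (pi * k)^2.
def laplacian_eigenvalue_1d(k: int) -> float:
    return (math.pi * k) ** 2

# Asymptotic equivalence in the sense of (7): lambda_k / (pi^2 * k^2) -> 1,
# so the growth exponent is beta = 2 (consistent with Weyl's law for d = 1).
ratios = [laplacian_eigenvalue_1d(k) / (math.pi ** 2 * k ** 2) for k in (1, 10, 1000)]
assert all(abs(r - 1.0) < 1e-12 for r in ratios)
```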
1.2. Statistical Inference
We describe three estimators for θ (see [3]), which correspond to different levels of knowledge about the solution trajectory X: full observation of X (estimator θ̂), observation of the projected solution X^N only (estimator θ̃), and ignorance of the nonlinearity (estimator θ̌). All estimators depend on a contrast parameter γ.

Assume we observe just the projected solution X^N. In this case, we need to replace the term F(t, X_t) by F(t, X_t^N) and consider the estimator θ̃:
(10) 
In any of the preceding observation schemes, we may leave out the nonlinear term completely, obtaining the linear estimator θ̌:
(11)
For notational convenience, we suppress the dependence on N of all estimators.
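To fix ideas, the sketch below computes a linear estimator of the above type from discretized spectral data in the purely linear case. The concrete least-squares form θ̌ ≈ −(∑_k λ_k ∫ x_k dx_k)/(∑_k λ_k² ∫ x_k² dt), with γ = 0 and unit noise intensities, is an illustrative assumption in the spirit of the spectral estimators of [7]; all numerical parameters are ours:

```python
import numpy as np

# Sketch: drift estimation from the spectral modes in the linear case F = 0.
# Each mode solves dx_k = -theta * lam_k * x_k dt + dw_k (unit noise intensity),
# and we use the least-squares form
#   theta_est = - sum_k lam_k * int x_k dx_k / sum_k lam_k^2 * int x_k^2 dt.
# All concrete choices (theta, lam_k, step sizes) are illustrative assumptions.
rng = np.random.default_rng(0)
theta, N, T, n_steps = 1.0, 20, 1.0, 10_000
dt = T / n_steps
lam = np.arange(1, N + 1) ** 2.0        # polynomially growing eigenvalues, beta = 2

# simulate all modes with the exact Ornstein-Uhlenbeck transition
decay = np.exp(-theta * lam * dt)
sd = np.sqrt((1 - decay ** 2) / (2 * theta * lam))
x = np.zeros((n_steps + 1, N))
for i in range(n_steps):
    x[i + 1] = decay * x[i] + sd * rng.normal(size=N)

# Riemann/Ito sums approximating the continuous-time statistics
num = np.sum(lam * np.sum(x[:-1] * (x[1:] - x[:-1]), axis=0))
den = np.sum(lam ** 2 * np.sum(x[:-1] ** 2, axis=0) * dt)
theta_est = -num / den
assert abs(theta_est - theta) < 0.15    # close to the true parameter
```

The discretization of the stochastic integral introduces a small bias of order θλ_N dt, which is why the step size is kept small relative to the fastest mode.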
1.3. The Main Result
In order to state the main theorem of this paper, let us introduce some further conditions on the nonlinearity F, indexed by the excess regularity η > 0:

(F1) There is η > 0, an integrable function f and a continuous function g such that
(12) ||F(t, x)||_{s−2+η} ≤ f(t) g(||x||_s) for any t ∈ [0, T] and x ∈ H^s.
Equivalently, we may choose g to be just locally bounded, because in this case there is a continuous g̃ with g ≤ g̃. We call η the excess regularity of F. (Of course, the choice of η is not unique.)

(F2) There are η > 0 and a continuous function g such that
(13) ||F(t, x) − F(t, y)||_{s−2+η} ≤ g(||x||_s ∨ ||y||_s) ||x − y||_s for t ∈ [0, T] and x, y ∈ H^s.
Condition (F1) is sufficient to carry out a perturbation argument with respect to the linear case. Condition (F2) is sufficient to formalize the intuition that θ̃ should not be worse than θ̂, given that the nonlinear behavior is taken into account at least partially in the bias term. The regularity s must be chosen maximally, i.e. s is the maximal value such that (A_s) holds with probability one. Under these standing assumptions, we have the following result:
Theorem 1.
Assume (A_s) and (F1) hold with maximal s. Let γ be a fixed contrast parameter.

The estimators θ̂, θ̃ and θ̌ are consistent for θ.

θ̂ is asymptotically normal. More precisely,
(14)
in distribution as N → ∞.

Assume (F2) holds with parameters η and g. If η is large enough, then (14) holds with θ̂ replaced by θ̃. Otherwise, θ̃ converges to θ with a rate bounded from below in terms of η.

If the excess regularity η from (F1) is large enough, then (14) holds with θ̂ replaced by either θ̃ or θ̌. Otherwise, the rate of convergence of θ̃ and θ̌ to θ is bounded from below in terms of η.
Remark.

If X is a solution to the two-dimensional stochastic Navier–Stokes equations with additive noise and periodic or Dirichlet boundary conditions, we recover the results from [3].

While the conditions (F1) and (F2) are natural conditions satisfied by a large class of examples, we do not claim that they are necessary for the conclusions of Theorem 1 to hold. Indeed, if the drift consists of linear differential operators, [7] and subsequent works prove that an estimator of the type θ̌ is consistent and asymptotically normal for θ if and only if
(15)
holds, which relates the orders of the operators involved to the dimension d of the state space. In particular, the degree of F may exceed the degree of A.

Elementary considerations show that the asymptotic variance in (14) is minimized by an appropriate choice of γ, whereas the convergence rate is not affected by the choice of γ. In the ideal setting of full information that we study in this work, it is possible to reconstruct B (and hence also the regularity of the noise) from the observed trajectory via its quadratic variation process, so we may regard it as known right from the beginning. For a particular choice of γ, the estimator corresponds to the true maximum likelihood estimator. In the case of incomplete information on X, for example time-discrete observations, which will be studied in future work, the parameter γ can be used to ensure the divergence of the denominator of the estimators (whose expected value corresponds to the Fisher information).

Note that the asymptotic variance itself depends on the unknown parameter θ. This means that in order to construct confidence intervals it is necessary to modify (14) in a suitable way. This can be done by means of a variance-stabilizing transform (see e.g. [23, Section 3.2]). Alternatively, Slutsky's lemma can be used together with any of the consistent estimators for θ, e.g.
(16)
In general, it is not to be expected that holds, whereas with is valid for a broad class of examples.

Instead of (A_s) a.s. we may just assume the corresponding regularity a.s. for any s' < s, where s is maximal with this property. In this case, the results are still true up to minor technical modifications. For instance, we have to assume instead of (F1) that (12) holds with a slightly larger excess regularity in order to ensure additional regularity for the nonlinear part. The proof of Theorem 1 remains valid up to obvious notational changes.

It is possible to allow for random nonlinearities F = F(ω, t, x). In this case, it suffices to assume that (F1) and (F2) hold almost surely in such a way that s and η are deterministic, while f and g are allowed to depend on ω. In particular, it is possible to extend the result to solutions of non-Markovian functional SDEs whose nonlinearity depends on the whole solution trajectory X.
2. Applications
We now illustrate the general theory by means of some examples. More precisely, we show that (F1) and/or (F2) hold. We write F = F(x) whenever the nonlinearity in these examples does not depend on time explicitly.
2.1. The Linear Case
For completeness, we restate the result for the purely linear case F = 0. In this case we can drop condition (F1). All estimators coincide, i.e. θ̂ = θ̃ = θ̌, and Theorem 1 reads as follows:
Corollary 2.
If (A_s) holds, then
(17)
in distribution as N → ∞.
2.2. ReactionDiffusionSystems
In this section, we consider a bounded domain D ⊂ ℝ^d, d ∈ ℕ, with Dirichlet boundary conditions. (The argument does not depend on the boundary conditions, so Neumann or Robin-type conditions may be used instead.) Set H = L²(D)^n, where n is the number of coupled equations. A = Δ is the Laplacian with domain D(Δ) = (H²(D) ∩ H₀¹(D))^n. Let F be a Nemytskii-type operator on H, i.e. F(x)(ξ) = f(x(ξ)), whose components are polynomials in n variables. The highest degree of the component polynomials of f will be denoted by m. We assume that m ≥ 2.
Example 3.
One may choose the Allen–Cahn-type nonlinearity F(x) = x − x³.
The corresponding SPDE
(18) dX_t = (θΔX_t + F(X_t)) dt + B dW_t
is assumed to satisfy (A_s) for a suitable (maximal) s.
Proposition 4.

If s > d/2, then (F1) holds with η = 2.

If s > d/2, then (F2) holds with η = 2.
Proof.

We have to control the term ||F(X_t)||_s. Note that in order to control the norm ||F(x)||_s, it suffices to control its one-dimensional components, so w.l.o.g. we assume n = 1. Taking into account the triangle inequality, it suffices to control ||x^p||_s for some integer p ≤ m. The case p = 1 is trivial, so let p ≥ 2. Now H^s is a closed subspace of the full Sobolev space H^s(D), and given that s > d/2, the Sobolev space H^s(D) is closed under multiplication [1]. Thus, for x ∈ H^s:
(19) ||x^p||_s ≲ ||x||_s^p,
where we used the interpolation property of Sobolev spaces.

As before, we can restrict ourselves to the case n = 1 with F(x) = x^p. For p = 1, the estimate from (13) is trivial, so assume p ≥ 2. Again using the algebra property of the Sobolev spaces H^s(D), s > d/2, we have for x, y ∈ H^s:
||x^p − y^p||_s ≲ (||x||_s + ||y||_s)^{p−1} ||x − y||_s,
and the claim follows with η = 2 and a suitable continuous g.
∎
Remark.
Note that the same proof allows us to cover the more general case of polynomial nonlinearities whose coefficients depend on the spatial variable, as long as these coefficients are regular enough.
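The algebra property used in the proof can be illustrated numerically. The sketch below computes H^s norms on the torus via Fourier coefficients and checks that the scale-invariant ratio ||x³||_s / ||x||_s³ stays bounded for random band-limited functions with s = 1 > d/2 = 1/2; the norm convention and all parameters are our own illustrative choices:

```python
import numpy as np

# Numerical illustration of the algebra property of H^s, s > d/2, for d = 1:
# the scale-invariant ratio ||x^3||_s / ||x||_s^3 stays bounded. We work on
# the torus with the norm ||x||_s^2 = sum_k (1 + k^2)^s |x_hat_k|^2; all
# choices (norm convention, band limit, sample count) are illustrative.
rng = np.random.default_rng(0)
n, K, s = 256, 16, 1.0
xi = 2 * np.pi * np.arange(n) / n

def sobolev_norm(u, s):
    c = np.fft.rfft(u) / n                      # Fourier coefficients c_k, k >= 0
    k = np.arange(c.size)
    mult = np.full(c.size, 2.0)                 # conjugate-symmetric half counts twice
    mult[0] = 1.0
    mult[-1] = 1.0                              # Nyquist mode (n even) counts once
    return np.sqrt(np.sum(mult * (1 + k ** 2) ** s * np.abs(c) ** 2))

ratios = []
for _ in range(20):
    a = rng.normal(size=K) / (1 + np.arange(1, K + 1) ** 2)   # decaying coefficients
    x = sum(a[j] * np.cos((j + 1) * xi) for j in range(K))
    ratios.append(sobolev_norm(x ** 3, s) / sobolev_norm(x, s) ** 3)

assert 0 < max(ratios) < 10                     # bounded, as the algebra property predicts
```

The band limit K = 16 with n = 256 samples ensures the cubic (frequencies up to 48) is resolved without aliasing.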
Taking into account that the growth rate of the eigenvalues of the Laplacian is given by Weyl's law λ_k ≍ c k^{2/d}, i.e. β = 2/d in (7) (see [24], or e.g. [22, Section 13.4]), we get the following result:
Corollary 5.
Let s > d/2.

If d is arbitrary and γ is large enough, then the estimator θ̂ is asymptotically normal with rate and asymptotic variance given by
(20)
If d and γ satisfy the stronger conditions of Theorem 1, the same is true for θ̃ and θ̌.
Said another way, θ̃ and θ̌ are asymptotically normal if X is sufficiently regular and the contrast parameter γ is sufficiently high.
Remark.
Consider the important special case d = 1 with the Allen–Cahn-type nonlinearity from Example 3. In this case, the required regularity is available for any admissible choice of the parameters, so the corresponding part of Theorem 1 applies. All three estimators are asymptotically normal without further assumptions on the regularity of the noise.
2.3. Burgers’ Equation
We point out that the validity of this example has been conjectured in [2]. Consider the stochastic viscous Burgers equation
(21) dX_t = (ν ∂_ξ² X_t + F(X_t)) dt + B dW_t
on the unit interval, d = 1, with Dirichlet boundary conditions. (As before, the argument does not depend on the boundary conditions being of Dirichlet type.) Here,
(22) F(x) = −x ∂_ξ x = −½ ∂_ξ(x²).
In this setting we have H = L²(0, 1) and A = ∂_ξ².
We follow the convention of denoting the viscosity parameter by ν instead of θ. Likewise, the estimators will be called ν̂, ν̃ and ν̌. Existence and uniqueness of a solution to (21) can be shown as in [14]. We need just slightly more regularity, i.e. (A_s) for some s > 1/2, in order to infer the following.
Proposition 6.
Property (F1) holds for any s > 1/2 with η = 1.
Proof.
In one spatial dimension, the Sobolev space H^s(0, 1) is an algebra for s > 1/2. So, ||F(x)||_{s−1} = ½ ||∂_ξ(x²)||_{s−1} ≤ ½ ||x²||_s ≲ ||x||_s².
∎
Corollary 7.
Let s be the maximal regularity of X and γ a fixed contrast parameter. Then the estimator ν̂ is asymptotically normal with rate and asymptotic variance given by
(23)
Similar calculations show that (F2) holds with a smaller excess regularity, which is not sufficient to transfer asymptotic normality to ν̃ but yields consistency with an explicit lower bound on the rate of convergence.
2.4. Robustness under Model Uncertainty
In the preceding examples we assumed that the dynamical law of the process we are interested in is perfectly known. However, it may be reasonable to consider the case when this is not true. We may formalize such a partially unknown model as
(24) dX_t = (θ A X_t + F(t, X_t) + G(t, X_t)) dt + B dW_t,
where G is an unknown perturbation. We assume that the model is well-posed (i.e. (A_s) holds for suitable s) and that F satisfies (F1). Let θ̂, θ̃ and θ̌ be given by the same terms as before, i.e. θ̂ and θ̃ include knowledge on F but not on G.
Proposition 8.
If G satisfies (F1), then θ̂, θ̃ and θ̌ are consistent.
This follows directly from the discussion in Subsection 4.2, taking into account the decomposition
(25) 
with
(26) 
and similar decompositions for θ̃ and θ̌.
It is easy to verify that if (F1) holds for F and G separately with excess regularity η_F resp. η_G, then a version of (F1) holds for F + G as well, with excess regularity min(η_F, η_G). However, in general the excess regularity of F + G can be chosen higher due to cancellation effects of F and G. A lower bound for the rate of convergence of the estimators is given in terms of the excess regularity of G.
Corollary 9.

If the excess regularity η_G of G is large enough, then θ̂ is asymptotically normal with the rate from Theorem 1.

If in addition G satisfies (F2) with sufficiently large excess regularity, then θ̃ is asymptotically normal with the rate from Theorem 1.

If η_G is even larger, then asymptotic normality with this rate carries over to all estimators.
Said another way, the excess regularity η_G of G essentially determines to what extent the results from Theorem 1 remain valid. A high value of η_G corresponds to a small perturbation.
Remarks.

In applications it is common to approximate a complicated nonlinear system by its linearization. From this point of view, the case that F itself is linear in (24) becomes relevant. Of course, it is desirable to maintain the statistical properties of the linear model under a broad class of nonlinear perturbations.

It is possible to interpret the nonlinear perturbation as follows: Assume there is a true nonlinearity F₀ describing the model precisely. Assume further that we either do not know the form of F₀ or we do not want to handle it directly due to its complexity. Instead, we approximate F₀ by some nonlinearity F which we can control. If our approximation is good (in the sense that (F1) holds for G = F₀ − F with suitable excess regularity), then the quality of the estimators which are merely based on the approximating model can be guaranteed, i.e. they are consistent or even asymptotically normal. The approximating quality of F is measured by the excess regularity of F₀ − F.

As G is unknown, no knowledge of G can be incorporated into the estimators, and condition (F2) need not be required to hold for G.

The previous examples show that (F1) is fulfilled for a broad class of nonlinearities (assuming that the regularity s is sufficiently high if necessary).
3. Numerical Simulation
We simulate the Allen–Cahn equation
dX_t = (θΔX_t + X_t − X_t³) dt + B dW_t
on the unit interval with Dirichlet boundary conditions and a fixed initial condition. We discretize the equation in Fourier space and simulate finitely many modes with a linear-implicit Euler scheme with small temporal stepsize up to a terminal time T. The spatial grid is uniform. The true parameter is fixed for all runs, as is the contrast parameter. We have run Monte Carlo simulations for several choices of the noise regularity. Remember that in this setting all estimators are asymptotically normal.
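The simulation scheme just described can be sketched as follows. All concrete values (θ, number of modes, step size, zero initial condition, unit noise intensities) are illustrative assumptions, not the settings used for Figure 1:

```python
import numpy as np

# Spectral Galerkin simulation of the Allen-Cahn SPDE
#   dX = (theta * Laplacian X + X - X^3) dt + dW
# on (0, 1) with Dirichlet boundary conditions (sine basis) and a
# linear-implicit Euler step: implicit in the stiff linear part,
# explicit in the reaction term.
rng = np.random.default_rng(1)
theta, K, M, dt, n_steps = 1.0, 32, 64, 1e-4, 2000

k = np.arange(1, K + 1)
lam = (np.pi * k) ** 2                    # Dirichlet Laplacian eigenvalues
xi = np.arange(1, M) / M                  # interior grid points
S = np.sin(np.pi * np.outer(xi, k))       # sine basis evaluated on the grid
Sinv = (2 / M) * S.T                      # inverse via discrete sine orthogonality

a = np.zeros(K)                           # Fourier modes, started at zero
for _ in range(n_steps):
    x = S @ a                             # state on the grid
    f = Sinv @ (x - x ** 3)               # reaction term (aliasing ignored here)
    a = (a + dt * f + np.sqrt(dt) * rng.normal(size=K)) / (1 + dt * theta * lam)

X_T = S @ a
assert np.all(np.isfinite(X_T)) and np.max(np.abs(X_T)) < 10
```

Treating the Laplacian implicitly removes the stiff stability restriction dt ≲ 1/(θλ_K); only the mild reaction term is stepped explicitly.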
Figure 1 illustrates consistency, the convergence rate and the asymptotic distribution from Theorem 1.
As expected, the values of θ̂ and θ̃ are closer to each other than to θ̌. Note that the quality of θ̌ in this simulation depends on the level of noise, with decreasing accuracy under smooth noise. Our interpretation is that the nonlinearity becomes more pronounced if the noise is less rough.
We mention that for simulations with even smoother noise, the values of θ̌ are mostly negative and therefore not related to the true parameter, while θ̂ and θ̃ stay consistent. Of course, this effect may be influenced by the number of Fourier modes used for the simulation.
4. Proof of Theorem 1
We follow closely the arguments which have been given in [3] for the special case of the Navier–Stokes equations in two dimensions. Using a slightly different version of the central limit theorem (CLT) for local martingales, we obtain a direct proof of the asymptotic normality for θ̂.
4.1. Asymptotic Estimates for the Linear Case
First, we recall briefly some results for the linear case F = 0. Consider the linear equation
(27) dX̄_t = θ A X̄_t dt + B dW_t,
where X̄_0 = 0. We define x_k(t) := ⟨X̄_t, e_k⟩. Then the x_k are independent one-dimensional Ornstein–Uhlenbeck processes
(28) dx_k(t) = −θλ_k x_k(t) dt + σ_k dw_k(t),
where the w_k are independent one-dimensional Brownian motions, and the solutions have the explicit representation
(29) x_k(t) = σ_k ∫_0^t e^{−θλ_k(t−r)} dw_k(r).
Sketch of proof.
Use that and , , are jointly Gaussian with mean zero and
Now follows with the help of . For , use
and . ∎
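From the explicit representation (29), the Itô isometry gives the variance Var x_k(t) = σ_k²(1 − e^{−2θλ_k t})/(2θλ_k). This closed form can be cross-checked against direct quadrature of the isometry integral; all numbers below are arbitrary illustrative values:

```python
import math

# Variance of the Ornstein-Uhlenbeck mode from (29) via the Ito isometry:
#   Var x_k(t) = sigma^2 * int_0^t exp(-2*theta*lam*(t-r)) dr
#              = sigma^2 * (1 - exp(-2*theta*lam*t)) / (2*theta*lam).
def ou_variance(theta: float, lam: float, sigma: float, t: float) -> float:
    return sigma ** 2 * (1 - math.exp(-2 * theta * lam * t)) / (2 * theta * lam)

# cross-check the closed form against midpoint-rule quadrature of the integral
theta, lam, sigma, t, n = 0.5, 4.0, 1.3, 2.0, 20_000
h = t / n
quad = sigma ** 2 * h * sum(math.exp(-2 * theta * lam * (t - (i + 0.5) * h)) for i in range(n))
assert abs(quad - ou_variance(theta, lam, sigma, t)) < 1e-6
```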
By multiplying the asymptotic representations from Lemma 10 with suitable powers of λ_k and summing up to index N, we obtain the following cumulative version in one parameter regime:
(30)
and in the complementary regime:
(31)
where