# Statistical Analysis of Some Evolution Equations Driven by Space-only Noise

We study the statistical properties of stochastic evolution equations driven by space-only noise, either additive or multiplicative. While forward problems, such as existence, uniqueness, and regularity of the solution, for such equations have been studied, little is known about inverse problems for these equations. We exploit the somewhat unusual structure of the observations coming from these equations that leads to an interesting interplay between classical and non-traditional statistical models. We derive several types of estimators for the drift and/or diffusion coefficients of these equations, and prove their relevant properties.


## 1. Introduction

While the forward problems, existence, uniqueness, and regularity of the solution, for stochastic evolution equations have been extensively studied over the past few decades (cf. [13, 14] and references therein), the literature on statistical inference for SPDEs is, relatively speaking, limited. We refer to the recent survey [3] for an overview of the literature and existing methodologies on statistical inference for parabolic SPDEs. In particular, little is known about the inverse problems for stochastic evolution equations driven by space-only noise, and the main goal of this paper is to investigate the parameter estimation problems for such equations. The somewhat unusual structure of the space-only noise gives rise to interesting statistical inference problems at the interface between classical and non-traditional statistical models. We consider two classes of equations, corresponding to two types of noise, additive and multiplicative. As an illustration, let us take a heat equation

$$u_t=\Delta u,\qquad t>0,$$

on some domain and with some initial data, where $\Delta$ denotes the Laplacian operator. Customarily, a random perturbation of this equation can be additive

$$u_t=\Delta u+\dot W, \tag{1.1}$$

representing a random heat source, or multiplicative

$$u_t=u_{xx}+u\dot W, \tag{1.2}$$

representing a random potential. In the case of space-dependent noise and pure point spectrum of the Laplacian, one can also consider a shell version of (1.2):

$$u_t=\Delta u+\sum_{k}u_k\xi_kh_k(x), \tag{1.3}$$

in which $h_k$ are the normalized eigenfunctions of the Laplacian, $u_k=(u,h_k)$, and $\xi_k$ are i.i.d. standard Gaussian random variables. Similar decoupling of the Fourier modes is used to study nonlinear equations in fluid mechanics, both deterministic [6, Section 8.7] and stochastic [7, 5]; the term “shell model” often appears in that context.

Our objective is to study abstract versions of (1.3) and (1.1) with unknown coefficients:

$$\dot u+\theta Au=\sigma\sum_{k=1}^{\infty}q_ku_kh_k\xi_k, \tag{1.4}$$

and

$$\dot u+\theta Au=\sigma\dot W_Q, \tag{1.5}$$

where

• $A$ is a linear operator in a Hilbert space $H$;

• $h_k$, $k\ge 1$, are the normalized eigenfunctions of $A$ that form a complete orthonormal system in $H$, with corresponding eigenvalues $\mu_k$, $k\ge 1$;

• $q_k$, $k\ge 1$, are known constants;

• $\theta$ and $\sigma$ are unknown numbers (the parameters of interest);

• $\xi_k$, $k\ge 1$, are independent and identically distributed (i.i.d.) standard normal random variables on the underlying probability space $(\Omega,\mathcal{F},\mathbb{P})$;

• $u_k(t)=(u(t),h_k)$, $k\ge 1$, and $\dot W_Q=\sum_{k\ge1}q_kh_k\xi_k$.

In each case, the solution is defined as

$$u(t)=\sum_{k=1}^{\infty}u_k(t)h_k, \tag{1.6}$$

with

$$u_k(t)=u_k(0)\exp\big(-(\theta\mu_k-\sigma q_k\xi_k)t\big) \tag{1.7}$$

for (1.4), and

$$u_k(t)=u_k(0)e^{-\theta\mu_kt}+\frac{\sigma q_k}{\theta\mu_k}\big(1-e^{-\theta\mu_kt}\big)\xi_k \tag{1.8}$$

for (1.5). For both models (1.4) and (1.5), we assume that the observations are available in the Fourier space, namely, the observer measures the values of the Fourier modes $u_k(t)$, $k=1,\dots,N$, continuously in time. In addition, for (1.5) we also consider the statistical experiment in which the observations are performed in the physical space. The main results of this paper are summarized as follows:

1. For equation (1.4), knowledge of all $q_k$ is required; then, under some additional technical assumptions, the problem of joint estimation of $\theta$ and $\sigma$, using measurements in the Fourier space, leads to a statistical experiment with LAN (local asymptotic normality) and several other regularity properties. Consequently, we prove strong consistency and asymptotic normality of the maximum likelihood estimators (MLE) and Bayesian estimators for $\theta$ and $\sigma$; see Section 2.

2. For equation (1.5), the values of $\theta\mu_k$ can be determined exactly from the observations of $u_k$ at two or three time points; estimation of $\sigma$ is then reduced to estimation of the variance in a normal population with known mean; see Section 3.1. Using the special structure of the solution of (1.5), and assuming zero initial condition, we derive consistent and asymptotically normal estimators of $\theta$ and $\sigma$ from measurements in the physical domain; see Section 3.2.

In Section 4, we present several illustrative examples, while Section 5 is dedicated to some numerical experiments that exemplify the theoretical results of the paper.
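Before turning to the main results, it may help to see the two explicit solution formulas (1.7) and (1.8) in action. The sketch below (an illustration only; the operator $A=-\Delta$ on an interval, the parameter values, and the initial condition are arbitrary choices, not those of the paper's numerical experiments) evaluates both formulas and checks that the additive-noise modes (1.8) settle at $\sigma q_k\xi_k/(\theta\mu_k)$ as $t\to\infty$:

```python
import numpy as np

# Illustrative sketch (not from the paper): the Fourier modes (1.7) and
# (1.8) for mu_k = k^2 (Dirichlet Laplacian on an interval, up to scaling).
# theta, sigma, q_k, u_k(0) are arbitrary choices for the illustration.
rng = np.random.default_rng(0)
theta, sigma = 2.0, 0.5
K = 50                          # number of Fourier modes kept
k = np.arange(1, K + 1)
mu, q = k.astype(float) ** 2, 1.0 / k
xi = rng.standard_normal(K)     # space-only noise: one draw, frozen in time
u0 = 1.0 / k                    # Fourier coefficients of the initial condition

def modes_multiplicative(t):
    """u_k(t) for the shell model (1.4), formula (1.7)."""
    return u0 * np.exp(-(theta * mu - sigma * q * xi) * t)

def modes_additive(t):
    """u_k(t) for the additive-noise model (1.5), formula (1.8)."""
    decay = np.exp(-theta * mu * t)
    return u0 * decay + sigma * q / (theta * mu) * (1.0 - decay) * xi

# As t -> infinity, the additive model settles at sigma*q_k*xi_k/(theta*mu_k).
print(np.allclose(modes_additive(50.0), sigma * q * xi / (theta * mu)))
```

Note that, unlike the space-time-noise case, the randomness here enters through the single vector `xi`: once the $\xi_k$ are drawn, each mode evolves deterministically in time.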

Throughout the paper, given two sequences of numbers $a_k$ and $b_k$, we write $a_k\sim b_k$ if there exists a positive number $c$ such that $\lim_{k\to\infty}a_k/b_k=c$.

## 2. The Shell Model

In this section we study the stochastic evolution equation (1.4), starting with the existence and uniqueness of the solution, and continuing with the parameter estimation problem for $\theta$ and $\sigma$ within the LAN framework of [9]. For better comparison with existing results, such as [1] and [8], we consider a slightly more general version of (1.4):

$$\dot u+(\theta A+A_0)u=\sum_{k=1}^{\infty}(\sigma q_k+p_k)u_k\xi_kh_k,\qquad t>0, \tag{2.1}$$

with known real numbers $q_k$, $p_k$, and with the operators $A$ and $A_0$ such that

$$Ah_k=\mu_kh_k,\qquad A_0h_k=\nu_kh_k, \tag{2.2}$$

where the real numbers $\mu_k$, $\nu_k$ are known. The numbers $\theta$ and $\sigma$ are unknown and belong to an open set $\Theta\subset\mathbb{R}^2$.

The solution of (2.1) is defined by (1.6), with

$$u_k(t)=u_k(0)\exp\big(-(\theta\mu_k+\nu_k)t+(\sigma q_k+p_k)\xi_kt\big). \tag{2.3}$$
###### Theorem 2.1.

Assume that $u(0)\in H$ and that there exists a real number $C_*>0$ such that, for all $k\ge1$ and $\theta\in\Theta$,

$$\theta\mu_k+\nu_k>C_*.$$

If

$$\lim_{k\to\infty}\frac{(\sigma q_k+p_k)^2}{\theta\mu_k+\nu_k}=0 \tag{2.4}$$

for all $(\theta,\sigma)\in\Theta$, then $\mathbb{E}\|u(t)\|_H^2<\infty$ for all $t>0$ and $(\theta,\sigma)\in\Theta$.

If there exist $T>0$ and $\bar C_T>0$ such that

$$T(\sigma q_k+p_k)^2-4(\theta\mu_k+\nu_k)\le 2\bar C_T \tag{2.5}$$

for all $k\ge1$ and $(\theta,\sigma)\in\Theta$, then $\mathbb{E}\|u(t)\|_H^2<\infty$ for all $t\in[0,T]$ and $(\theta,\sigma)\in\Theta$.

###### Proof.

By (2.3),

$$\mathbb{E}u_k^2(t)=u_k^2(0)\exp\Big(-2(\theta\mu_k+\nu_k)t+\frac{(\sigma q_k+p_k)^2t^2}{2}\Big)$$
$$=u_k^2(0)\exp\Big(-2t(\theta\mu_k+\nu_k)\Big(1-\frac{(\sigma q_k+p_k)^2t}{4(\theta\mu_k+\nu_k)}\Big)\Big) \tag{2.6}$$
$$=u_k^2(0)\exp\Big(\frac t2\big((\sigma q_k+p_k)^2t-4(\theta\mu_k+\nu_k)\big)\Big). \tag{2.7}$$

If (2.4) holds, then, for every $t>0$, there exists a $k(t)$ such that, for all $k>k(t)$,

$$1-\frac{(\sigma q_k+p_k)^2t}{4(\theta\mu_k+\nu_k)}>\frac12,$$

and then (2.6) implies

$$\mathbb{E}u_k^2(t)\le u_k^2(0)e^{-C_*t},\qquad k>k(t),$$

proving the first statement.

If (2.5) holds, then (2.7) implies that, for all $k\ge1$ and $t\in[0,T]$,

$$\mathbb{E}u_k^2(t)\le u_k^2(0)e^{T\bar C_T},$$

concluding the proof.

In what follows, we assume, with no loss of generality, that $u_k(0)\ne0$ for all $k\ge1$.

Define

$$Y_k=\frac1t\ln\frac{u_k(t)}{u_k(0)},\qquad k=1,\dots,N.$$

Then, for each $k$, the random variable $Y_k$ is Gaussian with mean $-(\theta\mu_k+\nu_k)$ and variance $(\sigma q_k+p_k)^2$, and the random variables $Y_1,\dots,Y_N$ are independent.

We consider $\theta$ and $\vartheta=\sigma^2$ as the two unknown parameters. The corresponding likelihood function becomes

$$L_N(\theta,\vartheta)=\exp\Big(-\frac N2\ln(2\pi)-\sum_{k=1}^N\ln\big(\sqrt\vartheta\,q_k+p_k\big)-\frac12\sum_{k=1}^N\frac{(Y_k+\theta\mu_k+\nu_k)^2}{(\sqrt\vartheta\,q_k+p_k)^2}\Big). \tag{2.8}$$

Direct computations produce the Fisher information matrix

$$\mathcal I_N=\begin{pmatrix}\Psi_N(\vartheta)&0\\0&\Phi_N(\vartheta)\end{pmatrix}, \tag{2.9}$$

where

$$\Psi_N(\vartheta)=\sum_{k=1}^N\frac{\mu_k^2}{(\sqrt\vartheta\,q_k+p_k)^2},\qquad \Phi_N(\vartheta)=\frac12\sum_{k=1}^N\frac{q_k^2}{(\vartheta q_k+\sqrt\vartheta\,p_k)^2}. \tag{2.10}$$

Note that if $p_k=0$ for all $k$, then $\Phi_N(\vartheta)=N/(2\vartheta^2)$. More generally, if

$$\lim_{k\to\infty}\frac{p_k}{q_k}=c_{pq}\in[0,+\infty),$$

then $\Phi_N(\vartheta)\sim N/\big(2(\vartheta+\sqrt\vartheta\,c_{pq})^2\big)$.
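The Fisher information entries (2.10) are easy to evaluate numerically; the following sketch (illustration only, with arbitrary sequences $\mu_k$, $q_k$ and an arbitrary value of $\vartheta$) also checks the special case $p_k=0$, where $\Phi_N(\vartheta)=N/(2\vartheta^2)$:

```python
import numpy as np

# Numerical check (illustration, not from the paper) of the Fisher
# information entries (2.10), and of the special case p_k = 0, in which
# Phi_N(vartheta) = N / (2 * vartheta**2).
def fisher(mu, q, p, vartheta):
    s = np.sqrt(vartheta)
    psi = np.sum(mu**2 / (s * q + p) ** 2)          # Psi_N from (2.10)
    phi = 0.5 * np.sum(q**2 / (vartheta * q + s * p) ** 2)  # Phi_N
    return psi, phi

N = 1000
k = np.arange(1, N + 1, dtype=float)
mu, q, vartheta = k, 1.0 / np.sqrt(k), 0.7          # arbitrary choices

_, phi0 = fisher(mu, q, np.zeros(N), vartheta)
print(np.isclose(phi0, N / (2 * vartheta**2)))      # the p_k = 0 case
```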

###### Proposition 2.2.

If $p_k=0$ and $q_k>0$ for all $k$, then the joint maximum likelihood estimator of $(\theta,\vartheta)$ is

$$\hat\theta_N=-\frac{\sum_{k=1}^N(\mu_kY_k+\mu_k\nu_k)/q_k^2}{\sum_{k=1}^N(\mu_k/q_k)^2},\qquad \hat\vartheta_N=\frac1N\sum_{k=1}^N\frac{(Y_k+\hat\theta_N\mu_k+\nu_k)^2}{q_k^2}. \tag{2.11}$$

While (2.11) follows by direct computation, a lot of extra work is required to investigate the basic properties of the estimator, such as consistency and asymptotic normality, and it still will not be clear how the estimator compares with other possible estimators, for example, Bayesian ones. Moreover, when $p_k\ne0$, no closed-form expressions for $\hat\theta_N$ and $\hat\vartheta_N$ can be found.
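A quick Monte Carlo sanity check of (2.11) is straightforward: simulate $Y_k=-(\theta\mu_k+\nu_k)+\sqrt\vartheta\,q_k\xi_k$ with $p_k=0$ and verify that the estimators approach the true values for large $N$. This is only an illustration; all sequences and parameter values below are arbitrary choices:

```python
import numpy as np

# Monte Carlo sanity check (illustration only) of the joint MLE (2.11)
# in the case p_k = 0. All numerical values are arbitrary.
rng = np.random.default_rng(1)
N = 200_000
k = np.arange(1, N + 1, dtype=float)
mu, nu, q = k**0.1, 1.0 / k, np.ones(N)
theta, vartheta = 2.0, 0.25

# Observations: Y_k is Gaussian with mean -(theta*mu_k + nu_k)
# and variance vartheta * q_k^2 (since p_k = 0).
Y = -(theta * mu + nu) + np.sqrt(vartheta) * q * rng.standard_normal(N)

# Formulas (2.11):
theta_hat = -np.sum((mu * Y + mu * nu) / q**2) / np.sum((mu / q) ** 2)
vartheta_hat = np.mean((Y + theta_hat * mu + nu) ** 2 / q**2)

print(abs(theta_hat - theta) < 0.01, abs(vartheta_hat - vartheta) < 0.01)
```

Of course, closeness for one large $N$ is not a proof of consistency; that is exactly what the LAN analysis below delivers.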

As a result, the main object of study becomes the local likelihood ratio

$$Z_{N,\boldsymbol\theta}(x)=\frac{L_N\big(\theta(s),\vartheta(\tau)\big)}{L_N(\theta,\vartheta)},\qquad \boldsymbol\theta=(\theta,\vartheta),\ x=(s,\tau), \tag{2.12}$$

with

$$\theta(s)=\theta+\frac{s}{\sqrt{\Psi_N(\vartheta)}},\qquad \vartheta(\tau)=\vartheta+\frac{\tau}{\sqrt{\Phi_N(\vartheta)}}.$$

Then various properties of the maximum likelihood and Bayesian estimators, including consistency, asymptotic normality, and optimality, can be established by analyzing the function $Z_{N,\boldsymbol\theta}$; see [9, Chapters I–III].

###### Definition 2.3.

The function $Z_{N,\boldsymbol\theta}$ is called regular if the following conditions are satisfied.

1. (R1) For every compact set $K\subset\Theta$ and all convergent sequences $\boldsymbol\theta_N\in K$ and $x_N=(s_N,\tau_N)\in\mathbb R^2$, the representation

$$Z_{N,\boldsymbol\theta_N}(s_N,\tau_N)=\exp\Big(s_N\eta_N+\tau_N\zeta_N-\frac{s_N^2}2-\frac{\tau_N^2}2+\varepsilon_N(\boldsymbol\theta_N,x_N)\Big) \tag{2.13}$$

holds, so that, as $N\to\infty$, the random vector $(\eta_N,\zeta_N)$ converges in distribution to a standard bi-variate Gaussian vector and the random variable $\varepsilon_N(\boldsymbol\theta_N,x_N)$ converges in probability to zero.

2. (R2) For every $\vartheta$,

$$\lim_{N\to\infty}\Psi_N(\vartheta)=\lim_{N\to\infty}\Phi_N(\vartheta)=+\infty. \tag{2.14}$$

To state the other two conditions, define

$$U_N(\boldsymbol\theta)=\Big\{(s,\tau)\in\mathbb R^2:\big(\theta+s\Psi_N^{-1/2},\,\vartheta+\tau\Phi_N^{-1/2}\big)\in\Theta\Big\}.$$

3. (R3) For every compact $K\subset\Theta$, there exist positive numbers $B$ and $b$ such that, for all $R>0$ and all sufficiently large $N$,

$$\sup_{\boldsymbol\theta\in K}\ \sup_{x,y\in U_N(\boldsymbol\theta),\ |x|\le R,\ |y|\le R}|x-y|^{-2}\,\mathbb E\big|Z_{N,\boldsymbol\theta}^{1/8}(x)-Z_{N,\boldsymbol\theta}^{1/8}(y)\big|^4\le B\big(1+R^b\big). \tag{2.15}$$

4. (R4) For every compact set $K\subset\Theta$ and every $p>0$, there exists an $N_0$ such that

$$\sup_{\boldsymbol\theta\in K}\ \sup_{N>N_0}\ \sup_{x\in U_N(\boldsymbol\theta)}|x|^p\,\mathbb EZ_{N,\boldsymbol\theta}^{1/2}(x)<\infty. \tag{2.16}$$

Conditions R1–R4 are natural modifications of conditions N1–N4 from [9, Section III.1] to our setting. In particular, R1 is known as uniform local asymptotic normality. Note that, in R3, there is nothing special about the numbers 4 and 8 except that

1. the smaller of the two numbers should be bigger than the dimension of the parameter space (cf. [9, Theorem III.1.1]);

2. in the setting (2.8), (2.12), the larger number should be at least twice as big as the smaller number, which is related to the square root function connecting variance and standard deviation.

The next result illustrates the importance of regularity.

###### Theorem 2.4.

Assume that the function $Z_{N,\boldsymbol\theta}$ is regular. Then

1. The joint MLE $(\hat\theta_N,\hat\vartheta_N)$ of $(\theta,\vartheta)$ is consistent and asymptotically normal with rates $\sqrt{\Psi_N(\vartheta)}$ and $\sqrt{\Phi_N(\vartheta)}$, that is, as $N\to\infty$, the vector $\big(\sqrt{\Psi_N(\vartheta)}(\hat\theta_N-\theta),\,\sqrt{\Phi_N(\vartheta)}(\hat\vartheta_N-\vartheta)\big)$ converges in distribution to a standard bivariate Gaussian random vector. The estimator is asymptotically efficient with respect to loss functions of polynomial growth and, with $\eta_N$ and $\zeta_N$ from (2.13),

$$\lim_{N\to\infty}\Big(\sqrt{\Psi_N(\vartheta)}\big(\hat\theta_N-\theta\big)-\eta_N\Big)=0,\qquad \lim_{N\to\infty}\Big(\sqrt{\Phi_N(\vartheta)}\big(\hat\vartheta_N-\vartheta\big)-\zeta_N\Big)=0,$$

in probability.

2. Every Bayesian estimator $(\tilde\theta_N,\tilde\vartheta_N)$ corresponding to an absolutely continuous prior on $\Theta$ and a loss function of polynomial growth is consistent, asymptotically normal with the same rates, asymptotically efficient with respect to loss functions of polynomial growth, and

$$\lim_{N\to\infty}\sqrt{\Psi_N(\vartheta)}\big(\hat\theta_N-\tilde\theta_N\big)=0,\qquad \lim_{N\to\infty}\sqrt{\Phi_N(\vartheta)}\big(\hat\vartheta_N-\tilde\vartheta_N\big)=0$$

in probability.

###### Proof.

The MLE is covered by the results of [9, Section III.1]. The Bayesian estimators are covered by the results of [9, Section III.2].

Accordingly, our objective is to determine conditions on the sequences $\{\mu_k\}$, $\{\nu_k\}$, $\{q_k\}$, and $\{p_k\}$ under which the function $Z_{N,\boldsymbol\theta}$ defined by (2.12) is regular.

###### Theorem 2.5.

Assume that

$$\sum_{k=1}^{\infty}\frac{\mu_k^2}{(q_k+p_k)^2}=+\infty, \tag{2.17}$$

$$\sum_{k=1}^{\infty}\frac{q_k^2}{(q_k+p_k)^2}=+\infty. \tag{2.18}$$

Then the function $Z_{N,\boldsymbol\theta}$ is regular.

###### Proof.

To verify condition R1 from Definition 2.3, write

$$w_{k,N}=\sqrt{\vartheta_N}\,q_k+p_k,\qquad \xi_{k,N}=\frac{Y_k+\theta_N\mu_k+\nu_k}{w_{k,N}}.$$

Direct computations show that (2.13) holds with

$$\eta_N=-\frac1{\sqrt{\Psi_N(\vartheta_N)}}\sum_{k=1}^N\frac{\mu_k\xi_{k,N}}{w_{k,N}},\qquad \zeta_N=\frac1{\sqrt{2\vartheta_N\Phi_N(\vartheta_N)}}\sum_{k=1}^N\frac{q_k}{w_{k,N}}\cdot\frac{\xi_{k,N}^2-1}{\sqrt2}, \tag{2.19}$$

and $\varepsilon_N$ is a sum of

$$\varrho_N=\frac1{2\Phi_N(\vartheta_N)}\sum_{k=1}^N\frac{\xi_{k,N}^2q_k^2}{\vartheta_Nw_{k,N}^2}-1$$

and several remainder terms coming from various Taylor expansions. By (2.18), $\varrho_N\to0$ with probability one, uniformly on compact subsets of $\Theta$; cf. [16, Theorem IV.3.2]. Convergence to zero of the remainder terms is routine.

Next, let $\mathcal H_n$ be the $n$-th homogeneous chaos space generated by $\{\xi_k,\ k\ge1\}$. Then equalities (2.19) imply $\eta_N\in\mathcal H_1$, $\zeta_N\in\mathcal H_2$, $\mathbb E\eta_N^2=\mathbb E\zeta_N^2=1$, $\mathbb E\eta_N\zeta_N=0$, and

$$\lim_{N\to\infty}\mathbb E\big(\eta_N^2+\zeta_N^2\big)^2=8,$$

uniformly in $\boldsymbol\theta$. By [15, Theorem 1.1], it follows that $(\eta_N,\zeta_N)$ converges in distribution to a standard bi-variate Gaussian vector and the convergence is uniform in $\boldsymbol\theta$. Condition R1 is now verified.

Assumptions (2.17) and (2.18) imply R2.

To simplify the rest of the proof, define

$$w_k=\sqrt\vartheta\,q_k+p_k,\qquad w_k(\tau)=\sqrt{\vartheta(\tau)}\,q_k+p_k,\qquad \xi_k=\frac{Y_k+\theta\mu_k+\nu_k}{w_k},$$
$$a_k=\frac12\Big(1-\frac{w_k^2}{w_k^2(\tau)}\Big),\qquad b_k=\frac{w_k\mu_k}{\sqrt{\Psi_N}\,w_k^2(\tau)}\,s,$$

so that

$$Z_{N,\boldsymbol\theta}(x)=\prod_{k=1}^N\frac{w_k}{w_k(\tau)}\,\exp\Big(-\frac{s^2}{2\Psi_N}\sum_{k=1}^N\frac{\mu_k^2}{w_k^2(\tau)}\Big)\exp\Big(\sum_{k=1}^N\big(a_k\xi_k^2-b_k\xi_k\big)\Big). \tag{2.20}$$
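The factorization (2.20) can be checked numerically against the definition (2.12), by comparing the log-likelihood-ratio computed from (2.8) with the right-hand side of (2.20). The sketch below is an illustration only; the sequences, parameter values, and local coordinates $(s,\tau)$ are arbitrary choices:

```python
import numpy as np

# Numerical verification (illustration only) that the factorization (2.20)
# agrees with the definition (2.12) of the local likelihood ratio.
rng = np.random.default_rng(2)
N = 100
k = np.arange(1, N + 1, dtype=float)
mu, nu, q, p = k, 1.0 / k, 1.0 / np.sqrt(k), 1.0 / k
theta, vartheta = 1.5, 0.8
s, tau = 0.7, -0.4

w = np.sqrt(vartheta) * q + p
Y = -(theta * mu + nu) + w * rng.standard_normal(N)

def loglik(th, vt):
    """Log of the likelihood (2.8)."""
    wk = np.sqrt(vt) * q + p
    return (-N / 2 * np.log(2 * np.pi) - np.sum(np.log(wk))
            - 0.5 * np.sum((Y + th * mu + nu) ** 2 / wk**2))

Psi = np.sum(mu**2 / w**2)
Phi = 0.5 * np.sum(q**2 / (vartheta * q + np.sqrt(vartheta) * p) ** 2)
th_s, vt_tau = theta + s / np.sqrt(Psi), vartheta + tau / np.sqrt(Phi)

# Left-hand side: definition (2.12), on the log scale.
lhs = loglik(th_s, vt_tau) - loglik(theta, vartheta)

# Right-hand side: representation (2.20), on the log scale.
w_tau = np.sqrt(vt_tau) * q + p
xi = (Y + theta * mu + nu) / w
a = 0.5 * (1 - w**2 / w_tau**2)
b = s * mu * w / (np.sqrt(Psi) * w_tau**2)
rhs = (np.sum(np.log(w / w_tau)) - s**2 / (2 * Psi) * np.sum(mu**2 / w_tau**2)
       + np.sum(a * xi**2 - b * xi))

print(np.isclose(lhs, rhs))
```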

To verify R3, let

$$G_N(x)=Z_{N,\boldsymbol\theta}^{1/8}(x).$$

This is a smooth function of $s$ and $\tau$, whereas each function $w_k(\tau)$ is only Hölder continuous of order $1/2$ in $\tau$: if $\vartheta(\tau)$ is small compared to $\tau$, then $w_k(\tau)$ can be arbitrarily close to zero. By the chain rule, we conclude that R3 holds for every fixed $N$. It remains to verify R3 uniformly in $N$, for every fixed $K$ and $R$, and therefore we will assume from now on that $N$ is sufficiently large and, in particular, $\vartheta(\tau)$ is uniformly bounded away from zero.

By the mean value theorem,

$$|G_N(x)-G_N(y)|\le R^{1/2}\sqrt{|x-y|}\;|\nabla G_N(x^*)|$$

for $|x|\le R$, $|y|\le R$, where the two-dimensional random vector $x^*$ lies on the line segment connecting $x$ and $y$. By the Hölder inequality, it follows from (2.20) that, for every compact $K$ and every $R>0$, there is an $N_0$ such that

$$\sup_{\boldsymbol\theta\in K}\ \sup_{x\in U_N(\boldsymbol\theta),\,|x|\le R}\mathbb E|\nabla G_N(x)|^4<\infty,\qquad N>N_0.$$

Condition R3 is now verified.

To verify R4, note that, for a standard Gaussian random variable ,

$$\mathbb Ee^{a\xi^2-b\xi}=e^{-b^2/(4a)}\,\mathbb Ee^{a(\xi-(b/2a))^2}=(1-2a)^{-1/2}e^{b^2/(2-4a)};$$

cf. [13, Proposition 6.2.31]. Then

$$\mathbb EZ_{N,\boldsymbol\theta}^{1/2}(x)=\Bigg[\prod_{k=1}^N\Big(\frac{2w_kw_k(\tau)}{w_k^2+w_k^2(\tau)}\Big)^{1/2}\Bigg]\exp\Big(-\frac{s^2}{4\Psi_N}\sum_{k=1}^N\frac{\mu_k^2}{w_k^2+w_k^2(\tau)}\Big).$$

To study $\mathbb EZ_{N,\boldsymbol\theta}^{1/2}(x)$, denote by $C$ a number that does not depend on $N$ and $x$; the value of $C$ can be different in different places. For $r>0$ and $p>0$,

$$|s|^pe^{-rs^2}\le\Big(\frac p{2r}\Big)^{p/2},$$

so that

$$|s|^p\exp\Big(-\frac{s^2}{4\Psi_N}\sum_{k=1}^N\frac{\mu_k^2}{w_k^2+w_k^2(\tau)}\Big)\le C\Big(\frac{\Psi_N}{\sum_{k=1}^N\mu_k^2/(w_k^2+w_k^2(\tau))}\Big)^{p/2}\le C\big(\max(1,\tau)\big)^{p/2};$$

the last inequality follows from the definitions of $\Psi_N$ and $w_k(\tau)$. Writing

$$F_N(\tau)=|\tau|^q\prod_{k=1}^N\Big(\frac{2w_kw_k(\tau)}{w_k^2+w_k^2(\tau)}\Big)^{1/2},$$

the objective becomes to show that, for fixed $q>0$ and all sufficiently large $N$,

$$\max_{\tau>-\vartheta\sqrt{\Phi_N(\vartheta)}}F_N(\tau)<\infty,$$

which, in turn, follows by noticing that

$$\operatorname*{argmax}_{\tau>-\vartheta\sqrt{\Phi_N(\vartheta)}}F_N(\tau)=2\sqrt q+O\big(\Phi_N^{-1/2}(\vartheta)\big),\quad N\to\infty,$$

and

$$\lim_{N\to\infty}F_N\big(2\sqrt q\big)=(4q)^{q/2}e^{-q/2}.$$

Condition R4 is now verified, and Theorem 2.5 is proved.
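The Gaussian moment identity used in the verification of R4 is easy to confirm by Monte Carlo. A minimal sketch (illustration only; the values of $a<1/2$ and $b$ are arbitrary):

```python
import numpy as np

# Monte Carlo check (illustration only) of the identity used to verify R4:
#   E exp(a*xi^2 - b*xi) = (1 - 2a)^(-1/2) * exp(b^2 / (2 - 4a)),
# for a standard Gaussian xi and a < 1/2.
rng = np.random.default_rng(3)
a, b = 0.2, 0.5
xi = rng.standard_normal(2_000_000)
empirical = np.mean(np.exp(a * xi**2 - b * xi))
exact = (1 - 2 * a) ** (-0.5) * np.exp(b**2 / (2 - 4 * a))
print(abs(empirical / exact - 1) < 0.02)
```

Note the restriction $a<1/2$: for $a\ge1/2$ the expectation is infinite, which is why the proof needs $\vartheta(\tau)$ (and hence $1-a_k$) bounded away from zero.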

Taking $\mu_k=q_k=1$ and $\nu_k=p_k=0$, we recover the familiar problem of joint estimation of the mean and variance in a normal population. Because the Fisher information matrix is diagonal, violation of one of the conditions of the theorem still leads to a regular statistical model for the other parameter. For example, if (2.17) holds but (2.18) does not, then $\vartheta$ is not identifiable, but $\theta$ is, and the local likelihood ratio is regular as a function of one variable.

Conditions (2.4) and (2.17) serve different purposes: (2.4) ensures that (2.1) has a global-in-time solution in $L_2(\Omega;H)$, whereas (2.17) implies regularity of the estimation problem for $\theta$ based on the observations $Y_k$, $k=1,\dots,N$ (a multi-channel model). In general, (2.4) and (2.17) are not related: there are sequences for which (2.4) holds but (2.17) fails, and sequences satisfying (2.17) but not (2.4) [and not even (2.5)]; in the latter case, the resulting multi-channel model, while regular in the statistical sense, does not correspond to any stochastic evolution equation.

Condition (2.18) means that the numbers $p_k$ are not too big compared to $q_k$; for example,

$$\limsup_{k\to\infty}\frac{p_k}{\sqrt k\,q_k}<+\infty \tag{2.21}$$

is sufficient for (2.18) to hold.

By a theorem of Kakutani [10], (2.17) is equivalent to singularity of the measures

$$\prod_{k\ge1}\mathcal N\Big(-(\theta\mu_k+\nu_k),\ (\sigma q_k+p_k)^2\Big) \tag{2.22}$$

on $\mathbb R^{\infty}$ for different values of $\theta$, and (2.18) is equivalent to singularity of the measures (2.22) on $\mathbb R^{\infty}$ for different values of $\sigma$. In other words, the conditions of Theorem 2.5 are in line with the general statistical paradigm that consistent estimation of a parameter is possible when, in the suitable limit, the measures corresponding to different values of the parameter are singular.
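The Kakutani dichotomy can be illustrated numerically through the Hellinger affinity: for two Gaussians $\mathcal N(m_1,v)$ and $\mathcal N(m_2,v)$ the affinity is $\exp(-(m_1-m_2)^2/(8v))$, so for the product measures (2.22) with two values of $\theta$ it equals $\exp\big(-\sum_k(\theta_1-\theta_2)^2\mu_k^2/(8(\sigma q_k+p_k)^2)\big)$, which tends to zero exactly when (2.17) holds. A sketch with arbitrary choices $\mu_k=q_k=1$, $p_k=0$:

```python
import numpy as np

# Illustration (not from the paper) of the Kakutani dichotomy behind (2.17):
# the Hellinger affinity between the product measures (2.22) for two values
# of theta is the product over k of
#   exp(-(theta1 - theta2)^2 * mu_k^2 / (8 * (sigma*q_k + p_k)^2)).
# With mu_k = q_k = 1, p_k = 0, condition (2.17) holds and the affinity
# decreases to 0, i.e. the measures are singular.
n = 10_000
mu, q, p, sigma = np.ones(n), np.ones(n), np.zeros(n), 1.0
dtheta = 0.1   # theta1 - theta2, arbitrary
affinity = np.exp(-np.cumsum(dtheta**2 * mu**2 / (8 * (sigma * q + p) ** 2)))
print(affinity[-1] < 1e-5, np.all(np.diff(affinity) < 0))
```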

A similar shell model, but with space-time noise, is considered in [1], where the observations are

$$du_k(t)+(\theta\mu_k+\nu_k)u_k(t)\,dt=\sigma q_ku_k(t)\,dw_k(t),\quad t\in[0,T], \tag{2.23}$$

and $w_k=w_k(t)$ are i.i.d. standard Brownian motions. Continuous-in-time observations make it possible to determine $\sigma$ exactly from the quadratic variation process of $u_k$, so, with no loss of generality, we set $\sigma=1$. Conditions (2.4) and (2.17) become, respectively,

$$\sup_{k\ge1}\Big(\frac{q_k^2}2-(\theta\mu_k+\nu_k)\Big)<+\infty \tag{2.24}$$

and

$$\sum_{k\ge1}\frac{\mu_k^2}{q_k^2}=+\infty. \tag{2.25}$$

An earlier paper [8] studies

$$du_k(t)+(\theta\mu_k+\nu_k)u_k(t)\,dt=q_k\,dw_k(t),\quad t\in[0,T]; \tag{2.26}$$

now, assuming, as before, $\sigma=1$, conditions (2.4) and (2.17) become, respectively,

$$\sum_{k\ge1}\frac{q_k^2}{\theta\mu_k+\nu_k}<\infty \tag{2.27}$$

and

$$\sum_{k\ge1}\frac{\mu_k^2}{(\theta\mu_k+\nu_k)^2}=+\infty. \tag{2.28}$$

Similar to [8], set $q_k=1$ (and, in (2.1), also $p_k=0$), and assume that the operators $A$ and $A_0$ from (2.2) are self-adjoint elliptic operators of orders $m_1$ and $2m$, respectively, with $m_1<2m$, in a smooth bounded domain in $\mathbb R^d$. It is known [17] that, as $k\to\infty$,

$$\theta\mu_k+\nu_k\sim k^{2m/d},\qquad \mu_k\sim k^{m_1/d},$$

and so

• conditions (2.4), (2.17), (2.24), and (2.25) always hold;

• condition (2.27) holds if $2m>d$;

• condition (2.28) holds if $4m-2m_1\le d$.

More generally, if the sequences $\{q_k\}$ and $\{p_k/q_k\}$ are bounded, then (2.27) implies (2.4), and (2.4) implies (2.24); whereas (2.17) and (2.25) are equivalent and both follow from (2.28). In other words, the space-time shell model (2.23) admits a global-in-time solution in $L_2(\Omega;H)$ and leads to a regular statistical model under the least restrictive conditions, while the model with additive noise (2.26) requires the most restrictive conditions.

## 3. Additive Noise

In this section we study the parameter estimation problem for (1.5), driven by space-only additive noise. We consider two observation schemes, starting with the assumption that the observations occur in the Fourier domain (similarly to the shell model). Under the second observation scheme, exploiting the special structure of the equation, we assume that the observer measures the derivative of the solution in the physical space, at one fixed time point and over a uniform space grid.

Existence, uniqueness, and continuous dependence on the initial condition for equation (1.5) follow directly from (1.8).

###### Theorem 3.1.

If $u(0)\in H$ and

$$\sum_{k=1}^{\infty}\frac{q_k^2}{\mu_k^2}<\infty, \tag{3.1}$$

then the solution of (1.5) satisfies $\mathbb E\|u(t)\|_H^2<\infty$ for every $t\ge0$, and

$$\mathbb E\|u(t)\|_H^2\le\|u(0)\|_H^2+\frac{\sigma^2}{\theta^2}\sum_{k=1}^{\infty}\frac{q_k^2}{\mu_k^2}.$$
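The bound in Theorem 3.1 can be checked mode by mode: from (1.8) and $\mathbb E\xi_k=0$, $\mathbb E u_k^2(t)=u_k^2(0)e^{-2\theta\mu_kt}+(\sigma q_k/(\theta\mu_k))^2(1-e^{-\theta\mu_kt})^2$, and each term is dominated by the corresponding term of the bound. A small numerical illustration (all parameter values are arbitrary choices):

```python
import numpy as np

# Direct check (illustration only) of the second-moment bound in Theorem 3.1,
# mode by mode: from (1.8),
#   E u_k(t)^2 = u_k(0)^2 e^{-2 theta mu_k t}
#                + (sigma q_k / (theta mu_k))^2 (1 - e^{-theta mu_k t})^2,
# which never exceeds u_k(0)^2 + sigma^2 q_k^2 / (theta^2 mu_k^2).
theta, sigma = 1.3, 0.6
k = np.arange(1, 101, dtype=float)
mu, q, u0 = k**2, 1.0 / k, 1.0 / k

ok = True
for t in (0.01, 0.1, 1.0, 10.0):
    decay = np.exp(-theta * mu * t)
    second_moment = (u0**2 * decay**2
                     + (sigma * q / (theta * mu)) ** 2 * (1 - decay) ** 2)
    ok &= np.all(second_moment <= u0**2 + sigma**2 * q**2 / (theta**2 * mu**2))
print(ok)
```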

### 3.1. Observations in Fourier Domain

Consider equation (1.5). Define

$$U_k(t)=u_k(t)-u_k(0),\qquad S_k(t)=1-e^{-\theta\mu_kt},\qquad F_{a,b}(x)=\frac{1-e^{-ax}}{1-e^{-bx}},\quad a>b>0.$$

The function $x\mapsto F_{a,b}(x)$ is decreasing on $(0,+\infty)$. Indeed, note that for any $p>1$, the function

$$y\mapsto\frac{1-y^p}{1-y}$$

is increasing on $(0,1)$, and hence, by taking $y=e^{-bx}$ and $p=a/b$, the monotonicity of $F_{a,b}$ follows at once.
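The monotonicity is easy to visualize numerically: $F_{a,b}$ decreases from $a/b$ (as $x\to0^+$) to $1$ (as $x\to\infty$). A minimal sketch with arbitrary $a>b>0$:

```python
import numpy as np

# Numerical illustration of the monotonicity of F_{a,b}: for a > b > 0,
# F_{a,b}(x) = (1 - e^{-a x}) / (1 - e^{-b x}) decreases from a/b to 1.
a, b = 3.0, 1.0
x = np.linspace(0.01, 20.0, 2000)
F = (1 - np.exp(-a * x)) / (1 - np.exp(-b * x))
print(np.all(np.diff(F) < 0),
      np.isclose(F[0], a / b, atol=0.05),
      np.isclose(F[-1], 1.0, atol=1e-3))
```

The strict monotonicity is what makes $\theta\mu_k$ identifiable from the ratio $U_k(t_2)/U_k(t_1)$ in Theorem 3.2 below.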

###### Theorem 3.2.

For every $k\ge1$ and every $0<t_1<t_2$,

$$\theta\mu_k=F^{-1}_{t_2,t_1}\Big(\frac{U_k(t_2)}{U_k(t_1)}\Big).$$
###### Proof.

By (1.8),

$$U_k(t)=\Big(\frac{\sigma q_k\xi_k}{\theta\mu_k}-u_k(0)\Big)S_k(t) \tag{3.2}$$

and then

$$\frac{U_k(t_2)}{U_k(t_1)}=F_{t_2,t_1}(\theta\mu_k),$$

and since $F_{t_2,t_1}$ is strictly monotone, the inverse function $F^{-1}_{t_2,t_1}$ exists. The proof is complete.

It turns out that making a third measurement of $U_k$ at the specially chosen time $t_2-t_1$, or taking $t_2=2t_1$, eliminates the need to invert the function $F_{t_2,t_1}$.

###### Theorem 3.3.

For every $k\ge1$ and every $0<t_1<t_2$,

$$\theta\mu_k=\frac1{t_1}\ln\frac{U_k(t_2-t_1)}{U_k(t_2)-U_k(t_1)}.$$
###### Proof.

By (3.2),

$$U_k(t_2)-U_k(t_1)=\Big(\frac{\sigma q_k\xi_k}{\theta\mu_k}-u_k(0)\Big)\big(S_k(t_2)-S_k(t_1)\big)=\Big(\frac{\sigma q_k\xi_k}{\theta\mu_k}-u_k(0)\Big)e^{-\theta\mu_kt_1}S_k(t_2-t_1),$$

whereas

$$U_k(t_2-t_1)=\Big(\frac{\sigma q_k\xi_k}{\theta\mu_k}-u_k(0)\Big)S_k(t_2-t_1).$$
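The exact-recovery formula of Theorem 3.3 is simple to verify numerically from the representation (3.2); the sketch below uses arbitrary values for $\theta\mu_k$, the random amplitude, and the measurement times:

```python
import numpy as np

# Numerical check (illustration only) of Theorem 3.3: with
#   U_k(t) = (sigma q_k xi_k / (theta mu_k) - u_k(0)) * S_k(t),
#   S_k(t) = 1 - e^{-theta mu_k t},
# theta*mu_k is recovered exactly from measurements at t1, t2, and t2 - t1.
theta_mu = 2.5                    # the unknown theta * mu_k, arbitrary
c = 0.6 * 1.7 / theta_mu - 0.7    # sigma*q_k*xi_k/(theta*mu_k) - u_k(0), one sample
t1, t2 = 0.3, 1.1                 # arbitrary observation times, t1 < t2

def U(t):
    """U_k(t) from (3.2), for the fixed random amplitude c."""
    return c * (1 - np.exp(-theta_mu * t))

recovered = np.log(U(t2 - t1) / (U(t2) - U(t1))) / t1
print(np.isclose(recovered, theta_mu))
```

The recovery is exact (up to floating point), not merely consistent: no averaging over $k$ or over time is involved, in line with Remark 3.4 below.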

###### Remark 3.4.

It is not at all surprising that the quantity $\theta\mu_k$ can be determined exactly: for every fixed $k$ and every collection of time moments $t_1,\dots,t_n$, the support of the Gaussian vector $\big(U_k(t_1),\dots,U_k(t_n)\big)$ in $\mathbb R^n$ is a line. As a result, the measures corresponding to different values of $\theta\mu_k$ are singular, being supported on different lines. In this regard, the situation is similar to the time-only noise model considered in [4].

To estimate $\sigma$, define

$$X_k=\frac{\theta\mu_kU_k(t)}{q_kS_k(t)},\qquad k=1,\dots,N,$$

so that, for all $k$, the random variables