# Inference for ergodic diffusions plus noise

We study adaptive maximum likelihood-type estimation for an ergodic diffusion process whose observations are contaminated by noise. The methodology yields asymptotic independence of the estimators for the variance of the observation noise, the diffusion parameter and the drift parameter of the latent diffusion process. Moreover, it lessens the computational burden compared with simultaneous maximum likelihood-type estimation. In addition to adaptive estimation, we propose a test for whether observation noise exists, and we analyse real data as an example in which the observations contain statistically significant noise.


## 1 Introduction

We consider a d-dimensional ergodic diffusion process defined by the following stochastic differential equation

 dX_t = b(X_t, β) dt + a(X_t, α) dw_t,  X_0 = x_0, (1)

where w is an r-dimensional standard Wiener process, x_0 is an ℝ^d-valued random variable independent of w, and θ := (α, β) ∈ Θ₁ × Θ₂ =: Θ, with Θ₁ and Θ₂ being compact and convex. Moreover, b : ℝ^d × Θ₂ → ℝ^d and a : ℝ^d × Θ₁ → ℝ^d ⊗ ℝ^r are known functions except for the parameters. We denote by θ_⋆ := (α_⋆, β_⋆) the true value of θ, which is assumed to belong to Int(Θ).

We deal with the problem of parametric inference for θ = (α, β) and Λ from the observations {Y_{ih_n}}_{0≤i≤n} defined by the following model

 Y_{ih_n} = X_{ih_n} + Λ^{1/2} ε_{ih_n},  i = 0, …, n, (2)

where h_n is the discretisation step, Λ is a positive semi-definite matrix, and {ε_{ih_n}}_i is an i.i.d. sequence of ℝ^d-valued random variables such that E[ε_{ih_n}] = 0, Var(ε_{ih_n}) = I_d, each component is independent of the other components, and the sequence is independent of w and x_0. Hence the term Λ^{1/2} ε_{ih_n} indicates the exogenous noise. Let Θ_Λ be the convex and compact parameter space such that vech(Λ) ∈ Θ_Λ, and let Λ_⋆ be the true value of Λ such that vech(Λ_⋆) ∈ Int(Θ_Λ), where vech is the half-vectorisation operator. We denote ϑ := (α, β, vech(Λ)) and ϑ_⋆ := (α_⋆, β_⋆, vech(Λ_⋆)). With respect to the sampling scheme, we assume that h_n → 0 and nh_n → ∞ as n → ∞.
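To fix ideas, here is a minimal simulation sketch of the observation scheme (1)–(2) in one dimension, using an Euler-Maruyama discretisation of the latent diffusion; the drift, diffusion coefficient and all numerical values are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_diffusion_plus_noise(n, h, b, a, x0, noise_sd, rng):
    """Euler-Maruyama path of dX_t = b(X_t)dt + a(X_t)dw_t, observed as
    Y_{ih} = X_{ih} + noise_sd * eps_{ih} with i.i.d. standard normal eps."""
    x = np.empty(n + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(h), size=n)  # Wiener increments
    for i in range(n):
        x[i + 1] = x[i] + b(x[i]) * h + a(x[i]) * dw[i]
    y = x + noise_sd * rng.normal(size=n + 1)  # Lambda^{1/2} is scalar here
    return x, y

# Illustrative 1-d Ornstein-Uhlenbeck latent process: b(x) = -(x - 1), a(x) = 1.
x, y = simulate_diffusion_plus_noise(
    n=100_000, h=1e-3,
    b=lambda u: -(u - 1.0), a=lambda u: 1.0,
    x0=1.0, noise_sd=0.1, rng=rng)
```

The high-frequency regime of the paper corresponds to letting h shrink while nh grows.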

Our main concern in this setting is the adaptive maximum likelihood (ML)-type estimation scheme in which Λ is estimated first by an estimator ^Λ_n, and then

 H^τ_{1,n}(^α_n | ^Λ_n) = sup_{α∈Θ₁} H^τ_{1,n}(α | ^Λ_n), (3)
 H_{2,n}(^β_n | ^α_n) = sup_{β∈Θ₂} H_{2,n}(β | ^α_n), (4)

where for any matrix A, A^T indicates the transpose of A, and H^τ_{1,n} and H_{2,n} are quasi-likelihood functions, which are defined in Section 3.

The composition of the model above is quite analogous to that of discrete-time state space models (e.g., see [19]), in that it expresses separately the endogenous perturbation in the system of interest and the exogenous noise attributed to observation. As the sampling assumption indicates, the model we consider is intended for the situation of high-frequency observation, and this setting enhances the flexibility of modelling since it accommodates non-linearity and dependence of the innovation on the state itself. In addition, adaptive estimation, which also becomes possible through the high-frequency setting, has the advantage of easing the computational burden in comparison to simultaneous estimation. Fortunately, the number of situations where these requirements are satisfied has been growing steadily, and it will continue to grow with the increasing amount of real-time data and advances in observation technology.
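The computational point can be illustrated schematically: adaptive estimation replaces one search over the joint parameter by successive lower-dimensional searches, as in (3)–(4). The sketch below uses a toy contrast function U and a plain grid search in place of the paper's quasi-likelihood functions, purely to show the two-step structure; the function, pilot value and grid are all hypothetical.

```python
# Toy contrast standing in for a quasi-likelihood; its joint minimiser is
# (alpha, beta) = (2, -1). With the pilot value beta = 0, the two adaptive
# steps below land at alpha_hat = 1.9 and beta_hat = -0.99.
def U(alpha, beta):
    return (alpha - 2.0) ** 2 + (beta + 1.0) ** 2 + 0.2 * (alpha - 2.0) * (beta + 1.0)

def argmin_1d(f, grid):
    """One-dimensional grid search: each adaptive step optimises one block."""
    return min(grid, key=f)

grid = [i / 1000.0 for i in range(-5000, 5001)]  # step 0.001 on [-5, 5]

# Step 1 (analogue of (3)): estimate alpha with the other parameter fixed.
alpha_hat = argmin_1d(lambda a: U(a, 0.0), grid)
# Step 2 (analogue of (4)): estimate beta plugging in alpha_hat.
beta_hat = argmin_1d(lambda b: U(alpha_hat, b), grid)
```

A simultaneous grid search over (alpha, beta) would need |grid|² evaluations, while the adaptive scheme needs only 2·|grid|; the same dimensional saving is what eases the burden in the quasi-likelihood setting.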

The idea of modelling a diffusion process with observational noise is not new. For instance, in the context of high-frequency financial data analysis, researchers have addressed the existence of "microstructure noise", whose variance is large relative to the time increment, questioning the premise that what we observe is purely a diffusion. Research on "diffusion + noise" modelling has been active over the past decade: some studies have examined the asymptotics of this model in the framework of a fixed time interval, i.e., nh_n fixed (e.g., [9], [10], [12], [20] and [18]); and [3] and [4] study the parametric inference of this model under ergodicity and the asymptotic framework nh_n → ∞. For parametric estimation of discretely observed diffusion processes without measurement errors, see [5], [24], [25], [2], [14] and references therein.

Our research focuses on statistical inference for an ergodic diffusion plus noise. We give an estimation methodology based on adaptive estimation, which relaxes the computational burden and has been studied for ergodic diffusions (see [24], [25], [13], [21], [22]), in comparison to the simultaneous estimation of [3] and [4]. In previous studies the simultaneous asymptotic normality of the estimators for the noise variance, the diffusion parameter and the drift parameter has not been shown, but our method allows us to obtain their asymptotic normality and asymptotic independence, with different convergence rates. Our methods also broaden the applicability of modelling with stochastic differential equations, since they are more robust to the existence of noise than the existing results for discretely observed ergodic diffusions that do not account for observation noise.

As the real data analysis, we analyse the 2-dimensional wind data [17] and model the dynamics with a 2-dimensional Ornstein-Uhlenbeck process. We compare the fit of our diffusion-plus-noise modelling with that of diffusion modelling estimated by the local Gaussian approximation method (LGA method), which has been investigated over recent decades (for instance, see [24], [13] and [14]). The results (see Section 5) show a considerable difference between these estimates; however, we cannot evaluate which fitting is the more trustworthy from these results alone. This stems from the fact that we cannot distinguish a diffusion from a diffusion-plus-noise by estimation only: if Λ_⋆ = O, then the observation is not contaminated by noise and the LGA estimate should be adopted for its asymptotic efficiency; but if Λ_⋆ ≠ O, what we observe is no longer a diffusion process and the LGA method loses its theoretical validity. Therefore, it is necessary to construct a statistical hypothesis test of H₀: Λ_⋆ = O against H₁: Λ_⋆ ≠ O. In addition to the estimation methodology, we also study this hypothesis-testing problem and propose a test with the consistency property.

In Section 2, we gather the assumptions and notation used across the paper. Section 3 gives the main results of this paper. Section 4 examines the results of Section 3 with simulation. In Section 5 we analyse the real data for wind velocity named MetData with our estimators and the LGA method as discussed above, and test whether noise exists.

## 2 Local means, notations and assumptions

### 2.1 Local means

We partition the observation into k_n blocks, each containing p_n observations, and examine the property of the following local means:

 ¯Z_j = (1/p_n) ∑_{i=0}^{p_n−1} Z_{jΔ_n+ih_n},  j = 0, …, k_n − 1, (5)

where {Z_{ih_n}}_i is an arbitrary sequence of random variables on the mesh, Δ_n := p_n h_n is the time length of each block, and k_n := n/p_n is the number of blocks, with p_n → ∞ as n → ∞. Note that n = p_n k_n and k_n Δ_n = n h_n.

In the same way as [3] and [4], our estimation method is based on these local means of the observation {Y_{ih_n}}. The idea is straightforward: taking means of the data in each partition should reduce the influence of the noise term Λ^{1/2}ε by the law of large numbers, so that we can extract the information of the latent process X.

We show how local means work to extract the information of the latent process. The first plot in Figure 3 is a simulation of a 1-dimensional Ornstein-Uhlenbeck process such that

 dX_t = −(X_t − 1) dt + dw_t,  X_0 = 1, (6)

for a given sample size n and discretisation step h_n. Secondly, we contaminate the observation with normally-distributed noise and plot the resultant observation in Figure 3. Finally, we make the sequence of local means {¯Y_j} and plot it at the bottom of Figure 3.

With these plots, it seems that the local means recover the rough states of the latent process; indeed, by Proposition 6 and the assumptions below, it is possible to construct a quantity which converges to each state on the mesh.
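The local-mean construction (5) is a block-averaging operation, and its effect can be reproduced in a short sketch (Euler-Maruyama for the latent Ornstein-Uhlenbeck path; all parameter values are illustrative): the block means track the latent state far more closely than the raw noisy observations do.

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent 1-d OU path dX_t = -(X_t - 1)dt + dw_t, X_0 = 1 (Euler-Maruyama),
# observed with additive N(0, 0.3^2) noise; all values are illustrative.
n, h = 90_000, 1e-3
x = np.empty(n + 1)
x[0] = 1.0
dw = rng.normal(0.0, np.sqrt(h), size=n)
for i in range(n):
    x[i + 1] = x[i] - (x[i] - 1.0) * h + dw[i]
y = x[:n] + 0.3 * rng.normal(size=n)

# Local means as in (5): k_n blocks of p_n consecutive observations.
p_n = 50
k_n = n // p_n
y_bar = y[: k_n * p_n].reshape(k_n, p_n).mean(axis=1)

# Averaging divides the noise variance by p_n, so the block means follow the
# latent state at the block start times much better than single observations.
x_block = x[: k_n * p_n : p_n]
raw_err = float(np.mean((y[::p_n][:k_n] - x_block) ** 2))
smooth_err = float(np.mean((y_bar - x_block) ** 2))
```

The residual error of the block means is dominated by the within-block movement of X itself (of order Δ_n), which is why the asymptotics also require Δ_n → 0.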

### 2.2 Notations and assumptions

We set the following notations.

1. For a matrix A, A^T denotes the transpose of A. For same-size matrices A and B, we write ⟨A, B⟩ := tr(AB^T).

2. For any vector v, v^{(i)} denotes the i-th component of v. Similarly, A^{(i,j)}, A^{(i,·)} and A^{(·,j)} denote the (i,j)-th component, the i-th row vector and the j-th column vector of a matrix A respectively.

3. For any vector v, ∥v∥² := ∑_i (v^{(i)})², and for any matrix A, ∥A∥² := ∑_{i,j} (A^{(i,j)})².

4. C denotes a positive generic constant independent of all other variables. If it depends on some fixed other variable, e.g. an integer k, we express it as C(k).

5. and .

6. Let ν₀ denote the invariant measure of X, and for a function f, let us define ν₀(f(·,ϑ)) := ∫ f(x,ϑ) ν₀(dx).

7. An ℝ-valued function f on ℝ^d is a polynomial growth function if for all x ∈ ℝ^d,

 |f(x)| ≤ C(1 + ∥x∥)^C.

g is a polynomial growth function uniformly in θ ∈ Θ if for all x ∈ ℝ^d,

 sup_{θ∈Θ} |g(x,θ)| ≤ C(1 + ∥x∥)^C.

Similarly, we say h is a polynomial growth function uniformly in ϑ ∈ Ξ if for all x ∈ ℝ^d,

 sup_{ϑ∈Ξ} |h(x,ϑ)| ≤ C(1 + ∥x∥)^C.

## 3 Proofs

We give the proofs of the main theorems discussed above, together with some preliminary results. Some of them are also discussed in detail in [15].

We set some notations which only appear in the proof section.

1. Let us denote the σ-fields G_t := σ(x_0, w_s : s ≤ t), G^n_j := G_{jΔ_n}, H_t := G_t ∨ σ(ε_{ih_n} : ih_n ≤ t), and H^n_j := H_{jΔ_n}.

2. We define the following ℝ^r-valued random variables which appear in the expansion:

 ζ_{j+1,n} = (1/p_n) ∑_{i=0}^{p_n−1} ∫_{jΔ_n+ih_n}^{(j+1)Δ_n} dw_s,  ζ′_{j+2,n} = (1/p_n) ∑_{i=0}^{p_n−1} ∫_{(j+1)Δ_n}^{(j+1)Δ_n+ih_n} dw_s.
3. .

4. We set the following empirical functionals:

 ¯M_n(f(·,ϑ)) := (1/k_n) ∑_{j=0}^{k_n−1} f(¯Y_j, ϑ),
 ¯D_n(f(·,ϑ)) := (1/(k_nΔ_n)) ∑_{j=1}^{k_n−2} f(¯Y_{j−1}, ϑ)(¯Y_{j+1} − ¯Y_j − Δ_n b(¯Y_{j−1})),
 ¯Q_n(B(·,ϑ))
5. Let us define , and for , and .

6. We denote

 {B_κ | κ = 1, …, m₁, B_κ = (B^{(j₁,j₂)}_κ)_{j₁,j₂}},  {f_λ | λ = 1, …, m₂, f_λ = (f^{(1)}_λ, …, f^{(d)}_λ)},

which are sequences of matrix-valued functions and ℝ^d-valued functions respectively, such that the components themselves and their derivatives with respect to x are polynomial growth functions for all κ and λ.

7. Let us define

 {B_{κ,n}(x) | κ = 1, …, m₁, B_{κ,n} = (B^{(j₁,j₂)}_{κ,n})_{j₁,j₂}},

which is a family of sequences of functions such that the components of the functions and their derivatives with respect to x are polynomial growth functions, and such that there exist an ℝ₊-valued sequence {v_n} with v_n → 0 and a constant C such that for all x and for the sequence {B_κ} discussed above,

 ∑_{κ=1}^{m₁} ∥B_{κ,n}(x) − B_κ(x)∥ ≤ v_n(1 + ∥x∥^C).
8. Denote

 W^{(τ)}({B_κ}_κ, {f_λ}_λ) := diag(W₁, W^τ₂({B_κ}_κ), W₃({f_λ}_λ)),

i.e., the block-diagonal matrix with diagonal blocks W₁, W^τ₂({B_κ}_κ) and W₃({f_λ}_λ).

### 3.1 Conditional expectation of supremum

The following two propositions are multidimensional extensions of Proposition 5.1 and Proposition A in [7] respectively.

###### Proposition 1.

Under (A1), for all k > 0 there exists a constant C(k) such that for all t ≥ 0,

 E[sup_{s∈[t,t+1]} ∥X_s∥^k | G_t] ≤ C(k)(1 + ∥X_t∥^k).
###### Proposition 2.

Under (A1), for a function f whose components are in C¹(ℝ^d), assume that there exists C > 0 such that

 ∥f′(x)∥ ≤ C(1 + ∥x∥)^C.

Then for any k > 0,

 E[sup_{s∈[jΔ_n,(j+1)Δ_n]} ∥f(X_s) − f(X_{jΔ_n})∥^k | G^n_j] ≤ C(k)Δ_n^{k/2}(1 + ∥X_{jΔ_n}∥^{C(k)}).

In particular, for f(x) = x,

 E[sup_{s∈[jΔ_n,(j+1)Δ_n]} ∥X_s − X_{jΔ_n}∥^k | G^n_j] ≤ C(k)Δ_n^{k/2}(1 + ∥X_{jΔ_n}∥^k).

The next proposition summarises some results useful for computation.

###### Proposition 3.

Under (A1), for all t₁ ≤ t₂ ≤ t₃ contained in an interval whose length is bounded by a constant, and for all l ≥ 2, we have

 (i) sup_{s₁,s₂∈[t₁,t₂]} ∥E[b(X_{s₁}) − b(X_{s₂}) | G_{t₁}]∥ ≤ C(t₂ − t₁)(1 + ∥X_{t₁}∥³),
 (ii) sup_{s₁,s₂∈[t₁,t₂]} ∥E[a(X_{s₁}) − a(X_{s₂}) | G_{t₁}]∥ ≤ C(t₂ − t₁)(1 + ∥X_{t₁}∥³),
 (iii) ∥E[∫_{t₂}^{t₃} (b(X_s) − b(X_{t₂})) ds | G_{t₁}]∥ ≤ C(t₃ − t₂)²(1 + E[∥X_{t₂}∥⁶ | G_{t₁}])^{1/2},
 (iv) E[∥∫_{t₂}^{t₃} (b(X_s) − b(X_{t₂})) ds∥^l | G_{t₁}] ≤ C(l)(t₃ − t₂)^{3l/2}(1 + E[∥X_{t₂}∥^{2l} | G_{t₁}]),
 (v) E[∥∫_{t₁}^{t₂} (∫_{t₁}^s (a(X_u) − a(X_{t₁})) dw_u) ds∥^l | G_{t₁}] ≤ C(l)(t₂ − t₁)^{2l}(1 + ∥X_{t₁}∥^{2l}).
###### Proof.

(i), (ii): Let L be the infinitesimal generator of the diffusion process X. By the Itô-Taylor expansion, for all s ∈ [t₁, t₂],

 E[b(X_s) | G_{t₁}] = b(X_{t₁}) + ∫_{t₁}^s E[Lb(X_u) | G_{t₁}] du,

and the second term admits the evaluation

 sup_{s∈[t₁,t₂]} ∥∫_{t₁}^s E[Lb(X_u) | G_{t₁}] du∥ ≤ C(t₂ − t₁)(1 + ∥X_{t₁}∥³).

Therefore we have (i), and an identical evaluation holds for (ii).
(iii): Using (i) and Hölder's inequality, we have the result.
(iv): Proposition 2 and Hölder's inequality give the result.
(v): By convexity, we have

 E[∥∫_{t₁}^{t₂} (∫_{t₁}^s (a(X_u) − a(X_{t₁})) dw_u) ds∥^l | G_{t₁}] ≤ (t₂ − t₁)^{l−1} ∫_{t₁}^{t₂} E[∥∫_{t₁}^s (a(X_u) − a(X_{t₁})) dw_u∥^l | G_{t₁}] ds,

and Hölder's inequality, Fubini's theorem, the Burkholder-Davis-Gundy inequality and Proposition 2 give the result. ∎

### 3.2 Propositions for ergodicity and evaluations of expectation

The next result is a multivariate version of results in [14] and [8], using Proposition 1.

###### Lemma 4.

Assume (A1)-(A3) hold. Let f be a function in C¹(ℝ^d × Ξ) and assume that f and the components of ∂_x f and ∂_ϑ f are polynomial growth functions uniformly in ϑ. Then the following convergence holds:

 (1/k_n) ∑_{j=0}^{k_n−1} f(X_{jΔ_n}, ϑ) →^P ν₀(f(·,ϑ)) uniformly in ϑ.

### 3.3 Characteristics of local means

The following propositions, lemmas and corollary are multidimensional extensions of those in [7] and [3].

###### Lemma 5.

ζ_{j+1,n} and ζ′_{j+1,n} are G^n_{j+1}-measurable, independent of G^n_j, and Gaussian. These random variables have the following decomposition:

 ζ_{j+1,n} = (1/p_n) ∑_{k=0}^{p_n−1} (k + 1) ∫_{I_{j,k}} dw_t,  ζ′_{j+1,n} = (1/p_n) ∑_{k=0}^{p_n−1} (p_n − 1 − k) ∫_{I_{j,k}} dw_t,

where I_{j,k} := [jΔ_n + kh_n, jΔ_n + (k+1)h_n]. In addition, the following evaluations of conditional expectations hold:

 E[ζ_{j+1,n} | G^n_j] = E[ζ′_{j+1,n} | G^n_j] = 0,
 E[ζ_{j+1,n}(ζ_{j+1,n})^T | G^n_j] = m_nΔ_n I_r,
 E[ζ′_{j+1,n}(ζ′_{j+1,n})^T | G^n_j] = m′_nΔ_n I_r,
 E[ζ_{j+1,n}(ζ′_{j+1,n})^T | G^n_j] = χ_nΔ_n I_r,

where m_n := (p_n+1)(2p_n+1)/(6p_n²), m′_n := (p_n−1)(2p_n−1)/(6p_n²) and χ_n := (p_n²−1)/(6p_n²), so that m_n, m′_n → 1/3 and χ_n → 1/6 as p_n → ∞.

For the proof, see Lemma 8.2 in [3] and extend it to the multidimensional setting.
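As a sanity check on Lemma 5, the constants m_n, m′_n and χ_n can be recovered by direct summation over the triangular weights (k+1)/p_n and (p_n−1−k)/p_n appearing in the decomposition. The sketch below uses exact rational arithmetic; the closed forms in the comments are computed from those weights rather than quoted from the paper.

```python
from fractions import Fraction

def moment_constants(p):
    """Normalised second moments of the weighted Wiener sums in Lemma 5.

    Within one block, zeta carries weight (k+1)/p on the k-th of the p
    disjoint Wiener increments (each of variance h = Delta/p), and zeta'
    carries weight (p-1-k)/p on the same increments; the constants below
    are the resulting covariances divided by Delta.
    """
    w1 = [Fraction(k + 1, p) for k in range(p)]
    w2 = [Fraction(p - 1 - k, p) for k in range(p)]
    m = sum(a * a for a in w1) / p                 # E[zeta zeta^T] / Delta
    mp = sum(a * a for a in w2) / p                # E[zeta' zeta'^T] / Delta
    chi = sum(a * b for a, b in zip(w1, w2)) / p   # E[zeta (zeta')^T] / Delta
    return m, mp, chi

m, mp, chi = moment_constants(1000)
# Closed forms: m = (p+1)(2p+1)/(6p^2), mp = (p-1)(2p-1)/(6p^2),
# chi = (p^2-1)/(6p^2); they tend to 1/3, 1/3 and 1/6 as p grows.
```

These limits are what drive the non-standard convergence rates of the diffusion-parameter estimator in the noisy setting.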

###### Proposition 6.

Under (A1) and (AH), assume the components of the function f on ℝ^d × Ξ, ∂_x f and ∂²_x f are polynomial growth functions uniformly in ϑ ∈ Ξ. Then there exists C > 0 such that for all j, n and ϑ ∈ Ξ,

 ∥E[f(¯Y_j, ϑ) − f(X_{jΔ_n}, ϑ) | H^n_j]∥ ≤ CΔ_n(1 + ∥X_{jΔ_n}∥^C).

Moreover, for all l ≥ 1,

 E[∥f(¯Y_j, ϑ) − f(X_{jΔ_n}, ϑ)∥^l | H^n_j] ≤ C(l)Δ_n^{l/2}(1 + ∥X_{jΔ_n}∥^{C(l)}).

The proof is almost identical to that of Corollary 3.3 in [3] except for the dimension, which does not influence the evaluation.

###### Proposition 7.

Under (A1) and (AH),

 ¯Y_{j+1} − ¯Y_j − Δ_n b(¯Y_j) = a(X_{jΔ_n})(ζ_{j+1,n} + ζ′_{j+2,n}) + e_{j,n} + Λ_⋆^{1/2}(¯ε_{j+1} − ¯ε_j),

where e_{j,n} is a random variable such that for all l ≥ 2 there exists C(l) > 0 satisfying the inequalities

 E[∥e_{j,n}∥^l | H^n_j] ≤ C(l)Δ_n^l(1 + ∥X_{jΔ_n}∥^{3l}).

For the proof, see that of Proposition 3.4 in [3] and extend the discussion to the multidimensional setting.

###### Corollary 8.

Under (A1) and (AH),

 ¯Y_{j+1} − ¯Y_j − Δ_n b(X_{jΔ_n}) = a(X_{jΔ_n})(ζ_{j+1,n} + ζ′_{j+2,n}) + e_{j,n} + Λ_⋆^{1/2}(¯ε_{j+1} − ¯ε_j),

where e_{j,n} is a random variable such that for all l ≥ 2 there exists C(l) > 0 satisfying the inequalities

 E[∥e_{j,n}∥^l | H^n_j] ≤ C(l)Δ_n^l(1 + ∥X_{jΔ_n}∥^{3l}).
###### Proof.

It is enough to show that e′_{j,n} := e_{j,n} + Δ_n(b(¯Y_j) − b(X_{jΔ_n})) satisfies the same evaluations as e_{j,n}. Proposition 6 and Proposition 7 give

 ∥E[Δ_n b(¯Y_j) − Δ_n b(X_{jΔ_n}) | H^n_j]∥ ≤ CΔ_n²(1 + ∥X_{jΔ_n}∥⁵),
 E[∥Δ_n b(¯Y_j) − Δ_n b(X_{jΔ_n})∥^l | H^n_j] ≤ C(l)Δ_n^l(1 + ∥X_{jΔ_n}∥^{3l}).

With respect to the third evaluation, Hölder's inequality verifies the result. ∎

The following lemma summarises some useful evaluations for computation.

###### Lemma 9.

Assume f is a function whose components are in C¹(ℝ^d × Ξ) and such that the components of f and ∂_x f are polynomial growth functions uniformly in ϑ ∈ Ξ. In addition, let g denote a function whose components are in C¹(ℝ^d) and such that the components of g and ∂_x g are polynomial growth functions. Under (A1), (A3), (A4) and (AH), the following uniform evaluations hold:

 (i) ∀ l₁, l₂ ∈ ℕ₀,  sup_{j,n} E[sup_{ϑ∈Ξ} ∥f(¯Y_{j−1}, ϑ)∥^{l₁}(1 + ∥X_{jΔ_n}∥)^{l₂}] ≤ C(l₁, l₂),
 (ii)–(vi) …,
 (vii) ∀ l ∈ ℕ,  sup_{j,n} (E[∥¯ε_j∥^l] / Δ_n^{l/2}) ≤ C(l).
###### Proof.

Simple computations and the results above lead to the proof. ∎

### 3.4 Uniform law of large numbers

The following propositions and theorems are multidimensional versions of results in [3].

###### Proposition 10.

Assume f is a function in C¹(ℝ^d × Ξ) such that f and the components of ∂_x f and ∂_ϑ f are polynomial growth functions uniformly in ϑ. Under (A1)-(A4), (AH),

 ¯M_n(f(·,ϑ)) →^P ν₀(f(·,ϑ)) uniformly in ϑ.

The proof is almost the same as that of Proposition 4.1 in [3].

###### Theorem 11.

Assume f is a function in C²(ℝ^d × Ξ) such that the components of f, ∂_x f, ∂²_x f and ∂_ϑ f are polynomial growth functions uniformly in ϑ. Under (A1)-(A4), (AH),

 ¯D_n(f(·,ϑ)) →^P 0 uniformly in ϑ.
###### Proof.

We define the following random variables:

 V^n_j(ϑ) := f(¯Y_{j−1}, ϑ)(¯Y_{j+1} − ¯Y_j − Δ_n b(¯Y_j)),
 ~D_n(f(·,ϑ)) := (1/(k_nΔ_n)) ∑_{j=1}^{k_n−2} V^n_j(ϑ),

and then

 ¯D_n(f(·,ϑ)) = ~D_n(f(·,ϑ)) + (1/k_n) ∑_{j=1}^{k_n−2} f(¯Y_{j−1}, ϑ)(b(¯Y_j) − b(¯Y_{j−1})).

Hence it is enough to show the uniform convergence in probability of each of the two terms on the right-hand side.

In the first place, we consider the first term on the right-hand side above. We can decompose the sum of V^n_j(ϑ) as follows:

 ∑_{j=1}^{k_n−2} V^n_j(ϑ) = ∑_{1≤3j≤k_n−2} V^n_{3j}(ϑ) + ∑_{1≤3j+1≤k_n−2} V^n_{3j+1}(ϑ) + ∑_{1≤3j+2≤k_n−2} V^n_{3j+2}(ϑ).
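The splitting of the sum over j into the three residue classes mod 3 is a purely index-level identity, used so that within each subsequence the summands are separated by whole blocks and can be handled with martingale-type conditioning. A sketch with arbitrary stand-in summands:

```python
# Splitting a sum over j = 1, ..., k_n - 2 into residue classes mod 3,
# as in the proof of Theorem 11. V_j here is an arbitrary stand-in
# sequence; the identity holds for any summands.
k_n = 20
V = [float(j * j % 7) for j in range(k_n)]

total = sum(V[j] for j in range(1, k_n - 1))
by_residue = sum(
    sum(V[j] for j in range(1, k_n - 1) if j % 3 == r)
    for r in range(3)
)
```

Each index lands in exactly one residue class, so the three partial sums recombine to the original sum.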

To simplify notation, we consider only the first term on the right-hand side; the other terms admit identical evaluations. Let us define the following random variables:

 v^{(1)}_{3j,n}(ϑ) := f(¯Y_{3j−1}, ϑ) a(X_{3jΔ_n})(ζ_{3j+1,n} + ζ′_{3j+2,n}),
 v^{(2)}_{3j,n}(ϑ) := f(¯Y_{3j−1}, ϑ) Λ_⋆^{1/2}(¯ε_{3j+1} − ¯ε_{3j}),
 v^{(3)}_{3j,n}(ϑ) := f(¯Y_{3j−1}, ϑ) e_{3j,n},

and recall Proposition 7, which states

 ¯Y_{j+1} − ¯Y_j − Δ_n b(¯Y_j) = a(X_{jΔ_n})(ζ_{j+1,n} + ζ′_{j+2,n}) + e_{j,n} + Λ_⋆^{1/2}(¯ε_{j+1} − ¯ε_j).

Therefore we have

 V^n_{3j}(ϑ) = v^{(1)}_{3j,n}(ϑ) + v^{(2)}_{3j,n}(ϑ) + v^{(3)}_{3j,n}(ϑ).

First of all, we show the pointwise convergence to 0 for every fixed ϑ, and we abbreviate V^n_{3j}(ϑ) as V^n_{3j}. Since V^n_{3j} is H^n_{3j+2}-measurable and hence H^n_{3(j+1)}-measurable, the sequence of random variables {V^n_{3j}}_j is adapted with respect to {H^n_{3(j+1)}}_j, and hence it is enough to see

 (1/(k_nΔ_n)) ∑_{1≤3j≤k_n−2} E[V^n_{3j} | H^n_{3(j−1)+3}]