The Ornstein-Uhlenbeck process is described by the following Langevin equation:
$$dX_t = -\theta X_t\,dt + \sigma\,dB_t^H, \qquad t \ge 0,$$
where $\theta > 0$ so that the process is ergodic, and where for simplicity of presentation we assume $X_0 = 0$. Other initial values can be treated in exactly the same way. We assume that the process is observed at discrete time instants $t_k = kh$, $k = 1, 2, \dots, N$, and we want to use the observations to estimate the parameters $\theta$, $\sigma$, and $H$ appearing in the above Langevin equation simultaneously.
Before we continue, let us briefly recall some relevant recent works from the literature. Most of these works concern estimation of the drift parameter $\theta$. When the Ornstein-Uhlenbeck process can be observed continuously and when the parameters $\sigma$ and $H$ are assumed to be known, we have the following works.
The maximum likelihood estimator for $\theta$ is studied in  (see also the references therein for earlier work) and is proved to be strongly consistent; the asymptotic behavior of the bias and the mean square error of this estimator is also given there. In that paper, a strongly consistent estimator of $\sigma$ is also proposed.
In practice, the process can usually only be observed at discrete times $t_k = kh$ for some fixed observation time lag $h > 0$. In this very interesting case there are very limited works; let us mention only two ([8, 11]). To the best of our knowledge there is only one work that estimates all the parameters $\theta$, $\sigma$, and $H$ at the same time, and there the observations are assumed to be made continuously.
The diffusion coefficient $\sigma$ represents the “volatility”, and it is commonly believed that it should be computed (hence estimated) by the variations (see  and the references therein). To use the variations one has to assume that the process can be observed continuously (or that we have high-frequency data). Namely, it is a common belief that $\sigma$ can only be estimated when one has high-frequency data.
In this work, we assume that the process can only be observed at discrete times $t_k = kh$, $k = 1, \dots, N$, for some fixed observation time lag $h > 0$ (without the requirement that $h \to 0$). We want to estimate $\theta$, $\sigma$, and $H$ simultaneously. The idea we use is the ergodic theorem: we find the explicit form of the limit of certain empirical moments of the observations and use it to estimate our parameters. One may naturally think that by appropriately choosing three different such moments, we would obtain three different equations from which all three parameters $\theta$, $\sigma$, and $H$ can be found.

However, this is impossible: whatever moments of this type we choose, we cannot obtain three independent equations. Motivated by a recent work, we may try to add the limit of an empirical moment involving pairs of observations to identify all the parameters. However, this is still impossible, because regardless of how we make these choices we obtain only two independent equations. This is because the limits depend only on the covariance of the limiting Gaussians (see below). Finally, we propose to use the following quantities to estimate all three parameters $\theta$, $\sigma$, and $H$:
We shall study the strong consistency and the joint limiting law of our estimators.
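The ergodic-averaging idea just described can be sketched numerically in the simplest classical setting $H = 1/2$ (ordinary Brownian noise), where the discretely observed process is an exact AR(1) recursion. All numerical values below are illustrative choices, not the paper's examples.

```python
import numpy as np

# Illustrative sketch for H = 1/2 only: the discretely observed OU process
# satisfies the exact recursion X_{(k+1)h} = e^{-theta h} X_{kh} + noise.
rng = np.random.default_rng(0)
theta, sigma, h, N = 1.0, 1.0, 0.1, 200_000   # illustrative parameter values

phi = np.exp(-theta * h)
noise_sd = sigma * np.sqrt((1.0 - phi**2) / (2.0 * theta))
X = np.empty(N)
X[0] = 0.0
for k in range(N - 1):
    X[k + 1] = phi * X[k] + noise_sd * rng.standard_normal()

# By ergodicity, the empirical second moment converges to the stationary
# variance sigma^2 / (2 theta); matching empirical moments against their
# explicit limits is what produces the estimating equations.
empirical = np.mean(X**2)
limit = sigma**2 / (2.0 * theta)
print(empirical, limit)
```

For $H \ne 1/2$ the increments of the driving noise are correlated, so the path cannot be generated by an independent-noise recursion, but the moment-matching principle is the same.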
The paper is organized as follows. In Section 2, we recall some known results. The construction and the strong consistency of the estimators are provided in Section 3. Central limit theorems are obtained in Section 4. To make the paper more readable, we defer some proofs to Appendix A. To use our estimators we need a certain Jacobian determinant to be nondegenerate; this is addressed in Appendix B. Some numerical simulations validating our estimators are presented in Appendix C.
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space. The expectation on this space is denoted by $E$. The fractional Brownian motion $(B_t^H)_{t \ge 0}$ with Hurst parameter $H \in (0, 1)$ is a zero-mean Gaussian process with the following covariance structure:
$$E\big[B_t^H B_s^H\big] = \frac{1}{2}\left(t^{2H} + s^{2H} - |t - s|^{2H}\right).$$
For the stochastic analysis of this fractional Brownian motion, such as stochastic integrals, chaos expansions, and stochastic differential equations, we refer to .
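The covariance structure recalled above is all one needs to sample the process on a finite grid; a minimal sketch using a Cholesky factorization follows, where the Hurst index and the grid are illustrative choices.

```python
import numpy as np

# Sample fractional Brownian motion on a time grid directly from its
# covariance E[B_t B_s] = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2.
H = 0.7                                  # illustrative Hurst parameter
t = np.arange(1, 51) * 0.1               # grid t_k = k/10, k = 1..50 (t > 0)

def fbm_cov(t, s, H):
    """Covariance E[B_t^H B_s^H] of fractional Brownian motion."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - np.abs(t - s)**(2 * H))

C = fbm_cov(t[:, None], t[None, :], H)   # covariance matrix on the grid
L = np.linalg.cholesky(C)                # C is positive definite for t > 0
rng = np.random.default_rng(1)
B = L @ rng.standard_normal(len(t))      # one fBm sample path on the grid
```

For long grids a Cholesky factorization becomes expensive; circulant-embedding methods are the standard faster alternative, but the direct approach above is the most transparent.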
For any $s, t \ge 0$, we define
$$\langle \mathbf{1}_{[0,t]}, \mathbf{1}_{[0,s]} \rangle_{\mathfrak{H}} := E\big[B_t^H B_s^H\big].$$
We can first extend this scalar product to general elementary functions by (bi-)linearity and then to general functions by a limiting argument. We thereby obtain the reproducing kernel Hilbert space, denoted by $\mathfrak{H}$, associated with this Gaussian process (see e.g.  for more details).
Let $\mathcal{S}$ be the space of smooth and cylindrical random variables of the form
$$F = f\big(B_{t_1}^H, \dots, B_{t_n}^H\big),$$
where $f \in C_b^\infty(\mathbb{R}^n)$ and $t_1, \dots, t_n \ge 0$. For such a variable $F$, we define its Malliavin derivative as the $\mathfrak{H}$-valued random element
$$DF = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\big(B_{t_1}^H, \dots, B_{t_n}^H\big)\, \mathbf{1}_{[0, t_i]}.$$
Let $\{F_n\}$ be a sequence of random variables in the space of the $q$-th Wiener chaos, $q \ge 2$, such that $\lim_{n \to \infty} E[F_n^2] = \sigma^2$. Then the following statements are equivalent:
$F_n$ converges in law to $N(0, \sigma^2)$ as $n$ tends to infinity.
$\|DF_n\|_{\mathfrak{H}}^2$ converges in $L^2$ to a constant as $n$ tends to infinity.
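This fourth-moment phenomenon can be illustrated with the simplest second-chaos sequence (a textbook example, not the sequence studied in this paper): $F_n = n^{-1/2} \sum_{i \le n} (\xi_i^2 - 1)$ with i.i.d. standard normal $\xi_i$, for which $E[F_n^2] = 2$ and the exact fourth moment can be computed combinatorially.

```python
# For Y = xi^2 - 1 with xi standard normal: E[Y^2] = 2 and E[Y^4] = 60.
# Expanding E[F_n^4] over index patterns (n terms with all four indices
# equal, 3 n (n-1) terms pairing two distinct indices) gives the exact value
#     E[F_n^4] = 60/n + 12 (n-1)/n  ->  12 = 3 (E[F_n^2])^2,
# i.e. the fourth cumulant vanishes, consistent with F_n -> N(0, 2).

def fourth_moment(n):
    """Exact E[F_n^4] for F_n = n^{-1/2} * sum_{i<=n} (xi_i^2 - 1)."""
    return 60.0 / n + 12.0 * (n - 1) / n

for n in (10, 100, 10_000):
    print(n, fourth_moment(n) - 3 * 2.0**2)   # fourth-cumulant gap, = 48/n
```

The gap $E[F_n^4] - 3(E[F_n^2])^2 = 48/n$ shrinks to zero, which is exactly the convergence the fourth-moment criterion detects.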
3. Estimators of $\theta$, $\sigma$ and $H$
If $X_0 = 0$, then the solution to (1.1) can be expressed as
$$X_t = \sigma \int_0^t e^{-\theta(t - s)}\, dB_s^H.$$
The associated stationary solution, the solution of (1.1) with the initial value
$$\widetilde{X}_0 = \sigma \int_{-\infty}^0 e^{\theta s}\, dB_s^H,$$
can be expressed as
$$\widetilde{X}_t = \sigma \int_{-\infty}^t e^{-\theta(t - s)}\, dB_s^H,$$
and $\widetilde{X}_t$ has the same distribution as the limiting normal distribution of $X_t$ (when $t \to \infty$). Let us consider the following two quantities:
Now we want to have a similar result for the discrete observations. First, let us study the ergodicity of the process $(\widetilde{X}_t)_{t \ge 0}$. According to , a centered wide-sense stationary Gaussian process is ergodic if its covariance function tends to $0$ as the time lag tends to infinity. We shall apply this result to $(\widetilde{X}_{kh})_{k \in \mathbb{N}}$. Obviously, it is a centered stationary Gaussian process, and
In [5, Theorem 2.3], it is proved that the covariance of the stationary solution decays at a power rate as the time lag goes to infinity. Thus, it is easy to see that the covariance tends to $0$. Hence, the process $(\widetilde{X}_{kh})_{k \in \mathbb{N}}$ is ergodic. This implies
This, combined with (3.9), yields the following lemma.
Let the three quantities in question be defined by (3.8). Then as $N \to \infty$ we have
From the above result we propose the following construction of the estimators of the parameters $\theta$, $\sigma$, and $H$.
First let us define
and let .
Then we set
This is a system of three equations for the three unknowns $(\theta, \sigma, H)$, solvable whenever the determinant of the Jacobian of the system is nondegenerate. Alternatively, we can write the solution as
where the relevant map is the inverse function of the system (if it exists) and
We shall use the solution of this system to estimate the parameters $(\theta, \sigma, H)$. We call it the ergodic (or generalized moment) estimator.
It seems hard to obtain an explicit solution of the system of equations (3.14). However, it is a classical system of algebraic equations, and there are numerous numerical approaches to finding an approximate solution. We give some numerical validation of our estimators in Appendix C.
Since the map in question is continuous, its inverse function is also continuous if it exists. Thus we have the following strong consistency result, which is an immediate consequence of Theorem 3.1.
Assume that (3.14) has a unique solution. Then the estimators converge almost surely to the true parameter values, respectively, as $N$ tends to infinity.
4. Central limit theorem
In this section, we are concerned with the central limit theorem associated with our ergodic estimator. We shall prove that
converges in law to a mean-zero normal vector.
Let us first consider the random vector defined by
Our first goal is to show that it converges in law to a multivariate normal distribution, using Proposition 2.1. So we consider a linear combination:
and show that it converges to a normal distribution.
We will use the following Feynman diagram formula (see , where interested readers can find a proof).
Let $X_1, \dots, X_{2n}$ be jointly Gaussian random variables with mean zero. Then
$$E[X_1 X_2 \cdots X_{2n}] = \sum_{\pi} \prod_{\{i, j\} \in \pi} E[X_i X_j],$$
where the sum runs over all partitions $\pi$ of $\{1, \dots, 2n\}$ into $n$ disjoint pairs.
An immediate consequence of this result is
Let $X_1, X_2, X_3, X_4$ be jointly Gaussian random variables with mean zero. Then
$$\mathrm{Cov}(X_1 X_2,\, X_3 X_4) = E[X_1 X_3]\,E[X_2 X_4] + E[X_1 X_4]\,E[X_2 X_3].$$
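The pairing formula can be implemented and checked mechanically; here is a small sketch, where the covariance matrix is an arbitrary illustrative choice.

```python
def pairings(idx):
    """Yield all partitions of the tuple idx into unordered pairs."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for k, second in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for tail in pairings(remaining):
            yield [(first, second)] + tail

def wick_moment(cov, idx):
    """E[prod of X_i, i in idx] for mean-zero jointly Gaussian X via Wick's formula."""
    if len(idx) % 2 == 1:
        return 0.0           # odd moments of a mean-zero Gaussian vanish
    total = 0.0
    for pairing in pairings(tuple(idx)):
        prod = 1.0
        for i, j in pairing:
            prod *= cov[i][j]
        total += prod
    return total

# For four variables the formula reduces to the familiar three-pairing identity
#   E[X1 X2 X3 X4] = c12 c34 + c13 c24 + c14 c23.
cov = [[2.0, 0.5, 0.3, 0.1],
       [0.5, 1.0, 0.4, 0.2],
       [0.3, 0.4, 1.5, 0.6],
       [0.1, 0.2, 0.6, 1.0]]
lhs = wick_moment(cov, [0, 1, 2, 3])
rhs = cov[0][1] * cov[2][3] + cov[0][2] * cov[1][3] + cov[0][3] * cov[1][2]
print(lhs, rhs)
```

The number of pairings of $2n$ variables is $(2n-1)!! = 3, 15, 105, \dots$, which is exactly what the recursive generator enumerates.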
It is easy to see from the following proof that all entries of the covariance matrix are finite.
It is clear that we can use to replace .
We may use (3.10) and the formulas in Appendix A to compute these limits explicitly.
Proof We write
where is a symmetric matrix given by
It is easy to observe that
the limits of , , and are the same;
the limits of , and are the same;
the limit of can be obtained from the limit of by replacing by ;
the matrix is symmetric.
Thus, we only need to compute the limits of and .
By Lemma A.2, we see that
This proves (4.24).
By Lemma A.3, we have
Since the linear combination converges to a normal random variable for any choice of the coefficients, we know by the Cramér-Wold theorem that the vector converges to a mean-zero Gaussian random vector, proving the theorem.
Now, using the delta method and Theorem 4.3 above, we immediately obtain the following theorem.
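The delta-method step can be sketched numerically: if $\sqrt{N}(\hat m - m) \to N(0, \Sigma)$ and the parameters are $g(m)$ for a differentiable map $g$, then $\sqrt{N}(g(\hat m) - g(m)) \to N(0, J \Sigma J^\top)$ with $J$ the Jacobian of $g$ at $m$. The map $g$ and covariance $\Sigma$ below are illustrative stand-ins, not the paper's explicit expressions.

```python
import numpy as np

def g(m):
    """Hypothetical smooth map from limiting moments to parameters."""
    return np.array([m[0]**2, m[0] * m[1]])

m = np.array([1.0, 2.0])            # limiting moments (illustrative)
Sigma = np.array([[0.5, 0.1],       # asymptotic covariance of the moments
                  [0.1, 0.3]])

# Jacobian of g at m, computed analytically for this toy map
J = np.array([[2 * m[0], 0.0],
              [m[1],     m[0]]])

asym_cov = J @ Sigma @ J.T          # asymptotic covariance of g(m_hat)
print(asym_cov)
```

In the paper's setting the same sandwich $J \Sigma J^\top$ is what transfers the central limit theorem for the empirical moments to the estimators themselves.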
-  (2008) Stochastic calculus for fractional Brownian motion and applications. Probability and its Applications (New York), Springer-Verlag London, Ltd., London.
-  (2013) Parameter estimation for the discretely observed fractional Ornstein-Uhlenbeck process and the YUIMA R package. Computational Statistics 28 (4), pp. 1529–1547.
-  (2017) Parameter estimation of complex fractional Ornstein-Uhlenbeck processes with fractional noise. ALEA Lat. Am. J. Probab. Math. Stat. 14 (1), pp. 613–629.
-  (in press) Generalized moment estimation for Ornstein-Uhlenbeck processes driven by α-stable Lévy motions from discrete time observations. Statistical Inference for Stochastic Processes.
-  (2003) Fractional Ornstein-Uhlenbeck processes. Electronic Journal of Probability 8.
-  (2019) Parameter estimation for fractional Ornstein-Uhlenbeck processes of general Hurst parameter. Stat. Inference Stoch. Process. 22 (1), pp. 111–142.
-  (2010) Parameter estimation for fractional Ornstein-Uhlenbeck processes. Statistics & Probability Letters 80 (11–12), pp. 1030–1038.
-  (2013) Parameter estimation for fractional Ornstein-Uhlenbeck processes with discrete observations. In Malliavin Calculus and Stochastic Analysis, pp. 427–442.
-  (2017) Analysis on Gaussian spaces. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ.
-  (2011) Ergodic properties of anomalous diffusion processes. Annals of Physics 326 (9), pp. 2431–2443.
-  (2019) A general drift estimation procedure for stochastic differential equations with additive fractional noise. arXiv preprint arXiv:1903.10769.
-  (2007) Statistical aspects of the fractional stochastic calculus. The Annals of Statistics 35 (3), pp. 1183–1212.
Appendix A Detailed computations
First, we need a lemma from [7, supplementary data, Lemma 5.4, Equation (5.7)].
Let $X_t$ be the Ornstein-Uhlenbeck process defined by (1.1). Then
The above inequality also holds true for .
Let be defined by (1.1). When we have
Proof To simplify notation we shall use , to represent , etc. From the relation (3.7) it is easy to see that
where , , denote the above -th term.
Let us first consider for . First, we consider . By [5, Theorem 2.3], we know that converges to when . Thus by the Toeplitz theorem, we have
Exactly in the same way we have
When , we easily have
Now we have
When one of the or is not equal to , we have by the Hölder inequality
which will go to zero if we can show that the latter quantity is bounded. In fact, we have