LAMN in a class of parametric models for null recurrent diffusion

11/06/2017
by Reinhard Höpfner, et al.
University of Mainz

We study statistical models for one-dimensional diffusions which are null recurrent. A first parameter in the drift is the principal one, and determines regularly varying rates of convergence for the score and the information process. A finite number of other parameters, of secondary importance, introduces additional flexibility for modelling the drift, and does not perturb the null recurrent behaviour. Under time-continuous observation we obtain local asymptotic mixed normality (LAMN), state a local asymptotic minimax bound, and specify asymptotically optimal estimators.

1 The setting

We discuss the probabilistic background for the diffusion (5) and specify the assumptions which will be in force throughout the paper. Assumptions and results are stated in a first subsection, proofs and additional remarks in a second one.

1.1 Assumptions, null recurrence, likelihoods, some estimators

Throughout this paper, we consider the model (5), with and the associated parameter given by

(7)

The functions in equation (5) satisfy the following.

Assumption 1: The functions in (5) are Lipschitz continuous and such that
i) finite limits do exist for as , and are denoted by ;
ii) for arbitrarily small, ;
iii) the functions occurring in (5) are linearly independent in the following sense:
for open in and real constants , any representation on implies .

Note that the functions or considered in example 1 above do satisfy all requirements of assumption 1: iii) holds in both cases or , ii) is obvious, and elementary arguments (such as for all from [He 93] Ch. 87) allow us to check i). We put

(8)

Lemma 1: Under assumption 1, the diffusion (5) is recurrent in the sense of Harris for every . With notation (8), the invariant measure of , unique up to constant multiples, is given by

on . Here , thus null recurrence holds for every .

We stress that the Lebesgue density of the invariant measure varies regularly as and as . The index of regular variation , the same on the left and on the right branch, depends on only and ranges over the interval . There is no regular variation for the functions in the drift, cf. example (6), whereas they contribute to the asymptotic constants by virtue of assumption 1 i). We have chosen as the maximal open interval on which null recurrence holds.
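
For orientation, here is a sketch of the kind of one-dimensional prototype behind this picture; the concrete drift chosen below is an illustrative assumption in the spirit of the model (1) of [HK 03], not a restatement of (5):
\[
dX_t \;=\; \vartheta_1\,\frac{X_t}{1+X_t^2}\,dt \;+\; dW_t ,
\qquad \vartheta_1\in\bigl(-\tfrac12,\tfrac12\bigr).
\]
Here the invariant measure has Lebesgue density proportional to $(1+x^2)^{\vartheta_1}$, which varies regularly at $\pm\infty$ with index $2\vartheta_1\in(-1,1)$ and has infinite total mass, so the diffusion is null recurrent for every such $\vartheta_1$.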

Throughout this paper, the starting point for the diffusion (5) does not depend on and will be fixed. From now on, these assumptions and notations remain in force (we will not recall this explicitly in the results below).

For the theoretical background of what follows we refer to the classical books Liptser and Shiryaev [LS 78] and Jacod and Shiryaev [JS 87]; see also Ibragimov and Khasminskii [IH 81], Kutoyants [Ku 04], or section 6.2 in Höpfner [Ho 14]. Let denote the law of under on the canonical path space for càdlàg processes ([JS 87] Chapter VI), with canonical filtration

Write for the canonical process, i.e. the process of coordinate projections , , and for the -martingale part of . We introduce a -dimensional -martingale :

(9)

where denotes the -martingale part of . will be called the score martingale. The angle bracket of under is the process

(10)

(with ranging over ) which is observable; will be called the information process.

Lemma 2: For all pairs in , the laws are locally equivalent relative to . The log-likelihood ratio process of with respect to relative to is given by

(11)

where is the scalar product in . The information , , takes values in the set of all strictly positive definite symmetric matrices, -almost surely for every .
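
To make the shape of (11) explicit in a simple special case, consider in generic notation (our symbols, under the simplifying assumption of unit diffusion coefficient; an illustration, not the notation of (5)) a diffusion $dX_t=b(\vartheta,X_t)\,dt+dW_t$ observed continuously on $[0,t]$. Local equivalence then gives
\[
\log\frac{dQ_\zeta}{dQ_\vartheta}\Big|_{\mathcal F_t}
\;=\;\int_0^t\bigl(b(\zeta,X_s)-b(\vartheta,X_s)\bigr)\,dX_s
\;-\;\tfrac12\int_0^t\bigl(b^2(\zeta,X_s)-b^2(\vartheta,X_s)\bigr)\,ds ,
\]
and rewriting $dX_s=b(\vartheta,X_s)\,ds+dW^\vartheta_s$ under $Q_\vartheta$ turns this into
$\int_0^t(b(\zeta,X_s)-b(\vartheta,X_s))\,dW^\vartheta_s-\tfrac12\int_0^t(b(\zeta,X_s)-b(\vartheta,X_s))^2\,ds$; when the drift is linear in the parameter, this is exactly of the score-martingale-and-information form of (11).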

There are many possibilities to define estimators for the unknown parameter based on time-continuous observation of the diffusion path (5) on .

Proposition 1: We have well-defined maximum likelihood (ML) estimators, of the form

where , denotes common determinations

for the family of stochastic integrals under
for the family of stochastic integrals under

valid jointly under all . For every , the ML estimation error is such that

(12)
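
For orientation, a sketch under the assumption (suggested by the quadratic structure of the log-likelihood (11)) that the drift of (5) depends linearly on the parameter: writing $M_t(\vartheta)$ for the score martingale (9) and $I_t(\vartheta)$ for the information (10) (these labels are ours), maximization of (11) over the parameter yields the weighted-least-squares representation
\[
\widehat\vartheta_t-\vartheta \;=\; I_t^{-1}(\vartheta)\,M_t(\vartheta)
\qquad Q_\vartheta\text{-almost surely},
\]
on the event where $I_t(\vartheta)$ is invertible, which by lemma 2 has full probability.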

One might prefer to restrict observation of a null recurrent process to some fixed (and sufficiently large) compact set in , and thus consider estimators of the following type.

Proposition 2: Fix compact in such that the starting point of the diffusion is an interior point of . Define , , in analogy to (9), (10) and above, but with , replaced by , . Then for takes values in , -almost surely for every , and the estimator

admits a representation

(13)

for every . Under , this estimator replaces the log-likelihood surface considered in (11) by a modified (again ’inverse bowl-shaped’) surface


For the principal parameter we might think of an estimator which is optimal in the one-dimensional model (1) considered in section 2 of [HK 03].

Remark 1: Write for the -entry of in (10), for the first component of in (9), and for the first component of defined above. Consider

as an estimator for . In our model (5), this estimator is inconsistent: it admits a representation

under , with additional terms which by the ratio limit theorem

converge -almost surely as .

1.2 The proofs

Proof of lemma 1: The proof uses classical arguments, see e.g. Gihman and Skorohod [GS 72] or Khasminskii [Kh 80]; a summary is given in sections 9.2–9.4 of [Ho 14]. Fix , write for short for the drift in equation (5) under

and define functions , on by

(14)

(7) and (8) together yield

(15)

where , and we deduce from assumption 1 that

(16)

is a bijection onto . As a consequence, the diffusion (5) is recurrent under with invariant measure

(17)

by proposition 9.12 a) in [Ho 14].  
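
For the reader's convenience we recall, in generic notation (our symbols; a sketch of the classical facts behind (14)–(17)), the scale-and-speed criterion: with drift $b_\vartheta$ and diffusion coefficient $\sigma$ put
\[
s_\vartheta(x)\;=\;\exp\Bigl(-2\int_0^x\frac{b_\vartheta(y)}{\sigma^2(y)}\,dy\Bigr),
\qquad
S_\vartheta(x)\;=\;\int_0^x s_\vartheta(y)\,dy .
\]
The diffusion is recurrent if and only if $S_\vartheta(x)\to\pm\infty$ as $x\to\pm\infty$, and in this case
\[
m_\vartheta(dx)\;=\;\frac{dx}{\sigma^2(x)\,s_\vartheta(x)}
\]
is, up to constant multiples, the invariant measure; null recurrence means $m_\vartheta(\mathbb R)=\infty$.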

Remark 2: in (7) is the maximal open parameter interval on which the diffusion (5) is null recurrent:
The explicit representation (15) of , valid with , shows that is integrable whenever (i.e. ): then in (14) converges to finite limits as , thus is transient by [GS 72], lemma 3 on p. 117.
In the limiting case () to transience, in (14), of logarithmic growth as , is a bijection onto . Thus all cases () lead to recurrence of with invariant measure , cf. proposition 9.12 a) in [Ho 14]. Ergodicity (positive recurrence) holds if the invariant measure is a finite measure, i.e. for ().

Proof of lemma 2: See [LS 78] and [JS 87] for local absolute continuity and the structure of likelihood ratio processes; a short summary is given in 6.9–6.12 of [Ho 14], whence the representation (11) for log-likelihood ratio processes. Assumption 1 implies that all are bounded functions, thus the local -martingales defined by (9) are square-integrable martingales with angle bracket (10) under . It remains to prove that -almost surely, the process takes values in , the set of strictly positive definite symmetric matrices in . For with ,

(18)

where we write for short

for the row vector with entries

. Now -almost surely, the range of the diffusion path (5) has non-empty interior; on this open set (random, since it is determined by the path), the functions are linearly independent by assumption 1. Hence -almost surely, the right-hand side in (18) is strictly positive, first pointwise in and , then uniformly on by continuity, and we let .
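
In generic notation (our symbols; a sketch of the quadratic form behind (18)): if the components of the score martingale are of the form $M^{(i)}_t=\int_0^t f_i(X_s)\,dW_s$, then for every $u\in\mathbb R^d$
\[
u^\top\langle M\rangle_t\,u\;=\;\int_0^t\Bigl(\sum_{i=1}^d u_i\,f_i(X_s)\Bigr)^{2}ds ,
\]
which can vanish only if the linear combination $\sum_i u_i f_i$ vanishes along the path; linear independence of the $f_i$ on open sets thus forces $u=0$, i.e. strict positive definiteness.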

Proof of proposition 1: Common determinations of the stochastic integrals jointly for all

do exist since probability measures

, , are locally equivalent relative to , see lemma 8.2’ in [Ho 14]. Now we exploit almost sure invertibility of the information at time combined with ’inverse bowl shape’ of the log-likelihoods.  

Proof of proposition 2: By assumption, the starting point is an interior point of . Thus for , almost surely contains open balls, and (18) remains true with in place of .  

Proof for remark 1: corresponds to the maximum of the one-dimensional surface

The representation of estimation errors under follows from (5), (9) and the definition of in proposition 1. By Harris recurrence, the ratio limit theorem holds: for with ,

(19)

This is valid for all and for an arbitrary choice of starting point for the process (5).
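
For completeness, the ratio limit theorem used here reads, in generic notation (our symbols; see e.g. [Ho 14] for a precise statement): for a Harris recurrent process $X$ with invariant measure $\mu$ and measurable $f,g\in L^1(\mu)$ with $g\ge 0$ and $\mu(g)>0$,
\[
\frac{\int_0^t f(X_s)\,ds}{\int_0^t g(X_s)\,ds}
\;\longrightarrow\;\frac{\mu(f)}{\mu(g)}
\qquad\text{almost surely as } t\to\infty ,
\]
for every choice of starting point.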

2 Convergence

We formulate a theorem on convergence of additive functionals and martingale additive functionals in the null recurrent diffusion (5). It combines theorem 3.1 from Höpfner and Löcherbach [HL 03] with results due to Khasminskii ([Kh 80], or theorem 2.2 in [KY 00] and theorem 1.1 in [Kh 01]). The approach is analogous to [HK 03] or to examples 3.5 and 3.10 in [HL 03].

2.1 Convergence of martingales together with their angle bracket

Introducing further notation for , we define

(20)

(cf. (8), assumption 1 and lemma 1) and introduce a -valued index

(21)

together with weights

(22)

Note that (21) and (22) depend on whereas (20) depends on . We shall write for measurable functions whose components , , are such that

(23)

Functions which satisfy (23) do belong to for every , by lemma 1, and we can define

(24)

Theorem 1: Let with components , , satisfy (23). With notations (20)–(22) introduce sequences of norming constants by

(25)

Then for every , we have weak convergence in the Skorohod space of

(26)

as to

(27)

with notation (24). Here is a Mittag-Leffler process of index , the process inverse (i.e. the process of level crossing times) of a stable increasing process with index , and a -dimensional Brownian motion which is independent of ; thus is Brownian motion subject to independent time change .
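
At a fixed time the limit in (27) is mixed normal: Brownian motion evaluated at an independent random time is, conditionally on that time, centred Gaussian. In generic notation (our symbols), with $B$ a $d$-dimensional standard Brownian motion independent of the Mittag-Leffler process $V$,
\[
\mathbb E\bigl[\exp\bigl(i\,u^\top B_{V_t}\bigr)\bigr]
\;=\;\mathbb E\bigl[\exp\bigl(-\tfrac12\,|u|^2\,V_t\bigr)\bigr],
\qquad u\in\mathbb R^d ,
\]
i.e. given $V_t$, the variable $B_{V_t}$ is $\mathcal N(0,V_t\,\mathrm{Id}_d)$; this is the structure which produces LAMN in section 3.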

The one-dimensional case of theorem 1 corresponds to theorem A in [HK 03] (note that the invariant measure there has a factor with respect to our in lemma 1).

Remark 3: The one-sided stable process with index has stationary independent increments with Laplace transform

Its process inverse has continuous nondecreasing paths with and , hence , are continuous processes. As a consequence ([JS 87], VI.2.3) of the continuous mapping theorem, theorem 1 implies weak convergence in of (26) evaluated at time to (27) evaluated at time .
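
Explicitly, in the usual normalization (our convention; the norming constant used in the paper may differ), the one-sided stable process $S=(S_t)_{t\ge 0}$ of index $\alpha\in(0,1)$ and its inverse $V$ are characterized by
\[
\mathbb E\bigl[e^{-\lambda S_t}\bigr]\;=\;e^{-t\lambda^{\alpha}},\qquad\lambda\ge 0,
\qquad\qquad
V_t\;:=\;\inf\{\,s\ge 0: S_s>t\,\} ,
\]
and the self-similarity $S_{ct}\overset{d}{=}c^{1/\alpha}S_t$ translates into $V_{ct}\overset{d}{=}c^{\alpha}V_t$ for the Mittag-Leffler process.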

Remark 4: is concentrated on . Whereas

admits finite moments of arbitrary order

, there is no finite moment of order : we have

(28)

which will be of importance below.
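
For the Mittag-Leffler variable of index $\alpha$ (in the normalization of the preceding insert; we write $V_1$ for it, a label of ours) the classical moment formula reads
\[
\mathbb E\bigl[V_1^{\,k}\bigr]\;=\;\frac{k!}{\Gamma(1+k\alpha)},\qquad k\in\mathbb N ,
\]
so all positive integer moments are finite, whereas negative moments of order $\ge 1$ are infinite; compare the proof of remark 4 below.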

2.2 Proof of theorem 1

We formulate a lemma which corresponds to Khasminskii [Kh 80] (see theorem 2.2 in [KY 00] and theorem 1.1 in [Kh 01], or proposition 9.14 in [Ho 14]).

Lemma 3: The path of can be decomposed into iid life cycles , , defined by

(up to an initial segment ), with the function inverse of in (16) which depends on . For this decomposition the following holds:

i) for nonnegative and measurable,

ii) as , with from (21) and from (20):

Proof: 1) The function in (14) is harmonic for the Markov generator of the diffusion . In a first step we consider transformed by : is a diffusion without drift

(with constant in (5), and is the function inverse of ) whose invariant measure is given by

(29)

([GS 72], [Kh 80], or proof of proposition 9.12 in [Ho 14]). Calculating from (15) and (16) the function and suppressing the dependence on we get

with notation from (8) and (20), thus with notation (21)

This is the Lebesgue density of the invariant measure in (29) for the process .
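
In generic notation (our symbols; a sketch of the classical scale transform used in this step): if $S_\vartheta$ denotes the scale function and $Y_t:=S_\vartheta(X_t)$, then Itô's formula together with $\tfrac12\sigma^2 S_\vartheta''+b_\vartheta S_\vartheta'=0$ removes the drift,
\[
dY_t\;=\;\bigl(S_\vartheta'\,\sigma\bigr)\!\circ\!S_\vartheta^{-1}(Y_t)\;dW_t ,
\]
and the invariant measure of the driftless diffusion $Y$ has Lebesgue density $y\mapsto\bigl[(S_\vartheta'\,\sigma)\circ S_\vartheta^{-1}(y)\bigr]^{-2}$, up to constant multiples.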

2) We comment on the invariant density for . Multiplying the asymptotic constant by we define

(30)

and have

(31)

This is the norming used by Khasminskii for the invariant measure. By (8) and (21), the quotient

takes values in and is such that

3) Now we apply the results of [Kh 80] quoted at the beginning of section 2.2. The stopping times

define iid life cycles in the path of for , with the following two properties: first,

(32)

for measurable functions and as in (29); second,

(33)

4) The iid life cycles for in step 3) are iid life cycles for

for which the asymptotics (33) remain unchanged, and a change of variables transforms (32) into

Using step 2), the asymptotic constant on the right hand side of (33) equals

and lemma 3 is proved by step 3); recall that both and depend on .  

Proof of theorem 1: 1) In a first step, consider convergence of martingales in case .
Fix and apply theorem 3.1 c) of [HL 03] to one-dimensional measurable functions which belong to : to the sequence of renewal times considered in lemma 3 correspond norming sequences

such that

converges weakly as in the Skorohod space of one-dimensional càdlàg functions to

where is given by

(34)

here we use (32) together with . Taking into account the factor arising on the right-hand side of (34), we arrive at the norming sequence of theorem 1, defined by (25) and (22), and obtain weak convergence under

(35)

with as asserted in theorem 1. This proves convergence of martingales in case .

2) The result proved in step 1) can be extended to case and to weak convergence of martingales together with their angle bracket: apply corollaries 3.2 and 3.3 in [HL 03]. 

Proof for remark 3: We refer to 2.5–2.8 in [HL 03] and the references quoted there.

Proof for remark 4: It is well known that is finite for and infinite for (e.g. [Ho 14], 6.18’). Since is the process inverse to , the relation

holds and proves that equals . 
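
The inversion used here can be made explicit in generic notation (our symbols, with the normalization of the insert after remark 3): since the stable process $S$ is strictly increasing with inverse $V$, self-similarity gives
\[
P\bigl(V_1>t\bigr)\;=\;P\bigl(S_t\le 1\bigr)\;=\;P\bigl(t^{1/\alpha}S_1\le 1\bigr)\;=\;P\bigl(S_1^{-\alpha}\ge t\bigr),\qquad t>0,
\]
hence $V_1\overset{d}{=}S_1^{-\alpha}$, and $\mathbb E\bigl[V_1^{-q}\bigr]=\mathbb E\bigl[S_1^{\alpha q}\bigr]$ is finite precisely for $0<q<1$.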

3 LAMN and optimal estimator sequences

For local asymptotic mixed normality see Jeganathan [Je 82], Davies [Da 85], and Le Cam and Yang [LY 90]. See also sections 5.1, 6.1 and 7 of [Ho 14].
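
For reference, the property in question reads as follows (stated in generic notation; the local scale and the limit variables carry our symbols): a family $(Q_\vartheta)_{\vartheta\in\Theta}$ satisfies LAMN at $\vartheta$ with local scale $\delta_t$ (a deterministic norming tending to $0$) if
\[
\log\frac{dQ_{\vartheta+\delta_t u}}{dQ_\vartheta}\Big|_{\mathcal F_t}
\;=\;u^\top\Delta_t\;-\;\tfrac12\,u^\top J_t\,u\;+\;o_{Q_\vartheta}(1),
\qquad u\in\mathbb R^d ,
\]
where the pair $(\Delta_t,J_t)$ converges in law under $Q_\vartheta$ to $(J^{1/2}Z,\,J)$ with $J$ almost surely positive definite and $Z\sim\mathcal N(0,\mathrm{Id}_d)$ independent of $J$.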

3.1 Limit distribution for the estimators of section 1.1

Under consider the norming sequence defined by (25). In the notations of (9) and (10), proposition 1 combined with theorem 1 yields convergence in law as of rescaled ML errors

(36)

for all , where is -dimensional Brownian motion independent of the Mittag-Leffler variable , and where is given by

(37)

with , ranging from to . If we fix as in proposition 2 a compact interval such that and replace , by ,