# Estimating Diffusion with Compound Poisson Jumps Based on Self-Normalized Residuals

This paper considers the parametric estimation problem for the continuous part of a jump diffusion model. Threshold-based methods, proposed in several earlier papers, enable us to distinguish whether observed increments contain jumps and to estimate the unknown parameters. However, a data-adaptive and quantitative choice of the threshold parameter is subtle and sensitive, and remains a difficult problem. In this paper, we propose a new and simple alternative based on the Jarque-Bera normality test, which attains both of the above goals without any delicate fine tuning. We show that under suitable conditions the proposed estimator is consistent. Some numerical experiments are also conducted.


## 1. Introduction

Suppose that we are given a discrete-time but high-frequency observation of a solution to the one-dimensional diffusion with jumps described by

 (1.1) $dX_t = a(X_t,\alpha)\,dw_t + b(X_t,\beta)\,dt + c(X_{t-})\,dJ_t,$

where the ingredients are given as follows.

• $w$ is a standard Wiener process, and $J$ is a compound Poisson process associated with the Lévy measure

 $\nu(dz) = \lambda F(dz)$

for some probability distribution $F$.

• The sampling times fulfill

 (1.2) $t^n_j = jh_n, \qquad nh_n^2 \to 0,$

where the terminal sampling time $T_n := nh_n \to \infty$; hereafter, we will largely abbreviate the index "$n$" from the notation, writing $t_j$ and $h$ for $t^n_j$ and $h_n$.

A well-known approach to estimating

 $\theta := (\alpha,\beta) \in \Theta_\alpha \times \Theta_\beta = \Theta$

is the threshold-based method, independently proposed in several works. In this method, we regard the increment

 $\Delta_j X := X_{t_j} - X_{t_{j-1}}$

as containing the jump component if its absolute value exceeds a fixed jump-detection threshold, and estimate $\theta$ after removing such increments. It is shown that for a good threshold satisfying a suitable rate condition, the estimator of $\theta$ has asymptotic normality at the same rate as in diffusion models. Hence the method asymptotically achieves both the estimation of $\theta$ and the jump detection in the observed data, while the finite-sample performance of the threshold method strongly depends on the threshold value. Unfortunately, a data-adaptive and quantitative choice of the threshold in the jump-detection filter is a subtle and sensitive problem, and remains an annoying one in practice; see the references therein. A similar problem can also be seen in other jump-detection methods.

The primary objective of this paper is to formulate an intuitively easy-to-understand strategy which can simultaneously estimate $\theta$ and detect jumps without any precise calibration of a jump-detection threshold. For this purpose, we utilize the approximate self-normalized residuals based on the Gaussian quasi-maximum likelihood estimator (GQMLE), which adapts a classical Jarque-Bera type test to our model. More specifically, the hypothesis test with a given significance level

is constructed in the following manner: let the null hypothesis be that of "no jump component":

 $H_0: \nu(\mathbb{R}) = 0,$

against the alternative hypothesis of a "non-trivial jump component":

 $H_1: \nu(\mathbb{R}) > 0.$

Then, if the Jarque-Bera type statistic based on the self-normalized residuals introduced later is larger than the corresponding upper percentile of the chi-square distribution with two degrees of freedom, we reject the null hypothesis $H_0$; otherwise, we accept $H_0$. For such a test, we can intuitively regard the largest increment as containing at least one jump (and this will turn out to be true) when the null hypothesis is rejected. Following this observation, our proposed method iteratively conducts the test, removing the largest increment at each stage, until $H_0$ is accepted; after that, we estimate the target parameter without the removed increments. Our method enables us not only to "pre-clean" a diffusion-like data sequence by removing the big fluctuations which destroy the (approximate) Gaussianity of the self-normalized residuals, but also to approximately quantify jumps relative to the continuous fluctuations in a natural way.

The rest of this paper is organized as follows: in Section 2, we give a brief summary of the GQMLE, the approximate self-normalized residuals, and the Jarque-Bera test for our model. Section 3 provides the specific recipe of our method and an alternative to the GQMLE that reduces the computational load. Finally, we show some numerical experiments with our method.

## 2. Preliminaries

In this section, we briefly review the construction of the GQMLE, the self-normalized residuals, and the Jarque-Bera statistic together with its theoretical behavior. Given any function $f$ on $\mathbb{R}\times\Theta$, we write

 $f_{j-1}(\theta) = f(X_{t_{j-1}},\theta).$

We denote by $P_\theta$ the image measure of $X$ associated with the parameter value $\theta$, and by $\mu$ the Poisson random measure associated with $J$.

Suppose for the moment that the null hypothesis $H_0$ is true; namely, the underlying model is a diffusion process. Then, for the estimation of $\theta$, we can make use of the Gaussian quasi-(log-)likelihood

 $H_n(\theta) := \sum_{j=1}^{n}\log\left\{\frac{1}{\sqrt{a_{j-1}^2(\alpha)h_n}}\,\phi\!\left(\frac{\Delta_j X - b_{j-1}(\beta)h_n}{\sqrt{a_{j-1}^2(\alpha)h_n}}\right)\right\},$

where $\phi$ denotes the standard normal density, and we write

 $\epsilon_j(\theta) = \epsilon_{n,j}(\theta) := \frac{\Delta_j X - b_{j-1}(\beta)h_n}{\sqrt{a_{j-1}^2(\alpha)h_n}}.$

This quasi-likelihood is constructed from the local-Gauss approximation of the transition probability under $H_0$, and leads to the Gaussian quasi-maximum likelihood estimator (GQMLE) defined as any element

 $\hat\theta_n = (\hat\alpha_n,\hat\beta_n) \in \operatorname*{argmax}_{\theta\in\bar\Theta} H_n(\theta).$
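As a concrete illustration, the GQMLE can be computed by numerically maximizing $H_n$ (equivalently, minimizing $-H_n$). The following Python sketch does this for user-supplied coefficient functions; the function name, interface, and the Nelder-Mead optimizer are our own illustrative choices, not part of the paper:

```python
import numpy as np
from scipy.optimize import minimize

def gqmle(X, h, a, b, theta0):
    """GQMLE: numerically maximize H_n(theta) for scalar alpha and beta.

    X  : observations X_{t_0}, ..., X_{t_n};  h : mesh size h_n
    a  : diffusion coefficient a(x, alpha);   b : drift coefficient b(x, beta)
    """
    dX, Xp = np.diff(X), X[:-1]

    def neg_Hn(theta):
        alpha, beta = theta
        var = a(Xp, alpha) ** 2 * h            # local-Gauss variance a_{j-1}^2 h
        mean = b(Xp, beta) * h                 # local-Gauss mean b_{j-1} h
        return 0.5 * np.sum(np.log(2.0 * np.pi * var) + (dX - mean) ** 2 / var)

    return minimize(neg_Hn, theta0, method="Nelder-Mead").x
```

For smooth coefficients one would typically prefer a gradient-based optimizer; Nelder-Mead merely keeps the sketch derivative-free.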

It is well known that asymptotic normality holds under suitable regularity conditions:

 $\left(\sqrt{n}\,(\hat\alpha_n - \alpha_0),\ \sqrt{T_n}\,(\hat\beta_n - \beta_0)\right) \xrightarrow{\mathcal L} N\!\left(0,\ \operatorname{diag}\!\left(I_1^{-1}(\alpha_0),\ I_2^{-1}(\beta_0)\right)\right),$

where

 $I_1(\alpha_0) = \frac{1}{2}\int\left(\frac{\partial_\alpha a^2}{a^2}(x,\alpha_0)\right)^{\otimes 2}\pi_0(dx), \qquad I_2(\beta_0) = \int\left(\frac{\partial_\beta b}{a}(x,\beta_0)\right)^{\otimes 2}\pi_0(dx).$

Here $\pi_0$ denotes the invariant measure of $X$.

To see whether a working model fits the data well, diagnostics based on residual analysis are often conducted. Based on the GQMLE, a Jarque-Bera normality test using self-normalized residuals was formulated for our model. Define the self-normalized residual by

 $\hat N_j = \hat S_n^{-1/2}\left(\epsilon_j(\hat\theta_n) - \bar{\hat\epsilon}_n\right),$

where $\bar{\hat\epsilon}_n := n^{-1}\sum_{j=1}^n \epsilon_j(\hat\theta_n)$ and $\hat S_n := n^{-1}\sum_{j=1}^n\left(\epsilon_j(\hat\theta_n)-\bar{\hat\epsilon}_n\right)^2$. Making use of these, we define the Jarque-Bera type statistic by

 $JB_n := \frac{1}{6n}\left(\sum_{j=1}^n \hat N_j^3 - 3\sqrt{h_n}\sum_{j=1}^n \partial_x b_{j-1}(\hat\theta_n)\right)^2 + \frac{1}{24n}\left(\sum_{j=1}^n\left(\hat N_j^4 - 3\right)\right)^2.$

Then, the Jarque-Bera normality test for our model is justified in the following sense:

###### Theorem 2.1.

(cf. [6, Theorem 3.1 and Theorem 4.1]) Under suitable regularity conditions, the following hold:

• Under $H_0$, we have $JB_n \xrightarrow{\mathcal L} \chi^2(2)$;

• Under $H_1$, we have $JB_n \xrightarrow{p} \infty$.
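To illustrate the dichotomy in Theorem 2.1, $JB_n$ is easy to evaluate once residuals are in hand. The following minimal Python sketch (our own; the drift-gradient bias correction is passed in as a precomputed number) computes the statistic and compares it with the $\chi^2(2)$ quantile:

```python
import numpy as np
from scipy.stats import chi2

def jb_statistic(eps, bias=0.0):
    """Jarque-Bera type statistic built from raw residuals eps_j(theta_hat).

    `bias` stands for the skewness correction 3*sqrt(h_n)*sum_j d_x b_{j-1},
    which vanishes for a state-independent drift.
    """
    n = len(eps)
    N = (eps - eps.mean()) / eps.std()          # self-normalized residuals N_j
    skew_term = (np.sum(N ** 3) - bias) ** 2 / (6 * n)
    kurt_term = np.sum(N ** 4 - 3) ** 2 / (24 * n)
    return skew_term + kurt_term

def jb_reject(eps, q=0.01, bias=0.0):
    """Reject H0 at level q when JB_n exceeds the chi2(2) upper-q quantile."""
    return jb_statistic(eps, bias) > chi2.ppf(1.0 - q, df=2)
```

Under $H_0$ the statistic stays of moderate size, while a handful of jump-contaminated residuals inflates the kurtosis term dramatically.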

## 3. Proposed strategy

For brevity we write

 $H_n(\theta) = \sum_{j=1}^n \zeta_{n,j}(\theta).$

Let $q \in (0,1)$ be a small number (the significance level). We propose the following iterative jump-detection procedure based on the Jarque-Bera type test.

• Step 0. Set $k = 0$, and let $\hat J_n^0$ be the empty set.

• Step 1. Calculate the modified GQMLE (MGQMLE, for short)

 $\hat\theta_n^k \in \operatorname*{argmax}_{\theta} \hat H_n^k(\theta),$

where $\hat H_n^k(\theta) := \sum_{j\notin\hat J_n^k}\zeta_{n,j}(\theta)$. Define the following statistics:

 $\bar{\hat\epsilon}_n^{\,k} := \frac{1}{n-k}\sum_{j\notin\hat J_n^k}\epsilon_j(\hat\theta_n^k), \qquad \hat S_n^k := \frac{1}{n-k}\sum_{j\notin\hat J_n^k}\left(\epsilon_j(\hat\theta_n^k)-\bar{\hat\epsilon}_n^{\,k}\right)^2.$

Building on the MGQMLE and the above ingredients, (re-)construct the modified self-normalized residuals $\hat N_j^k$ and the Jarque-Bera type statistic $JB_n^k$:

 $\hat N_j^k := (\hat S_n^k)^{-1/2}\left(\epsilon_j(\hat\theta_n^k)-\bar{\hat\epsilon}_n^{\,k}\right)1_{\{j\notin\hat J_n^k\}},$

 $JB_n^k := \frac{1}{6(n-k)}\left(\sum_{j\notin\hat J_n^k}(\hat N_j^k)^3 - 3\sqrt{h_n}\sum_{j\notin\hat J_n^k}\partial_x b_{j-1}(\hat\theta_n^k)\right)^2 + \frac{1}{24(n-k)}\left(\sum_{j\notin\hat J_n^k}\left((\hat N_j^k)^4-3\right)\right)^2.$

• Step 2. If $JB_n^k > \chi_q^2(2)$, then pick out the interval number

 $j(k+1) := \operatorname*{argmax}_{j\in\{1,\dots,n\}\setminus\hat J_n^k}|\Delta_j X|,$

add $j(k+1)$ to $\hat J_n^k$ to form $\hat J_n^{k+1}$, increase $k$ by one, and return to Step 1; otherwise, set the number of jumps $k_n := k$, and go to Step 3.

• Step 3. If $k_n = 0$, regard the data as containing no jump; otherwise, the detected jump increments are $\Delta_{j(1)}X,\dots,\Delta_{j(k_n)}X$ (in descending order of magnitude). Finally, set $\hat\theta_n^{k_n}$ as the estimator of $\theta$.
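For the toy model of Section 4, where the MGQMLE admits closed-form expressions, the whole iterative procedure of Steps 0-3 fits in a few lines of Python. The sketch below is our own illustrative implementation under simplifying choices (mean-centered closed-form estimators, zero skewness bias since the drift is constant):

```python
import numpy as np
from scipy.stats import chi2

def jb_iterative_jump_removal(X, h, q=0.01):
    """Steps 0-3 for the toy model dX_t = beta dt + sqrt(alpha) dw_t + dJ_t.

    At each stage: fit (alpha, beta) on the kept increments by closed-form
    (M)GQMLE-type estimators, form self-normalized residuals, evaluate the
    JB statistic, and while it exceeds the chi-square(2) upper-q quantile
    remove the largest remaining increment in absolute value.
    """
    dX = np.diff(X)
    n = dX.size
    crit = chi2.ppf(1.0 - q, df=2)
    keep = np.ones(n, dtype=bool)
    removed = []
    for _ in range(n // 2):                      # hard cap on the iterations
        d = dX[keep]
        m = d.size
        beta_hat = d.mean() / h                  # closed-form drift estimate
        alpha_hat = np.mean((d - beta_hat * h) ** 2) / h  # diffusion estimate
        eps = (d - beta_hat * h) / np.sqrt(alpha_hat * h)
        N = (eps - eps.mean()) / eps.std()
        jb = np.sum(N ** 3) ** 2 / (6 * m) + np.sum(N ** 4 - 3) ** 2 / (24 * m)
        if jb <= crit:                           # H0 accepted: stop removing
            break
        j = np.flatnonzero(keep)[np.argmax(np.abs(d))]   # largest increment
        keep[j] = False
        removed.append(j)
    return (alpha_hat, beta_hat), removed
```

Increments containing jumps dominate the kurtosis term of $JB_n^k$, so they are removed first; once only diffusion-like increments remain, the test accepts and the last-computed estimates are returned.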

###### Remark 3.1.

By using the intensity parameter $\lambda$, the expected number of jumps of the compound Poisson process over $[0,T_n]$ is expressed as $\lambda T_n$. Thus, as the terminal time $T_n$ gets larger, the iteration number of our proposed methodology should also become large. In such a case, or when several jumps seem apparent, we could instead start from the $k$-th stage for some $k \ge 1$, which conveniently enables us to "skip" the first few redundant stages.

###### Remark 3.2.

In practice, the size $r_n(k)$ of the last-removed increment can be used as the threshold for detecting jumps in future observations: with this value in hand, for a future observation sequence $Y$ we regard that a jump occurred over $[t_{j-1},t_j)$ if

 $|\Delta_j Y| > r_n(k).$
###### Remark 3.3.

Our method enables us to divide the set of all increments into the following two categories:

• the "one-jump" group $\{\Delta_j X : j \in \hat J_n^{k_n}\}$, and

• the "no-jump" group $\{\Delta_j X : j \notin \hat J_n^{k_n}\}$.

Our method conducts the estimation of the drift and diffusion parts of $X$ based on the continuously joined-up data computed from the no-jump group. Also, we may estimate the jump part from the members of the one-jump group; namely, under $H_1$ we may regard the corresponding increments as approximately i.i.d. with the common jump distribution $F$ of the compound Poisson process $J$.

###### Remark 3.4.

To reduce the computational load of calculating the GQMLE, one can alternatively use the stepwise estimator defined by:

 $\tilde\alpha_n \in \operatorname*{argmax}_{\alpha\in\bar\Theta_\alpha}\sum_{j=1}^n\log\left\{\frac{1}{\sqrt{a_{j-1}^2(\alpha)h_n}}\,\phi\!\left(\frac{\Delta_j X}{\sqrt{a_{j-1}^2(\alpha)h_n}}\right)\right\}, \qquad \tilde\beta_n \in \operatorname*{argmax}_{\beta\in\bar\Theta_\beta}\sum_{j=1}^n\log\left\{\frac{1}{\sqrt{a_{j-1}^2(\tilde\alpha_n)h_n}}\,\phi\!\left(\frac{\Delta_j X - b_{j-1}(\beta)h_n}{\sqrt{a_{j-1}^2(\tilde\alpha_n)h_n}}\right)\right\},$

and its modified version can be defined similarly. Under the null hypothesis, the limit distribution of the stepwise estimator is shown to be equivalent to that of the GQMLE. Moreover, computation of the GQMLE and MGQMLE may become much less time-consuming when the coefficients take certain tractable forms: let $p_\alpha$ and $p_\beta$ denote the dimensions of $\alpha$ and $\beta$, respectively, and suppose that the diffusion coefficient and the drift function can be written, for suitable functions $a^{(1)},\dots,a^{(p_\alpha)}$ and $b^{(1)},\dots,b^{(p_\beta)}$, as

 $a(x,\alpha) = \sqrt{\sum_{l=1}^{p_\alpha}\alpha^{(l)}a^{(l)}(x)}, \qquad b(x,\beta) = \sum_{k=1}^{p_\beta}\beta^{(k)}b^{(k)}(x),$

where $\alpha^{(l)}$ (resp. $\beta^{(k)}$) denotes the $l$-th (resp. $k$-th) element of the corresponding vector. Then the stepwise drift estimator is given explicitly by

 $\tilde\beta_n = \frac{1}{h_n}\left(\sum_{j=1}^n a_{j-1}^{-2}(\tilde\alpha_n)B_{j-1}B_{j-1}^\top\right)^{-1}\sum_{j=1}^n a_{j-1}^{-2}(\tilde\alpha_n)\,\Delta_j X\,B_{j-1},$

where $B_{j-1} := \left(b^{(1)}(X_{t_{j-1}}),\dots,b^{(p_\beta)}(X_{t_{j-1}})\right)^\top$. What is important in this expression is that the modified version of $\tilde\beta_n$ can be calculated simply by removing the corresponding indices from the sums, without repeated numerical optimization, thus reducing the computational time to a large extent.
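Under the linear parametrization above, the drift step reduces to solving weighted least-squares normal equations. A Python sketch (the function name and interface are our own illustrative choices) might look as follows:

```python
import numpy as np

def stepwise_beta(X, h, a_sq, basis, alpha_hat, keep=None):
    """Closed-form drift step when b(x, beta) = sum_k beta^(k) b^(k)(x).

    Solves the weighted least-squares normal equations
        beta = (1/h) * (sum_j w_j B_{j-1} B_{j-1}^T)^{-1} sum_j w_j dX_j B_{j-1},
    with weights w_j = 1 / a_{j-1}^2.  `keep` is an optional boolean mask of
    retained increments: the modified estimator just drops the removed indices.
    """
    dX, Xp = np.diff(X), X[:-1]
    if keep is not None:
        dX, Xp = dX[keep], Xp[keep]
    B = np.column_stack([bk(Xp) for bk in basis])   # rows are B_{j-1}
    w = 1.0 / a_sq(Xp, alpha_hat)
    G = (B * w[:, None]).T @ B                      # sum_j w_j B B^T
    rhs = (B * w[:, None]).T @ dX                   # sum_j w_j dX_j B
    return np.linalg.solve(G, rhs) / h
```

Because the sums are plain matrix products, re-solving after removing a few indices is essentially free compared with repeated numerical optimization.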

## 4. Asymptotic property of the MGQMLE

In this section, we look at the asymptotic properties of the MGQMLE for the following toy model:

 (4.1) $dX_t = \beta\,dt + \sqrt{\alpha}\,dw_t + dJ_t,$

where $J$ is a compound Poisson process expressed as

 $J_t := \sum_{i=1}^{N_t}\xi_i.$

In this expression, $N$ denotes a Poisson process with intensity parameter $\lambda$, and the $\xi_i$ are i.i.d. random variables. Recall that the observations $(X_{t_j})_{j=0}^n$ are obtained at $t_j = jh_n$, $j = 0,1,\dots,n$. To deduce the asymptotic properties of the MGQMLE, we introduce some assumptions below.

###### Assumption 4.1.
• There exists a positive deterministic sequence $(a_n)$ satisfying the conditions

 $\max_{1\le j\le [\lambda+1]T_n}|\xi_j| = O_p(a_n), \qquad a_n^3\sqrt{h_n}\log n \,\vee\, nh_n^2 a_n^2\log n = o(1).$

• For any $K > 0$, we have

 $P\left(|\xi_1|^2 \le h_n\log n\right) = o(T_n^{-1}), \qquad P\left(|\xi_1|^4 \le K a_n^3\sqrt{h_n}\log n\right) = o(T_n^{-1}).$

• The number of jump removals $k_n$ satisfies appropriate growth conditions.

The following theorem ensures a consistency property of the MGQMLE:

###### Theorem 4.2.

If Assumption 4.1 holds, then we have

 $P\left(\left\{\left|\hat\theta_n^{k_n}-\theta_0\right|>\epsilon\right\}\cap\left\{JB_n^{k_n}\le\chi_q^2(2)\right\}\right)\to 0$

for each $\epsilon > 0$ and $q \in (0,1)$.

## 5. Numerical experiments

We consider the following SDE model:

 $dX_t = \alpha\left(1+X_t^2\right)^{-1/2}dw_t - \beta\,dt + dJ_t,$

where $J$ is a compound Poisson process. Here we set the true values of $(\alpha,\beta)$ and conduct the experiment under the following conditions:

• ;

• ;

• ;

• , and selected with equal probabilities.

Here, we keep the number of jumps fixed just for the purpose of numerical comparison. The performance of our method is then reported in Table 1.

The transition of our estimators and the logarithmic values of the JB statistic over the iterations are shown in the figures. As can be seen from Table 1, the estimation accuracy is drastically improved by our method. In this example the jump distribution is symmetric, so the improvement in the estimation of the drift parameter is small compared with that of the diffusion parameter; the improvement is expected to be much more significant when the jump distribution is skewed.
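For readers wishing to reproduce experiments of this kind, the model can be simulated by an Euler scheme with compound Poisson jumps. The following Python sketch uses illustrative placeholder settings, not the paper's exact experimental values:

```python
import numpy as np

def simulate_jump_diffusion(n, h, alpha, beta, lam, jump_sizes, seed=0):
    """Euler scheme for dX = alpha (1+X^2)^{-1/2} dw - beta dt + dJ, where J is
    a compound Poisson process with intensity lam and jump sizes drawn with
    equal probabilities from `jump_sizes`."""
    rng = np.random.default_rng(seed)
    X = np.empty(n + 1)
    X[0] = 0.0
    sizes = np.asarray(jump_sizes, dtype=float)
    for j in range(n):
        dw = rng.normal(0.0, np.sqrt(h))
        k = rng.poisson(lam * h)            # number of jumps in (t_j, t_{j+1}]
        dJ = rng.choice(sizes, size=k).sum() if k else 0.0
        X[j + 1] = X[j] + alpha / np.sqrt(1 + X[j] ** 2) * dw - beta * h + dJ
    return X
```

Setting `lam=0` recovers the pure diffusion used for size checks of the JB test, while a positive intensity produces a handful of visibly large increments.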

### Acknowledgement

This work was supported by JST, CREST Grant Number JPMJCR14D7, Japan.

## 6. Appendix: proof of Theorem 4.2

Let Assumption 4.1 hold throughout this section. First, we prove two lemmas.

###### Lemma 6.1.

Let $(\tau_i)_{i\ge 1}$ denote the jump times of $J$. Then we have

 $P\left(\exists\, i,j \ \text{s.t.} \ \tau_i,\tau_{i+1}\in[t_{j-1},t_j)\right)\to 0, \qquad n\to\infty.$
###### Proof.

Since the interarrival times of the jump times of the Poisson process independently obey the exponential distribution with mean $1/\lambda$, it follows that

 $P\left(\exists\, i,j \ \text{s.t.} \ \tau_i,\tau_{i+1}\in[t_{j-1},t_j)\right) \le \sum_{i=2}^{\infty} P\left(\exists\, j\in\{2,\dots,i\} \ \text{s.t.} \ \tau_j-\tau_{j-1} < h_n\right)\cdots$

For convenience, we hereafter write

 $B_{k_n,n} := \left\{\exists\, i,j \ \text{s.t.} \ \tau_i,\tau_{i+1}\in[t_{j-1},t_j)\right\}^c.$

Thanks to Lemma 6.1, in proving Theorem 4.2 we may and do focus on the set $B_{k_n,n}$.

###### Lemma 6.2.

We have

 $P\left(\left\{\exists\, i,j \ \text{s.t.} \ \tau_i\in[t_{j-1},t_j) \ \text{and} \ \Delta_j X\notin\hat J_n^{k_n}\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right)\to 0.$

###### Proof.

Hereafter we use the following notations:

 $D_n := \{j : \exists\,\tau_i\in[t_{j-1},t_j)\}, \qquad C_n := \{1,\dots,n\}\setminus D_n.$

For every $j$, we write $\eta_j := \Delta_j w/\sqrt{h_n}$. Then we have

 $P\left(\left\{\exists\, i,j \ \text{s.t.} \ \tau_i\in[t_{j-1},t_j) \ \text{and} \ \Delta_j X\notin\hat J_n^{k_n}\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right)$
 $\le P\left(\left\{\exists\, j'\in D_n,\ j''\in C_n \ \text{s.t.} \ |\Delta_{j'}X| < |\Delta_{j''}X|\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right)$
 $\le P\left(\left\{\exists\, j'\in D_n,\ j''\in C_n \ \text{s.t.} \ |\Delta_{j'}J| - \left|\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_{j'}w\right| < \left|\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_{j''}w\right|\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right)$
 $\le P\left(\left\{\frac{\min_{1\le j\le n}\left(|\Delta_j J|^2\vee 0\right)}{h_n} < 4\beta_0^2 h_n + 4\alpha_0\max_{1\le j\le n}|\eta_j|^2\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right).$

From extreme value theory (cf. [2, Table 3.4.4]), we have

 (6.1) $\max_{1\le j\le n}|\eta_j|^2 - \left(\log n - \tfrac12\log\log n - \log\Gamma\!\left(\tfrac12\right)\right) = O_p(1).$

Therefore it suffices to show that

 $P\left(\left\{\frac{\min_{1\le j\le n}\left(|\Delta_j J|^2\vee 0\right)}{h_n\left(\log n - \frac12\log\log n - \log\Gamma(\frac12)\right)} < 1 + r_n\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right)\to 0,$

where

 $r_n := \frac{4\beta_0^2 h_n + 4\alpha_0\left\{\max_{1\le j\le n}|\eta_j|^2 - \left(\log n - \frac12\log\log n - \log\Gamma(\frac12)\right)\right\}}{\log n - \frac12\log\log n - \log\Gamma(\frac12)} = o_p(1).$

Assumption 4.1 implies that

 $P\left(\left\{\exists\,\tau_i\in[t_{j-1},t_j) \ \text{s.t.} \ \Delta_j X\notin\hat J_n^{k_n}\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right)$
 $\le P\left(\left\{\frac{\min_{1\le j\le n}\left(|\Delta_j J|^2\vee 0\right)}{h_n\left(\log n - \frac12\log\log n - \log\Gamma(\frac12)\right)} < \frac32\right\}\cap\{1\le N_{T_n}\le k_n\}\cap B_{k_n,n}\right) + P\left(|r_n|\ge\frac12\right)$
 $\le P\left(|\xi_1|^2 < \frac32 h_n\left(\log n - \frac12\log\log n - \log\Gamma\!\left(\tfrac12\right)\right)\right)\sum_{j=1}^{k_n}\frac{(\lambda T_n)^j}{(j-1)!}e^{-\lambda T_n} + P\left(|r_n|\ge\frac12\right)\to 0,$

hence the claim follows. ∎

Lemma 6.2 implies that all increments containing jumps are correctly picked out as long as $N_{T_n}\le k_n$. Similarly, we can derive

 $P\left(\left\{\exists\, j\in\hat J_n^{k_n}\cap C_n\right\}\cap\{N_{T_n}\ge k_n+1\}\right)\to 0.$

###### Proof of Theorem 4.2.

We introduce the following events:

 $A_{k_n,n,\epsilon} := \left\{\left|\hat\theta_n^{k_n}-\theta_0\right|>\epsilon\right\}\cap\left\{JB_n^{k_n}\le\chi_q^2(2)\right\},$
 $C_{k_n,n} := \left\{\exists\, i,j \ \text{s.t.} \ \tau_i\in[t_{j-1},t_j) \ \text{and} \ \Delta_j X\notin\hat J_n^{k_n}\right\}^c = \left\{\forall\, i,j,\ \tau_i\notin[t_{j-1},t_j) \ \text{or} \ \Delta_j X\in\hat J_n^{k_n}\right\}.$

Taking the lemmas into consideration, we can split $P(A_{k_n,n,\epsilon})$ as

 $P(A_{k_n,n,\epsilon}) = P_{1,n} + P_{2,n} + P_{3,n} + o(1),$

where

 $P_{1,n} := P\left(A_{k_n,n,\epsilon}\cap\{N_{T_n}=0\}\right),$
 $P_{2,n} := P\left(A_{k_n,n,\epsilon}\cap B_{k_n,n}\cap C_{k_n,n}\cap\{1\le N_{T_n}\le k_n\}\right),$
 $P_{3,n} := P\left(A_{k_n,n,\epsilon}\cap B_{k_n,n}\cap\left\{\exists\, j\in\hat J_n^{k_n}\cap C_n\right\}^c\cap\{N_{T_n}\ge k_n+1\}\right).$

Since $P_{1,n}\to 0$, it suffices to show that $P_{2,n}\to 0$ and $P_{3,n}\to 0$. From now on, for an event $A$ we denote by $1_A$ the indicator function of $A$:

 $1_A = 1_A(\omega) := \begin{cases} 1, & \omega\in A,\\ 0, & \omega\notin A.\end{cases}$

First we focus on the estimate of $P_{2,n}$. By virtue of the foregoing discussion, on the event $E := A_{k_n,n,\epsilon}\cap B_{k_n,n}\cap C_{k_n,n}\cap\{1\le N_{T_n}\le k_n\}$ we have the following expressions:

 $\hat\alpha_n^{k_n}1_E = \left(\frac{1}{(n-k_n)h_n}\sum_{j\notin\hat J_n^{k_n}}(\Delta_j X)^2\right)1_E = \left(\frac{1}{(n-k_n)h_n}\sum_{j\notin\hat J_n^{k_n}}\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right)^2\right)1_E,$

 $\hat\beta_n^{k_n}1_E = \left(\frac{1}{(n-k_n)h_n}\sum_{j\notin\hat J_n^{k_n}}\Delta_j X\right)1_E = \left(\frac{1}{(n-k_n)h_n}\sum_{j\notin\hat J_n^{k_n}}\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right)\right)1_E.$

Hence it follows that

 $P_{2,n} \le P\left(\left|\frac{1}{(n-k_n)h_n}\sum_{j\notin\hat J_n^{k_n}}\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right)^2 - \alpha_0\right| \vee \left|\frac{1}{(n-k_n)h_n}\sum_{j\notin\hat J_n^{k_n}}\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right) - \beta_0\right| > \epsilon\right).$

The law of large numbers for triangular arrays implies that

 $\frac{1}{(n-k_n)h_n}\sum_{j=1}^n\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right)^2 \xrightarrow{p} \alpha_0, \qquad \frac{1}{(n-k_n)h_n}\sum_{j=1}^n\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right) \xrightarrow{p} \beta_0.$

Again applying (6.1), we have

 $\frac{1}{(n-k_n)h_n}\sum_{j\in\hat J_n^{k_n}}\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right)^2 \le \frac{k_n}{n-k_n}\left(2\beta_0^2 h_n + 2\alpha_0\max_{1\le j\le n}\eta_j^2\right) = o_p(1),$

 $\left|\frac{1}{(n-k_n)h_n}\sum_{j\in\hat J_n^{k_n}}\left(\beta_0 h_n + \sqrt{\alpha_0}\,\Delta_j w\right)\right| \lesssim \frac{k_n}{n} + \frac{k_n}{n\sqrt{h_n}}\max_{1\le j\le n}|\eta_j| = o_p(1).$

Thus $P_{2,n}\to 0$.

Let us now move on to the estimate of $P_{3,n}$. From the representation of the Poisson process $N$ and the central limit theorem, it follows that

 $\frac{N_{T_n}-\lambda T_n}{\sqrt{\lambda T_n}} \xrightarrow{\mathcal L} N(0,1).$

Hence, if $k_n$ grows fast enough relative to $\lambda T_n$, we have

 $P_{3,n} \le P\left(N_{T_n}\ge k_n+1\right)\to 0. \qquad \blacksquare$