Parameter estimation for one-sided heavy-tailed distributions

05/07/2020
by Phillip Kerger, et al.
Fordham University

Stable subordinators, and more general subordinators possessing power-law probability tails, have been widely used in the context of subdiffusions, where particles become trapped or immobile during certain time periods, called constant periods. The lengths of the constant periods follow a one-sided distribution which involves a parameter between 0 and 1 and whose first moment does not exist. This paper constructs an estimator for the parameter by applying the method of moments to the number of observed constant periods in a fixed time interval. The resulting estimator is asymptotically unbiased and consistent, and it is well-suited to situations where multiple observations of the same subdiffusion process are available. We present supporting numerical examples and an application to market price data for a low-volume stock.


1 Introduction

A wide variety of phenomena whose states change over time at random exhibit a number of time periods in which the states remain unchanged. These range across stock market data, diffusion of large molecules within cells, movement of bacteria, electricity markets, and more; see e.g. Chapter 1 of [29]. For example, in finance, although the celebrated Black–Scholes–Merton model has been widely used to describe fluctuations of the prices of financial products, the model uses a geometric Brownian motion, which fails to capture the constant-price periods that many low-volume equities experience. This motivates the construction of time-changed stochastic processes exhibiting such constant periods. As a simple example, consider a Brownian motion B composed with the inverse E of an independent β-stable subordinator (or inverse β-stable subordinator for short). The resulting stochastic process B∘E, called a time-changed Brownian motion, shows constant periods and has variance growing at the rate of t^β. Since the stability index β takes a value in the interval (0,1) and the variance of the Brownian motion grows at the rate of t, particles represented by the time-changed Brownian motion spread at a slower rate than the regular Brownian particles. Because of this, the particular time-changed Brownian motion and its variants are often referred to as subdiffusion processes. Figure 1 gives a graphical comparison of sample paths of the inverse β-stable subordinator and the time-changed Brownian motion with different values of β. As the simulations show, we expect to observe longer constant periods for a smaller value of β. The time-changed Brownian motion is not a Markov process, and its densities satisfy a time-fractional heat equation involving the Caputo fractional derivative of order β (see [22, 23]). For more details about properties of B∘E and its various extensions (including stochastic differential equations) as well as their continuous-time random walk counterparts, see e.g. [24, 29] and references therein.

Figure 1: Sample paths of the inverse β-stable subordinator (black) and the time-changed Brownian motion (red) for two different values of β (left and right panels)
Figure 2: Sample paths of a β-stable subordinator (black) and its inverse (blue).

Various methods for estimating the stability index of a stable distribution have been presented in the literature; see e.g. [1, 2, 5, 8, 9, 10, 11, 18, 21, 25]. For example, Hill's estimator in [10] is popular and well-known for its robustness; it only assumes that the underlying distribution has power-law tails. In [1], the authors proposed a shift-invariant version of Hill's estimator to increase the robustness. On the other hand, in [2], the authors revealed a connection between Hill's estimator and a least-squares estimator obtained from a log–log transformation of the power-law tails. The maximum likelihood estimator (MLE) was established in [25], overcoming the difficulty that the stable density does not have a closed form. The MLE in general requires a particular distributional form, so it is not as robust as Hill's estimator; however, it uses all of the data, rather than only the largest order statistics on which Hill's estimator and its variants are based. In [8], another estimator using all of the data was constructed via an analysis of the M-Wright function together with the method of moments.

In [12], the authors compared six known parameter estimation methods in the context of subdiffusions, where the lengths of the constant periods follow a one-sided (totally right-skewed) β-stable distribution. This is the situation we also consider in this paper. However, we propose a new estimator for the stability index β based on the recent development of numerical approximation schemes for subdiffusion processes in [13, 14, 16, 17]. Our idea is to regard real data exhibiting constant periods as a numerically approximated path of some time-changed process and to use the method of moments to estimate β. Note, however, that the method of moments does not apply to the stable distribution itself in the usual sense since the distribution does not have a first moment. We instead apply the method to the “number” (rather than the “lengths”) of observed constant periods. Cahoy's estimator in [8] is based on the logarithmic moments of the stable distribution and has a connection to the estimator proposed in this paper; see Remark 3.

Our estimator is especially suited for use on data sets with many observations over a long period of time (i.e. when both the number n of observed paths and the time horizon T are large) since it is asymptotically unbiased and consistent; see Remark 5(a). Moreover, it is robust in the sense that it only requires the underlying subordinator to have power-law probability tails; see Remark 6. One major advantage of using our estimator as opposed to some other well-known estimators is that it allows one to estimate the entire range of β values with reasonable accuracy and precision; see Remark 7(a).

2 Stable subordinators and their inverses

Throughout the paper, all stochastic processes are assumed to be defined on a probability space (Ω, F, ℙ), and 𝔼 denotes the expectation under ℙ. A stochastic process having independent and stationary increments is called a Lévy process. For the purpose of this paper, let us consider a one-dimensional Lévy process D = (D(t))_{t≥0} with increasing, càdlàg paths (right-continuous paths with left limits) starting at 0, which is usually called a subordinator. The distribution of a subordinator D is characterized by its Laplace transform

𝔼[e^{-sD(t)}] = e^{-tψ(s)},  s > 0,   (1)

where ψ, called the Laplace exponent of D, is a Bernstein function on (0,∞) with ψ(0+) = 0. A β-stable subordinator is a subordinator with ψ(s) = s^β for β ∈ (0,1), where β is a parameter called the (stability) index.
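As an aside for readers who wish to reproduce the simulations discussed later in code: samples of D(1) for ψ(s) = s^β can be drawn via Kanter's representation of a one-sided stable law. The following Python sketch is our own illustration (the paper's simulations were done in Matlab, and the function name here is hypothetical), not code from the paper:

```python
import numpy as np

def sample_stable_subordinator_increments(beta, size, rng=None):
    """Draw i.i.d. copies of D(1) for a beta-stable subordinator, i.e.
    positive random variables with E[exp(-s*D(1))] = exp(-s**beta),
    using Kanter's representation (valid for 0 < beta < 1)."""
    rng = np.random.default_rng(rng)
    u = np.pi * rng.uniform(size=size)   # U ~ Uniform(0, pi)
    w = rng.exponential(size=size)       # W ~ Exp(1), independent of U
    a = (np.sin(beta * u) ** (beta / (1.0 - beta))
         * np.sin((1.0 - beta) * u)
         / np.sin(u) ** (1.0 / (1.0 - beta)))
    return (a / w) ** ((1.0 - beta) / beta)
```

As a quick sanity check of the Laplace transform (1), np.mean(np.exp(-2.0 * sample_stable_subordinator_increments(0.7, 10**6))) should be close to np.exp(-2.0**0.7).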

The probability tails of the β-stable subordinator D are very different from those of a Brownian motion B. Namely, as x → ∞, while B(t) shows the exponential decay

ℙ(B(t) > x) ≤ e^{-x²/(2t)},

D(t) exhibits the slower, polynomial decay

ℙ(D(t) > x) ∼ t x^{-β}/Γ(1-β),

where Γ is the Gamma function (see p.37 and p.76 of [4]). Due to the presence of the heavy tail, D(t) does not have a pth moment for p ≥ β, so the method of moments is not applicable in the usual sense. The stable subordinator D has strictly increasing paths that go to ∞ as t → ∞ with infinitely many jumps, where the jump times are dense in (0,∞) (see Theorem 21.3 of [28]).

Define the inverse or first hitting time process E = (E(t))_{t≥0} of a β-stable subordinator D by

E(t) := inf{u > 0 : D(u) > t}

for t ≥ 0. The process E is called an inverse β-stable subordinator. Since D has strictly increasing paths starting at 0, E has continuous, nondecreasing paths starting at 0. Figure 2 illustrates the inverse relationship between the sample paths of D and E. The jumps of D correspond to the constant periods of E, and the inverse relation implies

{E(t) > u} = {D(u) ≤ t}  for all  t, u ≥ 0.

Moreover, unlike D(t), the random variable E(t) possesses finite exponential moments, i.e. 𝔼[e^{λE(t)}] < ∞ for any λ > 0 (see e.g. [14, 15]). In particular, the following formula holds for the pth moment of E(t) for any p > 0 (see e.g. Proposition 5.6 of [29] or Example 3.1 of [14]) and plays a key role in deriving an estimator for the index β:

𝔼[(E(t))^p] = Γ(p+1) t^{pβ}/Γ(pβ+1).   (2)

3 Derivation of an Estimator and its Properties

To derive an estimator for the index β of an inverse β-stable subordinator E, we briefly discuss below the approximation scheme for E first presented in [16, 17] and subsequently used in [13, 14].

Fix an equidistant step size δ > 0 and a time horizon T > 0. Simulate a sample path of the stable subordinator D of index β, which has independent and stationary increments, by setting D(0) = 0 and then following the rule D(iδ) = D((i-1)δ) + δ^{1/β} Z_i for i = 1, 2, …, where the Z_i's are i.i.d. random variables having the same distribution as D(1) (which can be generated via an algorithm presented in [30]). Stop this procedure upon finding the integer N satisfying

D(Nδ) ≤ T < D((N+1)δ).

For each path of D, the number N exists as a finite number since D(nδ) → ∞ as n → ∞. Note that N is a nonnegative-integer-valued random variable depending on the initially chosen constants δ and T.
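A minimal sketch of this stopping rule, reusing the hypothetical generator sample_stable_subordinator_increments from Section 2; it returns N together with the simulated grid values of D:

```python
import numpy as np

def simulate_grid_path(beta, delta, T, rng=None):
    """Simulate D(0), D(delta), D(2*delta), ... until the first grid value
    exceeding T, and return N with D(N*delta) <= T < D((N+1)*delta).
    Increments over a step of length delta scale as delta**(1/beta)."""
    rng = np.random.default_rng(rng)
    path = [0.0]
    while path[-1] <= T:
        z = sample_stable_subordinator_increments(beta, 1, rng)[0]
        path.append(path[-1] + delta ** (1.0 / beta) * z)
    n_stop = len(path) - 2   # path holds D(0), ..., D((N+1)*delta)
    return n_stop, np.array(path)
```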

Next, let

E_δ(t) := (min{n ∈ ℕ : D(nδ) > t} - 1)δ

for t ≥ 0. The sample paths of E_δ are nondecreasing step functions with constant jump size δ and with the length of the ith constant time interval given by D(iδ) - D((i-1)δ). Indeed, E_δ(t) = (i-1)δ whenever D((i-1)δ) ≤ t < D(iδ). In particular, each path of E_δ restricted to [0, T] has a total of N+1 constant periods, and a.s.,

E_δ(T) = Nδ.   (3)

Moreover, it has been derived in [14, 17] that a.s.,

E(t) - δ ≤ E_δ(t) ≤ E(t)  for all  t ≥ 0.   (4)

Therefore, a.s., E_δ converges to E uniformly on the time interval [0, T] as δ → 0. Moreover, 𝔼[E(T)] - δ ≤ 𝔼[E_δ(T)] ≤ 𝔼[E(T)]. This together with (3) and (2) with p = 1 gives

Φ(β) - δ ≤ 𝔼[δN] ≤ Φ(β),   (5)

where δN = E_δ(T) and

Φ(β) := T^β/Γ(β+1).   (6)

Note that even though 𝔼[δN] and Φ(β) do not coincide, they are asymptotically equivalent as T → ∞; i.e. for fixed δ and β,

lim_{T→∞} 𝔼[δN]/Φ(β) = 1.   (7)

Based on this observation, we define a method-of-moments-like estimator β̂_n for the parameter β implicitly as a solution to the equation

Φ(β̂_n) = δN̄_n,   (8)

where N̄_n := (N_1 + ⋯ + N_n)/n is the sample mean of a random sample of size n from the distribution of the random variable N. This is not the method of moments in the usual sense as we do not have the equality 𝔼[δN] = Φ(β) for a fixed time horizon T; however, the relation (7) makes the definition of our estimator reasonable when T is sufficiently large. In the remainder of this section, we discuss how large T should be as well as the properties of the estimator β̂_n. First, as the following proposition shows, β̂_n satisfying (8) uniquely exists as long as δN̄_n ∈ (1, T) and T ≥ e^{1-γ}, where γ ≈ 0.5772 is the Euler–Mascheroni constant.
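Numerically, solving (8) is a one-dimensional root-finding problem on (0,1). The sketch below is our own Python illustration using scipy.optimize.brentq, with Φ as in (6); it already incorporates the boundary conventions introduced in (10) below:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def mom_like_estimate(counts, delta, T):
    """Method-of-moments-like estimator: solve Phi(beta) = delta * mean(N)
    with Phi(beta) = T**beta / Gamma(beta + 1) as in (6), clipping the
    estimate to 0 or 1 when the sample mean leaves (1, T) as in (10)."""
    target = delta * np.mean(counts)
    if target <= 1.0:    # Phi(0+) = 1
        return 0.0
    if target >= T:      # Phi(1-) = T
        return 1.0
    # Phi is strictly increasing for T >= exp(1 - gamma), so the root is unique.
    return brentq(lambda b: T**b / gamma(b + 1.0) - target, 1e-10, 1.0 - 1e-10)
```

For instance, with delta = 1, T = 23400 and counts equal to the daily numbers of constant periods minus 1 (as in Section 5), mom_like_estimate(counts, 1.0, 23400.0) returns β̂.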

Proposition 1.

Let β ∈ (0,1). If T ≥ e^{1-γ}, then the function Φ in (6) is a smooth, strictly increasing bijection from (0,1) to (1,T). Moreover, if T ≥ e^{1-γ+π/√6}, then Φ is convex on (0,1).

Proof.

The smoothness of the function Φ follows from the smoothness of the Gamma function. Suppose T ≥ e^{1-γ} and observe that

Φ'(b) = Φ(b)(log T - ψ₀(b+1)),  b ∈ (0,1),   (9)

where ψ₀ is the digamma function defined by ψ₀(x) := Γ'(x)/Γ(x) for x > 0. The digamma function and its derivative have the series representations ψ₀(x) = -γ + Σ_{k=0}^∞ (1/(k+1) - 1/(k+x)) and ψ₀'(x) = Σ_{k=0}^∞ 1/(x+k)² (see 6.3.16 and 6.4.10 in [3]). Since ψ₀ is strictly increasing, ψ₀(b+1) < ψ₀(2) = 1-γ ≤ log T for any b ∈ (0,1). Combining this with (9) yields Φ'(b) > 0 for all b ∈ (0,1). Since Φ(0+) = 1 and Φ(1-) = T, it follows that Φ is a strictly increasing bijection from (0,1) to (1,T).

Now, suppose T ≥ e^{1-γ+π/√6}. To prove the convexity of Φ, observe that

Φ''(b) = Φ(b)[(log T - ψ₀(b+1))² - ψ₀'(b+1)].

Thus, the convexity follows upon verifying that (log T - ψ₀(b+1))² ≥ ψ₀'(b+1) for all b ∈ (0,1). Since ψ₀ is increasing and ψ₀' is decreasing,

(log T - ψ₀(b+1))² - ψ₀'(b+1) ≥ (log T - ψ₀(2))² - ψ₀'(1) = (log T - (1-γ))² - π²/6.

The latter is nonnegative since log T ≥ 1-γ+π/√6, which completes the proof. ∎

In the remainder of the section, assume that T ≥ e^{1-γ}. We have so far defined the method-of-moments-like estimator β̂_n only conditionally on the event that δN̄_n ∈ (1, T). To define β̂_n on the entire probability space, we formally set β̂_n := 0 if δN̄_n ≤ 1 and β̂_n := 1 if δN̄_n ≥ T. As a result, β̂_n can be expressed as

β̂_n = Φ^{-1}((δN̄_n ∨ 1) ∧ T),   (10)

with the conventions Φ^{-1}(1) := 0 and Φ^{-1}(T) := 1. This allows β̂_n to take the values 0 and 1 even though the true parameter β cannot. However, in practical situations where T is very large, this will not be a serious issue since, as the following theorem shows, the probability ℙ(β̂_n ∈ {0,1}) can be made as small as we want by taking T large enough.

Theorem 2.

Let β ∈ (0,1) and fix δ > 0 and n ∈ ℕ. Let N̄_n be the sample mean of a random sample of size n from the distribution of N satisfying (3). Then as T → ∞,

ℙ(β̂_n = 0) = ℙ(δN̄_n ≤ 1) = O(T^{-β}).   (11)

Moreover, there exists a constant C > 0 not depending on n, δ or T such that for any T ≥ e^{1-γ},

ℙ(β̂_n = 1) = ℙ(δN̄_n ≥ T) ≤ n e^{-CT}.   (12)
Proof.

If δN̄_n ≤ 1, then δN_i ≤ 1 for at least one i ∈ {1, …, n}, so by (3), (4) and the inverse relation between D and E,

ℙ(δN̄_n ≤ 1) ≤ n ℙ(E_δ(T) ≤ 1) ≤ n ℙ(E(T) ≤ 1+δ) = n ℙ(D(1+δ) ≥ T).

The latter clearly decays to 0 as T → ∞, and the exact rate of decay follows from the asymptotic behavior of the tail probability of D given in Section 2, thereby yielding (11).

On the other hand, by a similar argument together with the fact that D(t) has a density (and hence ℙ(D(u) = T) = 0 for each u > 0),

ℙ(δN̄_n ≥ T) ≤ n ℙ(E(T) ≥ T) = n ℙ(D(T) ≤ T).

By Markov's inequality and relation (1) with t = T, for any fixed s > 0, ℙ(D(T) ≤ T) ≤ e^{sT} 𝔼[e^{-sD(T)}] = e^{-T(s^β - s)}. Taking s ∈ (0,1) so that C := s^β - s > 0 gives (12). ∎

Remark 3.

The construction of our estimator for the parameter β is completely different from those of the existing estimators discussed in Section 1, except for the one proposed in [8]. Namely, we analyze the distribution of N, which represents the “number” of the observed constant periods minus 1, rather than the distribution of the “lengths” of the constant periods. On the other hand, in [8], the author derived an estimator using the logarithmic moment of the stable random variable D(1) together with the equality in distribution of E(t) and (t/D(1))^β. (The latter equality is expressed in slightly different notation in that paper.) The estimator for β is given for each fixed t by β̂ = (Ȳ + γ)/(log t + γ), where Ȳ denotes the sample mean corresponding to the theoretical mean 𝔼[log E(t)] = β log t - (1-β)γ. We can observe a connection of Cahoy's estimator at t = T with our method-of-moments-like estimator. Indeed, Ȳ is calculated from observed values of the random variable log E(T), which can be approximated by log E_δ(T) = log(δN) with δ small; thus, Cahoy's estimator is connected with the random variable N arising in the method-of-moments-like estimation.

The following theorem concerns asymptotic properties of the method-of-moments-like estimator β̂_n for large sample size n (i.e. when many observations of the same subdiffusion process are available, as in the real-life example given in Section 5). In particular, part (c) allows us to determine how large n should be in order to achieve a target level for the asymptotic variance of the estimator, which is important in applications. Recall that the Gamma function is convex on (0,∞) and attains its minimum over that interval at x ≈ 1.4616, so that in particular Γ(b+1) ≤ 1 for b ∈ (0,1).

Theorem 4.

Suppose T ≥ e^{1-γ+π/√6} and δ ∈ (0,1]. Let N_1, N_2, … be an i.i.d. sequence from the distribution of N satisfying (3). For each n, let β̂_n be the estimator defined in (10) with N̄_n = (N_1 + ⋯ + N_n)/n.

  (a) For any fixed T and δ, lim_{n→∞} β̂_n = β_{δ,T} := Φ^{-1}((δ𝔼[N]) ∨ 1) a.s.

  (b) lim_{T→∞} lim_{n→∞} β̂_n = β a.s.

  (c) For any fixed T and δ, the smoothed modification β̂*_n of β̂_n constructed in the proof satisfies √n(β̂*_n - g_ε(δ𝔼[N])) ⇒ Normal(0, σ*²) as n → ∞, with

    σ*² ≤ [2T^{2β}/Γ(2β+1) - (Φ(β) - δ)²]/(log T + γ)².   (13)
Proof.

(a) Suppose first that δ𝔼[N] > 1. Note that δ𝔼[N] < T since δ𝔼[N] ≤ Φ(β) < Φ(1) = T, whereas Φ(β) - δ ≤ δ𝔼[N] due to (5) and Proposition 1. Thus, δ𝔼[N] ∈ (1, T). By the strong law of large numbers, δN̄_n → δ𝔼[N] as n → ∞ a.s., and since Φ^{-1} is continuous on (1, T),

lim_{n→∞} β̂_n = Φ^{-1}(δ𝔼[N]) = β_{δ,T}  a.s.   (14)

If δ𝔼[N] < 1, then eventually δN̄_n < 1, so β̂_n → 0 = β_{δ,T} a.s. If δ𝔼[N] = 1, then for any ε > 0 we eventually have δN̄_n ≤ 1+ε, so limsup_n β̂_n ≤ Φ^{-1}(1+ε) since Φ^{-1} is increasing, and letting ε → 0 gives β̂_n → 0 = β_{δ,T}. In both cases, lim_{n→∞} β̂_n = β_{δ,T} a.s.

(b) By (a), it suffices to show that lim_{T→∞} β_{δ,T} = β. Note that Φ(β) = T^β/Γ(β+1) → ∞ as T → ∞ due to (6). Note also that δ𝔼[N] ≥ Φ(β) - δ by (5).

Take T large enough so that Φ(β) - δ > 1 and log T ≥ 1-γ. Then by (5), it follows that

Φ^{-1}(Φ(β) - δ) ≤ β_{δ,T} ≤ β.

Clearly, Φ is a strictly increasing bijection from (0,1) to (1,T) with inverse Φ^{-1}, and by (9) together with Φ ≥ 1, Φ'(b) ≥ log T - (1-γ) > 0 for all b ∈ (0,1). Thus, by the mean value theorem,

0 ≤ β - β_{δ,T} ≤ δ/(log T - (1-γ)),   (15)

from which the desired convergence follows.

(c) By (3) and (2) with p = 1, 2,

σ²_{δ,T} := Var(δN) ≤ 2T^{2β}/Γ(2β+1) - (Φ(β) - δ)².   (16)

In particular, δN_1, δN_2, … are i.i.d. with finite second moment, so by the central limit theorem, as n → ∞,

√n (δN̄_n - δ𝔼[N]) ⇒ Normal(0, σ²_{δ,T}),

where σ²_{δ,T} is the theoretical variance of δN. We wish to apply the delta method to the function g(x) := Φ^{-1}((x ∨ 1) ∧ T) appearing in (10), but since g'(1) does not exist and any integer-order derivative of g vanishes on (-∞, 1), the delta method fails if δ𝔼[N] = 1, even using higher-order Taylor approximations.

To apply the delta method without any technical issues, we smooth out the function g at x = 1 as follows:

g_ε(x) := Φ^{-1}(x) for x ≥ 1+ε  and  g_ε(x) := h_ε(x) for x < 1+ε,

where h_ε with ε > 0 small is a smooth, strictly increasing, convex function such that h_ε(1+ε) = Φ^{-1}(1+ε) with h_ε'((1+ε)-) = (Φ^{-1})'((1+ε)+), the superscripts - and + denoting the derivatives from the left and right, respectively. Then g_ε is a smooth, strictly increasing function on (-∞, T), and hence, g_ε'(δ𝔼[N]) exists and is positive regardless of the value of δ𝔼[N]. Therefore, for the modified estimator defined by β̂*_n := g_ε(δN̄_n), by the delta method, √n(β̂*_n - g_ε(δ𝔼[N])) ⇒ Normal(0, [g_ε'(δ𝔼[N])]² σ²_{δ,T}) as n → ∞, and in particular, the asymptotic variance of β̂*_n is

[g_ε'(δ𝔼[N])]² σ²_{δ,T}/n.   (17)

Moreover, since Φ is convex on (0,1) when T ≥ e^{1-γ+π/√6} due to Proposition 1 (as the convexity of Φ implies the concavity of Φ^{-1} on (1,T)), it follows that

g_ε'(δ𝔼[N]) ≤ (Φ^{-1})'(1+) = 1/Φ'(0+) = 1/(log T + γ),

where the last equality follows from (9). Combining this with (16) gives

[g_ε'(δ𝔼[N])]² σ²_{δ,T} ≤ [2T^{2β}/Γ(2β+1) - (Φ(β) - δ)²]/(log T + γ)².   (18)

Finally, note that both β̂_n and β̂*_n take values in [0,1] and that β̂_n = g_ε(δN̄_n) 𝟙_{{δN̄_n ≥ 1+ε}} + g(δN̄_n) 𝟙_{{δN̄_n < 1+ε}}, where 𝟙_A is the indicator function of a set A. This implies that β̂_n and β̂*_n differ only on the event {δN̄_n < 1+ε} for each n. Putting the latter together with (17) and (18) gives (13). ∎

Remark 5.

(a) (Asymptotic unbiasedness and consistency) Theorem 4(b) shows that β̂_n, when regarded as an estimator indexed by both n and T, is asymptotically unbiased and consistent as the indices go to infinity.

(b) By (14) and (15), for large n and for each path, the estimation error with T sufficiently large satisfies

|β̂_n - β| ≲ δ/(log T - (1-γ)).   (19)
Remark 6.

(Robustness) Our estimation method can be applied to a more general subordinator D whose Laplace exponent ψ in (1) has the asymptotic behavior

ψ(s) ∼ s^β  as  s → 0+.   (20)

(The special case when ψ(s) = s^β for all s > 0 recovers a β-stable subordinator; as a one-line example, a β-stable subordinator with an added drift b ≥ 0, for which ψ(s) = bs + s^β, also satisfies (20).) Indeed, in that case, the Laplace transform of the function t ↦ 𝔼[E(t)] satisfies ∫_0^∞ e^{-st} 𝔼[E(t)] dt = 1/(sψ(s)) ∼ s^{-(β+1)} as s → 0+ (see e.g. Proposition 3.1 in [14]), and hence, by the Tauberian theorem (see e.g. Section 1.7 of [6]), 𝔼[E(t)] ∼ t^β/Γ(β+1) as t → ∞. In other words, equality (2) with p = 1 approximately holds for t large enough. Hence, when T is large, it is reasonable to use the estimator defined in (10) to estimate the value of β.

Note that the asymptotic condition (20) means the subordinator has power-law probability tails. Indeed, by the proof of Lemma 3.4 in [14], for each fixed t, 1 - 𝔼[e^{-sD(t)}] ∼ tψ(s) as s → 0+, which is asymptotically equivalent to ts^β as s → 0+ if and only if condition (20) holds, but by the Tauberian theorem, the latter is equivalent to the statement that ℙ(D(t) > x) ∼ t x^{-β}/Γ(1-β) as x → ∞.

4 Numerical Comparison to Existing Methods

In this section, we use simulations in Matlab to compare the performance of the method-of-moments-like estimator defined in (10) (MOM-like estimator for short) to Cahoy’s estimator in [8], Hill’s estimator in [10], and the Meerschaert–Scheffler estimator in [21] (MS estimator for short). We chose the latter three estimators for comparison purposes since i) they are simple to apply, ii) Cahoy’s estimator is also constructed via the method of moments, iii) Hill’s estimator has been widely employed in practice, and iv) both Cahoy’s and MS estimators rely on all the data like the MOM-like estimator. We calculate Hill’s estimator based on the largest 10% of the data as that is common in practice.
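For concreteness, here is the standard textbook form of Hill's estimator applied, as described above, to the largest 10% of the pooled constant-period lengths; this is our own Python sketch, not the Matlab code or the exact formulas of [12]:

```python
import numpy as np

def hill_estimate(lengths, tail_fraction=0.10):
    """Hill's estimator of the tail index, computed from the largest
    tail_fraction of the sample of constant-period lengths."""
    x = np.sort(np.asarray(lengths, dtype=float))[::-1]  # descending order
    k = max(int(tail_fraction * len(x)), 1)              # top-k order statistics
    return k / np.sum(np.log(x[:k] / x[k]))
```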

Note that this section focuses on data that are realizations of the discretized inverse β-stable subordinator E_δ, not on data coming from a time-changed process of the form X ∘ E. This is because, under some assumptions on the outer process X, the procedure for estimating β from data following X ∘ E is indeed the same as the procedure for estimating β from data following E_δ. We postpone a detailed discussion of this matter to Section 5, where we treat real data observable by means of a time-changed process.

We first generate n paths of the discretized time change E_δ on a fixed time interval [0, T], calculate the sample mean N̄_n for the observed paths, and obtain the MOM-like estimate via equation (10), where N_i is determined as the number of constant periods minus 1 for the ith path. On the other hand, as mentioned in Remark 3, Cahoy's estimator requires realizations of the time change E(T); however, only the paths of the discretized time change E_δ are available here. Therefore, instead of E(T) appearing in Remark 3, we use E_δ(T) = δN to obtain an approximate version of Cahoy's estimate. Using the same data set but after aggregating all the constant periods observed in the n paths, we calculate Hill's and MS estimates based on the “lengths” (rather than the “number”) of the constant periods via the formulas provided in [12].
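Putting the earlier sketches together, the MOM-like part of this experiment can be expressed as follows (again a Python illustration of the procedure rather than the authors' Matlab code, reusing the hypothetical helpers simulate_grid_path and mom_like_estimate):

```python
import numpy as np

def mom_like_experiment(beta, delta, T, n_paths, rng=None):
    """Simulate n_paths discretized time changes E_delta on [0, T] and
    return the MOM-like estimate computed from the counts N_1, ..., N_n."""
    rng = np.random.default_rng(rng)
    counts = [simulate_grid_path(beta, delta, T, rng)[0] for _ in range(n_paths)]
    return mom_like_estimate(counts, delta, T)
```

For example, mom_like_experiment(0.5, 1.0, 23400.0, 44) mimics one cell of the setting used for Figure 4.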

Note that our simulation relies on the choice of three hyper-parameters: the step size δ, the final time T, and the number n of observed paths. So we simulate using various combinations of the hyper-parameters over different values of β. The results in three particular cases with δ fixed to be 1 are summarized in the tables in Figure 3, which indicates that the MOM-like estimator i) improves its performance as T and n increase regardless of the value of β, ii) is more accurate than Hill's and MS estimators for large values of β, and iii) performs better than the approximate version of Cahoy's estimator for small values of β.

Case 1 (δ = 1):
β     MOM     Cahoy   Hill    MS
0.1 0.1176 0.1470 0.0965 0.1034
0.2 0.2070 0.2233 0.1921 0.2059
0.3 0.3011 0.3105 0.2994 0.3265
0.4 0.4001 0.4021 0.3926 0.4262
0.5 0.5001 0.4999 0.4977 0.5369
0.6 0.5997 0.5998 0.6100 0.7130
0.7 0.6995 0.7012 0.7348 0.7215
0.8 0.7999 0.8002 0.9000 0.8963
0.9 0.8999 0.9001 1.1988 0.9586
Case 2 (δ = 1):
β     MOM     Cahoy   Hill    MS
0.1 0.1263 0.1574 0.1071 0.1306
0.2 0.2103 0.2313 0.1803 0.2215
0.3 0.3031 0.3137 0.2839 0.3142
0.4 0.4020 0.4074 0.3940 0.3806
0.5 0.5051 0.5118 0.4915 0.5585
0.6 0.6037 0.5987 0.6136 0.6236
0.7 0.6987 0.7024 0.7418 0.8323
0.8 0.8004 0.8034 0.8950 0.8163
0.9 0.9060 0.9056 1.2084 0.9797
Case 3 (δ = 1):
β     MOM     Cahoy   Hill    MS
0.1 0.1346 0.1791 0.0852 0.0839
0.2 0.2141 0.2409 0.1989 0.1829
0.3 0.3105 0.3338 0.2956 0.3542
0.4 0.3894 0.3898 0.4104 0.4591
0.5 0.5186 0.5140 0.5137 0.6280
0.6 0.5749 0.5759 0.5714 0.7811
0.7 0.6932 0.6880 0.7094 0.7411
0.8 0.7921 0.7965 0.8732 0.8927
0.9 0.8911 0.8940 1.1757 0.9452
Figure 3: Comparison of MOM-like estimates with Cahoy's, Hill's and MS estimates for three combinations of (T, n) with δ = 1.
      MOM             Cahoy           Hill            MS
β     mean  variance  mean  variance  mean  variance  mean  variance
0.1 0.1160 0.0001 0.1431 0.0001 0.1071 0.0005 0.1133 0.0004
0.2 0.2045 0.0002 0.2189 0.0003 0.1991 0.0011 0.2128 0.0012
0.3 0.3016 0.0002 0.3091 0.0002 0.2935 0.0008 0.3097 0.0018
0.4 0.4011 0.0001 0.4059 0.0002 0.3968 0.0004 0.4126 0.0031
0.5 0.5003 0.0001 0.5035 0.0002 0.4988 0.0003 0.5433 0.0035
0.6 0.5990 0.0001 0.6018 0.0001 0.6095 0.0003 0.6322 0.0038
0.7 0.6998 0.0001 0.7006 0.0002 0.7376 0.0001 0.7656 0.0056
0.8 0.8002 0.0000 0.8013 0.0001 0.8992 0.0001 0.8919 0.0066
0.9 0.8992 0.0000 0.8987 0.0001 1.1981 0.0001 1.0321 0.0096
Figure 4: Comparison of the sample means and sample variances of MOM-like estimates with those of Cahoy's, Hill's and MS estimates based on 100 repetitions of estimation, where δ = 1, T = 23400 and n = 44.

Next, with δ = 1 still fixed, we take T = 23400 and n = 44, which provide the setting for the real data to be treated in Section 5, and repeat the above estimation procedure 100 times to produce 100 estimates based on each of the four different methods. We then calculate the sample mean and sample variance for each method. The results summarized in Figure 4 show that the MOM-like estimator gives a reasonably accurate mean with a very low variance for the entire range of β values.

We now turn our attention to the effect of the value of δ on the MOM-like estimation. It is natural to expect that the accuracy of the MOM-like estimation increases as δ decreases, which is also indicated by the upper bound for the pathwise estimation error in (19). Here, we again record the sample means of estimates based on 100 repetitions of the MOM-like estimation with T = 23400 and n = 44 fixed, but this time with different values of δ. Figure 5 gives the simulation results with δ chosen from four values, where the corresponding sample variances are all within 0.00025. It shows that when the true value β is small, the MOM-like estimator performs better with a smaller value of δ; however, for large β, choosing a smaller value of δ from the particular four values contributes little to improving the estimation results. Unfortunately, we do not have a straightforward criterion for choosing the value of δ; it depends on the accuracy level that one wants to achieve, and generally speaking, a small value is recommended when the data exhibits long constant periods (which implies the true value of β is small). On the other hand, we suggest taking δ ≤ 1 for any possible β so that the quantity Φ(β) - δ appearing in the fundamental estimates in (5) is guaranteed nonnegative and hence provides a meaningful lower bound for 𝔼[δN]. For the latter purpose, taking δ ≤ 1 indeed suffices since we always assume T ≥ e^{1-γ} > 1, which gives Φ(β) = T^β/Γ(β+1) > 1. In Section 5, where we deal with real data with T = 23400 and n = 44, we take δ = 1 since Figures 4 and 5 guarantee satisfactory performance of the MOM-like estimator for these values of T and n with δ = 1.

MOM, T = 23400, n = 44; each estimate column corresponds to one of the four values of δ
0.1 0.1006 0.1115 0.1218 0.1285
0.2 0.1988 0.2042 0.2064 0.2108
0.3 0.3008 0.2995 0.3035 0.3029
0.4 0.3997 0.3972 0.4001 0.4027
0.5 0.4990 0.5008 0.5011 0.4992
0.6 0.5997 0.6001 0.5992 0.6004
0.7 0.7005 0.6998 0.7003 0.6977
0.8 0.7987 0.7993 0.8001 0.7996
0.9 0.8996 0.9004 0.9002 0.8988
Figure 5: Comparison of the sample means of MOM-like estimates based on 100 repetitions of estimation with different values of δ.
Remark 7.

(a) The above simulations may seem to suggest that, for realizations of E_δ, the MOM-like estimator outperforms the approximate version of Cahoy's estimator when the true β is small. However, Cahoy's estimator has the following advantages: i) it does not involve a root-finding algorithm, and ii) it is valid for any fixed T and comes with explicit formulas for confidence intervals (whereas the MOM-like estimator requires equation (8) to be solved with a certain algorithm and can only provide approximate confidence intervals for large T via the normal approximation depending on the small ε in the proof of Theorem 4(c)). On the other hand, when a decent computing environment is available and T is very large, as is often the case with real data, the MOM-like estimator may become a suitable option as it allows one to estimate the entire range of β values with reasonable accuracy and precision.

(b) In [27], the authors introduced a modified version of the cumulative distribution function (CDF) for the lengths of constant periods. The modification accounts for the fact that the beginning and ending of a given constant period in real data may have actually occurred at time points when the data was not recorded. The latter is an important issue when parameter estimation is carried out based on the “lengths” of constant periods, while it becomes less of an issue with the MOM-like or Cahoy's estimation since the “number” of constant periods is not affected by the exact timing of the beginning and ending of each constant period. Being able to avoid a discussion of this subtle issue, as well as the simple formula for finding the estimate, is an advantage of using the MOM-like or Cahoy's estimation. In the above discussion, we did not compare the modified CDF method to the other four estimation methods since (i) it requires much more computing power and (ii) it considers the setting for real data observed at pre-specified discrete time points (while we used simulated paths of E_δ in which the beginning and ending of each constant period may occur at any time point in [0, T]).

5 Real-Life Application: Low-Volume Stock Modeling

This section illustrates how to apply the MOM-like estimator to real data collected from a stock market. Traditionally, low-volume stocks have been difficult to model since they often feature periods in which no trades are made. As a result, the price often has long constant periods throughout the trading day, and therefore, it is a good candidate to model using a time-changed process X ∘ E. However, since observed data is always discrete, we regard it as a realization of the discretized process X ∘ E_δ. In terms of the discretized time change E_δ, we assume that the underlying subordinator D has power-law probability tails with index β ∈ (0,1), even though the use of an exponentially tempered power law might be more appropriate, as pointed out e.g. in [20, 26]. (Note that our purpose here is simply to illustrate how to apply our estimation method to given data exhibiting constant periods.) On the other hand, X is assumed to be a geometric Brownian motion with representation X(t) = X(0) e^{μt + σB(t)}, where B is a Brownian motion independent of the subordinator D and the constants μ and σ > 0 are additional parameters to be estimated. We focus on the logarithmic stock price log X(E_δ(t)) = log X(0) + μE_δ(t) + σB(E_δ(t)), and note that the independence assumption implies that B is independent of E_δ.
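Under the stated assumptions (and reusing the hypothetical simulate_grid_path helper from Section 3), one path of the discretized log price can be generated as in the following sketch; the drift convention log X(t) = log X(0) + μt + σB(t) is the representation assumed above:

```python
import numpy as np

def simulate_log_price(beta, mu, sigma, delta, T, x0=1.0, rng=None):
    """One path of t -> log X(E_delta(t)) observed at t = 0, 1, ..., T,
    where X is a geometric Brownian motion independent of the time change."""
    rng = np.random.default_rng(rng)
    n_stop, d_path = simulate_grid_path(beta, delta, T, rng)
    # log X on the operational-time grid 0, delta, ..., N*delta:
    steps = mu * delta + sigma * np.sqrt(delta) * rng.standard_normal(n_stop)
    log_x = np.log(x0) + np.concatenate(([0.0], np.cumsum(steps)))
    # E_delta(t) = (i-1)*delta whenever D((i-1)*delta) <= t < D(i*delta):
    t_grid = np.arange(0.0, T + 1.0)
    idx = np.searchsorted(d_path, t_grid, side="right") - 1
    return log_x[np.minimum(idx, n_stop)]
```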

Figure 6: Fluctuation of the logarithmic value of the WSTL price on November 8, 2019.
Figure 7: A simulated path of the logarithmic price log X(E_δ(t)) on [0, 23400] with δ = 1, based on the obtained estimates of β, μ and σ.

Here, we use second-by-second trading data of the low-volume NASDAQ stock with ticker WSTL for the nine weeks starting from September 30, 2019 (consisting of 44 trading days), obtained from Bloomberg Terminal. Figure 6 provides the observed path of the logarithmic price on one of those days, which clearly exhibits some constant periods. We assume that each of the 44 observed paths is a realization of the discretized process X ∘ E_δ with δ = 1 on the time interval [0, 23400], with 23400 being the number of seconds in a trading day. Recall from Figures 4 and 5 that the MOM-like estimator performs well for these hyper-parameters if the data is observed by means of E_δ.

By construction of E_δ, each path of X ∘ E_δ takes the values X(0), X(δ), …, X(Nδ) in succession and has constant periods. We claim that the number of constant periods observed in a path of X ∘ E_δ equals N+1 a.s. To see this, recall that X is independent of E_δ and note that, given the information of the path of E_δ, the number of observed constant periods is smaller than N+1 if and only if X((i-1)δ) = X(iδ) for at least one i ∈ {1, …, N}, in which case two successive constant periods merge into one. (For example, if X(0) = X(δ) and the remaining values X(iδ) are distinct from one another, then the observed number is N.) Moreover, X((i-1)δ) = X(iδ) if and only if μδ + σ(B(iδ) - B((i-1)δ)) = 0, and the random vector (B(δ), …, B(Nδ)) given N is non-degenerate multivariate Gaussian and hence has a continuous distribution, so

ℙ(X((i-1)δ) = X(iδ) for some i ∈ {1, …, N}) = 0,

which yields the claim a.s. In other words, a sample path of X ∘ E_δ is a step function with constant periods that are completely ascribed to the constant periods of the corresponding path of E_δ.

With this observation in mind, we estimate the three parameters β, μ and σ as follows. If an observed path of X ∘ E_δ constantly changes in value (i.e. the values at any two successive time points are distinct), then we consider the number of constant periods to be the total number of observed time points (i.e. N takes its maximal value T/δ = 23400). In the other extreme case, when an observed path stays constant over the entire time period, we set the number of constant periods to be 1 (i.e. N = 0). Due to our observation in the previous paragraph, the obtained number minus 1 for each of the 44 days is considered a realization of the random variable N, and consequently, we obtain a total of 44 realizations of N. The extremely large value T = 23400 makes the observed value of δN̄_{44} fall in the interval (1, T), and equation (8) then gives the MOM-like estimate for β. (For comparison, Cahoy's, Hill's and MS estimates are 0.2950, 0.7501 and 0.6605, respectively. The discrepancy between the estimated values indicates that the power-law distribution may not be an appropriate model here.)
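The counting step itself is elementary; the sketch below (our own illustration, with the strict-equality convention for “constant” as an assumption about how the tick data is organized) produces the 44 realizations of N and feeds them into the hypothetical mom_like_estimate helper from Section 3:

```python
import numpy as np

def count_constant_periods(prices):
    """Number of constant periods in a discretely observed path: one plus
    the number of indices at which the value changes. A path that never
    changes has 1 period; one that always changes has len(prices) periods."""
    prices = np.asarray(prices)
    return 1 + int(np.count_nonzero(prices[1:] != prices[:-1]))

# daily_paths: list of 44 arrays of second-by-second prices (one per day)
# counts = [count_constant_periods(p) - 1 for p in daily_paths]
# beta_hat = mom_like_estimate(counts, delta=1.0, T=23400.0)
```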

Next, following the idea presented in e.g. [26], we remove all the constant periods from each of the 44 observed paths and record all the jump sizes over the 44 days. Since each jump size is given in the form log X(iδ) - log X((i-1)δ) = μδ + σ(B(iδ) - B((i-1)δ)) with B(iδ) - B((i-1)δ) ∼ Normal(0, δ), we regard the recorded jump sizes as a random sample drawn from Normal(μδ, σ²δ), where δ = 1. The obtained sample gives the sample mean and sample standard deviation, which serve as the estimates of μ and σ, respectively. Figure 7 provides a simulated path of the time-changed process generated with the three estimated parameters.