# Estimation of Mittag-Leffler Parameters

We propose a procedure for estimating the parameters of the Mittag-Leffler (ML) and the generalized Mittag-Leffler (GML) distributions. The algorithm is less restrictive than existing methods, computationally simple, and makes these models usable in practice. A comparison with the fractional moment estimator indicated favorable results for the proposed method.


## 1 Introduction

For the last two decades, the Mittag-Leffler function has gained popularity in many scientific areas. For instance, the Mittag-Leffler (ML) distribution originally introduced by [Pillai(1990)] has now been used to model random phenomena in finance and economics [Meerschaert and Scalas(2006), Scalas(2006)]. In addition, [Kozubowski(2001)] studied the Mittag-Leffler ML$(\alpha,\delta)$ distribution, where the integral and series representations of the probability density function are

$$\text{i.)}\;\; f_T(t)=\frac{1}{t}\int_0^{\infty} e^{-\xi}\,g\!\left[t/(\delta^{\alpha}\xi)\right]d\xi \qquad\text{and}\qquad \text{ii.)}\;\; f_T(t)=\frac{t^{\alpha-1}}{\delta^{\alpha}}\,E_{\alpha,\alpha}\!\left[-(t/\delta)^{\alpha}\right],$$

correspondingly, $\delta>0$ is the scale parameter, and

$$E_{\alpha,\nu}(\tau)=\sum_{k=0}^{\infty}\frac{\tau^{k}}{\Gamma(\nu+k\alpha)}$$

is the generalized Mittag-Leffler function. The ML density function has the Laplace transform

$$\tilde{f}_{\alpha,\delta}(\lambda)=\int_0^{\infty}e^{-\lambda t}f_T(t)\,dt=\left[1+(\delta\lambda)^{\alpha}\right]^{-1},$$

and is shown to be geometric stable. It can also easily be seen as a generalization of the density of an exponential distribution with scale parameter $\delta$ (the case $\alpha=1$). Furthermore, [Kozubowski(2001)] constructed the following structural representation of an ML$(\alpha,\delta)$ distributed random variable $T$ as

$$T \stackrel{d}{=} \delta Z R^{1/\alpha}, \tag{1.1}$$

where $Z$ has the standard exponential distribution and $R$ has the density function given in [Kozubowski(2001)]. It is also shown that the fractional moment of the random variable $T$ is

$$E\,T^{q}=\frac{q\pi\,\delta^{q}}{\alpha\,\Gamma(1-q)\sin(\pi q/\alpha)},\qquad 0<q<\alpha. \tag{1.2}$$
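For numerical work, the generalized Mittag-Leffler series above can be evaluated directly by truncation. The sketch below (the truncation length `terms` is our choice, not part of the source) evaluates $E_{\alpha,\nu}$ and the ML density, and exploits the exponential special case $\alpha=1$ as a sanity check:

```python
import math

def mittag_leffler(tau, alpha, nu, terms=100):
    """Truncated series for the generalized Mittag-Leffler function E_{alpha,nu}(tau)."""
    return sum(tau**k / math.gamma(nu + k * alpha) for k in range(terms))

def ml_density(t, alpha, delta):
    """ML(alpha, delta) density: f_T(t) = t^{alpha-1} delta^{-alpha} E_{alpha,alpha}[-(t/delta)^alpha]."""
    return t**(alpha - 1) / delta**alpha * mittag_leffler(-(t / delta)**alpha, alpha, alpha)
```

The series converges everywhere, but for $\alpha<1$ and large arguments many terms are needed; a dedicated Mittag-Leffler routine would be preferable in that regime.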

The parameter estimation problem for this model was first addressed by [Kozubowski(2001)], who proposed fractional moment estimators for the ML distribution which require appropriate constants to be chosen prior to the calculation of the estimates. Note that the pre-selection of appropriate values requires information about the true but unknown parameter $\alpha$ a priori, which is not feasible in practice. As a direct consequence, the above estimators can be expected to perform poorly when the restrictions are violated. It is mainly these drawbacks that motivate us to propose an estimation procedure that avoids this difficulty and uses all the available data.

More recently, [Jose et. al(2010)] used the generalized Mittag-Leffler (GML) distribution in astrophysics and time series modeling. They specifically constructed GML processes, which are autoregressive time series models with the GML as the stationary marginal distribution. Moreover, the cumulative distribution function of the generalized Mittag-Leffler distribution GML$(\alpha,\beta)$ is given by

$$F_{\alpha,\beta}(x)=P[X\leq x]=\sum_{k=0}^{\infty}\frac{(-1)^{k}\,\Gamma(k+\beta)\,x^{\alpha(k+\beta)}}{\Gamma(\beta)\,k!\,\Gamma(1+\alpha(k+\beta))},\qquad x>0.$$
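As a quick numerical illustration (a sketch, with the truncation length chosen by us), the series can be summed directly for moderate $x$; at $\alpha=\beta=1$ it must reduce to the exponential CDF $1-e^{-x}$:

```python
import math

def gml_cdf(x, alpha, beta, terms=80):
    """Truncated series for the GML(alpha, beta) cumulative distribution function.
    The series is alternating; direct summation is reliable only for moderate x."""
    s = 0.0
    for k in range(terms):
        s += ((-1)**k * math.gamma(k + beta) * x**(alpha * (k + beta))
              / (math.gamma(beta) * math.factorial(k) * math.gamma(1 + alpha * (k + beta))))
    return s
```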

When $\alpha=1$, we get the gamma distribution, while $\beta=1$ yields [Pillai(1990)]'s ML distribution. If $\alpha=\beta=1$, we get the exponential density. The GML probability density function has the specific form

$$f_{\alpha,\beta}(x)=\int_0^{\infty}\frac{\alpha\,x^{\alpha\beta-1}e^{-(x/s)^{\alpha}}}{\Gamma(\beta)\,s^{\alpha\beta}}\,dF_{S_\alpha}(s),$$

where $F_{S_\alpha}$ is the cumulative distribution function of a strictly positive stable random variable $S_\alpha$ with $e^{-\lambda^{\alpha}}$ as the Laplace transform of the corresponding probability density function. The Laplace transform of the GML probability density function above is $(1+\lambda^{\alpha})^{-\beta}$. This distribution can be considered as the positive counterpart of [Pakes(1998)]'s generalized Linnik distribution, whose probability density function has the characteristic function $(1+|\lambda|^{\alpha})^{-\beta}$. Furthermore, the mixture representation of a GML distributed random variable is

$$X \stackrel{d}{=} W^{1/\alpha} S_\alpha, \tag{1.3}$$

where $W$ is gamma distributed with scale parameter 1 and shape parameter $\beta$, and its probability density function is given by

$$f_W(w)=\frac{1}{\Gamma(\beta)}\,w^{\beta-1}e^{-w}.$$
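The representation (1.3) gives a direct way to simulate GML variates, provided one can draw the one-sided stable factor $S_\alpha$. The sketch below uses Kanter's classical generator for positive stable laws; this simulation method is our choice and not part of the source:

```python
import numpy as np

def positive_stable(alpha, size, rng):
    """Kanter's generator for a positive alpha-stable variate with
    Laplace transform exp(-lambda^alpha), 0 < alpha < 1."""
    u = rng.uniform(0.0, 1.0, size)
    w = rng.exponential(1.0, size)
    num = np.sin(alpha * np.pi * u) * np.sin((1.0 - alpha) * np.pi * u) ** ((1.0 - alpha) / alpha)
    den = np.sin(np.pi * u) ** (1.0 / alpha)
    return (num / den) * w ** (-(1.0 - alpha) / alpha)

def gml_sample(alpha, beta, size, rng):
    """Simulate GML(alpha, beta) via the mixture representation X = W^(1/alpha) * S_alpha."""
    w = rng.gamma(beta, 1.0, size)  # W ~ Gamma(shape beta, scale 1)
    return w ** (1.0 / alpha) * positive_stable(alpha, size, rng)
```

For $\beta=1$ this reduces to ML$(\alpha,1)$, so by (2.2) the sample mean of $\log X$ should be close to $-\gamma\approx-0.5772$.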

The fractional moments of the GML distributed random variable for $\beta=1$ are derived by [Pillai(1990)] as

$$E\,X^{q}=\frac{\Gamma(1-q/\alpha)\,\Gamma(1+q/\alpha)}{\Gamma(1-q)},$$

while [Lin(1998a), Lin(1998b)] obtained the expression

$$E\,X^{q}=\frac{\Gamma(1-q/\alpha)\,\Gamma(\beta+q/\alpha)}{\Gamma(1-q)\,\Gamma(\beta)},\qquad -\alpha\beta<q<\alpha,$$

for $0<\alpha\leq 1$ and $\beta>0$. Note that the moments are infinite for order $q\geq\alpha$.

The main goal of this paper is to propose a procedure to estimate the model parameters in the Mittag-Leffler (ML) and the generalized Mittag-Leffler (GML) distributions that uses all available information. This is necessary in order for these models to be usable in practice. The rest of the paper is organized as follows: In Sections 2 and 3, we derive the first few moments of the log-transformed Mittag-Leffler distributed random variables. In Section 4, we propose procedures to estimate the parameters of the ML and the GML distributions. In Section 5, some key points and extensions of our methodology are discussed. In Section 6, some computational test results are shown and interval estimators for the ML parameters are derived using the asymptotic normality of the point estimators.

## 2 Moments of the log-transformed ML random variable T

We now derive the first four log-moments of the random variable $T'=\log(T)$. Applying the log-transformation to the mixture representation (1.1), we obtain

$$T' \stackrel{d}{=} \log(\delta)+Z'+\frac{1}{\alpha}R', \tag{2.1}$$

where $Z'=\log(Z)$ and $R'=\log(R)$. Following [Cahoy et. al(2010)], it is straightforward to show the following first four non-central moments of the random variables $Z'$ and $R'$:

$$E(Z')=-\gamma,\qquad E(Z')^{2}=\gamma^{2}+\frac{\pi^{2}}{6},$$
$$E(Z')^{3}=-\gamma^{3}-\frac{\gamma\pi^{2}}{2}-2\zeta(3),\qquad E(Z')^{4}=\gamma^{2}(\gamma^{2}+\pi^{2})+\frac{3\pi^{4}}{20}+8\gamma\zeta(3),$$
$$E(R')=0,\qquad E(R')^{2}=\frac{\pi^{2}}{3}\left(1-\alpha^{2}\right),$$
$$E(R')^{3}=0,\qquad\text{and}\qquad E(R')^{4}=\frac{\pi^{4}}{15}\left(7-10\alpha^{2}+3\alpha^{4}\right),$$

where $\gamma\approx 0.5772$ is Euler's constant and $\zeta(3)$ is the Riemann zeta function evaluated at 3.

Taking the expectation of (2.1) and using the above moments, we get the mean and variance

$$\mu_{T'}=\log(\delta)-\gamma,\qquad\text{and}\qquad \sigma^{2}_{T'}=\frac{\pi^{2}}{6}\left(\frac{2}{\alpha^{2}}-1\right), \tag{2.2}$$

respectively. Observe that the mean does not involve the parameter $\alpha$, which is surprising and is due to the expected value of $R'$ being zero. Moreover, a similar calculation gives the third and fourth central moments as

$$\mu'_{3}=E(T'-\mu_{T'})^{3}=-2\zeta(3),\qquad\text{and}\qquad \mu'_{4}=E(T'-\mu_{T'})^{4}=\frac{\pi^{4}\left(\alpha^{4}-20\alpha^{2}+28\right)}{60\alpha^{4}},$$

respectively, which will be used in the derivation of the interval estimates in the appendix.

## 3 Moments of the log-transformed GML random variable X

Taking the logarithm of the mixture representation of the GML distributed random variable in (1.3) yields

$$X' \stackrel{d}{=} \frac{1}{\alpha}W'+S'_{\alpha},$$

where $X'=\log(X)$, $W'=\log(W)$, and $S'_{\alpha}=\log(S_{\alpha})$ is the logarithm of a one-sided $\alpha$-stable distributed random variable whose probability density function has Laplace transform $e^{-\lambda^{\alpha}}$. From [Zolotarev(1986)], the first four log-moments of $S_{\alpha}$ can be deduced as

$$E(S'_{\alpha})=\gamma\left(\frac{1}{\alpha}-1\right),\qquad E(S'_{\alpha})^{2}=\gamma^{2}\left(\frac{1}{\alpha}-1\right)^{2}+\frac{\pi^{2}}{6}\left(\frac{1}{\alpha^{2}}-1\right),$$
$$E(S'_{\alpha})^{3}=\frac{-2(\alpha-1)^{3}\gamma^{3}+\gamma\pi^{2}(\alpha-1)^{2}(1+\alpha)-4(\alpha^{3}-1)\zeta(3)}{2\alpha^{3}},$$

together with a corresponding, lengthier expression for $E(S'_{\alpha})^{4}$ that we do not reproduce in full here.

But to calculate the first four moments of $X'$ we also need to know the moments of $W'$. These can be derived as follows: the characteristic function of $W'$ can easily be shown to be

$$\phi_{W'}(t)=E\,e^{itW'}=E\,W^{it}=\Gamma(\beta+it)/\Gamma(\beta),$$

where $i=\sqrt{-1}$. Using the logarithmic expansion of the gamma function ([Abramowitz and Stegun(1965)]), we get the cumulant-generating function

$$\log(\phi_{W'}(t))=\sum_{k=1}^{\infty}\frac{(it)^{k}}{k!}c_{k},$$

where the $k$th cumulant is given by

$$c_{k}=\psi^{(k-1)}(\beta),\qquad\text{where}\quad \psi^{(0)}(\beta)=\psi(\beta).$$

Please note that the mean and variance of $W'$ are given by

$$\mu_{W'}=c_{1}=\psi(\beta),\qquad\text{and}\qquad \sigma^{2}_{W'}=c_{2}=\psi^{(1)}(\beta),$$

which are commonly known as the digamma and trigamma functions, respectively. For $k\geq 3$, the $k$th cumulant is the polygamma function of order $k-1$ evaluated at $\beta$. The $k$th integer-order moments $\mu'_{k}=E(W')^{k}$ can be calculated using the recursive relation

$$\psi^{(k-1)}(\beta)=\mu'_{k}-\sum_{j=1}^{k-1}\binom{k-1}{j-1}c_{j}\,\mu'_{k-j}.$$

This implies that $\mu'_{1}=c_{1}$, $\mu'_{2}=c_{2}+c_{1}^{2}$, and so forth. We can now derive estimating equations using the first two moments of $X'$. More specifically, it can easily be shown that the mean and variance of $X'$ are

$$\mu_{X'}=\gamma\left(\frac{1}{\alpha}-1\right)+\frac{1}{\alpha}\psi(\beta), \tag{3.1}$$

and

$$\sigma^{2}_{X'}=\frac{\pi^{2}}{6}\left(\frac{1}{\alpha^{2}}-1\right)+\frac{1}{\alpha^{2}}\psi^{(1)}(\beta), \tag{3.2}$$

correspondingly. However, a more direct procedure is to consider the characteristic function of the log-transformed random variable $X'$. This simply suggests that

$$\phi_{X'}(t)=E\,e^{itX'}=E\,e^{it\left(\frac{1}{\alpha}W'+S'_{\alpha}\right)}=E\,W^{it/\alpha}\,E\,(S_{\alpha})^{it}=\frac{\Gamma(\beta+it/\alpha)}{\Gamma(\beta)}\cdot\frac{\Gamma(it/\alpha)}{\alpha\,\Gamma(it)}.$$

Taking the logarithmic expansion of the preceding equation yields the following cumulant-generating function of $X'$:

$$\log(\phi_{X'}(t))=\sum_{k=1}^{4}\frac{(it)^{k}}{k!}d_{k}+\sum_{l=5}^{\infty}\frac{(it)^{l}}{l!}d_{l},$$

where the $k$th cumulant is given by

$$d_{k}=\frac{1}{\alpha^{k}}\left[\psi^{(k-1)}(\beta)+(-1)^{k}\psi^{(k-1)}(1)\left(1-\alpha^{k}\right)\right],\quad k=1,\dots,4,\qquad\text{and}\quad \psi^{(0)}(\tau)=\psi(\tau).$$

It is easy to check that

$$\mu_{X'}=d_{1}=\frac{1}{\alpha}\left[\psi(\beta)-\psi(1)(1-\alpha)\right],\qquad\text{and}\qquad \sigma^{2}_{X'}=d_{2}=\frac{1}{\alpha^{2}}\left[\psi^{(1)}(\beta)+\psi^{(1)}(1)\left(1-\alpha^{2}\right)\right],$$

where $\psi(1)=-\gamma$ and $\psi^{(1)}(1)=\pi^{2}/6$. Also, using the recursive relation between the cumulants and the $k$th moments above, we can easily derive an expression for the third moment of the random variable $X'$ as

$$\begin{aligned} E(X')^{3} &= 3\left[\left(\tfrac{1}{\alpha}-1\right)^{2}\gamma^{2}+\frac{\pi^{2}}{6}\left(\tfrac{1}{\alpha^{2}}-1\right)\right]\frac{\psi(\beta)}{\alpha} + 3\left(\tfrac{1}{\alpha}-1\right)\gamma\,\frac{\psi(\beta)^{2}+\psi^{(1)}(\beta)}{\alpha^{2}} \\ &\quad + \frac{\psi(\beta)^{3}+3\psi(\beta)\psi^{(1)}(\beta)+\psi^{(2)}(\beta)}{\alpha^{3}} + \frac{-2(\alpha-1)^{3}\gamma^{3}+\pi^{2}(\alpha-1)^{2}(1+\alpha)\gamma-4(\alpha^{3}-1)\zeta(3)}{2\alpha^{3}}. \end{aligned}$$

The expression for the fourth moment easily follows but we omit it here.
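The cumulant-to-moment recursion used above is easy to mechanize. The sketch below converts a list of cumulants $c_1,\dots,c_K$ into raw moments $\mu'_1,\dots,\mu'_K$:

```python
from math import comb

def raw_moments(cumulants):
    """Convert cumulants c_1..c_K into raw moments mu'_1..mu'_K via
    mu'_k = c_k + sum_{j=1}^{k-1} C(k-1, j-1) * c_j * mu'_{k-j}."""
    mu = []
    for k in range(1, len(cumulants) + 1):
        m = cumulants[k - 1]
        for j in range(1, k):
            m += comb(k - 1, j - 1) * cumulants[j - 1] * mu[k - j - 1]
        mu.append(m)
    return mu
```

For instance, cumulants $(0, 1, 0, 0)$ (standard normal) yield the familiar raw moments $(0, 1, 0, 3)$.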

## 4 Parameter estimation

### 4.1 Estimation for ML(α,δ) distribution

One way of estimating the parameters of the ML distribution is to derive the method-of-moments estimators using formula (1.2) for the fractional moments, as in [Nikias and Shao(1995)] and [Kozubowski(2001)]. That is, select two values of $q$, say $q_{1}$ and $q_{2}$, and compute $\hat{\alpha}$ and $\hat{\delta}$ numerically using the following two equations:

$$\hat{e}(q_{l})=\frac{1}{n}\sum_{i=1}^{n}T_{i}^{q_{l}}=\frac{q_{l}\,\pi\,\hat{\delta}^{q_{l}}}{\hat{\alpha}\,\Gamma(1-q_{l})\sin(\pi q_{l}/\hat{\alpha})},\qquad l=1,2.$$

We re-emphasize that we need to choose appropriate numbers $q_{1}$ and $q_{2}$ beforehand, which are required to be less than $\alpha$ to be able to use the fractional moment estimators. This restriction suggests that we need to know or have information about $\alpha$ a priori to be able to estimate the parameters $\alpha$ and $\delta$.

To overcome this difficulty, we propose estimators of $\alpha$ and $\delta$ based on the mean and variance of the log-transformed variable $T'$. From (2.2), the method-of-moments estimators for $\alpha$ and $\delta$ are

$$\hat{\alpha}_{P}=\frac{2\pi}{\sqrt{2\left(6\hat{\sigma}^{2}_{T'}+\pi^{2}\right)}},\qquad\text{and}\qquad \hat{\delta}_{P}=\exp\left(\hat{\mu}_{T'}+\gamma\right), \tag{4.1}$$

respectively. Note that $\hat{\mu}_{T'}$ and $\hat{\sigma}^{2}_{T'}$ are the sample mean and variance of the log-transformed data $T'_{i}=\log(T_{i})$, correspondingly. Moreover, the preceding estimators are non-negative, as desired. Another advantage of our estimation procedure is that it is computationally simple, as we do not need to numerically find the unique solutions of a system of equations as the parameter estimates. The proposed estimators above are also shown to be asymptotically unbiased (see appendix).
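In code, the proposed estimators (4.1) amount to two sample moments of the log data. A minimal sketch (the numerical value of Euler's constant is hard-coded):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def ml_log_moment_estimates(t):
    """Method-of-moments estimates (4.1) from the log-transformed sample T'_i = log T_i."""
    tp = np.log(t)
    mu_hat = tp.mean()
    var_hat = tp.var()  # divides by n, matching the variance estimator in the appendix
    alpha_hat = 2 * np.pi / np.sqrt(2 * (6 * var_hat + np.pi**2))
    delta_hat = np.exp(mu_hat + EULER_GAMMA)
    return alpha_hat, delta_hat
```

At $\alpha=1$ the ML law is exponential with scale $\delta$, which gives a quick sanity check: exponential data should return $\hat{\alpha}_P\approx 1$.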

We now compare the proposed procedure with the fractional moment estimators ($\hat{\alpha}_{F}$ and $\hat{\delta}_{F}$). In particular, we used bias and root-mean-square error (RMSE) as bases for the comparison. Following [Kozubowski(2001)], we assumed the same parameter settings and used the same constants $q_{1}=1/2$ and $q_{2}=1/4$. The fractional moment estimator of $\alpha$ is computed by numerically solving the equation

$$\frac{\hat{e}\left(\tfrac{1}{2}\right)}{\left[\hat{e}\left(\tfrac{1}{4}\right)\right]^{2}}=\frac{8\hat{\alpha}_{F}\left[\Gamma(3/4)\right]^{2}\sin^{2}\!\left(\pi/(4\hat{\alpha}_{F})\right)}{\pi^{3/2}\sin\!\left(\pi/(2\hat{\alpha}_{F})\right)}.$$

Then the fractional estimator of $\delta$ is calculated as

$$\hat{\delta}_{F}=\frac{1}{2}\left\{\frac{4\hat{\alpha}^{2}_{F}\sin^{2}\!\left(\pi/(2\hat{\alpha}_{F})\right)\left[\hat{e}\left(\tfrac{1}{2}\right)\right]^{2}}{\pi}+\left[\frac{4\hat{\alpha}_{F}\,\Gamma(3/4)\sin\!\left(\pi/(4\hat{\alpha}_{F})\right)\hat{e}\left(\tfrac{1}{4}\right)}{\pi}\right]^{4}\right\}.$$

In the root calculation above, we used the uniroot function of the statistical software R with a small tolerance limit. We also performed 10000 simulation runs for each combination of the parameter and sample size values. Table 1 in the appendix shows the computational test results. Clearly, the proposed estimators ($\hat{\alpha}_{P}$ and $\hat{\delta}_{P}$) outperformed the fractional moment estimators ($\hat{\alpha}_{F}$ and $\hat{\delta}_{F}$) even when the sample size is as large as 25000. When $n$ is small, the bias ratio of the proposed to the fractional estimator ranges from 10.77% to 48.64%. This demonstrates the larger bias the fractional estimator produces in estimating $\alpha$ for small samples. However, the bias difference seemingly becomes negligible as the sample size increases. A similar observation can be made for the bias incurred in estimating $\delta$. The RMSEs also generally show that our procedure produces more homogeneous estimators that are closer to the true parameter values than the fractional moment method. These results certainly add another desirable characteristic to the proposed computationally simple approach.
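For reference, the fractional moment competitor can be sketched as follows, with `scipy.optimize.brentq` standing in for R's uniroot; the search bracket is our assumption, chosen to avoid the singularity of the right-hand side at $\hat{\alpha}_{F}=1/2$:

```python
import numpy as np
from math import gamma, pi, sin
from scipy.optimize import brentq

def ml_fractional_estimates(t):
    """Fractional moment estimators of [Kozubowski(2001)] with q1 = 1/2, q2 = 1/4."""
    e_half = np.mean(t ** 0.5)
    e_quarter = np.mean(t ** 0.25)
    lhs = e_half / e_quarter**2

    def h(a):
        # sample moment ratio minus its theoretical counterpart at alpha = a
        rhs = (8 * a * gamma(0.75)**2 * sin(pi / (4 * a))**2
               / (pi**1.5 * sin(pi / (2 * a))))
        return lhs - rhs

    alpha_f = brentq(h, 0.6, 1.4, xtol=1e-8)  # bracket is an assumption
    delta_f = 0.5 * (4 * alpha_f**2 * sin(pi / (2 * alpha_f))**2 * e_half**2 / pi
                     + (4 * alpha_f * gamma(0.75) * sin(pi / (4 * alpha_f)) * e_quarter / pi)**4)
    return alpha_f, delta_f
```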

### 4.2 Estimation for GML(α,β) distribution

We now propose a similar estimation procedure for the GML distribution and compare it with the fractional moment method. Using the mean and variance of the log-transformed GML distributed random variable from Section 3, we can calculate parameter estimates $\hat{\alpha}_{P}$ and $\hat{\beta}_{P}$ using the following two equations:

$$\hat{\mu}_{X'}=\gamma\left(\frac{1}{\hat{\alpha}_{P}}-1\right)+\frac{1}{\hat{\alpha}_{P}}\psi(\hat{\beta}_{P}), \tag{4.2}$$

and

$$\hat{\sigma}^{2}_{X'}=\frac{\pi^{2}}{6}\left(\frac{1}{\hat{\alpha}^{2}_{P}}-1\right)+\frac{1}{\hat{\alpha}^{2}_{P}}\psi^{(1)}(\hat{\beta}_{P}). \tag{4.3}$$

In this paper, we only consider an approximation based on the first few terms of the series representation of the digamma function for the performance comparison. A major advantage of using these estimating equations is that both the digamma and trigamma functions are monotone in $\beta$. Hence, finding the solutions is straightforward. Recall that

$$\psi(\tau)=\log(\tau)-\frac{1}{2\tau}-\frac{1}{12\tau^{2}}+\frac{1}{120\tau^{4}}-\frac{1}{252\tau^{6}}+O\left(\frac{1}{\tau^{8}}\right).$$

Thus, we approximate $\psi(\tau)$ as

$$\hat{\psi}(\tau)=\log(\tau)-\frac{1}{2\tau}-\frac{1}{12\tau^{2}}+\frac{1}{120\tau^{4}}-\frac{1}{252\tau^{6}}.$$

This results in the following approximation of the trigamma function $\psi^{(1)}(\tau)$:

$$\hat{\psi}^{(1)}(\tau)=\frac{1}{\tau}+\frac{1}{2\tau^{2}}+\frac{1}{6\tau^{3}}-\frac{1}{30\tau^{5}}+\frac{1}{42\tau^{7}}.$$
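These truncated series are straightforward to code and, for moderately large $\tau$, agree closely with library implementations; `scipy.special` here serves only as a cross-check:

```python
from math import log
from scipy.special import digamma, polygamma

def psi_hat(tau):
    """Series approximation of the digamma function used in the estimating equations."""
    return log(tau) - 1/(2*tau) - 1/(12*tau**2) + 1/(120*tau**4) - 1/(252*tau**6)

def psi1_hat(tau):
    """Matching (term-by-term differentiated) approximation of the trigamma function."""
    return 1/tau + 1/(2*tau**2) + 1/(6*tau**3) - 1/(30*tau**5) + 1/(42*tau**7)
```

For small $\tau$, one can first apply the recurrence $\psi(\tau)=\psi(\tau+1)-1/\tau$ to shift the argument into the accurate range.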

Solving the system of two equations (4.2) and (4.3) using the preceding approximations of the digamma and trigamma functions will yield the parameter estimates for the GML distribution.
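A sketch of the full procedure, solving (4.2)-(4.3) with a standard root-finder; here `scipy.special`'s exact digamma/trigamma stand in for the series approximations in the text, and the initial guess is our assumption:

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import fsolve

EULER_GAMMA = 0.5772156649015329

def gml_log_moment_estimates(x, guess=(0.9, 1.5)):
    """Solve the estimating equations (4.2)-(4.3) for (alpha, beta)."""
    xp = np.log(x)
    mu_hat, var_hat = xp.mean(), xp.var()

    def equations(p):
        a, b = p
        eq1 = EULER_GAMMA * (1/a - 1) + digamma(b)/a - mu_hat
        eq2 = (np.pi**2 / 6) * (1/a**2 - 1) + polygamma(1, b)/a**2 - var_hat
        return [eq1, eq2]

    return fsolve(equations, guess)
```

Since GML$(1,\beta)$ is the gamma distribution, feeding gamma data provides a direct check of the solver.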

For the fractional moment technique, we assumed two constants $q_{1}$ and $q_{2}$ to compute $\hat{\alpha}_{F}$ and $\hat{\beta}_{F}$ numerically using the two equations:

$$\frac{1}{n}\sum_{i=1}^{n}X_{i}^{q_{l}}=\frac{\Gamma(1-q_{l}/\hat{\alpha}_{F})\,\Gamma(\hat{\beta}_{F}+q_{l}/\hat{\alpha}_{F})}{\Gamma(1-q_{l})\,\Gamma(\hat{\beta}_{F})},\qquad l=1,2.$$

In the comparison, we used the function optim in R for both procedures with identical tolerance limits and initial conditions. We also generated 10000 random samples of size $n$ each from the GML distribution, and computed the bias and the root-mean-square error (RMSE). The same conclusions from the preceding subsection can be drawn from Table 2 in the appendix. The table clearly shows that the proposed procedure again outperformed the fractional moment method even for large sample sizes.

Overall, Tables 1-2 in the appendix strongly indicate that the proposed point estimators using the log-transformed data performed better in our comparisons.

## 5 Concluding remarks

We have derived the first few moments of the log-transformed Mittag-Leffler distributed random variables. The log-moments led to systems of equations which are then used to estimate the parameters of the ML and GML distributions. A major advantage of our method over the other moment estimators (e.g., fractional moment estimators) is that we do not need to choose appropriate constants (e.g., $q_{1}$ and $q_{2}$) a priori to be able to calculate the parameter estimates. The calculations involved are straightforward. Approximate interval estimates for the parameters of the ML distribution are derived. Furthermore, the testing and comparison have illustrated the superiority of our estimators.

Although some work has already been done, there are still a few things that need to be pursued. For instance, the development of estimators using Hill-type, regression, and likelihood approaches would be a worthy pursuit. The application of these methods in practice, particularly in economics and finance, would also be of interest.

## 6 Appendix

### 6.2 Interval estimation for the ML(α,δ) distribution

We first study the limiting distribution of our estimators $\hat{\alpha}_{P}$ and $\hat{\delta}_{P}$ for the ML$(\alpha,\delta)$ distribution. If we let

$$\hat{\mu}_{T'}=\overline{T'}=\sum_{j=1}^{n}\frac{T'_{j}}{n}\qquad\text{and}\qquad \hat{\sigma}^{2}_{T'}=\sum_{j=1}^{n}\frac{\left(T'_{j}-\overline{T'}\right)^{2}}{n},$$

then the following weak convergence holds [Ferguson(1996)]:

$$\sqrt{n}\begin{pmatrix}\hat{\mu}_{T'}-\mu_{T'}\\ \hat{\sigma}^{2}_{T'}-\sigma^{2}_{T'}\end{pmatrix}\;\xrightarrow{d}\;N\left[\begin{pmatrix}0\\0\end{pmatrix},\;\begin{pmatrix}\sigma^{2}_{T'} & \mu'_{3}\\ \mu'_{3} & \mu'_{4}-\sigma^{4}_{T'}\end{pmatrix}\right],$$

as $n\to\infty$, where $\mu'_{3}$ and $\mu'_{4}$ are defined in Section 2. Using a standard result from asymptotic theory, the weak convergence above implies that

$$\sqrt{n}\left[g(\hat{\theta}_{n})-g(\theta)\right]\;\xrightarrow{d}\;N\left[0,\;\dot{g}(\theta)'\,\Sigma\,\dot{g}(\theta)\right],$$

where $g$ is a mapping from $\mathbb{R}^{2}$ to $\mathbb{R}$ and $\dot{g}$ is continuous in a neighborhood of $\theta=(\mu_{T'},\sigma^{2}_{T'})'$. We now apply this result to the consistent estimator of $\delta$. Letting

$$g(\mu_{T'},\sigma^{2}_{T'})=\exp(\mu_{T'}+\gamma),\qquad \dot{g}(\mu_{T'},\sigma^{2}_{T'})=\begin{pmatrix}\exp(\mu_{T'}+\gamma)\\ 0\end{pmatrix}.$$

This implies that

$$\sqrt{n}\left(\hat{\delta}-\delta\right)\;\xrightarrow{d}\;N\left[0,\;\sigma^{2}_{\delta}\right],$$

where

$$\sigma^{2}_{\delta}=\dot{g}(\mu_{T'},\sigma^{2}_{T'})'\begin{pmatrix}\sigma^{2}_{T'} & \mu'_{3}\\ \mu'_{3} & \mu'_{4}-\sigma^{4}_{T'}\end{pmatrix}\dot{g}(\mu_{T'},\sigma^{2}_{T'})=\frac{\pi^{2}e^{2(\mu_{T'}+\gamma)}}{6}\left(\frac{2}{\alpha^{2}}-1\right)=\frac{\pi^{2}\delta^{2}}{6}\left(\frac{2}{\alpha^{2}}-1\right),$$

where the last equality is obtained by plugging in $\delta=e^{\mu_{T'}+\gamma}$. Similarly,

$$\sqrt{n}\left(\hat{\alpha}-\alpha\right)\;\xrightarrow{d}\;N\left[0,\;\frac{\pi^{6}\left(32-20\alpha^{2}-\alpha^{4}\right)}{5\left(6\sigma^{2}_{T'}+\pi^{2}\right)^{3}\alpha^{4}}\right]=N\left[0,\;\frac{\alpha^{2}\left(32-20\alpha^{2}-\alpha^{4}\right)}{40}\right],$$

where the final simplification is attained by substituting $\sigma^{2}_{T'}=\frac{\pi^{2}}{6}\left(\frac{2}{\alpha^{2}}-1\right)$,

$$g(\mu_{T'},\sigma^{2}_{T'})=\frac{2\pi}{\sqrt{2\left(6\sigma^{2}_{T'}+\pi^{2}\right)}},$$

and

$$\dot{g}(\mu_{T'},\sigma^{2}_{T'})=\begin{pmatrix}0\\ -\dfrac{3\sqrt{2}\,\pi}{\left(6\sigma^{2}_{T'}+\pi^{2}\right)^{3/2}}\end{pmatrix}.$$

Therefore, we have shown that our method-of-moments estimators are asymptotically normally distributed (and asymptotically unbiased) as the sample size $n$ grows large. Consequently, we can now approximate the confidence intervals for $\alpha$ and $\delta$ as

$$\hat{\alpha}\pm z_{\epsilon/2}\sqrt{\frac{\hat{\alpha}^{2}\left(32-20\hat{\alpha}^{2}-\hat{\alpha}^{4}\right)}{40\,n}},$$

and

$$\hat{\delta}\pm z_{\epsilon/2}\sqrt{\frac{\pi^{2}\hat{\delta}^{2}}{6\,n}\left(\frac{2}{\hat{\alpha}^{2}}-1\right)},$$

respectively, where $z_{\epsilon/2}$ is the $(1-\epsilon/2)$th quantile of the standard normal distribution, and $n$ is the sample size.
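The interval estimates can be computed as below (a sketch; the $\delta$ interval uses the asymptotic variance $\sigma^{2}_{\delta}=\pi^{2}\delta^{2}(2\alpha^{-2}-1)/6$ obtained above, and the 95% normal quantile is hard-coded):

```python
from math import sqrt, pi

Z_975 = 1.959964  # standard normal 97.5th percentile, for a 95% interval

def ml_confidence_intervals(alpha_hat, delta_hat, n, z=Z_975):
    """Approximate confidence intervals for the ML(alpha, delta) parameters
    based on the asymptotic variances derived in the appendix."""
    se_alpha = sqrt(alpha_hat**2 * (32 - 20*alpha_hat**2 - alpha_hat**4) / (40 * n))
    se_delta = sqrt(pi**2 * delta_hat**2 * (2 / alpha_hat**2 - 1) / (6 * n))
    return ((alpha_hat - z * se_alpha, alpha_hat + z * se_alpha),
            (delta_hat - z * se_delta, delta_hat + z * se_delta))
```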

## 7 Acknowledgment

The author is grateful to the Associate Editor and the reviewer for their insightful comments and suggestions that led to the improvement of the article. This research was supported under the Louisiana Board of Regents Research Competitiveness Subprogram grant LEQSF(2011-14)-RD-A-15.

## References

• [Abramowitz and Stegun(1965)] Abramowitz, M., Stegun, I.A., 1965. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover Publications, Inc, New York, N.Y.
• [Cahoy et. al(2010)] Cahoy, D.O., Uchaikin, V.V., Woyczynski, W.A., 2010. Parameter estimation in fractional Poisson processes. Journal of Statistical Planning and Inference, 140(11), 3106–3120.
• [Ferguson(1996)] Ferguson, T., 1996. A Course in Large Sample Theory, Chapman & Hall.
• [Jose et. al(2010)] Jose, K.K., Uma, P., Lekshmi, V.S., Haubold,H.J., 2010. Generalized Mittag-Leffler distributions and processes for applications in astrophysics and time series modeling, Proceed. of the Third UN/ESA/NASA Workshop on the International Heliophysical Year 2007 and Basic Space Science, 79–92.
• [Kozubowski(2001)] Kozubowski, T.J., 2001. Fractional moment estimation of Linnik and Mittag-Leffler parameters, Mathematical and Computer Modelling, 1023–1035.
• [Lin(1998a)] Lin, G.D., 1998. A note on the Linnik distributions. Journal of Mathematical Analysis and Applications, 217, 701–706.
• [Lin(1998b)] Lin, G.D., 1998. On the Mittag-Leffler distributions. Journal of Statistical Planning and Inference, 74, 1–9.
• [Meerschaert and Scalas(2006)] Meerschaert, M.M., Scalas, E., 2006. Coupled continuous time random walks in finance. Physica A, 370, 114–118
• [Nikias and Shao(1995)] Nikias, C.L., Shao, M., 1995. Signal Processing with Alpha-Stable Distributions and Applications , Wiley, New York.
• [Pakes(1998)] Pakes, A.G., 1998. Mixture representations for symmetric generalized Linnik laws. Statistics & Probability Letters, 37, 213–221.
• [Pillai(1990)] Pillai, R.N., 1990. On Mittag-Leffler distributions. Annals of the Institute of Statistical Mathematics.