1. Introduction
1.1. Generalized asymmetric Laplace (variance-gamma) distributions
In recent decades, the family of generalized asymmetric Laplace (GAL), or variance-gamma (VG), distributions has attracted the attention of researchers. A random variable with distribution from this family has characteristic function (Fourier transform)(1) 
Here, are real-valued parameters, with restrictions
. Its probability density function is also known, but takes a rather complicated form involving the modified Bessel function of the second kind. This family was introduced into the financial literature in the seminal work [8] under the name variance-gamma, and later independently in [5] under the name generalized asymmetric Laplace; see a more detailed exposition of a multivariate version in [6]. The class of asymmetric Laplace distributions was studied in the book [4], which also mentions GAL. This random variable has moments of all orders, and a finite moment generating function (MGF) for in a neighborhood of zero. Its tails are heavier than Gaussian. It can be represented as a mean-variance Gaussian mixture:(2) 
In addition, this distribution is infinitely divisible: for each we can represent
(on a certain probability space), where
are independent and identically distributed (IID). Moreover, each has a distribution belonging to the same family, with changed parameters. Thus we can create a Lévy process with increments distributed as GAL. This process is called Laplace motion [10]. From (2), we can derive that this process can be represented as a Brownian motion (with nontrivial drift and diffusion) subordinated by a gamma process (a nondecreasing Lévy process with increments having gamma distributions):
. These properties make this family valuable for financial modeling, as in [13]; see applications to option pricing in [7] and fitting of financial data in [14].

A generalization of this family is the class of generalized normal-Laplace (GNL) random variables, which can be represented as a sum of two independent random variables: , with GAL and . This is a 5-parameter family of distributions, introduced in [11]. Many important properties of the GAL distribution (finite moments, heavier-than-Gaussian tails, infinite divisibility) remain true. Its characteristic function is derived from (1):
(3) 
The corresponding Lévy process with increments having these distributions is a sum of two independent processes, a Laplace motion and a Brownian motion; see [12]. We can similarly represent GNL variables as a mean-variance Gaussian mixture, as in (2).
1.2. Estimation troubles
Parameter estimation, however, remains difficult for both families of distributions. Direct maximum likelihood estimation (MLE) is hard because of the complicated density formula for GAL, and even more so for GNL; see various methods in the thesis [18] and a similar approach to autoregressive models in [9]. The representation (2) opens the door for the expectation-maximization (EM) algorithm. Characteristic functions as in (1) or (3) have a relatively simple form; one could try to use them for estimation, but this has not yet been done. The thesis [18] contains several estimation methods which are modifications of MLE.

Finally, in some literature, perhaps the simplest method was used: method of moments estimation (MME). We compute the first 4 moments of GAL, or the first 5 moments of GNL, and solve the resulting system of equations explicitly. This gives us an expression of the parameters via the moments. Finally, we substitute empirical moments in place of the exact ones. This gives us parameter estimates. See [17] for MME and generalizations for VG distributions, [12, Section 2.1] for GNL, and [16] for autoregression of order with innovations distributed as GNL. In addition, [14] shows MME with the skewness parameter approaching zero, removing terms of order and higher from the expressions of the moments.

It is straightforward to show that these estimates are consistent (they converge to the true values as the sample size tends to infinity) and asymptotically normal: as , where is the limiting variance-covariance matrix. However, these estimates are not efficient: this limiting matrix is larger than the one for the MLE.
In several articles, the MME is mentioned for estimation of these distributions and related time series models. This gives the impression that MME is acceptable for parameter estimation in these families. However, this question was not studied thoroughly and rigorously in the existing literature. To the best of our knowledge, no simulation or theoretical study was conducted to assess the actual applicability of this method and to quantify its errors. This short note fills this gap.
1.3. Our contributions
In this article, we focus on the symmetric variance-gamma (generalized Laplace) distribution, where in (1). We provide a simulation study which shows that the errors in MME are too large for this method to be usable. In addition, we obtain theoretical results: asymptotic normality, with a covariance matrix too large for any meaningful inference. This shows that implicit assumptions in the existing literature about applicability of the delta method are false. We have every reason to believe that if this is true for the symmetric variance-gamma case, it is even more so for the more general cases: GAL and GNL.
However, we modify MME to make it work. Specifically, we take absolute moments . We find their expressions in terms of the parameters and solve for these parameters. Thus we modify MME so as to decrease the limiting covariance matrix and make the errors smaller. Our simulations show that this method works better than the original MME.
We do not intend to compare this modified MME with the classic MLE: the latter will always be more efficient. However, MLE is difficult to implement in practice, unlike MME and its modifications. We take a well-known simple MME, which was assumed to work for GAL and GNL distributions. We use theory and simulation to show that it does not work well even for the symmetric versions of these distributions. Then we use absolute moments to improve efficiency for GAL. Finally, we suggest some avenues for further research.
1.4. Organization of this article
Section 2 is devoted to the classic MME. We take symmetric VG and GNL distributions. We state and prove asymptotic results, and use them together with simulations to show this MME does not work well. Section 3 studies modifications of MME with absolute moments for symmetric VG distributions. It is here that we make the main positive contribution of this short note. Section 4 contains conclusions and some suggestions for subsequent research.
2. Classic Method of Moments Estimation
2.1. Symmetric Variance-Gamma
First, we define the family of distributions which we deal with. As discussed in the Introduction, for simplicity we deal with a subset of the GAL family: the symmetric variance-gamma (SVG), or generalized Laplace, distributions. We show that the classic MME fails for these distributions.
The SVG distribution corresponds to setting in (1), and we have the following fundamental representation in terms of gamma and normal random variables (see [4, Proposition 4.1.2]). Fix parameters . Then an SVG random variable can be written as
(4) 
where and are independent.
The SVG distribution is symmetric with respect to zero. All its odd moments are zero:
for . With the representation (4), the moments of the SVG can easily be calculated using standard formulas for the moments of the gamma and normal distributions (see [4, Proposition 4.1.6]). In particular, the variance and the fourth moment are given by(5) 
With the formulas in (5), we can develop a classic method of moments. Assume are i.i.d. from this distribution. Define the empirical second and fourth moments:
(6) 
Theorem 1.
The MME estimates are given by
(7) 
They are consistent: almost surely as , and asymptotically normal: , where , with
Proof.
Just solve this system of two equations (5) for and :
(8) 
Next, plug in instead of , and instead of
. By the Strong Law of Large Numbers,
a.s. and a.s. as . The function in (8) is continuous. Thus we get almost surely. This proves consistency. Now we state the Central Limit Theorem:
since
are i.i.d. random vectors with finite covariance matrix. Now, let us find the limiting covariance matrix
:(9) 
Using the representation (4) and standard formulas for moments of gamma and normal random variables, or appealing to [4, Proposition 4.1.6], we have the following formulas for the moments of orders 6 and 8:
(10) 
Plug these moments (10), together with and from (5), into the matrix from (9):
Finally, let us compute the Jacobian of the function from (8):
(11) 
Plugging and from (5) into (11), we get:
Applying the bivariate delta method to , as in [1, Section 5.5], we get the CLT for with limiting covariance matrix . ∎
Unfortunately, this limiting covariance matrix is very large, because and especially are large. This means that the estimates and are of poor quality.
Example 1.
Take . Then
For example, if
, then the standard deviation of the estimate
is . This implies that the estimate is of low quality.

In addition, we performed simulations to assess the quality of the MME (7). Fix parameters and repeat the following procedure:

1. Repeat times:
(a) Generate a sample of symmetric variance-gamma random variables.
(b) Compute the estimates from (7); denote them by for the th iteration.
2. Find the average over all estimates and .
Table 1. Simulation results for the classic MME (7): sample size, number of iterations, and, for each of the two parameters, the true value, the averaged estimate, and the mean squared error.

 n | N | true | mean | MSE | true | mean | MSE
 10 | 1000 | 2 | 2.247 | 0.5095 | 0.5 | 0.2544 | 0.0636
 100 | 1000 | 0.5 | 0.571 | 0.0364 | 2 | 3.9062 | 4.6606
 200 | 1000 | 1 | 1.1073 | 0.1234 | 0.5 | 0.1226 | 0.1431
 500 | 1000 | 3 | 3.5765 | 2.8928 | 1 | 2.9569 | 4.3255
Table 1 shows the results. We see that the estimates for are close to the original parameter, whereas the estimates for are not. Also, these estimates are not very reliable: when and , the method of moments estimation tends to fall apart, and its estimates deviate wildly from the original parameters.
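The simulation above is easy to replicate. Since the symbols in (4) and (7) were set elsewhere, the sketch below fixes one common parametrization of the SVG law, X = σ√G·Z with G ~ Gamma(τ, 1) and Z standard normal, under which the second and fourth moments are σ²τ and 3σ⁴τ(τ+1); the closed-form inversion and all function names are our own illustration, not necessarily the exact formulas (7):

```python
import math
import random

def svg_sample(sigma, tau, n, rng):
    """Draw SVG variates via the mixture representation (4):
    X = sigma * sqrt(G) * Z, with G ~ Gamma(tau, 1) and Z ~ N(0, 1)
    (assumed parametrization)."""
    return [sigma * math.sqrt(rng.gammavariate(tau, 1.0)) * rng.gauss(0.0, 1.0)
            for _ in range(n)]

def mme_from_moments(m2, m4):
    """Invert m2 = sigma^2 * tau and m4 = 3 * sigma^4 * tau * (tau + 1)."""
    tau = 3.0 * m2 ** 2 / (m4 - 3.0 * m2 ** 2)   # from m4 / m2^2 = 3 + 3 / tau
    return m2 / tau, tau                          # returns (sigma^2, tau)

def mme_svg(xs):
    """Classic MME: plug empirical moments into the exact inversion."""
    n = len(xs)
    m2 = sum(x ** 2 for x in xs) / n
    m4 = sum(x ** 4 for x in xs) / n
    return mme_from_moments(m2, m4)

rng = random.Random(12345)
sigma2_hat, tau_hat = mme_svg(svg_sample(sigma=1.0, tau=1.0, n=500, rng=rng))
```

Note that the empirical fourth moment can fall below three times the squared empirical second moment, in which case the plug-in value of tau_hat is negative; this instability is one face of the poor performance visible in Table 1.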
2.2. Symmetric Gaussian-Laplace
Now, we replicate our results for the case of the symmetric generalized normal-Laplace, or, shortly, symmetric Gaussian-Laplace (SGL), distribution. These random variables can be represented as with independent and standard normal . This is a 3-parameter family of symmetric distributions. Like the SVG distributions, these have finite moments of any order.
Define and to be the variance (second moment) and the fourth moment, similarly to the previous subsection. Next, define . Switching from theoretical to empirical moments, define and as in (6). Let be the empirical sixth moment, defined similarly to (6). Then we can state and prove our main theoretical result.
Lemma 2.
The standard MME based on moments of orders 2, 4, 6 gives us
(12) 
These estimates are consistent and asymptotically normal.
Proof.
We only sketch the proof, leaving the details to the reader. Compute the moments of orders 2, 4, and 6:
(13) 
Solving this system of equations for , we get the following: , where is the function already given in (12), with the exact moments in place of their empirical versions. By the Strong Law of Large Numbers, almost surely as . The function is continuous; therefore, almost surely as . This proves consistency. Further, the function is differentiable. Using the multivariate delta method and the Central Limit Theorem for , we complete the proof of asymptotic normality. ∎
Let us repeat the simulation from the previous subsection, with and . We choose parameters and . The mean squared error (averaged over simulations) for is , but for it is , and for it is . This is for a large sample size , which shows the poor quality of the MME estimates from (12).
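The moment equations (13) are not reproduced above, but they can be rederived under an assumed representation of the SGL law as Y = X + κZ₀, with X the SVG variable of the previous subsection (X = σ√G·Z, G ~ Gamma(τ, 1)) and Z₀ standard normal, independent of X. Binomial expansion and the Gaussian moments E Z₀⁴ = 3, E Z₀⁶ = 15 give the following sketch; all symbols are our own notation:

```python
def sgl_moments(sigma, tau, kappa):
    """Moments of orders 2, 4, 6 for Y = X + kappa * Z0, where
    X = sigma * sqrt(G) * Z with G ~ Gamma(tau, 1), and Z0 ~ N(0, 1)
    is independent of X (assumed parametrization of the SGL law)."""
    s2, k2 = sigma ** 2, kappa ** 2
    m2 = s2 * tau + k2
    m4 = (3.0 * s2 ** 2 * tau * (tau + 1.0)
          + 6.0 * s2 * tau * k2
          + 3.0 * k2 ** 2)
    m6 = (15.0 * s2 ** 3 * tau * (tau + 1.0) * (tau + 2.0)
          + 45.0 * s2 ** 2 * tau * (tau + 1.0) * k2
          + 45.0 * s2 * tau * k2 ** 2
          + 15.0 * k2 ** 3)
    return m2, m4, m6
```

Two limiting cases serve as sanity checks: κ = 0 reduces to the SVG moments, and σ = 0 reduces to the Gaussian moments κ², 3κ⁴, 15κ⁶.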
3. Modified Method of Moments
3.1. Absolute moments
Define the absolute moment of order as
Here, can be any (even non-integer) positive number. For even , this is just the regular th moment. Define the gamma function: , . The following formula for was stated and proved in [2, Proposition 2.2]:
(14) 
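The formula (14) is easy to sanity-check numerically. Under the same assumed parametrization as before, X = σ√G·Z with G ~ Gamma(τ, 1) (our notation, not necessarily that of (14)), independence gives E|X|^p = σ^p · E[G^{p/2}] · E|Z|^p, and both factors are ratios of gamma functions:

```python
import math

def abs_moment_svg(p, sigma, tau):
    """E|X|^p for X = sigma * sqrt(G) * Z, G ~ Gamma(tau, 1), Z ~ N(0, 1):
    E[G^{p/2}] = Gamma(tau + p/2) / Gamma(tau),
    E|Z|^p = 2^{p/2} * Gamma((p + 1)/2) / sqrt(pi)."""
    e_g = math.gamma(tau + p / 2.0) / math.gamma(tau)
    e_z = 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)
    return sigma ** p * e_g * e_z
```

For even p this must reproduce the ordinary moments: p = 2 gives σ²τ and p = 4 gives 3σ⁴τ(τ + 1), matching (5).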
3.2. The ratio method
Fix a . We use the following quantity to find a point estimate of :
(15) 
We can solve this with respect to and replace the absolute moments with their empirical versions. This gives us the method of moments estimates.
Theorem 3.
Example 2.
Taking , we get:
Proof.
Step 1. Let us show the consistency of . Using the lemma above, we write (15) as
Using the gamma function property that , we get: . Solve for to get:
(19) 
By the Strong Law of Large Numbers, a.s. as . Thus, a.s. . Comparing (16) with (19), we complete the proof of consistency for .
Step 2. To show asymptotic normality (17), consider the three-dimensional random vector , for each . It satisfies the Central Limit Theorem with limiting covariance matrix
Let us now compute elements of this matrix: . Thus we get:
Next, consider the function . We have:
(20) 
Applying the delta method, as in [1, Section 5.5], we get asymptotic normality for (20) too: ,
(21) 
We can compute this gradient:
(22) 
Finally, we can compute the limiting variance of : and , with . Applying the delta method to this function, as in [1, Section 5.5], we get (17) with .
Step 3. The second absolute moment coincides with the variance: . Thus we get: . Next, almost surely as . This proves the consistency of .
Example 3.
Fix . Then
Therefore, . Taking , we get:
Next, . After computation, we get: and . These limiting variances are lower than and , respectively: the limiting variances for and from the classic MME in Example 1.
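A sketch of the ratio method, under the same assumed parametrization as above: with absolute moments as in (14), the gamma function property Γ(x + 1) = xΓ(x) yields E|X|^{p+2} / E|X|^p = σ²(p + 1)(τ + p/2), so the scale-free ratio R = E|X|^{p+2} / (E|X|^p · E X²) = (p + 1)(τ + p/2)/τ depends only on τ and inverts in closed form. This ratio is our reconstruction of (15), not necessarily identical to it:

```python
def tau_from_ratio(r, p):
    """Invert R = (p + 1) * (tau + p/2) / tau for tau (valid when R > p + 1)."""
    return p * (p + 1.0) / (2.0 * (r - (p + 1.0)))

def ratio_mme(xs, p=1.0):
    """Ratio method: plug empirical absolute moments into the ratio,
    then recover the scale via sigma^2 = m2 / tau (since E X^2 = sigma^2 * tau)."""
    n = len(xs)
    m2 = sum(x * x for x in xs) / n
    ap = sum(abs(x) ** p for x in xs) / n             # empirical E|X|^p
    ap2 = sum(abs(x) ** (p + 2.0) for x in xs) / n    # empirical E|X|^{p+2}
    tau_hat = tau_from_ratio(ap2 / (ap * m2), p)
    return m2 / tau_hat, tau_hat                       # (sigma^2, tau) estimates
```

The estimate of τ uses only the shape of the distribution (the ratio is invariant under rescaling of the sample), which is one way to see why it can beat the kurtosis-based classic MME.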
3.3. Modified ratio method
Consider another modification of the MME with absolute moments: here we use only the first two absolute moments. From (14), we get: , and . We can solve for :
Taking the logarithm, we get:
Lemma 4.
The function is a one-to-one, strictly decreasing, smooth mapping from onto , with for .
Proof.
Step 1. Let us show that for all . Consider the digamma function . We can write the derivative of as . Applying the two-sided inequality , from [3], we get the upper estimate:
(23) 
The derivative of is ; thus is increasing. Next, as . Thus for all . Combining this observation with (23), we complete the proof that .
Step 2. Let us show that : it follows from the convergences
and as .
Step 3. Finally, let us show that . Indeed, as , we have
So , and thus . ∎
Lemma 4 allows us to estimate as follows: define the inverse function of . It is continuous and smooth. Thus we can define the following estimates for and :
(24) 
Theorem 5.
The estimates in (24) are consistent and asymptotically normal.
Proof.
The proof follows the same technique and pattern as the previous proofs in this article: we use the continuity and smoothness of the mapping
which maps into , and similarly into . ∎
Example 4.
We can now compute the asymptotic variance of the estimates (24) in the case , to compare it with the classic MME and its modification from the previous subsection. First, in this case and ; therefore and . We note that and therefore . Next, the gradient of the function (the first component of ) is equal to . At and this is equal to . Thus the gradient of the function is equal to . By the Central Limit Theorem, as found in Example 1,
By the delta method, , with . Similarly, the gradient of (the second component of ) at is equal to . Applying the delta method again, we get: , with . This method estimates very well, but the estimate for is worse than even for the classic MME from Example 1, and even more so than for the modified MME from Example 3.
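The inversion in (24) is straightforward to implement numerically. Under the assumed parametrization used throughout, (14) gives E|X| = σ√(2/π)·Γ(τ + 1/2)/Γ(τ) and E X² = σ²τ, so m₂/m₁² = (π/2)·exp(ℓ(τ)) with ℓ(τ) = log(Γ(τ)Γ(τ + 1)/Γ(τ + 1/2)²), a strictly decreasing positive function on (0, ∞) tending to 0 at infinity, consistent with Lemma 4. The sketch below (our reconstruction; the specific form of ℓ is an assumption) inverts ℓ by bisection:

```python
import math

def ell(tau):
    """ell(tau) = log(Gamma(tau) * Gamma(tau + 1) / Gamma(tau + 1/2)^2):
    strictly decreasing from +infinity to 0 on (0, infinity) (cf. Lemma 4)."""
    return (math.lgamma(tau) + math.lgamma(tau + 1.0)
            - 2.0 * math.lgamma(tau + 0.5))

def ell_inverse(y, lo=1e-8, hi=1.0, tol=1e-12):
    """Invert the decreasing function ell by bisection (y > 0 assumed)."""
    while ell(hi) > y:       # expand the bracket until ell(hi) <= y
        lo, hi = hi, 2.0 * hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ell(mid) > y:     # ell is decreasing: move the left endpoint right
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def modified_ratio_mme(xs):
    """Estimates as in (24), from the first two empirical absolute moments:
    m2 / m1^2 = (pi/2) * exp(ell(tau)) under the assumed parametrization."""
    n = len(xs)
    m1 = sum(abs(x) for x in xs) / n
    m2 = sum(x * x for x in xs) / n
    tau_hat = ell_inverse(math.log(2.0 * m2 / (math.pi * m1 ** 2)))
    return m2 / tau_hat, tau_hat   # (sigma^2, tau) estimates
```

A convenient check: at τ = 1 the exact absolute moments give m₂/m₁² = 2, so ℓ(1) = log(4/π).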
4. Conclusion and Further Research
We tested the classic MME for two classes of symmetric continuous distributions: symmetric variance-gamma and symmetric Gaussian-Laplace. Using simulations and the delta method, we showed that this method gives poor results and is not applicable in practice. This runs contrary to some remarks in the existing literature, some of which were cited in the Introduction, claiming that MME works for variance-gamma or generalized normal-Laplace distributions (and time series models based on these), at least when these distributions are close to symmetric.
However, in this article, we produced positive results too. We modified MME for symmetric variancegamma by switching to absolute moments instead of regular ones. The resulting estimates are more efficient than for the classic MME. This suggests replacing regular moments with absolute ones for other symmetric distributions.
In this short note, we left some questions unanswered. We did not investigate for which the limiting variances and are minimized. Another problem is the extension of this modified MME to symmetric Gaussian-Laplace distributions, and to their asymmetric versions. This would require choosing three orders of absolute moments so that the resulting system of three equations can be solved for the parameters. The authors did not find a simple way to do this, so it is left for future research.
References
 [1] George Casella, Roger L. Berger (2001). Statistical Inference, Brooks/Cole, Cengage Learning. Second edition.
 [2] Robert E. Gaunt (2020). Wasserstein and Kolmogorov Error Bounds for VarianceGamma Approximation via Stein’s Method I. Journal of Theoretical Probability 33, 465–505.
 [3] Bai-Ni Guo, Feng Qi (2011). An extension of an inequality for ratios of gamma functions. Journal of Approximation Theory 163, 1208–1216.
 [4] Samuel Kotz, Tomasz Kozubowski, Krzysztof Podgórski (2001). The Laplace Distribution and Generalizations. Birkhäuser.
 [5] Tomasz J. Kozubowski, Krzysztof Podgórski (2000). A Multivariate and Asymmetric Generalization of Laplace Distribution. Computational Statistics 15, 531–540.

 [6] Tomasz J. Kozubowski, Krzysztof Podgórski, Igor Rychlik (2013). Multivariate Generalized Laplace Distribution and Related Random Fields. Journal of Multivariate Analysis 113, 59–72.
 [7] Dilip B. Madan, Peter P. Carr, Eric C. Chang (1998). The Variance Gamma Process and Option Pricing. European Finance Review 2, 79–105.
 [8] Dilip B. Madan, Eugene Seneta (1990). The Variance Gamma Model for Share Market Returns. The Journal of Business 63, 511–524.
 [9] Thanakorn Nitithumbundit, Jennifer S. K. Chan (2020). ECM Algorithm for AutoRegressive Multivariate Skewed Variance Gamma Model with Unbounded Density. Methodology and Computing in Applied Probability 22, 1169–1191.
 [10] Krzysztof Podgórski, Jörg Wegener (2011). Estimation for Stochastic Models Driven by Laplace Motion. Communications in Statistics – Theory and Methods 40, 3281–3302.
 [11] William J. Reed (2006). The NormalLaplace Distribution and Its Relatives. Advances in Distribution Theory, Order Statistics, and Inference. Birkhäuser.
 [12] William J. Reed (2007). Brownian–Laplace Motion and Its Use in Financial Modelling. Communications in Statistics – Theory and Methods 36, 473–484.
 [13] Wim Schoutens (2003). Lévy Processes in Finance. Wiley.
 [14] Eugene Seneta (2004). Fitting the VarianceGamma Model to Financial Data. Journal of Applied Probability 41A, 177–187.
 [15] Elias M. Stein, Rami Shakarchi (2003). Complex Analysis. Princeton University Press.
 [16] Lishamol Tomy, Kanichukattu Korakutty Jose (2009). Generalized NormalLaplace AR Process. Statistics & Probability Letters 79, 1615–1620.
 [17] Annelies Tjetjep, Eugene Seneta (2006). Skewed Normal Variance-Mean Models for Asset Pricing and the Method of Moments. International Statistical Review 74, 109–126.
 [18] Fan Wu (2008). Applications of The Normal Laplace and Generalized Normal Laplace Distributions. Ph.D. Thesis. University of Victoria.