M-estimation in GARCH models without higher order moments

by Hang Liu et al.

We consider a class of M-estimators of the parameters of GARCH models which are asymptotically normal under mild assumptions on the moments of the underlying error distribution. Since heavy-tailed error distributions without higher order moments are common in the GARCH modeling of real financial data, it is worthwhile to use such estimators for time series inference instead of the quasi maximum likelihood estimator (QMLE). We discuss weighted bootstrap approximations of the distributions of M-estimators. Through extensive simulations and data analysis, we demonstrate the robustness of the M-estimators under heavy-tailed error distributions and the accuracy of the bootstrap approximation. In addition to the GARCH (1, 1) model, we obtain extensive computation and simulation results for higher order models such as GARCH (2, 1) and GARCH (1, 2), which have not yet received sufficient attention in the literature. Finally, we use M-estimators to analyze three real financial time series fitted with GARCH (1, 1) or GARCH (2, 1) models.




1 Introduction

The generalized autoregressive conditional heteroscedastic (GARCH) models have been used extensively to analyze the volatility, or instantaneous variability, of financial time series.

A series $\{X_t\}$ is said to follow a GARCH $(p, q)$ model if

$X_t = \sigma_t\,\epsilon_t, \qquad (1.1)$

where $\{\epsilon_t\}$ are unobservable i.i.d. errors with symmetric distribution around zero and

$\sigma_t^2 = \omega_0 + \sum_{i=1}^{p} \alpha_{0i} X_{t-i}^2 + \sum_{j=1}^{q} \beta_{0j} \sigma_{t-j}^2, \qquad (1.2)$

with $\omega_0 > 0$, $\alpha_{0i} \ge 0$ ($1 \le i \le p$), $\beta_{0j} \ge 0$ ($1 \le j \le q$). Mukherjee (2008) proposed a class of M-estimators for estimating the GARCH parameter

$\theta_0 = (\omega_0, \alpha_{01}, \ldots, \alpha_{0p}, \beta_{01}, \ldots, \beta_{0q})' \qquad (1.3)$

based on observations $\{X_t;\ 1 \le t \le n\}$. The M-estimators are asymptotically normal under some moment assumptions on the error distribution and are more robust than the commonly-used quasi maximum likelihood estimator (QMLE). Mukherjee (2020) considered a class of weighted bootstrap methods to approximate the distributions of these estimators and established the asymptotic validity of such bootstrap schemes. In this paper, we apply an iteratively re-weighted algorithm to compute the M-estimates and the corresponding bootstrap estimates, with specific attention to Huber's, the $B$- and the Cauchy-estimates, which have not been considered in the literature in detail. The iteratively re-weighted algorithm turns out to be particularly useful in computing bootstrap replicates since it avoids the re-computation of some core quantities for new bootstrap samples.

The class of M-estimators includes the QMLE. The asymptotic normality of the QMLE and the asymptotic validity of bootstrapping it were derived under the finite fourth moment assumption on the error distribution. However, there are other M-estimators, such as the $B$-estimator and the Cauchy-estimator, which are asymptotically normal under mild assumptions on the finiteness of lower order moments. Since heavy-tailed error distributions without higher order moments are common in the GARCH modeling of real financial time series, it is worthwhile to use these estimators for such series; unfortunately, they have not been investigated in the literature. One of the contributions of this paper is to reveal precisely the importance of such alternative M-estimators, instead of the QMLE, for analyzing financial data.

In an earlier work, Muler and Yohai (2008) analyzed the Electric Fuel Corporation (EFCX) time series and fitted a GARCH (1, 1) model. Using exploratory analysis, they detected the presence of outliers and considered estimation of the parameters based on robust methods. It turned out that estimates based on different methods vary widely, so it is difficult to assess which method should be relied on in similar situations. In this paper, we use M-estimates with mild assumptions on the error moments to analyze the EFCX series.

Francq and Zakoïan (2009) underscored the importance of using higher order GARCH models such as GARCH (2, 1) for some real financial time series, but the computation and simulation results for such models are not widely available in the literature. We investigate the role of M-estimators for the GARCH (2, 1) model through extensive simulations and real data analysis. We also provide simulation results and analysis for the GARCH (1, 2) model.

The paper is organized as follows. Sections 2 and 3 set the background. In particular, we discuss the class of M-estimators and give examples in Section 2. Section 3 contains the bootstrap formulation and the statement on the asymptotic validity of the bootstrap. Section 4 discusses computational aspects of M-estimators and their bootstrap replicates. Section 5 reports simulation results for various M-estimators. Section 6 compares the bootstrap approximation with the asymptotic normal approximation to the distributions of M-estimators through simulation. Section 7 analyzes three real financial time series.

2 M-estimators of the GARCH parameters

Throughout this paper, for a function $h$, we use $\dot{h}$ to denote its derivative whenever it exists. Moreover, $\epsilon$ will denote a generic r.v. having the same distribution as the errors of (1.1).


Let $\psi$ be an odd function which is differentiable at all but a finite number of points. Let $D$ denote the set of points where $\psi$ is differentiable and let $D^{c}$ denote its complement. Let $H(x) := x\,\psi(x)$, so that $H$ is symmetric. The function $\psi$ is called the score function of the M-estimation in the scale model. Examples are as follows.

Example 1. QMLE score: Let $\psi(x) = x$. Then $H(x) = x^2$.

Example 2. LAD score: Let $\psi(x) = \mathrm{sign}(x)$. Then $H(x) = |x|$.

Example 3. Huber's score: Let $\psi(x) = x\, I(|x| \le k) + k\, \mathrm{sign}(x)\, I(|x| > k)$, where $k > 0$ is a known constant. Then $H(x) = x^2\, I(|x| \le k) + k|x|\, I(|x| > k)$.

Example 4. Score function for the maximum likelihood estimation (MLE): Let $\psi(x) = -\dot{f}(x)/f(x)$, where $f$ is the true density of $\epsilon$, assumed to be known. Then $H(x) = -x\,\dot{f}(x)/f(x)$.

Example 5. $B$-score: Here $\psi$ involves a known truncation constant $B$, and the resulting $H$ is bounded.

Example 6. Cauchy score: Let $\psi(x) = 2x/(1 + x^2)$. Then $H(x) = 2x^2/(1 + x^2)$ is bounded.

Example 7. Score function for the exponential pseudo-maximum likelihood estimation: Here $\psi$ is derived from an exponential pseudo-density and involves two known constants.
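As an illustration, the score functions with closed-form expressions above can be coded directly. The following sketch (in Python with NumPy) uses the Huber and Cauchy forms stated in the examples; the default truncation constant k = 1.5 is an assumed choice for illustration, not a value prescribed above.

```python
import numpy as np

def psi_qmle(x):
    # QMLE score: psi(x) = x, so H(x) = x * psi(x) = x^2
    return x

def psi_lad(x):
    # LAD score: psi(x) = sign(x), so H(x) = |x|
    return np.sign(x)

def psi_huber(x, k=1.5):
    # Huber's score: linear in the middle, truncated at +/- k
    return np.clip(x, -k, k)

def psi_cauchy(x):
    # Cauchy score: psi(x) = 2x / (1 + x^2); the resulting H is bounded by 2
    return 2.0 * x / (1.0 + x ** 2)

def H(psi, x, **kw):
    # H(x) = x * psi(x); symmetric whenever psi is odd
    return x * psi(x, **kw)
```

For instance, `H(psi_cauchy, x)` never exceeds 2, which is the boundedness used later for mild moment assumptions.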

Assume that the conditions of Berkes et al. (2003) guaranteeing strict stationarity hold. Then $\sigma_t^2$ of (1.2) has the following unique almost sure representation:

$\sigma_t^2 = c_0 + \sum_{j=1}^{\infty} c_j X_{t-j}^2,$

where the coefficients $\{c_j;\ j \ge 0\}$ are defined in (2.9)-(2.16) of Berkes et al. (2003).

Let $\Theta$ be a compact subset of $(0, \infty) \times [0, \infty)^{p+q}$. A typical element in $\Theta$ is denoted by $\theta = (\omega, \alpha_1, \ldots, \alpha_p, \beta_1, \ldots, \beta_q)'$. Define the variance function on $\Theta$ by

$v_t(\theta) = c_0(\theta) + \sum_{j=1}^{\infty} c_j(\theta) X_{t-j}^2, \quad t \in \mathbb{Z},$

where the coefficients $\{c_j(\theta);\ j \ge 0\}$ are given in Berkes et al. (2003) (Section 3, and display (3.1)) with the property

$c_j(\theta_0) = c_j, \quad j \ge 0.$

Hence the variance functions satisfy $v_t(\theta_0) = \sigma_t^2$, $t \in \mathbb{Z}$. Using (2.4), (1.1) can be rewritten as

$X_t = v_t^{1/2}(\theta_0)\,\epsilon_t.$

Consider the observable approximation $\{\hat{v}_t(\theta)\}$ of the process $\{v_t(\theta)\}$ of (2.3), defined by

$\hat{v}_t(\theta) = c_0(\theta) + \sum_{j=1}^{t-1} c_j(\theta) X_{t-j}^2, \quad 1 \le t \le n.$

Then an M-estimator $\hat{\theta}_n$ is defined as a solution of $M_n(\theta) = 0$, where

$M_n(\theta) = \sum_{t=1}^{n} \left\{1 - H\!\left(X_t / \hat{v}_t^{1/2}(\theta)\right)\right\} \frac{\dot{\hat{v}}_t(\theta)}{\hat{v}_t(\theta)}.$
Next we describe the iterative relations for $\hat{v}_t(\theta)$ that are used to write computer programs for their numerical evaluation. The computation is discussed in Section 4.

Example 1. GARCH (1, 1) model: With $\theta = (\omega, \alpha, \beta)'$,

$\hat{v}_t(\theta) = \omega + \alpha X_{t-1}^2 + \beta \hat{v}_{t-1}(\theta).$

Example 2. GARCH (2, 1) model: With $\theta = (\omega, \alpha_1, \alpha_2, \beta)'$,

$\hat{v}_t(\theta) = \omega + \alpha_1 X_{t-1}^2 + \alpha_2 X_{t-2}^2 + \beta \hat{v}_{t-1}(\theta).$

Example 3. GARCH (1, 2) model: With $\theta = (\omega, \alpha, \beta_1, \beta_2)'$,

$\hat{v}_t(\theta) = \omega + \alpha X_{t-1}^2 + \beta_1 \hat{v}_{t-1}(\theta) + \beta_2 \hat{v}_{t-2}(\theta).$

Example 4. GARCH (p, q) model: With $\theta = (\omega, \alpha_1, \ldots, \alpha_p, \beta_1, \ldots, \beta_q)'$,

$\hat{v}_t(\theta) = \omega + \sum_{i=1}^{p} \alpha_i X_{t-i}^2 + \sum_{j=1}^{q} \beta_j \hat{v}_{t-j}(\theta).$

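For instance, the GARCH (1, 1) recursion for $\hat{v}_t(\theta)$ takes only a few lines of code; in the sketch below, initializing the recursion at the sample variance is an ad hoc choice for illustration, not the paper's prescription.

```python
import numpy as np

def v_hat_garch11(X, omega, alpha, beta):
    """Recursively compute v[t] = omega + alpha * X[t-1]**2 + beta * v[t-1].

    The start-up value v[0] is set to the sample variance of X (an assumed
    initialization); its effect dies out geometrically when 0 <= beta < 1.
    """
    n = len(X)
    v = np.empty(n)
    v[0] = np.var(X)
    for t in range(1, n):
        v[t] = omega + alpha * X[t - 1] ** 2 + beta * v[t - 1]
    return v
```

The GARCH (2, 1) and (1, 2) recursions differ only in carrying one extra lag of $X_{t}^2$ or of $\hat{v}_t$.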
2.1 Asymptotic distribution of $\hat{\theta}_n$

The asymptotic distribution of $\hat{\theta}_n$ is derived under the following assumptions.

Model assumptions: The parameter space $\Theta$ is a compact set and its interior contains both $\theta_0$ and the transformed parameter of (1.3) and (2.10), respectively. Moreover, (2.1), (2.3) and (2.5) hold and $\{X_t\}$ is stationary and ergodic.

Conditions on the score function:

Identifiability condition: Corresponding to the score function $\psi$, there exists a unique number $c_\psi > 0$ satisfying

$E\, H\!\left(\epsilon / c_\psi^{1/2}\right) = 1. \qquad (2.8)$
Moment conditions:


Also, various smoothness conditions on $\psi$ as in Mukherjee (2008) are assumed; these are satisfied in all examples of $\psi$ considered above. Define the score function factor $\sigma^2(\psi)$ and the matrix $G$ as in Mukherjee (2008), and the transformed parameter

$\theta_{0\psi} := \left(c_\psi\,\omega_0,\ c_\psi\,\alpha_{01}, \ldots, c_\psi\,\alpha_{0p},\ \beta_{01}, \ldots, \beta_{0q}\right)'. \qquad (2.10)$
Theorem 2.1.

Suppose that the model assumptions, identifiability condition, moment conditions and smoothness conditions hold. Then

$n^{1/2}\left(\hat{\theta}_n - \theta_{0\psi}\right) \xrightarrow{\ d\ } \mathcal{N}\!\left(0,\ \sigma^2(\psi)\, G^{-1}\right).$
Note that $c_\psi$ used in the above formulas is given by (i) $c_\psi = E(\epsilon^2)$ for the QMLE and (ii) $c_\psi = \{E|\epsilon|\}^2$ for the LAD, while for the Huber, $B$-, Cauchy and other scores, $c_\psi$ does not have a closed-form expression. For such score functions, $c_\psi$ is calculated using (2.8) as follows. We fix a large positive integer $N$ and generate i.i.d. copies $\{\epsilon_i;\ 1 \le i \le N\}$ from the error distribution considered for the simulation. Then, using the bisection method on $c$, we solve the equation

$N^{-1} \sum_{i=1}^{N} H\!\left(\epsilon_i / c^{1/2}\right) = 1.$

Values of $c_\psi$ computed in this way were provided in Mukherjee (2008, page 1541) for some error distributions and score functions. In Table 1 we provide $c_\psi$ for a few more error distributions and score functions, such as Huber's score and the $B$-estimator, with the tuning constants used in the simulations and data analysis of later sections. In the sequel, the double exponential distribution is abbreviated as DE.

            Huber's    $B$-estimator    Cauchy
Normal      0.825      1.692            0.377
DE          0.677      1.045            0.207
Logistic    0.781      1.487            0.316
            0.533      0.850            0.172
            0.204      0.274            0.053

Table 1: Values of $c_\psi$ for M-estimators (Huber's, $B$-, Cauchy) under various error distributions.
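The Monte Carlo bisection scheme described above is straightforward to implement. The sketch below solves $N^{-1}\sum_i H(\epsilon_i/\sqrt{c}) = 1$ for Huber's score; the bracketing interval, sample size $N$, and the truncation constant k = 1.5 are illustrative choices (with standard normal errors and k = 1.5 the scheme reproduces a value close to the Normal entry 0.825 of Table 1).

```python
import numpy as np

def H_huber(x, k=1.5):
    # H(x) = x * psi(x) for Huber's score psi(x) = clip(x, -k, k)
    return x * np.clip(x, -k, k)

def c_psi(H, eps, lo=1e-6, hi=100.0, tol=1e-8):
    """Solve mean(H(eps / sqrt(c))) = 1 for c by bisection.

    g(c) = mean(H(eps / sqrt(c))) is decreasing in c whenever H is
    increasing in |x|, so the root is bracketed once g(lo) > 1 > g(hi).
    """
    g = lambda c: np.mean(H(eps / np.sqrt(c))) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
eps = rng.standard_normal(200_000)   # N(0,1) errors, for illustration
c_hat = c_psi(H_huber, eps)
```

As a sanity check, passing $H(x) = x^2$ (the QMLE case) recovers $c = N^{-1}\sum_i \epsilon_i^2$, since then the equation reduces to $c = \mathrm{mean}(\epsilon^2)$.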

3 Bootstrapping M-estimators

Let $\{w_{nt};\ 1 \le t \le n,\ n \ge 1\}$ be a triangular array of r.v.'s such that, for each $n$, $\{w_{nt};\ 1 \le t \le n\}$ are exchangeable and independent of the data $\{X_t\}$ and the errors $\{\epsilon_t\}$. Also, $w_{nt} \ge 0$ and $E(w_{nt}) = 1$.

Based on these weights, the bootstrap estimate $\hat{\theta}_n^{*}$ is defined as a solution of $M_n^{*}(\theta) = 0$, where

$M_n^{*}(\theta) = \sum_{t=1}^{n} w_{nt} \left\{1 - H\!\left(X_t / \hat{v}_t^{1/2}(\theta)\right)\right\} \frac{\dot{\hat{v}}_t(\theta)}{\hat{v}_t(\theta)}. \qquad (3.1)$
Examples. From many different choices of bootstrap weights, we consider the following three schemes for comparison.

(i) Scheme M. The sequence of weights has a multinomial $(n;\ 1/n, \ldots, 1/n)$ distribution; this is essentially the classical paired bootstrap.

(ii) Scheme E. Here $w_{nt} = E_t / \bar{E}_n$, where $\{E_t\}$ are i.i.d. exponential r.v.'s with mean one and $\bar{E}_n$ is their average. Under scheme E, $\hat{\theta}_n^{*}$ is a weighted M-estimator with weights proportional to $E_t$, $1 \le t \le n$.

(iii) Scheme U. Here $w_{nt} = U_t / \bar{U}_n$, where $\{U_t\}$ are i.i.d. uniform r.v.'s and $\bar{U}_n$ is their average. Under scheme U, $\hat{\theta}_n^{*}$ is a weighted M-estimator with weights proportional to $U_t$, $1 \le t \le n$.

A host of other bootstrap methods in the literature are special cases of the above bootstrap formulation. Such a general formulation of the weighted bootstrap offers a unified way of studying several bootstrap schemes simultaneously. See, for example, Chatterjee and Bose (2005) for details in other contexts.
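The three weighting schemes can be generated as follows. This is a sketch: each array of weights is exchangeable, nonnegative, and has mean one; the uniform support $(0, 2)$ used in scheme U is an assumed choice, since any uniform law rescaled by its average satisfies the mean-one requirement.

```python
import numpy as np

def weights(n, scheme, rng):
    """Generate exchangeable bootstrap weights w_{n1}, ..., w_{nn} with mean 1."""
    if scheme == "M":
        # Scheme M: multinomial(n; 1/n, ..., 1/n) counts -- the paired bootstrap
        return rng.multinomial(n, np.full(n, 1.0 / n)).astype(float)
    if scheme == "E":
        # Scheme E: i.i.d. exponential r.v.'s with mean 1, scaled by their average
        e = rng.exponential(1.0, n)
        return e / e.mean()
    if scheme == "U":
        # Scheme U: i.i.d. uniform r.v.'s (here on (0, 2)), scaled by their average
        u = rng.uniform(0.0, 2.0, n)
        return u / u.mean()
    raise ValueError(f"unknown scheme: {scheme}")
```

Each bootstrap replicate then re-solves the weighted score equation (3.1) with a fresh draw of weights.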

We assume that the weights satisfy the following basic conditions (Conditions BW of Chatterjee and Bose (2005)), where $\sigma_n^2 = \mathrm{Var}(w_{n1})$ and $k$ is a constant.


Under (3.2) and some additional smoothness and moment conditions in Mukherjee (2020), the weighted bootstrap is asymptotically valid.

Theorem 3.1.

For almost all data, as $n \to \infty$,


We remark that the rate of convergence of the bootstrap estimator is the same as that of the original estimator. The standard deviation $\sigma_n$ of the weights, which appears in the denominator of the scaling, reflects the contribution of the corresponding weights.

The distributional result of (3.3) is useful for constructing confidence intervals for the GARCH parameters as follows. Let $B$ denote the number of bootstrap replicates, let $\theta$ denote a generic parameter (one of $\omega$, $\alpha_i$ or $\beta_j$) and let $\hat{\theta}_n$ and $\hat{\theta}_{nb}^{*}$ denote its M-estimator and $b$-th bootstrap estimator ($1 \le b \le B$), respectively. Let $\theta_\psi$ be one of $c_\psi\,\omega_0$, $c_\psi\,\alpha_{0i}$ or $\beta_{0j}$, as appropriate, which has a known value in a simulation experiment. Using the approximation of the distribution of $\hat{\theta}_n - \theta_\psi$ by that of $\hat{\theta}_{nb}^{*} - \hat{\theta}_n$, the bootstrap confidence interval of nominal level $1 - \gamma$ for $\theta_\psi$ is of the form

$\left[\hat{\theta}_n - q_{1-\gamma/2},\ \hat{\theta}_n - q_{\gamma/2}\right],$

where $q_{\gamma}$ is the $\gamma$-th quantile of the numbers $\{\hat{\theta}_{nb}^{*} - \hat{\theta}_n;\ 1 \le b \le B\}$. Consequently, the bootstrap coverage probability is computed as the proportion of the above confidence intervals containing $\theta_\psi$.
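Given the M-estimate of a single parameter and its $B$ bootstrap replicates, the interval above can be computed in a few lines; the sketch below implements the percentile-of-differences construction under those assumptions, with illustrative variable names.

```python
import numpy as np

def bootstrap_ci(theta_hat, theta_boot, gamma=0.05):
    """CI for one parameter, approximating the law of (theta_hat - theta)
    by the bootstrap law of (theta_boot - theta_hat).

    theta_hat  : M-estimate of the parameter
    theta_boot : array of B bootstrap estimates of the same parameter
    gamma      : 1 - nominal coverage level
    """
    d = np.asarray(theta_boot) - theta_hat
    q_lo, q_hi = np.quantile(d, [gamma / 2.0, 1.0 - gamma / 2.0])
    return theta_hat - q_hi, theta_hat - q_lo
```

In a simulation study, the empirical coverage is then the proportion of such intervals, across replications, that contain the known value of the parameter.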

Similarly, using the asymptotic normality of $\hat{\theta}_n$ in Theorem 2.1, we can obtain the confidence interval of $\theta_\psi$; this will be called the normal confidence interval. Specifically, in view of Proposition 3.1 of Mukherjee (2008) on the estimation of the variance-covariance matrix, we can obtain the asymptotic confidence interval of $\theta_\psi$ as

$\hat{\theta}_n \pm n^{-1/2}\, \hat{s}\, z_{1-\gamma/2},$

where $\hat{s}^2$ is the estimated variance of $\hat{\theta}_n$ obtained from the appropriate diagonal entry of the estimator of the variance-covariance matrix and $z_{1-\gamma/2}$ is the $(1-\gamma/2)$-th quantile of the standard normal distribution.

In Section 6, we compare the accuracy of the confidence intervals constructed by the bootstrap and the asymptotic normal approximations.

4 Algorithm

We discuss the implementation of an iteratively re-weighted algorithm proposed in Mukherjee (2020) for computing M-estimates. In particular, we highlight the $B$-estimate and the Cauchy-estimate of the GARCH parameters, as their asymptotic distributions are derived under mild moment assumptions. We also consider the bootstrap estimators based on the corresponding score functions.

4.1 Computation of M-estimates

For convenience of writing, let $\hat{\theta}^{(m)}$ denote the estimate of $\theta$ at the $m$-th iteration. Using a Taylor expansion of the score function, we obtain the following recursive equation for computing the updated estimate $\hat{\theta}^{(m+1)}$ from the current estimate $\hat{\theta}^{(m)}$:

where, under smoothness conditions on $\psi$, the expansion involves a constant factor depending on the error distribution. Since the GARCH residuals estimate the errors only up to a scale, in general we cannot estimate this factor from the data. Therefore, we use ad hoc techniques such as simulating errors from the standard normal or the standardized DE distribution and then using the resulting value to carry out the iteration. Note that if the iteration in (4.1) converges, then successive iterates coincide in the limit, and hence the limit is the desired solution $\hat{\theta}_n$. Based on our extensive simulation study and real data analysis, the algorithm is robust in the sense that it converges to the same value of $\hat{\theta}_n$ irrespective of the value of the unknown factor used in the computation.

In the following examples, we discuss (4.1) when specialized to the M-estimators computed in this paper.

QMLE: Here $\psi(x) = x$ and $H(x) = x^2$, and $\hat{\theta}_n$ can be computed iteratively. Note that the resulting updating formula is the same as the one obtained through the BHHH algorithm proposed by Berndt et al. (1974).
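A minimal sketch of one BHHH-type updating step for the QMLE in a GARCH (1, 1) model: the per-observation score contributions are $(1/2)(\dot{v}_t/v_t)(X_t^2/v_t - 1)$, and the step premultiplies their sum by the inverse of the sum of their outer products. The start-up value of the recursion and the crude positivity safeguard are illustrative choices, not the paper's prescription.

```python
import numpy as np

def bhhh_step(X, theta):
    """One BHHH update for the QMLE in GARCH(1,1); theta = (omega, alpha, beta)."""
    omega, alpha, beta = theta
    n = len(X)
    v = np.empty(n)
    dv = np.zeros((n, 3))            # dv[t] = gradient of v_t w.r.t. theta
    v[0] = np.var(X)                 # ad hoc start-up value
    for t in range(1, n):
        v[t] = omega + alpha * X[t - 1] ** 2 + beta * v[t - 1]
        dv[t] = np.array([1.0, X[t - 1] ** 2, v[t - 1]]) + beta * dv[t - 1]
    u = dv / v[:, None]                          # \dot v_t / v_t
    g = 0.5 * u * (X ** 2 / v - 1.0)[:, None]    # per-observation scores
    G = g.T @ g                                  # BHHH outer-product matrix
    step = np.linalg.solve(G, g.sum(axis=0))
    return np.maximum(theta + step, 1e-8)        # crude positivity safeguard
```

For a general score, the same structure applies with $X_t^2/v_t$ replaced by $H(X_t/v_t^{1/2})$ and the unknown distribution-dependent factor inserted in the step, as discussed above.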

LAD: Here $\psi(x) = \mathrm{sign}(x)$ and $H(x) = |x|$, and $\hat{\theta}_n$ can be computed iteratively.

Huber: Here $\psi(x) = x\, I(|x| \le k) + k\, \mathrm{sign}(x)\, I(|x| > k)$, so that $H(x) = x^2\, I(|x| \le k) + k|x|\, I(|x| > k)$, and $\hat{\theta}_n$ can be computed iteratively.

$B$-estimator: Here the $B$-score of Section 2 is used, and $\hat{\theta}_n$ can be computed iteratively.

Cauchy-estimator: Here $\psi(x) = 2x/(1 + x^2)$ and $H(x) = 2x^2/(1 + x^2)$, and $\hat{\theta}_n$ can be computed iteratively.

4.2 Computation of bootstrap M-estimates

Here the relevant function $M_n^{*}(\theta)$ is defined in (3.1), and the bootstrap estimate $\hat{\theta}_n^{*}$ can be computed using the updating equation