Different types of mixture models are in widespread use in various fields. Overviews of mixture models can be found, for example, in the monographs of McLachlan & Peel (2000) and Frühwirth-Schnatter (2006). In this paper, we are concerned with mixture autoregressive models that were introduced by Le et al. (1996) and further developed by Wong & Li (2000, 2001a, 2001b) (for further references, see Kalliovirta et al. (2015)).
In mixture autoregressive models the conditional distribution of the present observation given the past is a mixture distribution where the component distributions are obtained from linear autoregressive models. The specification of a mixture autoregressive model typically requires two choices: choosing a conditional distribution for the component models and choosing a functional form for the mixing weights. In the majority of existing models a Gaussian distribution is assumed, whereas, in addition to constant mixing weights, several different time-varying mixing weights (functions of past observations) have been considered in the literature.
Instead of a Gaussian distribution, Wong et al. (2009) proposed using Student's t–distribution. A major motivation for this comes from the heavier tails of the t–distribution, which allow the resulting model to better accommodate the fat tails encountered in many observed time series, especially in economics and finance. In the model suggested by Wong et al. (2009), the conditional mean and conditional variance of each component model are the same as in the Gaussian case (a linear function of past observations and a constant, respectively), and what changes is the distribution of the independent and identically distributed error term: instead of a standard normal distribution, a Student's t–distribution is used. This is a natural approach to formulating the component models and hence also a mixture autoregressive model based on the t–distribution.
In this paper, we also consider a mixture autoregressive model based on Student's t–distribution, but our specification differs from that used by Wong et al. (2009). Our starting point is the characteristic feature of linear Gaussian autoregressions that stationary distributions (of consecutive observations) as well as conditional distributions are Gaussian. We imitate this feature by using a (multivariate) Student's t–distribution and, as a first step, construct a linear autoregression in which both conditional and (low-dimensional) stationary distributions have Student's t–distributions. This leads to a model where the conditional mean is as in the Gaussian case (a linear function of past observations) whereas the conditional variance is no longer constant but depends on a quadratic form of past observations. These linear models are then used as component models in our new mixture autoregressive model which we call the StMAR model.
Our StMAR model has some very attractive features. Like the model of Wong et al. (2009), it can be useful for modelling time series with regime switching, multimodality, and conditional heteroskedasticity. As the conditional variances of the component models are time-varying, the StMAR model can potentially accommodate stronger forms of conditional heteroskedasticity than the model of Wong et al. (2009). Our formulation also has the theoretical advantage that, for a pth order model, the stationary distribution of p + 1 consecutive observations is fully known and is a mixture of particular Student's t–distributions. Moreover, stationarity and ergodicity are simple consequences of the definition of the model and do not require complicated proofs.
Finally, a few notational conventions. All vectors are treated as column vectors and we write x = (x_1, …, x_n) for the column vector (x_1', …, x_n')', where the components x_i may be either scalars or vectors. The notation x ~ n_d(μ, Γ) signifies that the random vector x has a d–dimensional Gaussian distribution with mean μ and (positive definite) covariance matrix Γ. Similarly, by x ~ t_d(μ, Γ, ν) we mean that x has a d–dimensional Student's t–distribution with mean μ, (positive definite) covariance matrix Γ, and degrees of freedom ν (assumed to satisfy ν > 2); the density function and some properties of the multivariate Student's t–distribution employed are given in an Appendix. The notation 1_p is used for a p–dimensional vector of ones, 0_p signifies the zero vector of dimension p, and the identity matrix of dimension p is denoted by I_p. The Kronecker product is denoted by ⊗, and vec(A) stacks the columns of the matrix A on top of one another.
2 Linear Student's t autoregressions
In this section we briefly consider linear pth order autoregressions that have multivariate Student's t–distributions as their stationary distributions. First, for motivation and to develop notation, consider a linear Gaussian autoregression z_t (t = 0, ±1, ±2, …) generated by

z_t = φ_0 + φ_1 z_{t−1} + ⋯ + φ_p z_{t−p} + σ ε_t,   (1)
where the error terms ε_t are independent and identically distributed with a standard normal distribution, and the parameters satisfy φ_0 ∈ R, φ = (φ_1, …, φ_p) ∈ S^p, and σ > 0, where

S^p = { φ ∈ R^p : the roots of 1 − φ_1 u − ⋯ − φ_p u^p = 0 lie outside the unit circle }

is the stationarity region of a linear pth order autoregression. Denoting z_t = (z_t, …, z_{t−p+1}) and z_t^+ = (z_t, …, z_{t−p}), it is well known that the stationary solution to (1) satisfies

z_t^+ ~ n_{p+1}(μ 1_{p+1}, Γ_{p+1})  and  z_t ~ n_p(μ 1_p, Γ_p),   (2)

z_t | z_{t−1} ~ n_1(μ_t, σ²),   (3)

where the last relation defines the conditional distribution of z_t given z_{t−1}, and the quantities μ, γ_0, Γ_p, Γ_{p+1}, and μ_t are defined via

μ = φ_0 / (1 − φ_1 − ⋯ − φ_p)  and  μ_t = φ_0 + φ' z_{t−1},   (4)

with γ_0 = Var(z_t) and with Γ_p = Cov(z_t) and Γ_{p+1} = Cov(z_t^+) the covariance matrices implied by the stationary solution to (1).
It is not immediately obvious that linear autoregressions based on Student's t–distribution with similar properties exist (such models have, however, appeared at least in Spanos (1994) and Heracleous & Spanos (2006)). Suppose that for a random vector z = (z_1, z_2) in R^{1+p} it holds that z ~ t_{p+1}(μ 1_{p+1}, Γ_{p+1}, ν), where ν > 2 (and other notation is as above in (4)). Then (for details, see the Appendix) the conditional distribution of z_1 given z_2 is t_1(μ^+(z_2), σ^+(z_2)², ν + p), where

μ^+(z_2) = φ_0 + φ' z_2  and  σ^+(z_2)² = σ² (ν − 2 + (z_2 − μ 1_p)' Γ_p^{−1} (z_2 − μ 1_p)) / (ν − 2 + p).   (5)
We now state the following theorem (proofs of all theorems are in the Supplementary Material).
Theorem 1. Suppose φ_0 ∈ R, φ ∈ S^p, σ > 0, and ν > 2. Then there exists a process z_t (t = 0, 1, 2, …) with the following properties.

(i) The process z_t = (z_t, …, z_{t−p+1}) is a Markov chain on R^p with a stationary distribution characterized by the density function t_p(z; μ 1_p, Γ_p, ν). When z_0 ~ t_p(μ 1_p, Γ_p, ν), we have, for t = 1, 2, …, that z_t ~ t_p(μ 1_p, Γ_p, ν) and the conditional distribution of z_t given z_{t−1} is

z_t | z_{t−1} ~ t_1(μ_t, σ_t², ν + p),   (6)

where μ_t = φ_0 + φ' z_{t−1} and σ_t² = σ^+(z_{t−1})², with σ^+(·)² as in (5).

(ii) For t = 1, 2, …, the process has the representation z_t = φ_0 + φ_1 z_{t−1} + ⋯ + φ_p z_{t−p} + σ_t ε_t, where σ_t² is as in (6) and the errors ε_t have zero mean and unit variance.
Results (i) and (ii) in Theorem 1 are comparable to properties (2)–(3) and (1) in the Gaussian case. Part (i) shows that both the stationary and conditional distributions of z_t are t–distributions, whereas part (ii) clarifies the connection to standard AR(p) models. In contrast to linear Gaussian autoregressions, in this t–distributed case z_t is conditionally heteroskedastic and has an 'AR(p)–ARCH(p)' representation (here ARCH refers to autoregressive conditional heteroskedasticity).
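As a concrete illustration of Theorem 1, the following sketch (our own illustration, not part of the paper; the function names ar_covariance and t_ar_conditional are ours) computes the conditional mean μ_t and the time-varying conditional variance σ_t², with Γ_p obtained from the companion-form equation of a stationary AR(p):

```python
import numpy as np

def ar_covariance(phi, sigma2):
    """Covariance matrix Gamma_p of p consecutive observations of a
    stationary AR(p): solve vec(Gamma) = (I - A (x) A)^{-1} vec(sigma2 e1 e1'),
    with A the companion matrix of the AR coefficients."""
    phi = np.asarray(phi, dtype=float)
    p = phi.size
    A = np.zeros((p, p))
    A[0, :] = phi
    if p > 1:
        A[1:, :p - 1] = np.eye(p - 1)
    B = np.zeros((p, p))
    B[0, 0] = sigma2
    vecG = np.linalg.solve(np.eye(p * p) - np.kron(A, A), B.ravel())
    return vecG.reshape(p, p)

def t_ar_conditional(z_prev, phi0, phi, sigma2, nu):
    """Conditional mean and variance of z_t given z_{t-1} in Theorem 1:
    mu_t = phi0 + phi' z_{t-1}, and
    sigma_t^2 = sigma^2 (nu - 2 + q) / (nu - 2 + p), with
    q = (z_{t-1} - mu 1_p)' Gamma_p^{-1} (z_{t-1} - mu 1_p)."""
    phi = np.asarray(phi, dtype=float)
    z_prev = np.asarray(z_prev, dtype=float)
    p = phi.size
    mu = phi0 / (1.0 - phi.sum())          # stationary mean
    Gamma_p = ar_covariance(phi, sigma2)
    dev = z_prev - mu
    q = dev @ np.linalg.solve(Gamma_p, dev)
    mu_t = phi0 + phi @ z_prev
    sigma2_t = sigma2 * (nu - 2.0 + q) / (nu - 2.0 + p)
    return mu_t, sigma2_t
```

For example, with p = 1, φ_0 = 1, φ_1 = 0.5, σ² = 1, and ν = 5, the stationary mean is μ = 2 and γ_0 = 4/3; at z_{t−1} = μ the quadratic form vanishes and σ_t² = 3/4, below the error variance σ² = 1, while values of z_{t−1} far from μ push σ_t² above σ².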
3 A mixture autoregressive model based on Student's t–distribution
3.1 Mixture autoregressive models
Let y_t (t = 1, 2, …) be the real-valued time series of interest, and let F_{t−1} denote the σ–algebra generated by {y_{t−j}, j > 0}. We consider mixture autoregressive models for which the conditional density function of y_t given its past, f(y_t | F_{t−1}), is of the form

f(y_t | F_{t−1}) = Σ_{m=1}^M α_{m,t} f_m(y_t | F_{t−1}),   (8)

where the (positive) mixing weights α_{m,t} are F_{t−1}–measurable and satisfy Σ_{m=1}^M α_{m,t} = 1 (for all t), and the f_m(· | F_{t−1}), m = 1, …, M, describe the conditional densities of M autoregressive component models. Different mixture models are obtained with different specifications of the mixing weights and the conditional densities.
Starting with the specification of the conditional densities f_m(· | F_{t−1}), a common choice has been to assume the component models to be linear Gaussian autoregressions. For the mth component model (m = 1, …, M), denote by φ_{m,0}, φ_m = (φ_{m,1}, …, φ_{m,p}), and σ_m² the parameters of a pth order linear autoregression with φ_{m,0} ∈ R, φ_m ∈ S^p, and σ_m > 0. Also set y_{t−1} = (y_{t−1}, …, y_{t−p}). In the Gaussian case, the conditional densities in (8) take the form (m = 1, …, M)

f_m(y_t | F_{t−1}) = σ_{m,t}^{−1} φ((y_t − μ_{m,t}) / σ_{m,t}),

where φ(·) signifies the density function of a standard normal random variable, μ_{m,t} = φ_{m,0} + φ_m' y_{t−1} is the conditional mean function (of component m), and σ_{m,t}² is the conditional variance (of component m), often assumed to be constant (σ_{m,t}² = σ_m²). Instead of a Gaussian density, Wong et al. (2009) consider the case where f_m(y_t | F_{t−1}) is the density of Student's t–distribution with conditional mean and variance as above, μ_{m,t} and a constant σ_m², respectively.
In this paper, we also consider a mixture autoregressive model based on Student's t–distribution, but our formulation differs from that used by Wong et al. (2009). In Theorem 1 it was seen that linear autoregressions based on Student's t–distribution naturally lead to the conditional distribution in (6). Motivated by this, we consider a mixture autoregressive model in which the conditional densities in (8) are specified as

f_m(y_t | F_{t−1}) = t_1(y_t; μ_{m,t}, σ_{m,t}², ν_m + p),   (9)
where the expressions for μ_{m,t} and σ_{m,t}² are as in (5) except that z_2 is replaced with y_{t−1} and all the quantities therein are defined using the regime specific parameters φ_{m,0}, φ_m, σ_m², and ν_m (whenever appropriate, a subscript m is added to previously defined notation, e.g., μ_m or Γ_{m,p}). A key difference to the model of Wong et al. (2009) is that the conditional variance σ_{m,t}² of component m is not constant but a function of y_{t−1}. An explicit expression for the density in (9) can be obtained from the Appendix and is

f_m(y_t | F_{t−1}) = C_m (σ_{m,t}²)^{−1/2} (1 + (ν_m + p − 2)^{−1} (y_t − μ_{m,t})² / σ_{m,t}²)^{−(1+ν_m+p)/2},   (10)

where C_m = Γ((1 + ν_m + p)/2) / (Γ((ν_m + p)/2) (π(ν_m + p − 2))^{1/2}) (and Γ(·) here signifies the gamma function).
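For a numerical sanity check, this univariate density can be implemented directly; the sketch below is our own illustration (the function name t1_density is ours) and parameterizes the Student's t by its mean, variance, and degrees of freedom, as in the Appendix:

```python
import numpy as np
from math import lgamma, log, pi

def t1_density(y, mu, sigma2, nu):
    """Univariate Student's t density with mean mu, variance sigma2, and
    nu (> 2) degrees of freedom, i.e. the form used in (10) with nu
    standing for nu_m + p; vectorized in y."""
    logc = (lgamma((1.0 + nu) / 2.0) - lgamma(nu / 2.0)
            - 0.5 * log(pi * (nu - 2.0) * sigma2))
    return np.exp(logc - 0.5 * (1.0 + nu)
                  * np.log1p((y - mu) ** 2 / ((nu - 2.0) * sigma2)))
```

Numerical integration on a fine grid confirms that the density integrates to one and has variance sigma2, so the mean/variance parameterization behaves as intended.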
Now consider the choice of the mixing weights α_{m,t} in (8). The most basic choice is to use constant mixing weights as in Wong & Li (2000) and Wong et al. (2009). Several different time-varying mixing weights have also been suggested; see, e.g., Wong & Li (2001a), Glasbey (2001), Lanne & Saikkonen (2003), Dueker et al. (2007), and Kalliovirta et al. (2015, 2016). We define the mixing weights as

α_{m,t} = α_m t_p(y_{t−1}; μ_m 1_p, Γ_{m,p}, ν_m) / Σ_{n=1}^M α_n t_p(y_{t−1}; μ_n 1_p, Γ_{n,p}, ν_n),   (11)

where the α_m, m = 1, …, M, are unknown parameters satisfying Σ_{m=1}^M α_m = 1. Note that the Student's t density appearing in (11) corresponds to the stationary distribution in Theorem 1(i): If the y_t's were generated by a linear Student's t autoregression described in Section 2 (with a subscript m added to all the notation therein), the stationary distribution of y_{t−1} would be characterized by the density t_p(y_{t−1}; μ_m 1_p, Γ_{m,p}, ν_m). Our definition of the mixing weights in (11) is different from that used in Glasbey (2001) and Kalliovirta et al. (2015) in that these authors employed the Gaussian density n_p(y_{t−1}; μ_m 1_p, Γ_{m,p}) (corresponding to the stationary distribution of a linear Gaussian autoregression) instead of the Student's t density we use.
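A minimal sketch of the weight computation in (11), under hypothetical helper names of our own (tp_density, mixing_weights) and with each regime's stationary distribution summarized by a scalar mean μ_m, a covariance matrix Γ_{m,p}, and degrees of freedom ν_m:

```python
import numpy as np
from math import lgamma, log, pi

def tp_density(y, mu_vec, Gamma, nu):
    """d-dimensional Student's t density t_d(y; mu, Gamma, nu),
    parameterized by mean vector and covariance matrix (Appendix form)."""
    y = np.asarray(y, dtype=float)
    d = y.size
    dev = y - mu_vec
    q = dev @ np.linalg.solve(Gamma, dev)
    _, logdet = np.linalg.slogdet(Gamma)
    logc = (lgamma((d + nu) / 2.0) - lgamma(nu / 2.0)
            - 0.5 * d * log((nu - 2.0) * pi) - 0.5 * logdet)
    return float(np.exp(logc - 0.5 * (d + nu) * np.log1p(q / (nu - 2.0))))

def mixing_weights(y_prev, alphas, mus, Gammas, nus):
    """Time-varying mixing weights (11): alpha_{m,t} is proportional to
    alpha_m * t_p(y_{t-1}; mu_m 1_p, Gamma_{m,p}, nu_m)."""
    y_prev = np.asarray(y_prev, dtype=float)
    w = np.array([a * tp_density(y_prev, m * np.ones(y_prev.size), G, n)
                  for a, m, G, n in zip(alphas, mus, Gammas, nus)])
    return w / w.sum()
```

By construction the weights are positive and sum to one; when all regimes share the same stationary distribution, the time-varying weights reduce to the constants α_m.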
3.2 The Student's t mixture autoregressive model
Equations (8), (9), and (11) define a model we call the Student's t mixture autoregressive, or StMAR, model. When the autoregressive order p or the number of mixture components M needs to be emphasized we refer to an StMAR(p, M) model. We collect the unknown parameters of an StMAR model in the vector θ = (ϑ_1, …, ϑ_M, α_1, …, α_{M−1}), where ϑ_m = (φ_{m,0}, φ_m, σ_m², ν_m) (with φ_{m,0} ∈ R, φ_m ∈ S^p, σ_m > 0, and ν_m > 2) contains the parameters of each component model (m = 1, …, M) and the α_m's are the parameters appearing in the mixing weights (11); the parameter α_M is not included due to the restriction Σ_{m=1}^M α_m = 1.
The StMAR model can also be presented in an alternative (but equivalent) form. To this end, let P_{t−1}(·) signify the conditional probability of the indicated event given F_{t−1}, and let ε_t = (ε_{1,t}, …, ε_{M,t}) be a sequence of independent and identically distributed random vectors, with ε_{m,t} following a t_1(0, 1, ν_m + p) distribution, such that ε_t is independent of F_{t−1} (for all t). Furthermore, let s_t = (s_{1,t}, …, s_{M,t}) be a sequence of (unobserved) M–dimensional random vectors such that, conditional on F_{t−1}, s_t and ε_t are independent (for all t). The components of s_t are such that, for each t, exactly one of them takes the value one and the others are equal to zero, with conditional probabilities P_{t−1}(s_{m,t} = 1) = α_{m,t}, m = 1, …, M. Now y_t can be expressed as

y_t = Σ_{m=1}^M s_{m,t} (μ_{m,t} + σ_{m,t} ε_{m,t}),   (12)
where σ_{m,t} is as in (9). This formulation suggests that the mixing weights α_{m,t} can be thought of as (conditional) probabilities that determine which one of the M autoregressive components of the mixture generates the observation y_t.
It turns out that the StMAR model has some very attractive theoretical properties; the carefully chosen conditional densities in (9) and the mixing weights in (11) are crucial in obtaining these properties. The following theorem shows that there exists a choice of initial values y_0 = (y_0, …, y_{−p+1}) such that y_t = (y_t, …, y_{t−p+1}) is a stationary and ergodic Markov chain. Importantly, an explicit expression for the stationary distribution is also provided.
Theorem 2. There exist initial values y_0 = (y_0, …, y_{−p+1}) such that y_t = (y_t, …, y_{t−p+1}) (t = 1, 2, …) is a stationary and ergodic Markov chain. The stationary distribution of y_t is a mixture of p–dimensional t–distributions t_p(μ_m 1_p, Γ_{m,p}, ν_m) with constant mixing weights α_m, m = 1, …, M.

Hence, moments of the stationary distribution of order smaller than min(ν_1, …, ν_M) exist and are finite. As can be seen from the proof of Theorem 2 (in the Supplementary Material), the stationary distribution of the vector y_t^+ = (y_t, …, y_{t−p}) is also a mixture of t–distributions with density of the same form, Σ_{m=1}^M α_m t_{p+1}(y^+; μ_m 1_{p+1}, Γ_{m,p+1}, ν_m). Thus the mean, variance, and first p autocovariances of y_t are (here the connection between the elements γ_{m,j} of Γ_{m,p+1} and the component parameters is as in (4))

E[y_t] = Σ_{m=1}^M α_m μ_m ≡ μ,

Var(y_t) = Σ_{m=1}^M α_m γ_{m,0} + Σ_{m=1}^M α_m (μ_m − μ)²,

Cov(y_t, y_{t−j}) = Σ_{m=1}^M α_m γ_{m,j} + Σ_{m=1}^M α_m (μ_m − μ)²,   j = 1, …, p.
Subvectors of y_t^+ also have stationary distributions that belong to the same family (but this does not hold for higher dimensional vectors such as (y_{t+1}, y_t, …, y_{t−p})).
The fact that an explicit expression for the stationary (marginal) distribution of the StMAR model is available is not only convenient but also quite exceptional among mixture autoregressive models or other related nonlinear autoregressive models (such as threshold or smooth transition models). Previously, similar results have been obtained by Glasbey (2001) and Kalliovirta et al. (2015) in the context of mixture autoregressive models that are of the same form but based on the Gaussian distribution (for a few rather simple first order examples involving other models, see Tong (2011, Section 4.2)).
From the definition of the model, the conditional mean and variance of y_t given F_{t−1} are obtained as

E[y_t | F_{t−1}] = Σ_{m=1}^M α_{m,t} μ_{m,t},

Var(y_t | F_{t−1}) = Σ_{m=1}^M α_{m,t} σ_{m,t}² + Σ_{m=1}^M α_{m,t} (μ_{m,t} − Σ_{n=1}^M α_{n,t} μ_{n,t})².   (13)
Except for the different definition of the mixing weights, the conditional mean is as in the Gaussian mixture autoregressive model of Kalliovirta et al. (2015). This is due to the well-known fact that in the multivariate t–distribution the conditional mean is of the same linear form as in the multivariate Gaussian distribution. However, unlike in the Gaussian case, the conditional variance of the multivariate t–distribution is not constant. Therefore, in (13) we have the time-varying variance component σ_{m,t}², which in the models of Kalliovirta et al. (2015) and Wong et al. (2009) is constant (in the latter model the mixing weights are also constants). In (13) both the mixing weights α_{m,t} and the variance components σ_{m,t}² are functions of y_{t−1}, implying that the conditional variance exhibits nonlinear autoregressive conditional heteroskedasticity. Compared to the aforementioned previous models, our model may therefore be useful in applications where the data exhibit rather strong conditional heteroskedasticity.
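The conditional moments in (13) are the usual mixture formulas: the mean is the weighted average of the component means, and the variance adds the weighted within-component variances to the dispersion of the component means around the mixture mean. A small sketch (the function name is ours):

```python
import numpy as np

def stmar_cond_moments(alpha_t, mu_t, sigma2_t):
    """Conditional mean and variance of y_t as in (13), given the
    mixing weights, component conditional means, and component
    conditional variances at time t."""
    a = np.asarray(alpha_t, dtype=float)
    m = np.asarray(mu_t, dtype=float)
    s2 = np.asarray(sigma2_t, dtype=float)
    mean = a @ m
    var = a @ s2 + a @ (m - mean) ** 2   # within + between components
    return mean, var
```

With a single component the formulas collapse to that component's conditional mean and variance; with two well-separated component means the between-component term can dominate the conditional variance.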
4 Estimation

The parameters of an StMAR model can be estimated by the method of maximum likelihood (details of the numerical optimization methods employed and of simulation experiments are available in the Supplementary Material). As the stationary distribution of the StMAR process is known, it is even possible to make use of initial values and construct the exact likelihood function and obtain exact maximum likelihood estimates. Assuming the observed data y_{−p+1}, …, y_0, y_1, …, y_T and stationary initial values, the log-likelihood function takes the form

L_T(θ) = log( Σ_{m=1}^M α_m t_p(y_0; μ_m 1_p, Γ_{m,p}, ν_m) ) + Σ_{t=1}^T l_t(θ),   (14)

where

l_t(θ) = log( Σ_{m=1}^M α_{m,t} t_1(y_t; μ_{m,t}, σ_{m,t}², ν_m + p) ).   (15)
An explicit expression for the density appearing in (15) is given in (10), and the notation for μ_{m,t} and σ_{m,t}² is explained after (9). Although not made explicit, α_{m,t}, μ_{m,t}, and σ_{m,t}², as well as the quantities μ_m and Γ_{m,p}, depend on the parameter vector θ.
In (14) it has been assumed that the initial values y_0 are generated by the stationary distribution. If this assumption seems inappropriate, one can condition on initial values and drop the first term on the right hand side of (14). In what follows we assume that estimation is based on this conditional log-likelihood, namely

L_T^{(c)}(θ) = T^{−1} Σ_{t=1}^T l_t(θ),

which we, for convenience, have also scaled with the sample size. Maximizing L_T^{(c)}(θ) with respect to θ yields the maximum likelihood estimator, denoted by θ̂_T.
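To make the estimation step concrete, the following sketch computes the scaled conditional log-likelihood for the special case p = 1; it is our own illustration (not the authors' code), with the stationary quantities μ_m = φ_{m,0}/(1 − φ_{m,1}) and γ_{m,0} = σ_m²/(1 − φ_{m,1}²) computed in closed form:

```python
import numpy as np
from math import lgamma, log, pi

def t1_logpdf(y, mu, s2, nu):
    # log density of a t with mean mu, variance s2, dof nu (form (10))
    return (lgamma((nu + 1.0) / 2.0) - lgamma(nu / 2.0)
            - 0.5 * log(pi * (nu - 2.0) * s2)
            - 0.5 * (nu + 1.0) * np.log1p((y - mu) ** 2 / ((nu - 2.0) * s2)))

def stmar1_cond_loglik(y, alphas, phi0, phi, sigma2, nu):
    """Scaled conditional log-likelihood of an StMAR(1, M) model, i.e.
    (14)-(15) with the initial-value term dropped; parameter lists are
    given per component."""
    M = len(alphas)
    mu_m = [phi0[m] / (1.0 - phi[m]) for m in range(M)]        # stationary means
    g0 = [sigma2[m] / (1.0 - phi[m] ** 2) for m in range(M)]   # gamma_{m,0}
    ll = 0.0
    for t in range(1, len(y)):
        yp = y[t - 1]
        # mixing weights (11) from the stationary t_1 densities of y_{t-1}
        logw = np.array([log(alphas[m]) + t1_logpdf(yp, mu_m[m], g0[m], nu[m])
                         for m in range(M)])
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # l_t: mixture of component conditional t densities, (5) with p = 1
        comp = 0.0
        for m in range(M):
            mu_t = phi0[m] + phi[m] * yp
            s2_t = (sigma2[m] * (nu[m] - 2.0 + (yp - mu_m[m]) ** 2 / g0[m])
                    / (nu[m] - 1.0))
            comp += w[m] * np.exp(t1_logpdf(y[t], mu_t, s2_t, nu[m] + 1.0))
        ll += log(comp)
    return ll / (len(y) - 1)
```

A useful invariance check: duplicating a component (two identical regimes with weights 0.5 each) must leave the likelihood unchanged, which also illustrates the identification problem discussed below.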
The permissible parameter space of θ, denoted by Θ, needs to be constrained in various ways. The stationarity conditions φ_m ∈ S^p, the positivity of the variances σ_m² > 0, and the conditions ν_m > 2 ensuring existence of second moments are all assumed to hold (for m = 1, …, M). Throughout we assume that the number of mixture components M is known, and this also entails the requirement that the parameters α_m (m = 1, …, M) are strictly positive (and strictly less than unity whenever M > 1). Further restrictions are required to ensure identification. Denoting the true parameter value by θ_0 and assuming stationary initial values, the condition needed is that f(y_t | F_{t−1}; θ) = f(y_t | F_{t−1}; θ_0) almost surely only if θ = θ_0. An additional assumption needed for this is

α_1 > ⋯ > α_M  and  ϑ_i = ϑ_j only if i = j.
From a practical point of view this assumption is not restrictive because what it essentially requires is that the component models cannot be ‘relabeled’ and the same StMAR model obtained. We summarize the restrictions imposed on the parameter space as follows.
Assumption 1. The true parameter value θ_0 is an interior point of Θ, where Θ is a compact subset of the permissible parameter space described above.
Asymptotic properties of the maximum likelihood estimator can now be established under conventional high-level conditions. Denote the information matrix by I(θ_0) = E[∂l_t(θ_0)/∂θ · ∂l_t(θ_0)/∂θ'] and the expected Hessian by J(θ_0) = E[−∂²l_t(θ_0)/∂θ∂θ'].
Theorem 3. Suppose y_t is generated by the stationary and ergodic StMAR process of Theorem 2 and that Assumption 1 holds. Then the maximum likelihood estimator θ̂_T is strongly consistent, i.e., θ̂_T → θ_0 almost surely. Suppose further that (i) T^{1/2} ∂L_T^{(c)}(θ_0)/∂θ → N(0, I(θ_0)) in distribution, with I(θ_0) finite and positive definite, (ii) I(θ_0) = J(θ_0), and (iii) E[sup_{θ ∈ Θ_0} ‖∂²l_t(θ)/∂θ∂θ'‖] < ∞ for some Θ_0, a compact convex set contained in the interior of Θ that has θ_0 as an interior point. Then T^{1/2}(θ̂_T − θ_0) → N(0, I(θ_0)^{−1}) in distribution.
Of the conditions in this theorem, (i) states that a central limit theorem holds for the score vector (evaluated at θ_0) and that the information matrix is positive definite, (ii) is the information matrix equality, and (iii) ensures the uniform convergence of the Hessian matrix (in some neighbourhood of θ_0). These conditions are standard, but their verification may be tedious.
Theorem 3 shows that the conventional limiting distribution applies to the maximum likelihood estimator, which implies the applicability of standard likelihood-based tests. It is worth noting, however, that here a correct specification of the number of autoregressive components M is required. In particular, if the number of component models is chosen too large, then some parameters of the model are not identified and, consequently, the result of Theorem 3 and the validity of the related tests break down. This is particularly relevant when one tests for the number of component models. Such tests for mixture autoregressive models with Gaussian conditional densities (see (8)) are developed by Meitz & Saikkonen (2017). The testing problem is highly nonstandard, and extending their results to the present case is beyond the scope of this paper.
Instead of formal tests, in our empirical application we use information criteria to infer which model fits the data best. Similar approaches have also been used by Wong et al. (2009) and others. Note that once the number of regimes is (correctly) chosen, standard likelihood-based inference can be used to choose regime-wise autoregressive orders and to test other hypotheses of interest.
5 Empirical example
Modelling and forecasting financial market volatility is key to managing risk. In this application we use the realized kernel of Barndorff-Nielsen et al. (2008) as a proxy for latent volatility. We obtained daily realized kernel data over the period 3 January 2000 through 20 May 2016 for the S&P 500 index from the Oxford-Man Institute's Realized Library v0.2 (Heber et al., 2009). Figure 1 shows the in-sample period (3 January 2000 through 3 June 2014; 3597 observations) of the S&P 500 realized kernel data, which is nonnegative with a distribution exhibiting substantial skewness and excess kurtosis (sample skewness 14.3, sample kurtosis 380.8). We follow the related literature, which frequently uses the logarithmic realized kernel, to avoid imposing additional parameter constraints and to obtain a more symmetric distribution, often taken to be approximately Gaussian. The logarithmic data, also shown in Figure 1, have a sample skewness of 0.5 and kurtosis of 3.5. Visual inspection of the time series plots of the level and logarithmic data suggests that the two series exhibit changes at least in levels and potentially also in variability. A kernel estimate of the density function of the logarithmic series also suggests the potential presence of multiple regimes.
Table 1 reports estimation results for three selected StMAR models (for further details, see the Supplementary Material). Following Wong & Li (2001a), Wong et al. (2009), and Li et al. (2015), we use information criteria for model comparison. For the in-sample period of the logarithmic data, the Akaike information criterion (AIC), the Hannan–Quinn information criterion (HQC), and the Bayesian information criterion (BIC) each favour a different one of the three StMAR specifications in Table 1, the BIC selecting the simplest of them. In view of the approximate standard errors in Table 1, the estimation accuracy appears quite reasonable, except for the degrees of freedom parameters. Taking the sum of the autoregressive parameters as a measure of persistence, we find that the estimated persistence is 0.909 for the first regime and 0.489 for the second regime, suggesting that persistence is rather strong in the first regime and moderate in the second regime.
Numerous alternative models for volatility proxies have been proposed. We employ Corsi's (2009) heterogeneous autoregressive (HAR) model, as it is arguably the most popular reference model for forecasting proxies such as the realized kernel. We also consider a linear autoregression, as AR models often perform well in volatility proxy forecasting. The StMAR models are estimated using maximum likelihood, and the reference AR and HAR models by ordinary least squares. We use a fixed scheme in which the parameters of our volatility models are estimated just once, using data from 3 January 2000 through 3 June 2014. These estimates are then used to generate all forecasts. The remaining 496 observations of our sample are used to compare the forecasts from the alternative models. As discussed in Kalliovirta et al. (2016), computing multi-step-ahead forecasts for mixture models like the StMAR is rather complicated. For this reason we use simulation-based forecasts to predict future volatility: For each out-of-sample date, and for each alternative model, we simulate 500,000 sample paths. Each path is of length 22 (representing one trading month) and conditional on the information available at that date. In these simulations unknown parameters are replaced by their estimates. As the simulated paths are for the logarithmic realized kernel, and our object of interest is the realized kernel itself, an exponential transformation is applied.
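A minimal sketch of such simulation-based forecasting, written for the special case p = 1 (our own illustration with hypothetical function names, not the authors' code; the paper's application uses 500,000 paths of length 22 per model and date):

```python
import numpy as np
from math import lgamma, log, pi

def t1_logpdf(y, mu, s2, nu):
    # log density of a t with mean mu, variance s2, dof nu; vectorized in y
    return (lgamma((nu + 1.0) / 2.0) - lgamma(nu / 2.0)
            - 0.5 * log(pi * (nu - 2.0) * s2)
            - 0.5 * (nu + 1.0) * np.log1p((y - mu) ** 2 / ((nu - 2.0) * s2)))

def simulate_stmar1_paths(y_last, h, n_paths, alphas, phi0, phi, sigma2, nu, seed=1):
    """Simulate n_paths sample paths of length h from an StMAR(1, M),
    conditional on the last observation: at each step draw the regime
    from the mixing weights (11), then draw y from the regime's
    conditional t distribution (9)."""
    rng = np.random.default_rng(seed)
    alphas, phi0 = np.asarray(alphas, float), np.asarray(phi0, float)
    phi, sigma2, nu = (np.asarray(phi, float), np.asarray(sigma2, float),
                       np.asarray(nu, float))
    M = alphas.size
    mu_m = phi0 / (1.0 - phi)        # stationary means
    g0 = sigma2 / (1.0 - phi ** 2)   # stationary variances gamma_{m,0}
    paths = np.empty((n_paths, h))
    y = np.full(n_paths, float(y_last))
    for t in range(h):
        logw = np.stack([np.log(alphas[m]) + t1_logpdf(y, mu_m[m], g0[m], nu[m])
                         for m in range(M)])
        w = np.exp(logw - logw.max(axis=0))
        w /= w.sum(axis=0)
        u = rng.random(n_paths)
        regime = (u[None, :] > np.cumsum(w, axis=0)).sum(axis=0)  # 0..M-1
        mu_t = phi0[regime] + phi[regime] * y
        s2_t = (sigma2[regime] * (nu[regime] - 2.0
                + (y - mu_m[regime]) ** 2 / g0[regime]) / (nu[regime] - 1.0))
        dof = nu[regime] + 1.0
        # location-scale t draw with mean mu_t and variance s2_t
        y = mu_t + np.sqrt(s2_t * (dof - 2.0) / dof) * rng.standard_t(dof)
        paths[:, t] = y
    return paths
```

Forecasts and prediction intervals are then read off the empirical distribution of the simulated paths (after the exponential transformation when the model is fitted to logarithms).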
We examine daily, weekly (5 day), biweekly (10 day), and monthly (22 day) volatility forecasts generated by the alternative models; for instance, the weekly volatility forecast at a given date is the forecast of the 5-day-ahead cumulative realized kernel. Table 2 reports the percentage shares of (1, 5, 10, and 22-day) cumulative out-of-sample observations that belong to the 99%, 95%, and 90% one-sided upper prediction intervals based on the distribution of the simulated sample paths; these upper prediction intervals for volatility are related to higher levels of risk in financial markets. Overall, the empirical coverage rates of the StMAR based prediction intervals are closer to the nominal levels than those obtained with the reference models. By comparison, the accuracy of the prediction intervals obtained with the popular HAR model quickly degrades as the forecast period increases. The StMAR model performs well also when two-sided prediction intervals and point forecast accuracy are considered (for details, see the Supplementary Material).
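The coverage computation underlying such a comparison can be sketched as follows (a hypothetical helper of our own, not the authors' code): for each out-of-sample date, the realized cumulative value is compared with the level-quantile of the simulated path sums, and the hits are averaged:

```python
import numpy as np

def upper_pi_coverage(simulated_sums, realized, level):
    """Empirical coverage of one-sided upper prediction intervals: the
    share of realized cumulative values that fall at or below the
    level-quantile of the corresponding simulated path sums."""
    hits = [r <= np.quantile(np.asarray(s, dtype=float), level)
            for s, r in zip(simulated_sums, realized)]
    return float(np.mean(hits))
```

A well-calibrated model should produce an empirical coverage close to the nominal level (e.g., 0.90 for the 90% upper interval).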
The authors thank the Academy of Finland for financial support.
The supplementary material includes proofs of Theorems 1–3, information on the numerical optimization methods employed for maximum likelihood estimation, simulation experiments, and further details of the empirical example.
Properties of the multivariate Student's t–distribution
The standard form of the density function of the multivariate Student's t–distribution with ν degrees of freedom and dimension d is (see, e.g., Kotz & Nadarajah (2004, p. 1))

f(x) = Γ((d + ν)/2) / (Γ(ν/2) (νπ)^{d/2} |Σ|^{1/2}) · (1 + (x − μ)' Σ^{−1} (x − μ)/ν)^{−(d+ν)/2},

where Γ(·) is the gamma function and μ (d × 1) and Σ (d × d), a symmetric positive definite matrix, are parameters. For a random vector x possessing this density, the mean and covariance matrix are E[x] = μ and Cov(x) = (ν/(ν − 2)) Σ (assuming ν > 2). The density can be expressed in terms of the mean μ and the covariance matrix Γ = (ν/(ν − 2)) Σ as

t_d(x; μ, Γ, ν) = Γ((d + ν)/2) / (Γ(ν/2) ((ν − 2)π)^{d/2} |Γ|^{1/2}) · (1 + (x − μ)' Γ^{−1} (x − μ)/(ν − 2))^{−(d+ν)/2}.
This form of the density function, denoted by t_d(x; μ, Γ, ν), is used in this paper, and the notation x ~ t_d(μ, Γ, ν) is used for a random vector x possessing this density. The condition ν > 2 and positive definiteness of Γ will be tacitly assumed.
For marginal and conditional distributions, partition x as x = (x_1, x_2), where the components have dimensions d_1 and d_2 (d_1 + d_2 = d). Conformably partition μ and Γ as μ = (μ_1, μ_2) and

Γ = [ Γ_{11}  Γ_{12}
      Γ_{21}  Γ_{22} ].

Then the marginal distributions of x_1 and x_2 are t_{d_1}(μ_1, Γ_{11}, ν) and t_{d_2}(μ_2, Γ_{22}, ν), respectively. The conditional distribution of x_1 given x_2 is also a t–distribution, namely (see Ding (2016, Sec. 2))

x_1 | x_2 ~ t_{d_1}(μ_{1|2}(x_2), Γ_{1|2}(x_2), ν + d_2),

where

μ_{1|2}(x_2) = μ_1 + Γ_{12} Γ_{22}^{−1} (x_2 − μ_2)

and

Γ_{1|2}(x_2) = ((ν − 2 + (x_2 − μ_2)' Γ_{22}^{−1} (x_2 − μ_2)) / (ν − 2 + d_2)) · (Γ_{11} − Γ_{12} Γ_{22}^{−1} Γ_{21}).

Furthermore, the mean and covariance matrix of this conditional distribution are μ_{1|2}(x_2) and Γ_{1|2}(x_2), respectively.
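These conditional-distribution formulas are straightforward to implement; the sketch below (our own illustration, function name ours) returns the conditional mean, covariance matrix, and degrees of freedom:

```python
import numpy as np

def t_conditional(x2, mu, Gamma, nu, d1):
    """Conditional distribution of x1 given x2 when x = (x1, x2) has a
    t_d(mu, Gamma, nu) distribution (mean/covariance parameterization):
    returns the conditional mean, covariance matrix, and dof nu + d2."""
    mu = np.asarray(mu, dtype=float)
    Gamma = np.asarray(Gamma, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    G11 = Gamma[:d1, :d1]
    G12 = Gamma[:d1, d1:]
    G22 = Gamma[d1:, d1:]
    dev = x2 - mu[d1:]
    sol = np.linalg.solve(G22, dev)
    mean = mu[:d1] + G12 @ sol
    d2 = dev.size
    scale = (nu - 2.0 + dev @ sol) / (nu - 2.0 + d2)
    cov = scale * (G11 - G12 @ np.linalg.solve(G22, G12.T))
    return mean, cov, nu + d2
```

When x_2 equals its mean, the quadratic form vanishes and the conditional covariance shrinks below the Schur complement by the factor (ν − 2)/(ν − 2 + d_2); conditioning values far from the mean inflate it, which is the source of the ARCH-type behaviour used in the main text.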
Now consider a special case: a (p + 1)–dimensional random vector z ~ t_{p+1}(μ 1_{p+1}, Γ_{p+1}, ν), where μ ∈ R and Γ_{p+1} is a symmetric positive definite Toeplitz matrix. Note that the mean vector μ 1_{p+1} and the covariance matrix Γ_{p+1} have structures similar to those of the mean and covariance matrix of a (p + 1)–dimensional realization of a second order stationary process. More specifically, assume that Γ_{p+1} is the covariance matrix of a second order stationary AR(p) process.

Partition z as z = (z_1, z_d) = (z_u, z_{p+1}), with z_1 and z_{p+1} real valued and z_u and z_d both p × 1 vectors (so that z_u consists of the first p and z_d of the last p components of z). The marginal distributions of z_u and z_d are both t_p(μ 1_p, Γ_p, ν), where the (symmetric positive definite Toeplitz) matrix Γ_p is obtained from Γ_{p+1} by deleting the first row and first column or, equivalently, the last row and last column (here the specific structures of μ 1_{p+1} and Γ_{p+1} are used). The conditional distribution of z_1 given z_d is

z_1 | z_d ~ t_1(μ^+(z_d), σ^+(z_d)², ν + p),

where expressions for μ^+(z_d) and σ^+(z_d)² can be obtained from above as follows. Partition Γ_{p+1} as

Γ_{p+1} = [ γ_0  γ'
            γ    Γ_p ]

and denote φ = Γ_p^{−1} γ and σ² = γ_0 − γ' Γ_p^{−1} γ (σ² > 0 as Γ_{p+1} is positive definite). From above,

μ^+(z_d) = μ + φ' (z_d − μ 1_p)  and  σ^+(z_d)² = ((ν − 2 + (z_d − μ 1_p)' Γ_p^{−1} (z_d − μ 1_p)) / (ν − 2 + p)) σ².
- Barndorff-Nielsen et al. (2008) Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A. & Shephard, N. (2008). Designing realized kernels to measure the ex post variation of equity prices in the presence of noise. Econometrica 76, 1481–1536.
- Corsi (2009) Corsi, F. (2009). A simple approximate long-memory model of realized volatility. J. Finan. Economet. 7, 174–196.
- Ding (2016) Ding, P. (2016). On the conditional distribution of the multivariate t distribution. Am. Statistician 70, 293–295.
- Dueker et al. (2007) Dueker, M. J., Sola, M. & Spagnolo, F. (2007). Contemporaneous threshold autoregressive models: estimation, testing and forecasting. J. Economet. 141, 517–547.
- Frühwirth-Schnatter (2006) Frühwirth-Schnatter, S. (2006). Finite Mixture and Markov Switching Models. Springer.
- Glasbey (2001) Glasbey, C. A. (2001). Non-linear autoregressive time series with multivariate Gaussian mixtures as marginal distributions. J. R. Statist. Soc. C 50, 143–154.
- Heber et al. (2009) Heber, G., Lunde, A., Shephard, N. & Sheppard, K. (2009). Oxford-Man Institute's Realized Library v0.2. Oxford-Man Institute, University of Oxford.
- Heracleous & Spanos (2006) Heracleous, M. S. & Spanos, A. (2006). The Student's t dynamic linear regression: re-examining volatility modeling. In Econometric Analysis of Financial and Economic Time Series (Advances in Econometrics, Vol. 20 Part 1), D. Terrell & T. B. Fomby, eds. Emerald Group Publishing Limited, pp. 289–319.
- Kalliovirta et al. (2015) Kalliovirta, L., Meitz, M. & Saikkonen, P. (2015). A Gaussian mixture autoregressive model for univariate time series. J. Time Ser. Anal. 36, 247–266.
- Kalliovirta et al. (2016) Kalliovirta, L., Meitz, M. & Saikkonen, P. (2016). Gaussian mixture vector autoregression. J. Economet. 192, 485–498.
- Kotz & Nadarajah (2004) Kotz, S. & Nadarajah, S. (2004). Multivariate t Distributions and Their Applications. Cambridge: Cambridge University Press.
- Lanne & Saikkonen (2003) Lanne, M. & Saikkonen, P. (2003). Modeling the US short-term interest rate by mixture autoregressive processes. J. Finan. Economet. 1, 96–125.
- Le et al. (1996) Le, N. D., Martin, R. D. & Raftery, A. E. (1996). Modeling flat stretches, bursts, and outliers in time series using mixture transition distribution models. J. Am. Statist. Assoc. 91, 1504–1515.
- Li et al. (2015) Li, G., Guan, B., Li, W. K. & Yu, P. L. (2015). Hysteretic autoregressive time series models. Biometrika 102, 717–723.
- McLachlan & Peel (2000) McLachlan, G. & Peel, D. (2000). Finite Mixture Models. Wiley.
- Meitz & Saikkonen (2017) Meitz, M. & Saikkonen, P. (2017). Testing for observation-dependent regime switching in mixture autoregressive models. HECER Discussion Paper No. 420, University of Helsinki, arXiv:1711.03959.
- Spanos (1994) Spanos, A. (1994). On modeling heteroskedasticity: the Student's t and elliptical linear regression models. Economet. Theory 10, 286–315.
- Tong (2011) Tong, H. (2011). Threshold models in time series analysis – 30 years on. Statistics and Its Interface 4, 107–118.
- Wong et al. (2009) Wong, C. S., Chan, W. S. & Kam, P. L. (2009). A Student t-mixture autoregressive model with applications to heavy-tailed financial data. Biometrika 96, 751–760.
- Wong & Li (2000) Wong, C. S. & Li, W. K. (2000). On a mixture autoregressive model. J. R. Statist. Soc. B 62, 95–115.
- Wong & Li (2001a) Wong, C. S. & Li, W. K. (2001a). On a logistic mixture autoregressive model. Biometrika 88, 833–846.
- Wong & Li (2001b) Wong, C. S. & Li, W. K. (2001b). On a mixture autoregressive conditional heteroscedastic model. J. Am. Statist. Assoc. 96, 982–995.
Supplementary material for
“A mixture autoregressive model based on Student's t–distribution”
by Meitz, Preve, and Saikkonen
This Supplementary Material includes proofs of Theorems 1–3, information on the numerical optimization methods employed for maximum likelihood estimation, simulation experiments, and further details of the empirical example.
Proof of Theorem 1.
Corresponding to ,