
Semiparametric model averaging for high dimensional conditional quantile prediction

09/05/2018
by   Jingwen Tu, et al.

In this article, we propose a penalized high dimensional semiparametric model average quantile prediction approach that is robust for forecasting the conditional quantile of the response. We consider a two-step estimation procedure. In the first step, we use a local linear regression approach to estimate the individual marginal quantile functions, and approximate the conditional quantile of the response by an affine combination of one-dimensional marginal quantile regression functions. In the second step, based on the nonparametric kernel estimates of the marginal quantile regression functions, we utilize a penalized method to estimate the suitable model weights vector involved in the approximation. The objective of the second step is to select significant variables whose marginal quantile functions make a significant contribution to estimating the joint multivariate conditional quantile function. Under some mild conditions, we have established the asymptotic properties of the proposed robust estimator. Finally, simulations and a real data analysis have been used to illustrate the proposed method.


1 Introduction

In many practical situations, especially in economic and medical fields, forecasting and predictive inference are our main goals. In practice, we often face a large number of predictors and uncertain functional forms when making statistical predictions. A popular approach to this problem is model selection, which selects an optimal model from all candidate models; however, model selection yields only one final model, so useful information may be ignored when significant variables are absent from the final model. This may result in misleading predictive outcomes. Instead of depending on only one best model, an alternative method, called model averaging, aims to improve prediction accuracy by giving higher weights to the better marginal models. Thus, model averaging can be regarded as a smoothed extension of model selection and generally leads to a lower risk than model selection. Earlier developments in model averaging were closely linked to Bayesian statistics, including Hoeting et al. (1999), Raftery et al. (1997) and Hjort and Claeskens (2003). Recently, various strategies have been developed to construct optimal model averaging weights for frequentist models. For example, Hansen (2007) proposed a frequentist model averaging approach with weights selected by minimizing a Mallows criterion. Wan et al. (2010) focused on two assumptions of Hansen (2007) and provided a stronger theoretical basis for the use of the Mallows criterion in model averaging. Liang et al. (2011) considered a new procedure for weight choice by minimizing the mean squared error of frequentist model averaging estimators. To deal with heteroscedastic data, Hansen and Racine (2012) developed a jackknife model averaging approach that chooses weights by minimizing a leave-one-out cross-validation criterion and proved that the proposed approach achieves the lowest possible asymptotic squared error. Zhang et al. (2013) further extended the method of Hansen and Racine (2012) to general models with a non-diagonal error covariance structure or lagged dependent variables. In the framework of linear mixed-effects models, Zhang et al. (2014) constructed an unbiased estimator of the squared risk for model averaging, which was shown to be asymptotically optimal under some regularity conditions. Zhang et al. (2016) studied optimal model averaging methods for generalized linear models and generalized linear mixed-effects models, which can be viewed as an extension of Zhang et al. (2014). Under the local asymptotic framework, Liu et al. (2015) studied the limiting distributions of least squares averaging estimators and proposed a plug-in averaging estimator that minimizes the sample asymptotic mean squared error. Other related work includes Hansen (2008), Claeskens and Hjort (2008), Zhang et al. (2012) and Cheng and Hansen (2015).

Almost all of the work mentioned above focuses on averaging a set of parametric models by assuming parametrically linear or nonlinear relationships between the response and the predictors. Although parametric models are easy to understand and widely accepted by scientific researchers, they impose strong assumptions in practical applications, which may increase the risk of biased prediction. In contrast, nonparametric models with fewer structural restrictions may provide more flexible predictive inference. Recently, Li et al. (2015) first proposed a nonparametric model averaging approach that is more flexible than traditional parametric averaging methods. They estimated the multivariate conditional mean regression function by averaging a set of estimated marginal mean regression functions with proper weights obtained by minimizing a least squares loss. Motivated by this nonparametric model averaging technique, Chen et al. (2016) studied semiparametric dynamic portfolio choice and used a novel data-driven method to estimate the nonparametric optimal portfolio choice. Huang and Li (2018) extended the method of Li et al. (2015) to panel data and established the asymptotic properties of their procedure. Li et al. (2018) approximated the conditional mean regression function by a weighted average of varying coefficient regression functions, which can handle both discrete and continuous predictors.

In recent years, we often encounter datasets with a very large number of potential predictors, of which only a minority are truly relevant for prediction. However, most of the literature focuses on determining weights for individual models under a fixed number of covariates. Ando and Li (2014) proposed a two-step model averaging procedure to predict the conditional mean of the response for ultra-high dimensional linear regression. To obtain more accurate prediction of the conditional mean of the response for ultra-high dimensional time series, Chen et al. (2018) introduced a two-step semiparametric procedure that combines a kernel sure independence screening technique with a semiparametric penalized method of model averaging marginal regression. All of the aforementioned references aim to forecast the conditional mean of the response, but sometimes we are more interested in predicting its conditional quantile. Compared with mean regression, quantile regression not only provides a more complete description of the entire response distribution but also does not require specification of the error distribution, and thus it is more robust.

In this paper, we aim to develop a new semiparametric model averaging procedure that achieves more accurate prediction of the true conditional quantile of the response in the high dimensional setting. This paper makes several contributions: (1) our objective is to predict the conditional quantile of the response rather than its conditional mean, so establishing the asymptotic theory for the model weights is more challenging since the weights have no closed-form expression; (2) the proposed approach offers a complete prediction picture for the response when different quantiles are adopted; (3) our method produces more accurate in-sample and out-of-sample predictions when non-normal errors are considered.

The rest of the paper is organized as follows. In Section 2, we first give the approximation of the conditional quantile function of the response. Then a two-step semiparametric model averaging approach is applied to estimate the conditional quantile function of the response. In Section 3, we establish the asymptotic theory for the proposed estimator. In Section 4, numerical studies including simulation studies and a real data analysis are carried out to investigate the finite sample performance of the proposed method. Some discussions are reported in Section 5. Finally, all technical proofs are given in the Appendix.

2 Model approximation and estimation method

Let $\{(X_i, Y_i), i = 1, \ldots, n\}$ be independent and identically distributed observations from $(X, Y)$, where $X = (X_1, \ldots, X_p)^{T}$ is a $p$-vector of predictors and $Y$ is the response variable. The goal of this paper is to develop a new procedure for forecasting the $\tau$th conditional quantile function of $Y$ given $X$, namely $Q_\tau(Y \mid X)$. If the dimension $p$ of $X$ is high, it is not practical to model the conditional quantile function $Q_\tau(Y \mid X)$ without any structural assumption due to the curse of dimensionality. Recently, authors have approximated the quantile function $Q_\tau(Y \mid X)$ by semiparametric models such as quantile additive models (Horowitz and Lee, 2005; Lv et al., 2017), quantile varying coefficient models (Tang et al., 2013), among others. However, using a specified model with a fixed structure may increase the risk of model misspecification, which results in poor predictive performance. Therefore, we adopt the model averaging technique to predict $Q_\tau(Y \mid X)$. Specifically, motivated by Li et al. (2015), we model or approximate $Q_\tau(Y \mid X)$ by an affine combination of one-dimensional nonparametric functions, $w_0 + \sum_{j=1}^{p} w_j m_j(X_j)$, where $m_j(x) = Q_\tau(Y \mid X_j = x)$ is the $\tau$th conditional quantile of $Y$ given $X_j$. Here, each marginal quantile regression $m_j(\cdot)$ can be regarded as a candidate model and $w_j$ is the corresponding model weight coefficient. In the rest of the article, we suppress the dependence of $m_j$ and $w_j$ on $\tau$ for notational simplicity, but it is helpful to bear in mind that these quantities are $\tau$-specific.

What we are most interested in is to accurately estimate the marginal quantile functions $m_j(\cdot)$ and the model average weight vector $w = (w_0, w_1, \ldots, w_p)^{T}$. We consider a two-step estimation procedure. In the first step, we employ the local linear regression technique to estimate the individual marginal regression functions $m_j(\cdot)$. Specifically, for $X_{ij}$ in a neighborhood of a given point $x$, a Taylor expansion gives

$m_j(X_{ij}) \approx m_j(x) + m_j'(x)(X_{ij} - x) \equiv a + b\,(X_{ij} - x),$

where $m_j'(\cdot)$ is the first-order derivative of $m_j(\cdot)$. Let $\rho_\tau(u) = u\{\tau - I(u < 0)\}$ be the check loss function at the $\tau$th quantile. Then, we estimate $m_j(x)$ by minimizing the following local weighted quantile loss

$\sum_{i=1}^{n} \rho_\tau\{ Y_i - a - b\,(X_{ij} - x) \}\, K\!\left( \frac{X_{ij} - x}{h} \right),$    (1)

where $K(\cdot)$ is a kernel function and $h$ is a bandwidth. Let $(\hat a_j, \hat b_j)$ be the minimizer of the objective function (1). Then, we have $\hat m_j(x) = \hat a_j$.
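To make the first step concrete, the following sketch fits the kernel-weighted check-loss objective (1) at a single point x0 by recasting it as a linear program. This is not the authors' code: the function names, the Epanechnikov kernel choice and the toy data are illustrative assumptions, and the sketch uses only standard NumPy/SciPy calls.

```python
import numpy as np
from scipy.optimize import linprog

def epanechnikov(u):
    # Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on |u| <= 1
    return 0.75 * np.maximum(1.0 - u ** 2, 0.0)

def local_linear_quantile(x, y, x0, tau, h):
    """Kernel-weighted check-loss fit of (a, b) at x0; returns m_hat(x0) = a_hat.

    Minimizes sum_i rho_tau(y_i - a - b*(x_i - x0)) * K((x_i - x0)/h)
    via the usual linear-programming form with slack variables u_i, v_i >= 0.
    """
    d = x - x0
    k = epanechnikov(d / h)
    keep = k > 0                              # points with positive kernel weight
    d, yk, k = d[keep], y[keep], k[keep]
    n = len(yk)
    # decision vector: [a, b, u_1..u_n, v_1..v_n]
    c = np.concatenate(([0.0, 0.0], tau * k, (1.0 - tau) * k))
    A_eq = np.hstack([np.ones((n, 1)), d.reshape(-1, 1), np.eye(n), -np.eye(n)])
    bounds = [(None, None), (None, None)] + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=yk, bounds=bounds, method="highs")
    return res.x[0]                           # a_hat, the estimated marginal quantile

# toy usage: estimate the 0.5-quantile of Y given X_j at x0 = 0
rng = np.random.default_rng(0)
xj = rng.uniform(-2, 2, 300)
yy = np.sin(xj) + rng.standard_t(3, 300)
print(local_linear_quantile(xj, yy, x0=0.0, tau=0.5, h=0.5))
```

In practice the fit would be repeated over a grid of evaluation points (or at the observed $X_{ij}$) to obtain the fitted components $\hat m_j(X_{ij})$ needed in the second step.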

In the second step, let $w^{o} = (w_0^{o}, w_1^{o}, \ldots, w_p^{o})^{T}$ be the optimal values of the weights in the model averaging defined in Li et al. (2015). To estimate $w^{o}$, we minimize the following penalized objective function with respect to $w = (w_0, w_1, \ldots, w_p)^{T}$,

$\sum_{i=1}^{n} \rho_\tau\Big\{ Y_i - w_0 - \sum_{j=1}^{p} w_j \hat m_j(X_{ij}) \Big\} + n \sum_{j=1}^{p} p_{\lambda}(|w_j|),$    (2)

where $p_{\lambda}(\cdot)$ is a penalty function with a tuning parameter $\lambda$, such as the SCAD penalty function, whose first-order derivative $p'_{\lambda}(\cdot)$ is defined by

$p'_{\lambda}(|w|) = \lambda \Big\{ I(|w| \le \lambda) + \frac{(a\lambda - |w|)_{+}}{(a - 1)\lambda} I(|w| > \lambda) \Big\},$

where $a > 2$ is a constant (often taken as 3.7) and $\lambda$ is a nonnegative penalty parameter which governs the sparsity of the model. It is easy to see that $p'_{\lambda}(|w|)$ is close to zero if $|w|$ is large.

The estimator $\hat w$ of the optimal weights can be obtained by minimizing the objective function (2). This paper uses the R package "rqPen" to obtain the estimator $\hat w$. Finally, for a future observation $X^{new} = (X_1^{new}, \ldots, X_p^{new})^{T}$, we can predict $Q_\tau(Y \mid X^{new})$ by $\hat w_0 + \sum_{j=1}^{p} \hat w_j \hat m_j(X_j^{new})$.
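Continuing the sketch above, the second step is a penalized quantile regression of Y on the fitted marginal quantile components. The snippet below uses an L1 penalty solved by linear programming purely as an illustrative stand-in for the SCAD penalty (the paper's actual computations rely on the R package rqPen); the helper names and the lasso surrogate are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def penalized_quantile_weights(M_hat, y, tau, lam):
    """Minimize sum_i rho_tau(y_i - w0 - M_hat[i, :] @ w) + n * lam * ||w||_1.

    M_hat holds the fitted marginal quantile components m_hat_j(X_ij) in its
    columns.  An L1 penalty is used as a simple surrogate for SCAD (SCAD could
    be handled by iterating this LP with weights p'_lambda(|w_j|), the local
    linear approximation idea); the intercept w0 is left unpenalized.
    """
    n, p = M_hat.shape
    D = np.hstack([np.ones((n, 1)), M_hat])              # prepend intercept column
    pen = np.concatenate([[0.0], n * lam * np.ones(p)])  # no penalty on w0
    # decision vector: [beta_plus (p+1), beta_minus (p+1), u (n), v (n)], all >= 0
    c = np.concatenate([pen, pen, tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([D, -D, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * (p + 1) + 2 * n), method="highs")
    beta = res.x[:p + 1] - res.x[p + 1:2 * (p + 1)]
    return beta[0], beta[1:]                              # (w0_hat, w_hat)

def predict_quantile(w0_hat, w_hat, m_new):
    # prediction for a future x: w0_hat + sum_j w_hat_j * m_hat_j(x_new_j)
    return w0_hat + m_new @ w_hat
```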

3 The theoretical results

Define and , where is the second-order derivative of , , , and . We assume and . To prove the theoretical results of the proposed estimators, we next present the following technical conditions.

(C1) Let $f_j(\cdot)$ be the marginal density function of the covariate $X_j$, the $j$-th element of $X$. Assume that $f_j(\cdot)$ has continuous derivatives up to the second order and is bounded away from zero and infinity on its compact support $\mathcal{C}_j$. For each $j$, the conditional density function of $Y$ given $X_j$ exists and satisfies the Lipschitz continuity condition. Furthermore, the length of $\mathcal{C}_j$ is uniformly bounded by a positive constant.

(C2) The kernel function $K(\cdot)$ is a Lipschitz continuous, symmetric and bounded probability density function with compact support.

(C3) The marginal regression function has continuous derivatives up to the second order and there exists a positive constant such that

(C4) Let and be the marginal density and distribution functions of , and be the density and distribution functions of . The density functions and are bounded and bounded away from zero in a neighborhood of zero.

(C5) There exists a sequence of fixed vectors in , with bounded, such that , where denotes the norm for any vector.

(C6) The matrix is positive definite with eigenvalues bounded away from zero and infinity; in particular, its smallest eigenvalue is larger than a small positive constant.

(C7) , and for all , where and are the smallest and largest eigenvalues of .

(C8) .

(C9) Let and , and there exist two positive constants and such that when .

Without loss of generality, write the vector of optimal weights as $w^{o} = ((w^{o}_{(1)})^{T}, (w^{o}_{(2)})^{T})^{T}$, where $w^{o}_{(1)}$ collects the non-zero weights and $w^{o}_{(2)}$ collects the zero weights. Let $\hat w_{(1)}$ and $\hat w_{(2)}$ be the estimators of $w^{o}_{(1)}$ and $w^{o}_{(2)}$, respectively.

Define , , is first vector of , and are the top-left submatrix of and , and . Let with , with and . Obviously, the mean of is zero, and we define . Let , and , where is the sign function. Define and . In the following theorems, we give the asymptotic theories of and .

Theorem 1. Suppose that $x$ is an interior point of the support of $X_j$. Under the regularity conditions (C1)–(C4), if $h \to 0$ and $nh \to \infty$, then the asymptotic conditional bias of the local linear estimator $\hat m_j(x)$ is of order $h^{2}$ and its asymptotic conditional variance is of order $(nh)^{-1}$. Furthermore, conditioning on the observed covariates, $\sqrt{nh}\,\{\hat m_j(x) - m_j(x) - \mathrm{bias}\}$ converges in distribution to a centered normal limit for $j = 1, \ldots, p$.

Remark 1. Theorem 1 shows that the proposed nonparametric estimator $\hat m_j(x)$ is consistent and asymptotically normally distributed.

Theorem 2. Under conditions (C1)–(C9), together with appropriate rate conditions on the bandwidth, the penalty parameter and the number of candidate models, we have

(i) there exists a local minimizer $\hat w$ of the objective function defined in (2) that is consistent for the optimal weight vector $w^{o}$;

(ii) $\hat w_{(2)} = 0$ with probability approaching one;

(iii) the estimator $\hat w_{(1)}$ of the nonzero weights is asymptotically normal.

Remark 2. Theorem 2 indicates that the estimator of the optimal weight vector remains consistent even though the dimension of the predictors goes to infinity. Meanwhile, it also shows that the proposed estimator enjoys the well-known properties of high dimensional variable selection, such as sparsity and the oracle property.

4 Numerical studies

We investigate the performance of the proposed approach through three simulation examples and an empirical application. In our numerical studies, we set the kernel function to be the Epanechnikov kernel, namely, $K(u) = 0.75(1 - u^{2}) I(|u| \le 1)$. Bandwidth selection is crucial in local smoothing since it governs the curvature of the fitted function. Similar to Kai et al. (2011), we choose the bandwidth as $h_{\tau} = h_{LS}\{\tau(1 - \tau)/\phi(\Phi^{-1}(\tau))^{2}\}^{1/5}$, where $h_{LS}$ is the selected optimal bandwidth for least squares, and $\phi$ and $\Phi$ represent the density function and distribution function of the standard normal distribution, respectively. The rule of thumb is used to select the bandwidth $h_{LS}$. In addition, the tuning parameter $\lambda$ in the proposed penalized procedure plays an important role. Lian (2012) proved that the Schwarz information criterion (SIC) is a consistent variable selection criterion under the fixed-dimension framework. In this paper, we select $\lambda$ by minimizing the following modified SIC criterion (MSIC)

$\mathrm{MSIC}(\lambda) = \log\Big[ \sum_{i=1}^{n} \rho_\tau\Big\{ Y_i - \hat w_0(\lambda) - \sum_{j=1}^{p} \hat w_j(\lambda)\, \hat m_j(X_{ij}) \Big\} \Big] + df_{\lambda}\, \frac{C_n \log n}{2n},$    (3)

where $\hat w(\lambda)$ is the estimated model weight vector for a given $\lambda$, $df_{\lambda}$ is the number of nonzero coefficients in $\hat w(\lambda)$ and $C_n$ is a positive factor controlling the size of the model-complexity penalty. For example, the MSIC criterion reduces to the traditional SIC criterion of Lian (2012) when $C_n = 1$, while choosing a diverging $C_n$ makes the criterion more suitable for high dimensional data.
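The sketch below illustrates how these tuning steps could be wired together: the quantile-adjusted bandwidth computed from a least-squares rule-of-thumb bandwidth, and an MSIC-type score evaluated over a grid of lambda values. The form of the criterion mirrors the reconstruction above and my reading of Lian (2012)-style SIC, and all function names (including fit_for and df_for) are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def quantile_bandwidth(h_ls, tau):
    # Kai et al. (2011)-style adjustment of a least-squares rule-of-thumb
    # bandwidth h_ls to the quantile level tau:
    #   h_tau = h_ls * { tau*(1 - tau) / phi(Phi^{-1}(tau))^2 }^{1/5}
    q = norm.ppf(tau)
    return h_ls * (tau * (1.0 - tau) / norm.pdf(q) ** 2) ** 0.2

def check_loss(r, tau):
    return r * (tau - (r < 0))

def msic(y, fitted, n_nonzero, tau, c_n):
    # MSIC-type score: log of the total check loss plus a complexity penalty
    # df * c_n * log(n) / (2n); c_n = 1 gives a SIC-type criterion.
    n = len(y)
    loss = np.sum(check_loss(y - fitted, tau))
    return np.log(loss) + n_nonzero * c_n * np.log(n) / (2.0 * n)

# lambda would then be chosen by minimizing msic over a grid, e.g.
# best_lam = min(lam_grid, key=lambda l: msic(y, fit_for(l), df_for(l), tau, c_n))
```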

To demonstrate the advantages of the proposed method, we consider the following methods: (1) the proposed semiparametric model average quantile prediction (without SCAD penalty, denoted SMAQP); (2) the proposed penalized semiparametric model average quantile prediction (with SCAD penalty, denoted PSMAQP); (3) the semiparametric model average mean prediction proposed by Li et al. (2015) (without SCAD penalty, denoted SMAMP); (4) the penalized semiparametric model average mean prediction proposed by Chen et al. (2018) (with SCAD penalty, denoted PSMAMP). SMAMP and PSMAMP aim to forecast the conditional mean function, and detailed descriptions of the two methods can be found in Section 3 of Li et al. (2015) and Subsection 2.1 of Chen et al. (2018). The tuning parameter involved in PSMAMP is chosen by cross-validation following the advice of Chen et al. (2018), and the R package "ncvreg" is used to obtain the penalized estimator for PSMAMP.

4.1 Simulation studies

In all simulation examples, the sample of size $n$ consists of a training set of size $n_1$ and a testing set of size $n_2$, namely, $n = n_1 + n_2$.

Example 1. For a clear comparison, we adopt settings similar to those used in Chen et al. (2018) and generate random samples from the following model

(4)

where the additive components and the error term are specified similarly to Chen et al. (2018). We fix the testing sample size and consider two training sample sizes for Example 1. The covariates are drawn independently, and the dimension of the covariates is set to grow with the sample size (as the integer part of a power of $n$, with $\lfloor a \rfloor$ denoting the largest integer not greater than $a$) so that the theoretical condition is satisfied. Obviously, the first four variables make a significant contribution to estimating the joint multivariate quantile function, while the rest do not. Therefore, we have reason to believe that the first four model weights are nonzero and the rest are zero. Please note that the model average components $m_j(\cdot)$ given in Section 2 are different from the additive components appearing in model (4). Our goal is to accurately predict the conditional quantile function, so we do not attempt to estimate the additive components themselves in this paper.

In order to examine the robustness of the proposed procedure, we consider three different error distributions: the standard normal distribution (SN), the $t$-distribution with 3 degrees of freedom ($t_3$), and a contaminated normal distribution (MN) given by a mixture of two normal distributions. In addition, four criteria are adopted to evaluate the performance of the proposed approach. Firstly, "C", "IC" and "CF" are used to examine variable selection performance, where "C" is the average number of zero coefficients in the model weight vector that are correctly estimated to be zero; "IC" is the average number of nonzero coefficients in the model weight vector that are incorrectly estimated to be zero; and "CF" is the proportion of correctly fitted models ("correctly fitted" means that the estimation procedure correctly selects all significant components of the model weight vector). Secondly, the mean prediction error (MPE) is used to measure prediction accuracy, defined as $\mathrm{MPE} = |S|^{-1}\sum_{i \in S} \rho_\tau\{Y_i - \hat Q_\tau(Y \mid X_i)\}$, where $S$ stands for an index set of either the training sample or the testing sample.
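The helper below, a sketch under the stated definitions of C, IC, CF and MPE (all function and variable names are mine), shows how these criteria could be computed for one simulation replication; averaging over replications gives the entries reported in the tables.

```python
import numpy as np

def check_loss(r, tau):
    return r * (tau - (r < 0))

def selection_criteria(w_hat, true_zero):
    """C / IC / CF for one replication.

    true_zero is a boolean mask of the weights that are truly zero.
    C  = number of truly zero weights estimated as zero,
    IC = number of truly nonzero weights estimated as zero,
    CF = 1 if the sparsity pattern is recovered exactly, else 0.
    """
    est_zero = np.abs(w_hat) < 1e-8
    c = int(np.sum(est_zero & true_zero))
    ic = int(np.sum(est_zero & ~true_zero))
    cf = int(np.array_equal(est_zero, true_zero))
    return c, ic, cf

def mean_prediction_error(y, q_hat, tau):
    # MPE = |S|^{-1} * sum_{i in S} rho_tau(y_i - Q_hat_tau(y | x_i))
    return float(np.mean(check_loss(y - q_hat, tau)))
```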

n Error method C IC CF In-Sample Error Out-of-Sample Error
200 N(0,1) SMAMP 0.459 (0.031) 0.577 (0.058)
PSMAMP 7.362 0.018 0.258 0.467 (0.032) 0.566 (0.058)
SMAQP 0.462 (0.036) 0.580 (0.062)
PSMAQP 9.930 0.108 0.836 0.487 (0.038) 0.557 (0.061)
t3 SMAMP 0.601 (0.065) 0.748 (0.085)
PSMAMP 6.118 0.030 0.096 0.605 (0.063) 0.730 (0.085)
SMAQP 0.592 (0.047) 0.734 (0.081)
PSMAQP 9.956 0.206 0.760 0.625 (0.051) 0.705 (0.077)
MN SMAMP 0.702 (0.117) 0.869 (0.141)
PSMAMP 4.372 0.048 0.030 0.697 (0.113) 0.846 (0.145)
SMAQP 0.637 (0.084) 0.765 (0.122)
PSMAQP 9.966 0.222 0.754 0.668 (0.087) 0.738 (0.121)
400 N(0,1) SMAMP 0.445 (0.021) 0.524 (0.046)
PSMAMP 13.63 0.006 0.414 0.453 (0.022) 0.513 (0.045)
SMAQP 0.439 (0.023) 0.512 (0.045)
PSMAQP 15.98 0.006 0.976 0.457 (0.024) 0.494 (0.043)
t3 SMAMP 0.597 (0.052) 0.694 (0.081)
PSMAMP 11.38 0.002 0.162 0.599 (0.042) 0.673 (0.078)
SMAQP 0.580 (0.036) 0.664 (0.072)
PSMAQP 15.99 0.012 0.978 0.603 (0.037) 0.640 (0.072)
MN SMAMP 0.676 (0.078) 0.789 (0.116)
PSMAMP 8.346 0.006 0.040 0.667 (0.075) 0.761 (0.120)
SMAQP 0.616 (0.058) 0.693 (0.106)
PSMAQP 15.99 0.012 0.984 0.636 (0.058) 0.674 (0.105)

Notation: To make this a fair comparison, we consider $\tau = 0.5$ for SMAQP and PSMAQP in this table. In addition, the number of zero components of the model weight vector is 10 for the smaller sample size and 16 for the larger sample size.

Table 1: Simulation results of C, IC, CF, MPE and their standard deviations (in parentheses) for $\tau = 0.5$ in Example 1.

Example 2. In this example, similar to Huang and Li (2018), we generate the random samples from the following model

(5)

where the component functions and the covariate-generating scheme are specified similarly to Huang and Li (2018). We again fix the testing sample size and consider two training sample sizes for Example 2. Other settings are the same as in Example 1.

It is easy to see that the conditional mean function is equal to the conditional quantile function at $\tau = 0.5$ when the error distribution is symmetric. Thus, we can compare the mean prediction approaches (SMAMP and PSMAMP) with the quantile prediction approaches (SMAQP and PSMAQP) at $\tau = 0.5$. At $\tau = 0.5$ the MPE criterion reduces to one half of the mean absolute prediction error, and thus it can also be used to assess the prediction performance of the mean prediction approaches. The corresponding results of the mean prediction approaches (SMAMP and PSMAMP) and the quantile prediction approaches (SMAQP and PSMAQP) at $\tau = 0.5$ are reported in Tables 1 and 3. We can draw the following conclusions. Firstly, the values in the column labeled "C" gradually approach the true number of zero components as the training sample size increases. The CF values are very close to one for a large training sample size, which shows that the proposed penalized procedure can consistently select the significant components of the weight vector. However, the existing mean prediction approach PSMAMP performs badly, with much lower CF values. Secondly, the unpenalized methods always have smaller in-sample MPE than the penalized methods, but this does not hold for out-of-sample MPE. For the heavy-tailed $t_3$ distribution and the contaminated normal distribution MN, it is not hard to see that our proposed penalized method PSMAQP is the best among all methods in terms of prediction accuracy. Meanwhile, there is little difference between PSMAMP and PSMAQP under the normal error distribution. Thirdly, Tables 2 and 4 give the simulation results of SMAQP and PSMAQP at other quantile levels. The results again show that PSMAQP has better prediction performance.

n Error method C IC CF In-Sample Error Out-of-Sample Error
200 N(0,1) SMAQP 0.917 (0.099) 1.032 (0.163)
PSMAQP 9.896 0.562 0.468 0.961 (0.108) 1.027 (0.164)
t3 SMAQP 1.087 (0.110) 1.239 (0.170)
PSMAQP 9.866 0.768 0.344 1.139 (0.130) 1.229 (0.177)
MN SMAQP 1.108 (0.128) 1.256 (0.213)
PSMAQP 9.918 0.848 0.338 1.168 (0.148) 1.253 (0.215)
400 N(0,1) SMAQP 0.850 (0.075) 0.921 (0.119)
PSMAQP 15.97 0.134 0.844 0.875 (0.077) 0.909 (0.121)
t3 SMAQP 1.035 (0.085) 1.129 (0.148)
PSMAQP 15.95 0.224 0.740 1.066 (0.088) 1.110 (0.147)
MN SMAQP 1.054 (0.096) 1.147 (0.181)
PSMAQP 15.97 0.208 0.776 1.081 (0.100) 1.127 (0.178)

Notation: The number of zero components of the model weight vector is 10 for the smaller sample size and 16 for the larger sample size.

Table 2: Simulation results of C, IC, CF, MPE and their standard deviations (in parentheses) for another quantile level in Example 1.
n Error method C IC CF In-Sample Error Out-of-Sample Error
400 N(0,1) SMAMP 0.507 (0.021) 0.566 (0.045)
PSMAMP 11.75 0.000 0.212 0.515 (0.022) 0.555 (0.045)
SMAQP 0.501 (0.020) 0.594 (0.047)
PSMAQP 15.97 0.006 0.966 0.530 (0.022) 0.566 (0.045)
t3 SMAMP 0.724 (0.054) 0.810 (0.105)
PSMAMP 7.750 0.006 0.026 0.725 (0.052) 0.790 (0.107)
SMAQP 0.692 (0.042) 0.802 (0.104)
PSMAQP 15.98 0.048 0.936 0.728 (0.044) 0.769 (0.103)
MN SMAMP 0.837 (0.105) 0.932 (0.154)
PSMAMP 5.740 0.012 0.000 0.826 (0.104) 0.908 (0.156)
SMAQP 0.740 (0.072) 0.839 (0.151)
PSMAQP 15.99 0.044 0.946 0.772 (0.073) 0.808 (0.151)
800 N(0,1) SMAMP 0.511 (0.014) 0.553 (0.044)
PSMAMP 18.61 0 0.276 0.519 (0.015) 0.544 (0.043)
SMAQP 0.508 (0.014) 0.567 (0.045)
PSMAQP 23.99 0 0.988 0.528 (0.014) 0.546 (0.043)
t3 SMAMP 0.728 (0.038) 0.795 (0.089)
PSMAMP 13.89 0 0.036 0.727 (0.035) 0.774 (0.090)
SMAQP 0.704 (0.030) 0.784 (0.088)
PSMAQP 23.99 0 0.998 0.730 (0.031) 0.758 (0.087)
MN SMAMP 0.817 (0.066) 0.884 (0.140)
PSMAMP 9.662 0 0.008 0.803 (0.067) 0.856 (0.146)
SMAQP 0.744 (0.048) 0.810 (0.137)
PSMAQP 24.00 0 1.000 0.767 (0.048) 0.787 (0.136)

Notation: To make this a fair comparison, we consider $\tau = 0.5$ for SMAQP and PSMAQP in this table. In addition, the number of zero components of the model weight vector is 16 for the smaller sample size and 24 for the larger sample size.

Table 3: Simulation results of C, IC, CF, MPE and their standard deviations (in parentheses) for $\tau = 0.5$ in Example 2.
n Error method C IC CF In-Sample Error Out-of-Sample Error
400 N(0,1) SMAQP 0.848 (0.042) 0.938 (0.095)
PSMAQP 15.92 0.010 0.912 0.882 (0.043) 0.922 (0.094)
t3 SMAQP 1.111 (0.070) 1.220 (0.139)
PSMAQP 15.91 0.110 0.828 1.147 (0.073) 1.188 (0.133)
MN SMAQP 1.121 (0.092) 1.204 (0.195)
PSMAQP 15.93 0.094 0.852 1.160 (0.094) 1.184 (0.193)
800 N(0,1) SMAQP 0.853 (0.029) 0.912 (0.083)
PSMAQP 23.97 0.000 0.968 0.879 (0.030) 0.901 (0.081)
t3 SMAQP 1.114 (0.049) 1.179 (0.150)
PSMAQP 23.97 0.002 0.968 1.136 (0.048) 1.151 (0.143)
MN SMAQP 1.116 (0.065) 1.165 (0.179)
PSMAQP 23.98 0.000 0.984 1.142 (0.065) 1.146 (0.174)

Notation: The number of zero components of the model weight vector is 16 for the smaller sample size and 24 for the larger sample size.

Table 4: Simulation results of C, IC, CF, MPE and their standard deviations (in parentheses) for another quantile level in Example 2.
n τ method C IC CF In-Sample Error Out-of-Sample Error
400 0.5 SMAQP 0.328 (0.038) 0.341 (0.047)
PSMAQP 14.96 0.350 0.652 0.258 (0.049) 0.266 (0.056)
0.75 SMAQP 0.383 (0.047) 0.401 (0.060)
PSMAQP 14.90 0.236 0.702 0.301 (0.058) 0.310 (0.066)
800 0.5 SMAQP 0.266 (0.027) 0.273 (0.036)
PSMAQP 22.99 0.014 0.974 0.177 (0.029) 0.180 (0.033)
0.75 SMAQP 0.316 (0.033) 0.325 (0.046)
PSMAQP 22.95 0.022 0.932 0.217 (0.037) 0.221 (0.042)

Notation: The number of zero components of the model weight vector is 15 for the smaller sample size and 23 for the larger sample size.

Table 5: Simulation results of C, IC, CF, MEE and their standard deviations (in parentheses) in Example 3.

Example 3. The conditional quantile function is considered as

(6)

where $\Phi(\cdot)$ is the standard normal distribution function, the covariates are drawn independently, and one component can be regarded as the intercept. We fix the testing sample size and consider two training sample sizes for Example 3, with quantile levels $\tau = 0.5$ and $0.75$. Obviously, the fifth covariate's coefficient varies with $\tau$, and only the first five predictors are significant for predicting the conditional quantile. The first two examples come from nonparametric additive models, whereas the proposed approach does not need any model assumption; thus, in this example we aim to confirm that our method is model free. To assess the estimation accuracy, we consider the mean estimation error (MEE) in this example. Table 5 lists the simulation results, which show that the proposed PSMAQP performs well for different quantiles.

Overall, the proposed model free procedure PSMAQP is competitive when compared with the existing methods, and its finite sample performances are satisfactory.

4.2 An application

In this section, we apply the proposed method to analyze the body fat dataset (Johnson, 1996), which is available from http://lib.stat.cmu.edu/datasets/bodyfat. This dataset consists of 252 observations without missing values. The purpose of studying this dataset is to predict the percentage of body fat from various body circumference measurements. Thus, the percentage of body fat is taken as the response variable and the other body measurements are regarded as the predictors. Brief descriptions and marginal Pearson correlations of the 14 variables are summarized in Table 6; more details can be found in Johnson (1996). Before applying the prediction methods, we take a logarithm transformation of all predictors.

To evaluate the predictive performance of the various methods, the data are split into two parts. One part, containing $n_1$ observations, is used as a training set to estimate the weight vector and the marginal quantile functions, and the remaining observations form a testing set used to evaluate predictive ability. In this real data analysis, we consider three quantile levels (including $\tau = 0.5$) and two training sample sizes.
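A minimal data-preparation sketch for this analysis might look as follows; the file layout, column names and the training size of 150 are assumptions for illustration only, since the original page at http://lib.stat.cmu.edu/datasets/bodyfat mixes documentation with the raw data.

```python
import numpy as np
import pandas as pd

# Hypothetical loading step: assume the 252 x 14 data block has been saved to
# "bodyfat.csv" with a "BodyFat" column and the 13 predictor columns.
df = pd.read_csv("bodyfat.csv")
y = df["BodyFat"].to_numpy()
X = np.log(df.drop(columns=["BodyFat"]).to_numpy())   # log-transform predictors

rng = np.random.default_rng(2018)
idx = rng.permutation(len(y))
n_train = 150                          # illustrative training size, not the paper's
train, test = idx[:n_train], idx[n_train:]
X_train, y_train = X[train], y[train]
X_test, y_test = X[test], y[test]
# the training part is used to fit the marginal quantile functions and weights;
# the testing part gives the out-of-sample MPE, repeated over random partitions
```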

Table 7 reports the in-sample and out-of-sample mean prediction errors (MPE) and the corresponding sample standard deviations (SD) over 500 random partitions. Firstly, for $\tau = 0.5$, SMAQP has the best in-sample performance among the four approaches. For out-of-sample performance, one can see clearly that the proposed penalized approach PSMAQP has the smallest MPE and SD across the different settings, which shows that our method has better predictive ability. Secondly, at the other quantile levels, PSMAQP always performs better than SMAQP in terms of out-of-sample performance.

To investigate the estimated weights, we list in Table 8 the estimated weights at $\tau = 0.5$ and their standard deviations (in brackets) calculated by the bootstrap resampling method (Horowitz, 1998). Obviously, the weights for the penalized prediction methods (PSMAMP and PSMAQP) are relatively sparse, with much smaller standard deviations than those of the unpenalized prediction methods (SMAMP and SMAQP). Meanwhile, it is not hard to see that PSMAQP is the most efficient among all methods, as it has the smallest standard deviations. In addition, for PSMAMP, only the sixth predictor (Abdomen) is chosen as a significant variable whose marginal quantile function has a significant influence on the prediction. However, PSMAQP selects five predictors as significant variables. In summary, our proposed model averaging procedure generally works well and outperforms the other existing methods.

Variable Name Description Correlation with Body Fat
Age Age (years) 0.2921
Weight Weight(lbs) 0.6287
Height Height (inches) -0.0990
Neck Neck circumference (cm) 0.4905
Chest Chest circumference (cm) 0.7051
Abdomen Abdomen 2 circumference (cm) 0.8218
Hip Hip circumference (cm) 0.6365
Thigh Thigh circumference (cm) 0.5680
Knee Knee circumference (cm) 0.5102
Ankle Ankle circumference (cm) 0.2796
Biceps Biceps (extended) circumference (cm) 0.5000
Forearm Forearm circumference (cm) 0.3584
Wrist Wrist circumference (cm) 0.3447
Body Fat (%) Percent body fat using Siri's equation: 495/Density - 450 1.0000
Table 6: Regressors for the body fat dataset.
method
In-Sample Out-of-Sample In-Sample Out-of-Sample
MPE SD MPE SD MPE SD MPE SD
SMAMP 1.622 0.069 2.116 0.691 1.653 0.043 2.242 1.402
PSMAMP 1.763 0.136 2.117 0.544 1.829 0.105 2.314 1.357
SMAQP 1.593 0.067 1.940 0.143 1.634 0.043 1.890 0.174
PSMAQP 1.683 0.081 1.896 0.125 1.696 0.055 1.858 0.166
SMAQP 2.666 0.151 2.955 0.338 2.687 0.098 2.857 0.363
PSMAQP 2.726 0.165 2.914 0.314 2.742 0.089 2.853 0.348
SMAQP 2.768 0.169 3.118 0.383 2.838 0.115 3.079 0.451
PSMAQP 2.811 0.171 3.080 0.340 2.859 0.122 3.056 0.401
Table 7: Prediction results () for analysis of the body fat dataset.
weight SMAMP PSMAMP SMAQP PSMAQP
-0.056 (12.091) -0.044 (15.681) -0.038 (0.584) 0.063 (0.070)
0.660 (0.306) 0.000 (0.391) 0.419 (0.189) 0.314 (0.210)
-0.530 (0.546) 0.000 (0.446) -0.115 (0.546) 0.000 (0.231)
1.114 (3.454) 0.000 (2.229) 0.146 (2.976) 0.000 (0.192)
-0.418 (0.338) 0.000 (0.397) -0.330 (0.216) -0.238 (0.225)
-0.055 (0.210) 0.000 (0.192) 0.005 (0.248) 0.000 (0.128)
1.643 (0.151) 1.234 (0.185) 1.174 (0.185) 1.187 (0.170)
-0.188 (0.291) 0.000 (0.259) -0.164 (0.282) 0.000 (0.190)
0.515 (0.251) 0.000 (0.291) 0.282 (0.178) 0.000 (0.188)
-0.185 (0.336) 0.000 (0.264) -0.195 (0.195) -0.049 (0.167)
0.330 (0.752) 0.000 (0.590) 0.204 (0.312) 0.000 (0.156)
0.292 (0.209) 0.000 (0.214) 0.124 (0.166) 0.000 (0.167)
0.204 (0.298) 0.000 (0.245) 0.343 (0.205) 0.000 (0.158)
-2.095 (1.273) 0.000 (1.519) -0.696 (0.303) -0.561 (0.349)
Table 8: Estimated weights and their standard deviations (in brackets) for the body fat study at $\tau = 0.5$.

5 Conclusion

In this paper, we propose a new semiparametric model averaging procedure for forecasting the conditional quantile function under high-dimensional settings. Based on local linear regression, we first estimate the individual marginal quantile functions by minimizing a local weighted quantile loss function. Then, a penalized quantile regression is developed to select the regressors whose marginal quantile functions make a significant contribution to estimating the conditional quantile function. The simulations and the empirical example in Section 4 show that the proposed method performs reasonably well in finite samples.

Recently, under the ultra-high dimensional setting, Ando and Li (2014) developed a new model averaging approach based on a delete-one cross-validation criterion and proved that it achieves the lowest possible prediction loss asymptotically. However, they only considered high dimensional parametric model averaging, which may increase the risk of model misspecification. Thus, it would be interesting to study semiparametric model averaging estimation for ultra-high dimensional data. Research in this direction is ongoing.

Acknowledgments
This work is supported by the National Social Science Fund of China (Grant No. 17CTJ015).

Appendix

Let $C$ denote a generic positive constant that may differ from place to place throughout this paper.

Lemma 1. Denote as the minimizer of (1). Then, under the regularity conditions (C1)–(C4), we have

where and .

Proof of Lemma 1. The proof applies the identity of Knight (1998).
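For reference, the identity of Knight (1998) invoked here is commonly stated as follows (transcribed in its standard form, with $\psi_\tau$ denoting the quantile score function):

```latex
\rho_\tau(u - v) - \rho_\tau(u)
  = -v\,\psi_\tau(u) + \int_{0}^{v}\big\{ I(u \le s) - I(u \le 0) \big\}\,ds,
\qquad \psi_\tau(u) = \tau - I(u < 0),
```

which holds for all real $u$ and $v$.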