# Model Selection in High-Dimensional Misspecified Models

Model selection is indispensable to high-dimensional sparse modeling in selecting the best set of covariates among a sequence of candidate models. Most existing work assumes implicitly that the model is correctly specified or of fixed dimensions. Yet model misspecification and high dimensionality are common in real applications. In this paper, we investigate two classical Kullback-Leibler divergence and Bayesian principles of model selection in the setting of high-dimensional misspecified models. Asymptotic expansions of these principles reveal that the effect of model misspecification is crucial and should be taken into account, leading to the generalized AIC and generalized BIC in high dimensions. With a natural choice of prior probabilities, we suggest the generalized BIC with prior probability which involves a logarithmic factor of the dimensionality in penalizing model complexity. We further establish the consistency of the covariance contrast matrix estimator in a general setting. Our results and new method are supported by numerical studies.


## 1 Introduction

With rapid advances of modern technology, high-throughput data sets of unprecedented size, such as genetic and proteomic data, fMRI and functional data, and panel data in economics and finance, are frequently encountered in many contemporary applications. In these applications, the dimensionality $p$ can be comparable to or even much larger than the sample size $n$. A key assumption that often makes large-scale inference feasible is the sparsity of signals, meaning that only a small fraction of covariates contribute to the response when $p$ is large compared to $n$. High-dimensional modeling with dimensionality reduction and feature selection plays an important role in these problems. A sparse modeling procedure typically produces a sequence of candidate models, each involving a possibly different subset of covariates. An important question is how to compare different models in high dimensions when models are possibly misspecified.

The problem of model selection has a long history with numerous contributions by many researchers. Among others, well-known model selection criteria are the AIC (Akaike, 1973 and 1974) and BIC (Schwarz, 1978), where the former is based on the Kullback-Leibler (KL) divergence principle of model selection and the latter originates from the Bayesian principle. A great deal of work has been devoted to understanding and extending these methods. See, for example, Bozdogan (1987), Foster and George (1994), Konishi and Kitagawa (1996), Ing (2007), Chen and Chen (2008), Chen and Chan (2011), Ing and Lai (2011), Liu and Yang (2011), and Chang et al. (2014) in different model settings. The connections between the AIC and cross-validation have been investigated in Stone (1977), Hall (1990), and Peng et al. (2013) in various contexts. Model selection criteria such as AIC and BIC are frequently used for tuning parameter selection in regularization methods. For instance, model selection in the context of penalized likelihood methods has been studied in Fan and Li (2001), Wang et al. (2007), Wang et al. (2009), Zhang et al. (2010), and Fan and Tang (2013). In particular, Fan and Tang (2013) showed that classical information criteria such as AIC and BIC can be inconsistent for model selection when the dimensionality grows very fast relative to the sample size.

Most existing work on model selection makes an implicit assumption that the model under study is correctly specified or of fixed dimensions. For example, White (1982) laid out a general theory of maximum likelihood estimation in misspecified models for the case of fixed dimensionality and independent and identically distributed (i.i.d.) observations. Recently, Lv and Liu (2014) investigated the problem of model selection with model misspecification and derived asymptotic expansions of both KL divergence and Bayesian principles in misspecified generalized linear models, leading to the generalized AIC and generalized BIC, for the case of fixed dimensionality. A specific form of prior probabilities motivated by the KL divergence principle leads to the generalized BIC with prior probability, GBICp-L (here we use this notation to emphasize that the criterion is for the low-dimensional case, while reserving the original notation GBICp in Lv and Liu (2014) for the high-dimensional counterpart). Yet model misspecification and high dimensionality are both common in real applications. Thus a natural and important question is how to characterize the impact of model misspecification on model selection in high dimensions. We intend to provide some answer to this question in this paper. Our analysis enables us to suggest the generalized BIC with prior probability (GBICp) that involves a logarithmic factor of the dimensionality in penalizing model complexity.

To gain some insights into the challenges of the aforementioned problem, let us consider a motivating example. Assume that the response $Y$ depends on the covariate vector through the functional form

$$Y = f(X_1) + f(X_2 - X_3) + f(X_4 - X_5) + \varepsilon, \tag{1}$$

where the nonlinear function $f$ and the remaining setting are as specified in Section 4.1.2. Consider a fixed sample size $n$ and vary the dimensionality $p$ from 200 to 3200. Without prior knowledge about the true model structure, we take the linear regression model

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon} \tag{2}$$

as the working model, with the same notation therein, and apply some information criteria to hopefully recover the oracle working model consisting of the first five covariates. When the dimensionality is relatively low, the traditional AIC and BIC, which ignore model misspecification, tend to select a model with size larger than five. As expected, GBICp-L works reasonably well by selecting the oracle working model half of the time. However, when the dimensionality is increased to 3200, these methods fail to select such a model with significant probability and the prediction performance of the selected models deteriorates. This motivates us to study the problem of model selection in high-dimensional misspecified models. In contrast, our newly suggested GBICp can recover the oracle working model with significant probability in this challenging scenario.
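The comparison in this motivating example can be sketched numerically. In the snippet below, the nonlinearity `f`, the sample size, and the candidate models are illustrative assumptions only (the paper's exact setting is given in Section 4.1.2); it merely shows how the classical AIC and BIC are computed for a Gaussian linear working model fitted to data generated from a nonlinear truth of the form (1).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical nonlinearity for illustration only; the paper's f is
    # specified in its Section 4.1.2 and may differ.
    return np.sin(x)

n, p = 200, 400  # assumed sizes for this sketch
X = rng.standard_normal((n, p))
y = f(X[:, 0]) + f(X[:, 1] - X[:, 2]) + f(X[:, 3] - X[:, 4]) \
    + rng.standard_normal(n)

def gaussian_aic_bic(X_sub, y):
    """Classical AIC/BIC for a Gaussian linear working model, ignoring
    the misspecification in the data-generating process."""
    n, k = X_sub.shape
    beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    resid = y - X_sub @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + 2 * k, -2 * loglik + np.log(n) * k

aic_oracle, bic_oracle = gaussian_aic_bic(X[:, :5], y)   # oracle working model
aic_over, bic_over = gaussian_aic_bic(X[:, :20], y)      # an overfitted candidate
```

Since both criteria share the same fitted log-likelihoods, BIC always penalizes the larger candidate more heavily than AIC once $\log n > 2$.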

The main contributions of our paper are threefold. First, we establish a systematic theory of model selection with model misspecification in high dimensions. The asymptotic expansions for different model selection principles involve delicate and challenging technical analysis. Second, our work provides rigorous theoretical justification of the covariance contrast matrix estimator that incorporates the effect of model misspecification and is crucial for practical implementation. Such an estimator is shown to be consistent in the general setting of high-dimensional misspecified models. Third, we suggest the use of a new prior in the expansion for GBICp involving a logarithmic factor of the dimensionality. This criterion has connections to the model selection criteria in Chen and Chen (2008) and Fan and Tang (2013), which involve a $\log p$ factor for the case of correctly specified models.

The rest of the paper is organized as follows. Section 2 introduces the setup for model misspecification. We present some key asymptotic properties of the quasi-maximum likelihood estimator and provide asymptotic expansions of KL divergence and Bayesian model selection principles in high dimensions in Section 3. Section 4 demonstrates the performance of different model selection criteria in high-dimensional misspecified models through several simulation and real data examples. We provide some discussions of our results and possible extensions in Section 5. The proofs of some main results are relegated to the Appendix. Additional technical proofs and numerical results are provided in the Supplementary Material.

## 2 Model misspecification

Assume that conditional on the covariates, the $n$-dimensional random response vector $\mathbf{Y} = (Y_1, \cdots, Y_n)^T$ has a true unknown distribution $G_n$ with density function

$$g_n(\mathbf{y}) = \prod_{i=1}^n g_{n,i}(y_i), \tag{3}$$

where the $g_{n,i}$'s are the marginal density functions of the $Y_i$'s. Model (3) entails that all components of $\mathbf{Y}$ are independent but not necessarily identically distributed. Consider a set of $d$ covariates out of all $p$ available covariates, where $p$ can be much larger than the sample size $n$. Denote by $\mathbf{X}$ the corresponding $n \times d$ deterministic design matrix. To simplify the technical presentation, we focus on the case of deterministic design. In practice, one chooses a family of working models to fit the data. Model misspecification generally occurs when the family of distributions is misspecified or some true covariates are missed.

Since the true model is unknown, we choose a family of generalized linear models (GLMs) with a canonical link as our working models, each of which has density function

$$f_n(\mathbf{z}, \boldsymbol{\beta})\, d\mu_0(\mathbf{z}) = \prod_{i=1}^n f_0(z_i, \theta_i)\, d\mu_0(z_i) \equiv \prod_{i=1}^n \exp[z_i \theta_i - b(\theta_i)]\, d\mu(z_i), \tag{4}$$

where $(\theta_1, \cdots, \theta_n)^T = \mathbf{X}\boldsymbol{\beta}$ with $\boldsymbol{\beta} \in \mathbb{R}^d$, $b(\theta)$ is a smooth convex function, $\mu_0$ is the Lebesgue measure, and $\mu$ is some fixed measure on $\mathbb{R}$. Assume that $b''(\theta)$ is continuous and bounded away from zero, $\mathbf{X}$ is of full column rank $d$, and the $\theta_i$'s are bounded. Clearly $F_n = \{f_n(\cdot, \boldsymbol{\beta}) : \boldsymbol{\beta} \in \mathbb{R}^d\}$ is a family of distributions in the regular exponential family and may not contain the true density $g_n$.

To ease the presentation, define two vector-valued functions $\mathbf{b}(\boldsymbol{\theta}) = (b(\theta_1), \cdots, b(\theta_n))^T$ and $\boldsymbol{\mu}(\boldsymbol{\theta}) = (b'(\theta_1), \cdots, b'(\theta_n))^T$, and a matrix-valued function $\boldsymbol{\Sigma}(\boldsymbol{\theta}) = \mathrm{diag}\{b''(\theta_1), \cdots, b''(\theta_n)\}$. For any $n$-dimensional random vector $\mathbf{Z}$ with distribution given by (4), it holds that $E\mathbf{Z} = \boldsymbol{\mu}(\mathbf{X}\boldsymbol{\beta})$ and $\mathrm{cov}(\mathbf{Z}) = \boldsymbol{\Sigma}(\mathbf{X}\boldsymbol{\beta})$. The density function (4) can be rewritten as

$$f_n(\mathbf{z}, \boldsymbol{\beta}) = \exp\left[\mathbf{z}^T \mathbf{X}\boldsymbol{\beta} - \mathbf{1}^T \mathbf{b}(\mathbf{X}\boldsymbol{\beta})\right] \prod_{i=1}^n \frac{d\mu}{d\mu_0}(z_i),$$

where $d\mu/d\mu_0$ denotes the Radon-Nikodym derivative. Given the observations $\mathbf{y}$ and $\mathbf{X}$, this gives the quasi-log-likelihood function

$$\ell_n(\mathbf{y}, \boldsymbol{\beta}) = \log f_n(\mathbf{y}, \boldsymbol{\beta}) = \mathbf{y}^T \mathbf{X}\boldsymbol{\beta} - \mathbf{1}^T \mathbf{b}(\mathbf{X}\boldsymbol{\beta}) + \sum_{i=1}^n \log \frac{d\mu}{d\mu_0}(y_i). \tag{5}$$

The quasi-maximum likelihood estimator (QMLE) of the $d$-dimensional parameter vector $\boldsymbol{\beta}$ is defined as

$$\widehat{\boldsymbol{\beta}}_n = \arg\max_{\boldsymbol{\beta} \in \mathbb{R}^d} \ell_n(\mathbf{y}, \boldsymbol{\beta}), \tag{6}$$

which is the solution to the score equation $\boldsymbol{\Psi}_n(\boldsymbol{\beta}) \equiv \mathbf{X}^T[\mathbf{y} - \boldsymbol{\mu}(\mathbf{X}\boldsymbol{\beta})] = \mathbf{0}$. This equation becomes the normal equation $\mathbf{X}^T\mathbf{X}\boldsymbol{\beta} = \mathbf{X}^T\mathbf{y}$ in the linear regression model.
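For concreteness, the score equation can be solved by Newton-Raphson iterations. The sketch below does this for a logistic working model, one member of the family (4) with $b(\theta) = \log(1 + e^{\theta})$; the simulated data are an arbitrary illustration, not the paper's setting.

```python
import numpy as np

def qmle_logistic(X, y, n_iter=50):
    """Newton-Raphson for the score equation X^T [y - mu(X beta)] = 0
    with the logistic mean function mu(t) = 1 / (1 + exp(-t))."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        score = X.T @ (y - mu)
        W = mu * (1.0 - mu)            # diagonal of Sigma(X beta)
        A = X.T @ (W[:, None] * X)     # X^T Sigma(X beta) X
        beta = beta + np.linalg.solve(A, score)
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
truth = np.array([1.0, -0.5, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ truth)))).astype(float)
beta_hat = qmle_logistic(X, y)
```

Because the quasi-log-likelihood is strictly concave here, Newton's method from the zero vector converges and the score evaluated at the returned estimate is numerically zero.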

The KL divergence (Kullback and Leibler, 1951) of the working model $f_n(\cdot, \boldsymbol{\beta})$ from the true model $g_n$ can be written as $I(g_n; f_n(\cdot, \boldsymbol{\beta})) = E \log[g_n(\mathbf{Y})/f_n(\mathbf{Y}, \boldsymbol{\beta})]$. The best working model that is closest to the true model under the KL divergence has parameter vector $\boldsymbol{\beta}_{n,0}$, which solves the equation

$$\mathbf{X}^T[E\mathbf{Y} - \boldsymbol{\mu}(\mathbf{X}\boldsymbol{\beta})] = \mathbf{0}. \tag{7}$$

We introduce two matrices that play a key role in model selection with model misspecification. Define

$$\mathrm{cov}[\boldsymbol{\Psi}_n(\boldsymbol{\beta}_{n,0})] = \mathrm{cov}(\mathbf{X}^T\mathbf{Y}) = \mathbf{X}^T \mathrm{cov}(\mathbf{Y})\, \mathbf{X} = \mathbf{B}_n \tag{8}$$

with $\mathrm{cov}(\mathbf{Y})$ diagonal by the independence assumption,

$$\frac{\partial^2 I(g_n; f_n(\cdot, \boldsymbol{\beta}))}{\partial \boldsymbol{\beta}^2} = -\frac{\partial^2 \ell_n(\mathbf{y}, \boldsymbol{\beta})}{\partial \boldsymbol{\beta}^2} = \mathbf{X}^T \boldsymbol{\Sigma}(\mathbf{X}\boldsymbol{\beta})\, \mathbf{X} = \mathbf{A}_n(\boldsymbol{\beta}), \tag{9}$$

and $\mathbf{A}_n = \mathbf{A}_n(\boldsymbol{\beta}_{n,0})$. Observe that $\mathbf{A}_n$ and $\mathbf{B}_n$ are the covariance matrices of $\mathbf{X}^T\mathbf{Y}$ under the best misspecified GLM $f_n(\cdot, \boldsymbol{\beta}_{n,0})$ and the true model $g_n$, respectively.
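To make these two matrices concrete, the short sketch below computes $\mathbf{A}_n$ and $\mathbf{B}_n$ for a hypothetical Bernoulli truth whose success probabilities do not follow the logistic working model; the KL projection $\boldsymbol{\beta}_{n,0}$ is obtained by solving equation (7) with Newton iterations. All numerical choices are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 2
X = rng.standard_normal((n, d))

# Hypothetical misspecified truth: Bernoulli probabilities that are not
# exactly logistic in X.
p_true = 1.0 / (1.0 + np.exp(-np.tanh(X @ np.array([1.0, -1.0]))))
EY = p_true
varY = p_true * (1.0 - p_true)   # diagonal of cov(Y) under independence

# Solve eq. (7), X^T [EY - mu(X beta)] = 0, for the projection beta_{n,0}.
beta = np.zeros(d)
for _ in range(100):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    A = X.T @ ((mu * (1.0 - mu))[:, None] * X)   # A_n(beta), eq. (9)
    beta += np.linalg.solve(A, X.T @ (EY - mu))

mu0 = 1.0 / (1.0 + np.exp(-(X @ beta)))
A_n = X.T @ ((mu0 * (1.0 - mu0))[:, None] * X)   # working-model covariance
B_n = X.T @ (varY[:, None] * X)                  # true-model covariance, eq. (8)
```

Under misspecification the two matrices generally differ, which is exactly the discrepancy the covariance contrast matrix captures.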

## 3 High-dimensional model selection in misspecified models

We now present the asymptotic expansions of both KL divergence and Bayesian model selection principles in high-dimensional misspecified GLMs.

### 3.1 Technical conditions and asymptotic properties of QMLE in high dimensions

We list a few technical conditions required to prove the asymptotic properties of the QMLE with diverging dimensionality. Denote by $\|\cdot\|_2$ the vector $L_2$-norm and the matrix operator norm.

###### Condition 1.

There exists some constant such that for each , for any , where .

###### Condition 2.

There exist positive constants , , and such that for sufficiently large , and , where , , and . Moreover, .

###### Condition 3.

Assume and .

###### Condition 4.

Assume

 max\boldmathβ1,⋯,\boldmathβd∈Nn(δn)∥˜\bf Vn(\boldmathβ1,⋯,\boldmathβd)−\bf Vn∥2=O(dn−1/2δn),

where and with a matrix with th row the corresponding row of for each . Moreover, is a polynomial order of .

Conditions 1 and 2 are some basic assumptions for establishing the consistency of the QMLE in Theorem 1. In particular, Condition 1 assumes that the standardized response has a sub-Gaussian distribution, which facilitates the derivation of the deviation probability bound. Conditions 2-4 are similar to those in Lv and Liu (2014), except for some major differences due to the high-dimensional setting. In particular, Condition 2 allows the minimum eigenvalue of $\mathbf{A}_n(\boldsymbol{\beta})$ to converge to zero at a certain rate as $n$ increases in a neighborhood of $\boldsymbol{\beta}_{n,0}$. Such a neighborhood is wider compared to that for the case of fixed dimensionality. The dimensionality $d$ of the QMLE is allowed to diverge with $n$. Conditions 3 and 4 are imposed to establish the asymptotic normality of $\widehat{\boldsymbol{\beta}}_n$.

###### Theorem 1.

(Consistency of QMLE). Under Conditions 1-2, the QMLE satisfies and further with probability for some large positive constant .

###### Theorem 2.

(Asymptotic normality). Under Conditions 1-4, the QMLE satisfies

$$\mathbf{D}_n \mathbf{C}_n (\widehat{\boldsymbol{\beta}}_n - \boldsymbol{\beta}_{n,0}) \overset{D}{\longrightarrow} N(\mathbf{0}, \mathbf{I}_m),$$

where and is any matrix such that .

Theorems 1 and 2 establish the consistency and asymptotic normality of the QMLE in high-dimensional misspecified GLMs. These results provide the theoretical foundation for the technical analyses in Sections 3.2-3.4. The asymptotic theory of the QMLE reduces to that of the maximum likelihood estimator (MLE) when the model is correctly specified. Our results extend those in Lv and Liu (2014) for the case of fixed dimensionality. We next introduce a few additional conditions for deriving the asymptotic expansions of the two model selection principles.

###### Condition 5.

There exists some constant with such that and for sufficiently large , , where constant is given in Theorem 1.

###### Condition 6.

Assume that $\pi$ satisfies

$$\inf_{\boldsymbol{\beta} \in N_n(2\delta_n)} \pi(h(\boldsymbol{\beta})) \geq c_2 \quad \text{and} \quad \sup_{\boldsymbol{\beta} \in \mathbb{R}^d} \pi(h(\boldsymbol{\beta})) \leq c_3 \tag{10}$$

with $c_2$ and $c_3$ some constants.

###### Condition 7.

Assume that , , and are Lipschitz (in operator norm) with constant in , and with constant , where represents the Hadamard (componentwise) product and denotes the entrywise matrix -norm.

###### Condition 8.

Assume with some constant .

The first part of Condition 5 holds naturally for linear and logistic regression models, and is introduced to accommodate the case of Poisson regression. The second part of Condition 5 is a mild assumption ensuring that the restricted QMLE coincides with its unrestricted version with significant probability, which is key to the asymptotic expansion of the KL divergence principle in high dimensions in Theorem 3. It is worth mentioning that the set grows with $n$, while the neighborhood $N_n(\delta_n)$ is asymptotically shrinking.

Condition 6 is similar to the one in Lv and Liu (2014), except that we need to specify the rate at which converges to zero. Condition 7 requires the Lipschitz property for those matrix-valued functions. The bound on the entry-wise matrix -norm of the design matrix is mild. Condition 8 is a sensible assumption bounding the effect of the model bias. In particular, Conditions 7 and 8 are introduced only for proving the consistency of the covariance contrast matrix in the general setting in Theorem 4.

### 3.2 Generalized AIC in misspecified models

Given a sequence $\{M_m\}_{m=1}^M$ of subsets of the full model, we can construct a sequence of QMLEs $\{\widehat{\boldsymbol{\beta}}_{n,m}\}_{m=1}^M$ by fitting the GLM (4). A natural question is how to compare those fitted models. The QMLEs become the MLEs when the model is correctly specified.

Akaike’s principle of model selection is to choose the model that minimizes the KL divergence of the fitted model from the true model $g_n$, that is,

$$m_0 = \arg\min_{m \in \{1, \cdots, M\}} I(g_n; f_n(\cdot, \widehat{\boldsymbol{\beta}}_{n,m})), \tag{11}$$

where

$$I(g_n; f_n(\cdot, \widehat{\boldsymbol{\beta}}_{n,m})) = E \log g_n(\widetilde{\mathbf{Y}}) - \eta_n(\widehat{\boldsymbol{\beta}}_{n,m}) \tag{12}$$

with $\eta_n(\boldsymbol{\beta}) = E_{\widetilde{\mathbf{Y}}}\, \ell_n(\widetilde{\mathbf{Y}}, \boldsymbol{\beta})$ and $\widetilde{\mathbf{Y}}$ an independent copy of $\mathbf{Y}$. Thus

$$m_0 = \arg\max_{m \in \{1, \cdots, M\}} \eta_n(\widehat{\boldsymbol{\beta}}_{n,m}) = \arg\max_{m \in \{1, \cdots, M\}} E_{\widetilde{\mathbf{Y}}}\, \ell_n(\widetilde{\mathbf{Y}}, \widehat{\boldsymbol{\beta}}_{n,m}),$$

which shows that Akaike’s principle of model selection is equivalent to choosing the model that maximizes the expected log-likelihood, with the expectation taken with respect to an independent copy of $\mathbf{Y}$. Using the asymptotic theory of the MLE, Akaike (1973) showed that for the case of i.i.d. observations, $E\eta_n(\widehat{\boldsymbol{\beta}}_{n,m})$ can be asymptotically expanded as $E\ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_{n,m}) - |M_m|$, which leads to the seminal AIC for comparing competing models:

$$\mathrm{AIC}(\mathbf{y}, M) = -2\ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) + 2|M|. \tag{13}$$

For simplicity, we drop the last term in (5), which does not depend on $\boldsymbol{\beta}$, and redefine the quasi-log-likelihood as $\ell_n(\mathbf{y}, \boldsymbol{\beta}) = \mathbf{y}^T \mathbf{X}\boldsymbol{\beta} - \mathbf{1}^T \mathbf{b}(\mathbf{X}\boldsymbol{\beta})$ hereafter.
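For the Bernoulli working model, where $b(\theta) = \log(1 + e^{\theta})$, this redefined quasi-log-likelihood is a one-liner; the sketch below covers that special case only.

```python
import numpy as np

def quasi_loglik_logistic(X, y, beta):
    """Quasi-log-likelihood l_n(y, beta) = y^T X beta - 1^T b(X beta)
    for the Bernoulli working model, where b(theta) = log(1 + e^theta)."""
    theta = X @ beta
    # np.logaddexp(0, t) computes log(1 + e^t) in a numerically stable way
    return y @ theta - np.sum(np.logaddexp(0.0, theta))
```

This agrees term by term with the usual Bernoulli log-likelihood $\sum_i [y_i \log \mu_i + (1 - y_i)\log(1 - \mu_i)]$, since $y\theta - \log(1 + e^{\theta}) = y \log \mu + (1 - y)\log(1 - \mu)$ for $\mu = 1/(1 + e^{-\theta})$.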

###### Theorem 3.

Under Conditions 1-5, we have with probability tending to one,

$$E\eta_n(\widehat{\boldsymbol{\beta}}_n) = E\ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) - \mathrm{tr}(\mathbf{H}_n) + o\{\mathrm{tr}(\mathbf{H}_n)\}, \tag{14}$$

where $\mathbf{H}_n = \mathbf{A}_n^{-1}\mathbf{B}_n$ is the covariance contrast matrix.

Theorem 3 generalizes the corresponding result in Lv and Liu (2014) to high dimensions. However, we would like to point out that our new technical analysis differs substantially from theirs due to the challenges of diverging dimensionality. The asymptotic expansion in Theorem 3 enables us to introduce the generalized AIC (GAIC) as follows.

###### Definition 1.

We define the GAIC of model $M$ as

$$\mathrm{GAIC}(\mathbf{y}, M; F_n) = -2\ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) + 2\,\mathrm{tr}(\widehat{\mathbf{H}}_n), \tag{15}$$

where $\widehat{\mathbf{H}}_n$ is a consistent estimator of $\mathbf{H}_n$ specified in Section 3.3.

When the model is correctly specified, it holds that $\mathbf{A}_n = \mathbf{B}_n$ and thus $\mathrm{tr}(\mathbf{H}_n) = |M|$, under which GAIC reduces to AIC asymptotically. We demonstrate in the simulation studies that GAIC can improve over the original AIC substantially in the presence of model misspecification.

### 3.3 Estimation of covariance contrast matrix

From the asymptotic expansions for the GAIC, GBIC, and GBICp (the latter two to be introduced in Section 3.4), a common term is the covariance contrast matrix $\mathbf{H}_n = \mathbf{A}_n^{-1}\mathbf{B}_n$, which characterizes the impact of model misspecification. Therefore, providing an accurate estimator for such a matrix is of vital importance in the application of these information criteria.

Consider the plug-in estimator $\widehat{\mathbf{H}}_n = \widehat{\mathbf{A}}_n^{-1}\widehat{\mathbf{B}}_n$ with $\widehat{\mathbf{A}}_n$ and $\widehat{\mathbf{B}}_n$ defined as follows. Since the QMLE $\widehat{\boldsymbol{\beta}}_n$ provides a consistent estimator of $\boldsymbol{\beta}_{n,0}$ in the best misspecified GLM, a natural estimate of the matrix $\mathbf{A}_n$ is given by

$$\widehat{\mathbf{A}}_n = \mathbf{A}_n(\widehat{\boldsymbol{\beta}}_n) = \mathbf{X}^T \boldsymbol{\Sigma}(\mathbf{X}\widehat{\boldsymbol{\beta}}_n)\, \mathbf{X}. \tag{16}$$

When the model is correctly specified, the following simple estimator

$$\widehat{\mathbf{B}}_n = \mathbf{X}^T \mathrm{diag}\left\{[\mathbf{y} - \boldsymbol{\mu}(\mathbf{X}\widehat{\boldsymbol{\beta}}_n)] \circ [\mathbf{y} - \boldsymbol{\mu}(\mathbf{X}\widehat{\boldsymbol{\beta}}_n)]\right\} \mathbf{X} \tag{17}$$

gives an asymptotically unbiased estimator of $\mathbf{B}_n$.
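A minimal sketch of this plug-in construction for a logistic working model follows; the function names are ours, and the data choices are arbitrary illustrations.

```python
import numpy as np

def contrast_estimates_logistic(X, y, beta_hat):
    """Plug-in estimators A_hat (eq. 16) and B_hat (eq. 17) for a logistic
    working model, together with H_hat = A_hat^{-1} B_hat."""
    mu = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
    W = mu * (1.0 - mu)                      # diagonal of Sigma(X beta_hat)
    A_hat = X.T @ (W[:, None] * X)
    r = y - mu                               # raw residuals
    B_hat = X.T @ ((r * r)[:, None] * X)     # X^T diag{(y-mu) o (y-mu)} X
    H_hat = np.linalg.solve(A_hat, B_hat)
    return A_hat, B_hat, H_hat
```

When the logistic model is correctly specified and the plugged-in coefficient vector is consistent, $\mathrm{tr}(\widehat{\mathbf{H}}_n)$ should be close to the model size, in line with the discussion of GAIC reducing to AIC.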

###### Theorem 4.

(Consistency of estimator). Assume that Conditions 1-3 and 7-8 hold, the eigenvalues of $\mathbf{A}_n$ and $\mathbf{B}_n$ are bounded away from 0 and $\infty$, and . Then the plug-in estimator $\widehat{\mathbf{H}}_n$ satisfies and .

Theorem 4 improves the result in Lv and Liu (2014) in two important aspects. First, the consistency of the covariance contrast matrix estimator was previously justified in Lv and Liu (2014) only for the case of correctly specified models. Our new result shows that the simple plug-in estimator still enjoys consistency in the general setting of model misspecification. Second, the result in Theorem 4 holds for the case of diverging dimensionality. These theoretical guarantees are crucial to the practical implementation of those information criteria. Our numerical studies reveal that such an estimator works well in a variety of model misspecification settings.

### 3.4 Generalized BIC in misspecified models

Given a set of competing models $\{M_m\}_{m=1}^M$, a popular Bayesian model selection procedure is to first put nonzero prior probability $\alpha_{M_m}$ on each model $M_m$, and then choose a prior distribution $\mu_{M_m}$ for the parameter vector in the corresponding model. Assume that the density function of $\mu_{M_m}$ is bounded and locally bounded away from zero throughout the domain. The Bayesian principle of model selection is to choose the most probable model a posteriori, that is, choose model $M_{m_0}$ such that

$$m_0 = \arg\max_{m \in \{1, \cdots, M\}} S(\mathbf{y}, M_m; F_n), \tag{18}$$

where the log-marginal-likelihood is

$$S(\mathbf{y}, M_m; F_n) = \log \int \alpha_{M_m} \exp[\ell_n(\mathbf{y}, \boldsymbol{\beta})]\, d\mu_{M_m}(\boldsymbol{\beta}) \tag{19}$$

with the quasi-log-likelihood $\ell_n$ as in (5) and the integral taken over the parameter space of model $M_m$.

To ease the presentation, for any $\boldsymbol{\beta} \in \mathbb{R}^d$ we define a quantity

$$\ell_n^*(\mathbf{y}, \boldsymbol{\beta}) = \ell_n(\mathbf{y}, \boldsymbol{\beta}) - \ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n), \tag{20}$$

which is the deviation of the quasi-log-likelihood from its maximum. Then from (19) and (20), we have

$$S(\mathbf{y}, M_m; F_n) = \ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) + \log E_{\mu_{M_m}}[U_n(\boldsymbol{\beta})^n] + \log \alpha_{M_m}, \tag{21}$$

where $U_n(\boldsymbol{\beta}) = \exp[\ell_n^*(\mathbf{y}, \boldsymbol{\beta})/n]$.

###### Theorem 5.

Under Conditions 1-3 and 6, we have with probability tending to one,

$$S(\mathbf{y}, M; F_n) = \ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) - \frac{\log n}{2}|M| + \frac{1}{2}\log|\mathbf{H}_n| + \log \alpha_M + \frac{\log(2\pi)}{2}|M| + \log c_n + o(1), \tag{22}$$

where and .

The asymptotic expansion of the Bayes factor in Theorem 5 leads us to introduce the generalized BIC (GBIC) as follows.

###### Definition 2.

We define the GBIC of model $M$ as

$$\mathrm{GBIC}(\mathbf{y}, M; F_n) = -2\ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) + (\log n)|M| - \log|\widehat{\mathbf{H}}_n|, \tag{23}$$

where $\widehat{\mathbf{H}}_n$ is a consistent estimator of $\mathbf{H}_n$.

It is clear from (23) that GBIC contains an extra term $-\log|\widehat{\mathbf{H}}_n|$ compared to BIC, which replaces the factor 2 in (13) with $\log n$ in penalizing model complexity. This additional term reflects the effect of model misspecification. When the model is correctly specified, GBIC reduces to BIC asymptotically.

The choice of the prior probabilities $\alpha_{M_m}$ is important in high dimensions. Lv and Liu (2014) suggested the prior probability $\alpha_{M_m} \propto e^{-D_m}$ for each candidate model $M_m$, where the quantity $D_m$ is defined as

$$D_m = E[I(g_n; f_n(\cdot, \widehat{\boldsymbol{\beta}}_{n,m})) - I(g_n; f_n(\cdot, \boldsymbol{\beta}_{n,m,0}))] \tag{24}$$

and the subscript $m$ indicates a particular candidate model. The motivation is that the further the QMLE $\widehat{\boldsymbol{\beta}}_{n,m}$ is away from the best misspecified GLM $f_n(\cdot, \boldsymbol{\beta}_{n,m,0})$, the lower prior probability we assign to that model. In the high-dimensional setting when $p$ can be much larger than $n$, it is sensible to take into account the complexity of the space of all possible sparse models with the same size as $M_m$. This observation motivates us to consider a new prior of the form

$$\alpha_{M_m} \propto \binom{p}{d}^{-1} e^{-D_m} \tag{25}$$

with $d = |M_m|$. Such a complexity factor has been exploited in the extended BIC (EBIC) in Chen and Chen (2008), who showed that using the term $\gamma \log \binom{p}{d}$ with some constant $\gamma$, the EBIC can be model selection consistent for $p = O(n^{\kappa})$ with some positive constant $\kappa$ satisfying $\gamma > 1 - 1/(2\kappa)$.
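Computing the log-prior in (25) up to its normalization is straightforward with log-gamma functions; the helper names below are ours.

```python
from math import lgamma

def log_binom(p, d):
    """log C(p, d) via log-gamma; numerically stable for large p."""
    return lgamma(p + 1) - lgamma(d + 1) - lgamma(p - d + 1)

def log_prior(p, d, D_m):
    """Unnormalized log prior from eq. (25): -log C(p, d) - D_m."""
    return -log_binom(p, d) - D_m
```

For large $p$ this avoids the overflow that evaluating $\binom{p}{d}$ directly would incur.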

Under the assumption that $d$ is small relative to $p$, an application of Stirling's formula shows that up to an additive constant, $\log\binom{p}{d}$ behaves like $d \log p$. Thus for the prior defined in (25), we have an additional term proportional to $(\log p)|M|$ in the asymptotic expansion for GBIC. When the dimensionality $p$ grows polynomially in $n$, this new term is of the same order as the $(\log n)|M|$ term; when $p$ grows non-polynomially in $n$, the $\log p$ term dominates that involving $\log n$. Fan and Tang (2013) proposed a similar term to ameliorate the BIC for the case of correctly specified models with non-polynomially growing dimensionality. The following theorem provides the asymptotic expansion of the Bayes factor with the particular choice of prior in (25).
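The Stirling step can be made explicit; the following display is our own expansion under the stated assumption that $d$ is small relative to $p$.

```latex
\log \binom{p}{d}
  = \sum_{i=0}^{d-1} \log(p - i) - \log d!
  = d \log p + O\!\left(\frac{d^2}{p}\right) - \log d!,
\qquad
\log d! = d \log d - d + O(\log d),
```

so that $\log\binom{p}{d} = d \log p\,(1 + o(1))$ whenever $\log d = o(\log p)$, which is the sense in which the complexity factor contributes a $(\log p)|M|$ term.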

###### Theorem 6.

Assume that Conditions 1-6 hold, with $C$ some normalization constant, and . Then we have with probability tending to one,

$$S(\mathbf{y}, M; F_n) = \ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) - (\log p^*)|M| - \frac{1}{2}\mathrm{tr}(\mathbf{H}_n) + \frac{1}{2}\log|\mathbf{H}_n| + \log(Cc_n) + o(1), \tag{26}$$

where , , and .

Similarly to the GBIC, we now define a new information criterion, the generalized BIC with prior probability (GBICp), based on Theorem 6.

###### Definition 3.

We define the GBICp of model $M$ as

$$\mathrm{GBIC}_p(\mathbf{y}, M; F_n) = -2\ell_n(\mathbf{y}, \widehat{\boldsymbol{\beta}}_n) + 2(\log p^*)|M| + \mathrm{tr}(\widehat{\mathbf{H}}_n) - \log|\widehat{\mathbf{H}}_n|, \tag{27}$$

where $\widehat{\mathbf{H}}_n$ is a consistent estimator of $\mathbf{H}_n$.

In correctly specified models, the term $\mathrm{tr}(\widehat{\mathbf{H}}_n) - \log|\widehat{\mathbf{H}}_n|$ is asymptotically close to $|M|$ when $\widehat{\mathbf{H}}_n$ is a consistent estimator of $\mathbf{H}_n$. Thus compared to BIC with factor $\log n$, the GBICp contains a larger factor $2\log p^*$ when $p$ grows non-polynomially with $n$. This leads to a heavier penalty on model complexity, similarly as in Fan and Tang (2013). As pointed out in Lv and Liu (2014), the right-hand side of (27) can be viewed as a sum of three terms: the goodness of fit, model complexity, and model misspecification. An important distinction from the low-dimensional counterpart GBICp-L is that our new criterion explicitly takes into account the dimensionality of the whole feature space.
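Putting the three criteria side by side, a minimal sketch follows; here `p_star` is taken as $\max(p, n)$ purely for illustration, since the exact definition of $p^*$ is not reproduced in this excerpt.

```python
import numpy as np

def info_criteria(loglik, H_hat, model_size, n, p):
    """Sketch of GAIC (15), GBIC (23), and GBICp (27) from a fitted model's
    quasi-log-likelihood and estimated covariance contrast matrix."""
    trH = np.trace(H_hat)
    _, logdetH = np.linalg.slogdet(H_hat)
    gaic = -2.0 * loglik + 2.0 * trH
    gbic = -2.0 * loglik + np.log(n) * model_size - logdetH
    p_star = max(p, n)  # assumed stand-in for the paper's p*
    gbicp = -2.0 * loglik + 2.0 * np.log(p_star) * model_size + trH - logdetH
    return gaic, gbic, gbicp
```

With $\widehat{\mathbf{H}}_n = \mathbf{I}$, GAIC collapses to AIC and GBIC to BIC, matching the correctly specified case discussed above.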

## 4 Numerical studies

The asymptotic expansions of both KL divergence and Bayesian principles in Section 3 have enabled us to introduce the GAIC, GBIC, and GBICp for model selection in high dimensions with model misspecification. We now investigate their performance in comparison to the information criteria AIC, BIC, and GBICp-L in high-dimensional misspecified models via simulation examples as well as two real data sets. For each simulation study, we set the number of repetitions to be 100 and examined the scenarios when the dimensionality grows ($p = 200$, 400, 1600, and 3200).

### 4.1 Simulation examples

#### 4.1.1 Sparse linear regression with interaction and weak effects

The first model we consider is the following high-dimensional linear regression model with interaction and weak effects

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{x}_{p+1} + \boldsymbol{\varepsilon}, \tag{28}$$

where $\mathbf{X} = (\mathbf{x}_1, \cdots, \mathbf{x}_p)$ is an $n \times p$ design matrix, $\mathbf{x}_{p+1} = \mathbf{x}_1 \circ \mathbf{x}_2$ is an interaction term which is the product of the first two covariates, the rows of $\mathbf{X}$ are sampled as i.i.d. copies from a common multivariate distribution, and $\boldsymbol{\varepsilon}$ is the error vector. We set , , and . Although the data was generated from model (28), we fit the linear regression model (2) without interaction, which is a typical example of model misspecification. In view of (28), the true model involves only the first ten covariates in a nonlinear form. Since the other covariates are independent of those ten covariates, the oracle working model consists of the first ten covariates, as argued in Lv and Liu (2014). Due to the high dimensionality, it is computationally prohibitive to implement the best subset selection. Therefore, we first applied the regularization method SICA (Lv and Fan, 2009) to build a sequence of sparse models and then selected the final model using a model selection criterion. In practice, one can apply any preferred variable selection procedure to obtain a sequence of candidate models.

In addition to comparing the models selected by different information criteria, we also considered the estimate based on the oracle working model as a benchmark and used both measures of prediction and variable selection. Denote by $\widehat{M}$ the selected model. We split the oracle working model into the set of strong effects and that of weak effects. It is interesting to observe that all criteria tend to miss the entire set of weak effects due to their very low signal strength. Therefore, we focused on comparing the model selection performance in recovering the set of strong effects.

We report the strong effect consistent selection probability (the proportion of simulations in which the selected model coincides with the set of strong effects), the strong effect inclusion probability (the proportion of simulations in which the selected model includes all strong effects), and the prediction error with $\widehat{\boldsymbol{\beta}}$ an estimate and an independent observation. To evaluate the prediction performance of different criteria, we calculated the average prediction error on an independent test sample of size 10,000. The results for prediction error and model selection performance are summarized in Table 1. To save space, the numbers of false positives and the numbers of false negatives for strong effects and weak effects, respectively, are reported in Table 6 in the Supplementary Material.

It is clear that as the dimensionality increases, the consistent selection probability tends to decrease and the prediction error tends to increase for all information criteria. Generally speaking, GAIC improved over AIC, and GBIC, GBICp-L, and GBICp performed better than BIC in terms of both prediction and variable selection. In particular, the model selected by our new information criterion GBICp delivered the best performance, with the smallest prediction error and highest strong effect consistent selection probability across all settings.

Meanwhile, it is also interesting to see what results different model selection criteria lead to when the model is correctly specified. To this end, we regenerated the solution path based on the linear regression model with the interaction added. The same performance measures were calculated for this scenario, with the results reported in Tables 2 and 7, where the latter table is included in the Supplementary Material. A comparison of these results with those in Tables 1 and 6 gives several interesting observations. First, all model selection criteria have a better performance when the model is correctly specified in terms of both model selection and prediction. Second, it is worth noting that while all model selection criteria except AIC work reasonably well for the correctly specified model, all but the newly proposed GBICp have a very low consistent selection probability under both model misspecification and high dimensionality. Third, it is interesting to see that GBICp outperforms the existing methods even under the correctly specified model in terms of consistent selection probability.

#### 4.1.2 Multiple index model

We next consider another model misspecification setting that involves the multiple index model

$$Y = f(\beta_1 X_1) + f(\beta_2 X_2 + \beta_3 X_3) + f(\beta_4 X_4 + \beta_5 X_5) + \varepsilon, \tag{29}$$

where the response depends on the covariates only through the first five ones but in a nonlinear fashion through the function $f$. Here the design matrix was generated as in Section 4.1.1. We set the true parameter vector , , and . Note that the oracle working model is the set of the first five covariates for this example. Although the data was generated from model (29), we fit the linear regression model (2). The results are summarized in Tables 3 and 8 (the latter available in the Supplementary Material). The consistent selection probability and inclusion probability are now calculated based on the oracle working model.

In general, the conclusions are similar to those in Example 4.1.1. An interesting observation is the comparison between GBICp-L and GBICp in terms of model selection. While GBICp-L is comparable to GBICp when the dimensionality is not large, the difference between these two methods increases as the dimensionality increases. In the case when $p = 3200$, GBICp has a 77% success probability of consistent selection, while all the other criteria have at most 5% success probability. This confirms the necessity of including the $\log p$ factor in the model selection criterion to take into account the high dimensionality, which is in line with the conclusion in Fan and Tang (2013) for the case of correctly specified models.

#### 4.1.3 Logistic regression with interaction

Our last simulation example is high-dimensional logistic regression with interaction. We simulated 100 data sets from the logistic regression model with interaction and an $n$-dimensional parameter vector

$$\boldsymbol{\theta} = \mathbf{X}\boldsymbol{\beta} + 2\mathbf{x}_{p+1} + 2\mathbf{x}_{p+2}, \tag{30}$$

where $\mathbf{X}$ is an $n \times p$ design matrix, $\mathbf{x}_{p+1}$ and $\mathbf{x}_{p+2}$ are two interaction terms, and the rest is the same as in (28). For each data set, the $n$-dimensional response vector $\mathbf{y}$ was sampled from the Bernoulli distribution with success probability vector, with