1 Introduction
Regularized (or penalized) likelihood methods have become widely used in recent years due to the increased availability of large data sets. These methods operate by maximizing the penalized likelihood function

(1) $Q_n(\beta) = L_n(\beta) - n\sum_{j=1}^{p_n} p_\lambda(|\beta_j|)$

with respect to $\beta$, where $L_n(\beta)$ is the working log-likelihood function, $p_n$ is the total number of predictors, and $p_\lambda(\cdot)$ is a penalty function that penalizes against model complexity and the size of the estimated coefficients. The working log-likelihood is used to justify the first part of the function (e.g., in least squares, the working log-likelihood is based on the Gaussian distribution). As demonstrated in Sections 2 and 3, many of the results discussed in this paper are valid even if the working log-likelihood is misspecified. With these methods, increasing the amount of regularization increases the number of estimated coefficients that are set equal to zero, thus performing "automatic" variable selection through the data-dependent choice of the regularization parameter, $\lambda$.
In contrast, variable selection in classical regression is commonly done using the Leaps and Bounds algorithm (Furnival and Wilson, 1974), which becomes infeasible when the number of predictors is much larger than 30 (Hastie et al., 2009). For most penalty functions, efficient algorithms exist to compute the estimated models over a regularization path, making it possible to do variable selection in high dimensions.

The performance of the estimated model heavily depends on the choice of the regularization parameter. In regularized regression, several classical model selection procedures have been heuristically applied as selectors of this parameter, including information criteria such as Akaike's information criterion (AIC; Akaike, 1973), the Bayesian information criterion (BIC; Schwarz, 1978), and generalized cross-validation (GCV; Craven and Wahba, 1978), as well as data-based selection procedures such as $k$-fold cross-validation (CV) (see, e.g., Fan and Li, 2001, Zou et al., 2007, Wang et al., 2007, and Zhang et al., 2010 for applications of these selectors to penalized regression estimators). The statistical properties of these model selection procedures have been widely studied in the context of classical regression, and an ongoing research problem is to determine if these properties carry over to the context of penalized regression.

The asymptotic performance of model selection procedures can be studied under two important and distinct settings: (1) when the true model is not among the candidate models (the "non-true model world") and (2) when the true model is among the candidate models (the "true model world"). In the non-true model world a reasonable goal is efficient model selection, meaning that we would like to select the model that asymptotically performs the best among the candidate models. In contrast, in the true model world most of the literature focuses on consistent model selection, meaning that the probability that the true model is chosen is asymptotically one. In general, a model selection procedure cannot be both consistent and efficient (Shao, 1997; Yang, 2005). Although the non-true model world has been extensively studied in classical regression (e.g., Shibata, 1981, Li, 1987, Hurvich and Tsai, 1989, 1991, Shao, 1997, and Burnham and Anderson, 2002), the majority of the research on model selection in penalized regression has focused on the true model world (e.g., Leng et al., 2006, Zou et al., 2007, and Wang et al., 2007). We feel that the non-true model world is more realistic in many situations, since the data-generating process is likely to be too complex to know exactly; this is the essence of George Box's famous admonition that "all models are wrong, but some are useful" (Box, 1979). This setting should be of particular interest to researchers and data analysts in areas such as social science and environmental health, where a large number of predictors are expected to influence the dependent variable (too many to include in model fitting; Gelman, 2010), as well as machine learning, where the goal is typically not to uncover the true data-generating process but rather to find a model that can predict well.
In the context of generalized linear models (GLMs), Zhang et al. (2010) (hereafter ZLT) proposed the use of a "GIC-type" criterion,

$\mathrm{GIC}_\kappa(\lambda) = \frac{1}{n}\left\{-2 L_n(\hat\beta_\lambda) + \kappa\,\mathrm{df}_\lambda\right\},$

for choosing the regularization parameter for nonconcave penalized estimators in both the non-true model world and the true model world. Here $\hat\beta_\lambda$ is the estimator that maximizes (1) for a specific $\lambda$, $\mathrm{df}_\lambda$ is the effective degrees of freedom, and the log-likelihood function corresponds to a member of the exponential family, i.e.,

$f(y; \theta, \phi) = \exp\left\{\frac{y\theta - b(\theta)}{a(\phi)} + c(y, \phi)\right\},$

where the form of the functions $a(\cdot)$, $b(\cdot)$, and $c(\cdot)$ depends on the specified distribution and $\phi$ is the dispersion parameter (see, e.g., McCullagh and Nelder, 1989). They showed that "AIC-type" versions of the criterion ($\kappa = 2$) are efficient in the former case, while "BIC-type" versions ($\kappa = \log n$ and, more generally, $\kappa \to \infty$) are consistent in the latter case.
In the Gaussian model, the criterion takes on a form that includes the true error variance $\sigma^2$, and the proofs operate under the assumption that this is known or that a consistent estimator is available. However, if the true model is not included in the set of candidate models, then a consistent estimator of the true error variance may not be available (Shao, 1997), making the efficiency proofs of ZLT not applicable in practice. This motivates us to extend the ZLT results in various ways. First, we show that the feasible version of the criterion, which corresponds to the well-known $C_p$ measure (Mallows, 1973), is in fact efficient in the non-true model world. Second, we show that AIC and GCV, which do not require a consistent estimator of $\sigma^2$, are also efficient. Third, we show that although several model selection procedures may be asymptotically optimal, performance varies in finite samples. Specifically, we study performance when the number of predictors is allowed to be large relative to the sample size and show that AIC, BIC, $C_p$, and GCV all have a tendency to sometimes catastrophically overfit (lead to $\lambda$ values approaching 0). In classical regression, Hurvich and Tsai (1989) showed that AIC has a tendency to select overly complex models when the dimension of the maximum candidate model is large relative to the sample size and proposed a corrected version of AIC ($\mathrm{AIC}_c$). We show that $\mathrm{AIC}_c$ is also efficient, but avoids the tendency to select overly complex models. We use Monte Carlo simulations to illustrate the properties of these methods in finite samples and compare their performance against the data-dependent method 10-fold CV.
For GLMs where there is no dispersion parameter (e.g., probit and logistic regression or the Poisson log-linear model), there is no difference between AIC and the GIC-type criterion with $\kappa = 2$. However, in their proof ZLT restrict the set of candidate models to ones where the estimated parameter converges in probability to the true parameter uniformly. To weaken this assumption, we employ the result from White (1982) that the maximum-likelihood estimator converges almost surely to a "pseudo-true" parameter (the parameter that minimizes the Kullback-Leibler (KL) loss function) when the model is misspecified, and prove the efficiency of AIC under a weaker set of assumptions. These results, and the results for the Gaussian model, apply to a wide range of penalized likelihood estimators, including both nonconcave penalized estimators and the well-known least absolute shrinkage and selection operator (Lasso) estimator (Tibshirani, 1996).

The remainder of the paper is organized as follows. Section 2 focuses on penalized regression and establishes the efficiency results for $C_p$, AIC, $\mathrm{AIC}_c$, and GCV without the assumption that the true population variance is known or that a consistent estimator exists. Section 3 focuses on GLMs where there is no dispersion parameter and establishes the efficiency of AIC for a general set of candidate models. Section 4 presents simulation results that explore the finite-sample behavior of the different selectors when the number of predictors is allowed to be large relative to the sample size. An empirical example that highlights the varying performance of the selectors is presented in Section 5. Concluding remarks are given in Section 6. The main proofs are included in the appendix, with some auxiliary results included in the supplementary material.
2 Gaussian Model
For ease of notation, in this section and for the remainder of the paper, we suppress the subscript $n$ where we feel it is clear that a variable depends on the sample size.
To study model selection in regularized regression we consider the model

$y = \mu + e,$

where $y$ is the $n \times 1$ response vector, $\mu$ is an unknown $n \times 1$ mean vector, and the entries of the error vector $e$ are independent and identically distributed (iid) with mean 0 and variance $\sigma^2$. The mean vector is estimated by $\hat\mu_\lambda = X\hat\beta_\lambda$, where $X$ is a deterministic $n \times p_n$ matrix of predictors and $\hat\beta_\lambda$ is the estimator that minimizes the penalized least squares function

$\frac{1}{2}\|y - X\beta\|^2 + n\sum_{j=1}^{p_n} p_\lambda(|\beta_j|)$

with respect to $\beta$.
Adopting the notation from ZLT, we let the index set $\mathcal{A}$ denote the class of all candidate models and we assume that the full model $\alpha_F$ is the largest model in $\mathcal{A}$. For any $\alpha \in \mathcal{A}$, we define $d_\alpha$ to be the number of predictor variables included in the candidate model. We further define the least squares estimated mean vector by

$\hat\mu_\alpha = X_\alpha \hat\beta_\alpha,$

where $X_\alpha$ is the matrix of predictors that are included in candidate model $\alpha$ and $\hat\beta_\alpha$ is the corresponding vector of the estimated least squares coefficients. The associated projection matrix is $H_\alpha = X_\alpha(X_\alpha' X_\alpha)^{-1} X_\alpha'$. For a given $\lambda$, we define $\alpha_\lambda$ to be the model whose predictors are those with nonzero coefficients in the penalized estimator $\hat\beta_\lambda$ and let $\mathrm{df}_\lambda$ denote the effective degrees of freedom. The least squares estimated mean vector based on the model $\alpha_\lambda$ is denoted by $\hat\mu_{\alpha_\lambda} = X_{\alpha_\lambda}\hat\beta_{\alpha_\lambda}$. In this equation, $X_{\alpha_\lambda}$ is the matrix of predictors whose coefficients are not shrunk to zero in the penalized estimator and $\hat\beta_{\alpha_\lambda}$ are the estimated coefficients from the least squares model fit using these predictors. The associated projection matrix in this case is defined as $H_{\alpha_\lambda} = X_{\alpha_\lambda}(X_{\alpha_\lambda}' X_{\alpha_\lambda})^{-1} X_{\alpha_\lambda}'$.

If we assume that we are in the non-true model world, then a reasonable goal is efficient model selection. The $L_2$ loss is commonly used to assess the predictive performance of an estimator and is calculated as

$L(\lambda) = \frac{1}{n}\|\hat\mu_\lambda - \mu\|^2.$
If we let $\hat\lambda$ denote the regularization parameter selected by a given selection procedure, then the procedure is defined to be asymptotically loss efficient if

$\frac{L(\hat\lambda)}{\inf_{\lambda} L(\lambda)} \to 1$

in probability as $n \to \infty$, and $\hat\mu_{\hat\lambda}$ is said to be an asymptotically loss efficient estimator.
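The loss-efficiency criterion is straightforward to evaluate numerically in a simulation where the true mean is known. The following is a minimal sketch (in Python, with illustrative function names; not code from the paper) of the $L_2$ loss and the efficiency ratio of a selector over a finite path of fitted mean vectors:

```python
import numpy as np

def l2_loss(mu, mu_hat):
    """Average squared error loss: ||mu_hat - mu||^2 / n."""
    mu, mu_hat = np.asarray(mu, float), np.asarray(mu_hat, float)
    return np.mean((mu_hat - mu) ** 2)

def loss_efficiency(mu, fits, selected_index):
    """Ratio of the selected fit's loss to the smallest loss over the path.

    `fits` is a list of fitted mean vectors, one per candidate value of the
    regularization parameter. A ratio near 1 means the selector performs
    nearly as well as the best candidate on the path.
    """
    losses = np.array([l2_loss(mu, f) for f in fits])
    return losses[selected_index] / losses.min()
```

Asymptotic loss efficiency of a selection procedure says that this ratio converges to one in probability as the sample size grows.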
For the efficiency proofs we further require the following notation. In classical regression the risk function is defined as

$R_\alpha = \frac{1}{n}\,\mathrm{E}\|\hat\mu_\alpha - \mu\|^2,$

where $\mathrm{E}$ denotes expectation under the true model and $R_\alpha = \frac{1}{n}\left\{\|(I - H_\alpha)\mu\|^2 + d_\alpha\sigma^2\right\}$. Letting $d_\lambda$ denote the number of predictors with nonzero coefficients in the penalized estimator $\hat\beta_\lambda$, we further define the function

$R(\lambda) = \frac{1}{n}\left\{\|(I - H_{\alpha_\lambda})\mu\|^2 + d_\lambda\sigma^2\right\},$

which is a random variable.
2.1 Model Selection Procedures
$k$-fold CV is commonly used to select tuning parameters in both the statistics and machine learning literatures. It operates by first randomly dividing the data set into $k$ roughly equally sized subsets; then, for each subset, the prediction error is computed based on the model fit using the data excluding that subset. The tuning parameter that minimizes the average squared error computed across the $k$ subsets is then selected. In classical regression it has been shown that $k$-fold CV should have the same asymptotic properties as $\mathrm{GIC}_\kappa$ with $\kappa$ close to 2 (Shao, 1997). Applying this result, 10-fold CV should have the same asymptotic performance as an AIC-type criterion, suggesting that 10-fold CV should be efficient. Under the assumption of an orthonormal design matrix, Leng et al. (2006) showed that if the Lasso-estimated model minimizes the prediction error then it will fail to select the true model with nonzero probability. The authors noted that this suggests that $k$-fold CV is inconsistent, but to our knowledge, the asymptotic properties of $k$-fold CV have not been fully established in the context of penalized regression. While a rigorous extension of the classical theory for $k$-fold CV to penalized regression is beyond the scope of this paper, the simulation results suggest that $k$-fold CV is efficient in the current context.
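The procedure just described can be sketched as follows. Ridge regression is used here purely as a stand-in penalized estimator so that the sketch is self-contained and has a closed-form fit (an assumption for illustration; the paper's simulations use SCAD and Lasso fits instead):

```python
import numpy as np

def kfold_cv_lambda(X, y, lambdas, k=10, seed=0):
    """Select the tuning parameter minimizing the average squared
    prediction error across k folds.

    For each fold, models are fit on the remaining data for every value
    in `lambdas`, and held-out squared errors are accumulated.
    """
    n = X.shape[0]
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    err = np.zeros(len(lambdas))
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        Xt, yt, Xv, yv = X[train], y[train], X[fold], y[fold]
        p = Xt.shape[1]
        for j, lam in enumerate(lambdas):
            # Stand-in penalized fit (ridge): (X'X + lam I)^{-1} X'y
            beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
            err[j] += np.sum((yv - Xv @ beta) ** 2)
    return lambdas[int(np.argmin(err / n))]
```

Any path algorithm can be substituted for the ridge fit; the selection step, minimizing average held-out squared error over the path, is unchanged.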
In addition to 10-fold CV, we study the performance of several information criteria. Specifically, we consider

$\mathrm{AIC}(\lambda) = \log(\hat\sigma^2_\lambda) + \frac{2\,\mathrm{df}_\lambda}{n},$

$\mathrm{AIC}_c(\lambda) = \log(\hat\sigma^2_\lambda) + \frac{n + \mathrm{df}_\lambda}{n - \mathrm{df}_\lambda - 2},$

$\mathrm{BIC}(\lambda) = \log(\hat\sigma^2_\lambda) + \frac{\log(n)\,\mathrm{df}_\lambda}{n},$

$\mathrm{GCV}(\lambda) = \frac{\hat\sigma^2_\lambda}{(1 - \mathrm{df}_\lambda/n)^2},$

and

$C_p(\lambda) = \frac{n\hat\sigma^2_\lambda}{\hat\sigma^2_F} - n + 2\,\mathrm{df}_\lambda.$

In the above we define

$\hat\sigma^2_\lambda = \frac{1}{n}\|y - \hat\mu_\lambda\|^2$

and

$\hat\sigma^2_F = \frac{1}{n - p_n}\|y - \hat\mu_{\alpha_F}\|^2,$

the estimated error variance based on the full least squares model.
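The selected tuning parameter is the minimizer of the chosen criterion over the regularization path. As a minimal sketch (assuming the standard least squares forms of AIC, AIC$_c$, BIC, GCV, and $C_p$; the function name is illustrative), the criteria can be evaluated at a single point on the path as follows:

```python
import numpy as np

def selection_criteria(y, mu_hat, df, sigma2_full):
    """Evaluate standard least-squares information criteria for one fit.

    `df` is the effective degrees of freedom (here, the number of nonzero
    coefficients) and `sigma2_full` is the error-variance estimate from
    the full least squares model (used only by Cp).
    """
    n = len(y)
    sigma2 = np.sum((y - mu_hat) ** 2) / n   # candidate-model variance estimate
    return {
        "AIC":  np.log(sigma2) + 2.0 * df / n,
        "AICc": np.log(sigma2) + (n + df) / (n - df - 2.0),
        "BIC":  np.log(sigma2) + np.log(n) * df / n,
        "GCV":  sigma2 / (1.0 - df / n) ** 2,
        "Cp":   n * sigma2 / sigma2_full - n + 2.0 * df,
    }
```

In practice each criterion is evaluated at every point on the path and the $\lambda$ achieving the minimum is selected.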
With the exception of 10-fold CV, all of the above model selection procedures require a definition of the effective degrees of freedom for the penalized regression method. In what follows, we use a heuristic definition and define the effective degrees of freedom to be the number of nonzero coefficients in $\hat\beta_\lambda$, and denote this by $\mathrm{df}_\lambda$. Zou et al. (2007) proved that the number of nonzero coefficients is an unbiased estimator of the degrees of freedom for the Lasso. For SCAD, Fan and Li (2001) proposed setting the degrees of freedom equal to the trace of the approximate linear projection matrix. Based on Proposition 1 from ZLT, our efficiency proofs would still hold if this alternate definition is used.

2.2 Efficiency Results
We show here that, assuming that the true model is not in the set of candidate models, $C_p$, AIC, $\mathrm{AIC}_c$, and GCV are efficient selectors of the regularization parameter. The dimension of the full model, $p_n$, is allowed to tend to infinity with $n$, but it is assumed that $p_n < n$. The efficiency proofs operate under the same assumptions as those of ZLT, which are presented here for completeness:
(A1) $\left(\frac{1}{n}X'X\right)^{-1}$ exists and its largest eigenvalue is bounded by a constant number $C$.

(A2) $\mathrm{E}(e_1^{4m}) < \infty$, for some positive integer $m$.

(A3) The risks of the least squares estimators satisfy

$\sum_{\alpha \in \mathcal{A}} (nR_\alpha)^{-m} \to 0.$

(A4) $\sup_{\lambda} \dfrac{\|X\hat\beta_\lambda - X\tilde\beta_\lambda\|^2}{nR(\lambda)} \to 0$ in probability,

where $\tilde\beta_\lambda$ is a vector where $\tilde\beta_{\lambda,j} = \hat\beta_{\alpha_\lambda,j}$ for all $j$ such that $\hat\beta_{\lambda,j} \neq 0$ and is equal to $0$ otherwise.
The first three assumptions are common in the literature on model selection. Assumption (A1) requires the matrix of predictors to have full column rank and (A2) implies that efficiency can still apply even when penalized least squares is used but the true distribution of the error terms is not Gaussian. Assumption (A3) puts a restriction on how close the candidate models can be to the true model and precludes any scenario where the true model is included in the set of candidate models. The last assumption, (A4), is the only assumption that involves the penalty function and ZLT provided the following three sufficient conditions for the assumption to be satisfied.

for all for some constant .

For any , for some constant .

as .
As pointed out by an anonymous referee, assumption (A3) restricts the size of the set of candidate models. The classical literature on model selection primarily worked with nested subsets and did not require the consideration of all subsets (e.g., Shibata (1981), Shao (1997), and Li (1987)); however, since the subsets selected by methods such as the Lasso or SCAD are data dependent, the set of candidate models is random and we cannot rule out any particular candidate model a priori. Therefore we need to include all $2^{p_n}$ subsets in order to use the theory from classical model selection. Alternatively, if the data analyst can assume that the error terms are normally distributed, then assumption (A3) can be replaced by a weaker assumption from Shibata (1981):

(A3′) For any $\delta > 0$,

$\sum_{\alpha \in \mathcal{A}} \exp(-\delta n R_\alpha) \to 0.$
The following lemma details the restrictions on the behavior of $p_n$.

Lemma 2.1. Assume that for all sufficiently large $n$,

(2) $\inf_{\alpha \in \mathcal{A}} nR_\alpha \geq c_1 n^{\delta}$

for some positive constant $c_1$ and some constant $\delta > 0$. Then (A3) will hold if

(3) $p_n \leq c_2 \log n$ with $c_2 < m\delta/\log 2$,

and (A3′) will hold if

(4) $p_n = n^{c}$ with $c < \delta$.

The proof is presented in the appendix. This lemma shows that under (A3), $p_n$ can at most grow logarithmically with $n$; however, polynomial growth rates are allowed under assumption (A3′) so long as $c < \delta$ for $p_n = n^c$. Specific values of $\delta$ are worked out for the simulation examples considered in Section 4.1.
The asymptotic efficiency of $C_p$ is given by the following result.

Theorem 2.1.

Assuming (A1)-(A4) hold and that $p_n \to \infty$ as $n \to \infty$, the regularization parameter, $\hat\lambda$, selected by minimizing $C_p(\lambda)$ yields an asymptotically loss efficient estimator, $\hat\mu_{\hat\lambda}$.
To further establish the efficiency of AIC, $\mathrm{AIC}_c$, and GCV, we require the following two theorems. The first proves the efficiency of the criterion obtained from $C_p$ by replacing the true error variance with the estimated error variance based on the candidate model.

Theorem 2.2.

Assuming (A1)-(A4) hold and that $p_n \to \infty$ as $n \to \infty$, the regularization parameter, $\hat\lambda$, selected by minimizing

$\frac{1}{n}\|y - \hat\mu_\lambda\|^2 + \frac{2\hat\sigma^2_\lambda\,\mathrm{df}_\lambda}{n}$

yields an asymptotically loss efficient estimator, $\hat\mu_{\hat\lambda}$. The same result holds under normality of the error terms with (A3′) replacing (A3).
Next, we prove that any procedure that is asymptotically equivalent to the criterion in Theorem 2.2 is also efficient.
Theorem 2.3.
Assuming (A1)-(A4) hold and that $p_n \to \infty$ as $n \to \infty$, any information criterion that can be written in the form
where
(C1) 
and
(C2) 
is an asymptotically loss efficient procedure for selecting $\lambda$. The same result holds under normality of the error terms with (A3′) replacing (A3).
Condition (C2) in Theorem 2.3 is a stronger assumption than in the analogous result established by Theorem 4.2 in Shibata (1980) for selecting the optimal order of a linear process, but Theorem 2.3 is sufficient to show that AIC, $\mathrm{AIC}_c$, and GCV are asymptotically loss efficient model selection procedures for the regularization parameter. All three methods can be shown to satisfy (C1) and (C2) using Taylor series expansions. The details are provided in the supplementary material.

Remark 1. The efficiency proofs in this section make use of the results from Li (1987), which operate under assumptions (A1)-(A3). Similar results exist in Shibata (1981) if the error terms are normally distributed and (A3′) is substituted for (A3). The efficiency of AIC, $\mathrm{AIC}_c$, and GCV can be shown in a similar manner in this setting.
3 GLMs with No Dispersion Parameter
We now generalize our efficiency results to a broader class of models by studying the asymptotic performance of AIC as a selector of $\lambda$ when the likelihood function is misspecified as a generalized linear model (GLM), and prove that it is asymptotically loss efficient. We assume that the data $y_1, \ldots, y_n$ are independent with common unknown probability density function $h(\cdot)$, and that the mean and variance of $y_i$ exist and are finite. To approximate this distribution, we consider a family of GLMs where the density of each candidate model is given by

$f(y_i; \theta_i) = \exp\{y_i\theta_i - b(\theta_i) + c(y_i)\},$

where $\theta_i = x_i'\beta$, for $i = 1, \ldots, n$. Here we have assumed that there is no dispersion parameter, and we further assume that $b(\cdot)$ is three times differentiable and that $b''(\theta) > 0$ for all $\theta$. All of these assumptions would hold for probit or logistic regression and the Poisson log-linear model.
A reasonable objective in this setting is to minimize two times the average Kullback-Leibler (KL) loss function, which is defined as

$\mathrm{KL}(\beta) = \frac{2}{n}\sum_{i=1}^{n}\mathrm{E}\left[\log\frac{h(y_i)}{f(y_i; x_i'\beta)}\right].$

For a given sample size $n$, we define $\beta^*$ as the minimizer of the KL loss. By Theorem 1 in Lv and Liu (2010) we have that $\beta^*$ is the unique solution to the equation

(5) $\sum_{i=1}^{n}\left\{\mathrm{E}(y_i) - b'(x_i'\beta)\right\}x_i = 0.$

If $h(\cdot) = f(\cdot\,; x_i'\beta_0)$ for some true parameter $\beta_0$ for any $i$, then $\beta^* = \beta_0$. However, if we assume that we are in the non-true model world, then $h(\cdot)$ is not completely specified by any of the candidate models and we refer to $\beta^*_\alpha$ as the "pseudo-true parameter" based on the candidate model $\alpha$.
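When the true distribution is known, as in a simulation, the KL loss is straightforward to evaluate. The following sketch (assuming a Bernoulli/logistic candidate model, one of the no-dispersion GLMs mentioned above; the function name is illustrative) computes two times the average KL divergence between the true success probabilities and those implied by a candidate coefficient vector:

```python
import numpy as np

def avg_kl_loss_bernoulli(p_true, X, beta):
    """Two times the average Kullback-Leibler divergence between the true
    Bernoulli success probabilities `p_true` and the probabilities implied
    by a logistic candidate model with coefficients `beta`.

    Probit or Poisson candidate models would be handled analogously with
    their own link and divergence formulas.
    """
    q = 1.0 / (1.0 + np.exp(-(X @ beta)))     # model probabilities
    kl = (p_true * np.log(p_true / q)
          + (1 - p_true) * np.log((1 - p_true) / (1 - q)))
    return 2.0 * np.mean(kl)
```

The loss is zero exactly when the model reproduces the true probabilities; the pseudo-true parameter is the coefficient vector minimizing this quantity.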
Similarly to the Gaussian model, for a given $\lambda$, we take $\alpha_\lambda$ to be the model whose predictors are those with nonzero coefficients in $\hat\beta_\lambda$, and denote the maximum-likelihood estimator based on the model $\alpha_\lambda$ by $\hat\beta_{\alpha_\lambda}$. If we let $\hat\lambda$ denote the regularization parameter selected by a given selection procedure, then the procedure is defined to be asymptotically loss efficient if

$\frac{\mathrm{KL}(\hat\beta_{\alpha_{\hat\lambda}})}{\inf_{\lambda} \mathrm{KL}(\hat\beta_{\alpha_\lambda})} \to 1$

in probability as $n \to \infty$, and $\hat\beta_{\alpha_{\hat\lambda}}$ is said to be an asymptotically loss efficient estimator.
ZLT studied the asymptotic performance of AIC in a similar setting. To establish asymptotic loss efficiency, ZLT restricted the set of candidate models to a set of models for which the maximum-likelihood estimator converges uniformly to the true parameter. If this set is known in practice, then the model selection process reduces to selecting the most parsimonious model in this set. This class of models would rarely be known in practice, so this motivates us to weaken this assumption and to prove the efficiency of AIC for a general set of candidate models.
Under the regularity conditions (R1)-(R2) given in the supplementary material, White (1982) proved that $\hat\beta_\alpha \to \beta^*_\alpha$ almost surely, and established the asymptotic normality of $\hat\beta_\alpha$ under (R1)-(R4). With the additional condition (R5), Nishii (1988) applied a Taylor expansion to show that

(6) $2\left\{L_n(\hat\beta_\alpha) - L_n(\beta^*_\alpha)\right\} = n(\hat\beta_\alpha - \beta^*_\alpha)' A_n(\beta^*_\alpha)(\hat\beta_\alpha - \beta^*_\alpha) + o_p(1)$

for $n$ sufficiently large, where $A_n(\beta) = -\frac{1}{n}\sum_{i=1}^{n}\mathrm{E}\left[\frac{\partial^2 \log f(y_i;\beta)}{\partial\beta\,\partial\beta'}\right]$ and $B_n(\beta) = \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}\left[\frac{\partial \log f(y_i;\beta)}{\partial\beta}\frac{\partial \log f(y_i;\beta)}{\partial\beta'}\right]$ for $\beta \in \mathbb{R}^{d_\alpha}$.

We define the risk function of the maximum-likelihood estimator to be $R_\alpha = \mathrm{E}[\mathrm{KL}(\hat\beta_\alpha)]$. From Theorem 4 of Lv and Liu (2010), under (R1)-(R6),

$nR_\alpha = n\,\mathrm{KL}(\beta^*_\alpha) + \mathrm{tr}\left\{A_n(\beta^*_\alpha)^{-1}B_n(\beta^*_\alpha)\right\} + o(1),$

where $A_n$ and $B_n$ are as defined above. Similarly to the Gaussian model, we further define the random variable

$R(\lambda) = \mathrm{KL}(\beta^*_{\alpha_\lambda}) + \frac{1}{n}\mathrm{tr}\left\{A_n(\beta^*_{\alpha_\lambda})^{-1}B_n(\beta^*_{\alpha_\lambda})\right\}.$

With these results and the following assumptions, we can prove the efficiency of AIC.
(A1) $A_n(\beta^*_{\alpha_F})$ exists and its minimum and maximum eigenvalues are bounded below and above by constant numbers $C_1 > 0$ and $C_2 < \infty$, respectively.

(A2) $\mathrm{E}|y_i|^{4m} < \infty$, for $i = 1, \ldots, n$ and some positive integer $m$.

(A3) The risks of the maximum-likelihood estimators satisfy

$\sum_{\alpha \in \mathcal{A}} (nR_\alpha)^{-m} \to 0.$

for all for some constant .

For any , for some constant .

as .
The first three assumptions are analogous to the assumptions made in the Gaussian model, and assumption (A4) is a mild regularity assumption. As shown by the following lemma, assumptions (A5)-(A7) are sufficient conditions for the penalized estimator to be close to the maximum-likelihood estimator. These assumptions are analogous to the sufficient conditions used in the Gaussian model. They are stated explicitly here since they are required in parts of the efficiency proof.
Lemma 3.1. Under (A5)-(A7),

$\sup_{\lambda}\frac{\|X\hat\beta_\lambda - X\tilde\beta_\lambda\|^2}{nR(\lambda)} \to 0 \text{ in probability},$

where $\tilde\beta_\lambda$ is a vector where $\tilde\beta_{\lambda,j} = \hat\beta_{\alpha_\lambda,j}$ for all $j$ such that $\hat\beta_{\lambda,j} \neq 0$ and is equal to $0$ otherwise.
The proof is given in Appendix B. The next theorem establishes the efficiency of AIC.
Theorem 3.1.
Assuming $p_n \to \infty$ as $n \to \infty$, (A1)-(A7), and the regularity conditions (R1)-(R6), the regularization parameter, $\hat\lambda$, selected by minimizing $\mathrm{AIC}(\lambda)$ yields an asymptotically loss efficient estimator, $\hat\beta_{\alpha_{\hat\lambda}}$.
The proof is given in Appendix B.
4 Simulation Studies
In this section we study the finite sample performance of the model selection procedures when the true model is not included in the set of candidate models.
In all of the examples, the results are based on 1000 realizations of samples with $n = 100$, $200$, and $400$, and the selection procedures are evaluated based on their loss efficiency, loss, and the variability of the selected number of nonzero coefficients. For each realization, if we let $\hat\lambda$ denote the regularization parameter selected by a given selection procedure, then the loss efficiency is computed as

$\frac{L(\hat\lambda)}{\inf_{\lambda} L(\lambda)},$

where $L(\cdot)$ is the $L_2$ loss in the linear regression examples and is the KL loss in the GLM examples. For comparison, we also include results for the (infeasible) "Optimal" procedure, which selects the tuning parameter over the regularization path that produces the minimum loss for each realization, and we report the loss ("Min.Loss") achieved by this procedure.
4.1 Linear Regression
In this section we study the finite sample performance of the model selection procedures discussed in Section 2.2. The first set of simulations considers a trigonometric regression where the candidate models are in the neighborhood of the true model but never include the true model. This example is in line with the framework considered by Shibata (1980) and Hurvich and Tsai (1991). The second set of simulations looks at an example where there is an omitted predictor. For example, the researcher may have access to some of the relevant predictors but may be missing others. This is the setting that was considered by ZLT.
4.1.1 Choice of Penalty Function
We consider two common choices for the penalty function. The first is the Smoothly Clipped Absolute Deviation (SCAD) penalty function proposed by Fan and Li (2001). This penalty function is defined by

$p_\lambda(t) = \begin{cases} \lambda t, & 0 \leq t \leq \lambda, \\ \dfrac{2a\lambda t - t^2 - \lambda^2}{2(a-1)}, & \lambda < t \leq a\lambda, \\ \dfrac{\lambda^2(a+1)}{2}, & t > a\lambda, \end{cases}$

for some $\lambda \geq 0$ and $a > 2$. Fan and Li (2001) recommended setting the second tuning parameter in the SCAD penalty function, $a$, equal to 3.7, and this is commonly done in practice; however, doing so will not necessarily guarantee that the SCAD objective function is convex and can result in convergence to local, but nonglobal, minima. As a result, in addition to studying the performance of SCAD with $a = 3.7$ (SCAD, 3.7), we study the performance of SCAD where $a > 1 + 1/\gamma_{\min}$ (SCAD), where $\gamma_{\min}$ is the minimum eigenvalue of $\frac{1}{n}X'X$. The latter choice will force the objective function to be convex (Breheny and Huang, 2011).
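The piecewise SCAD penalty is simple to implement directly. The following is a transcription of the definition above (with the conventional default $a = 3.7$; the function name is illustrative):

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001) evaluated at |coefficient| t >= 0.

    Linear (lasso-like) for small t, a quadratic transition on (lam, a*lam],
    then constant, so large coefficients incur no additional penalty.
    """
    t = np.abs(np.asarray(t, float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    return np.where(small, lam * t,
           np.where(mid, (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)),
                    lam ** 2 * (a + 1) / 2))
```

The three pieces agree at the breakpoints $t = \lambda$ and $t = a\lambda$, so the penalty is continuous, which is easy to verify numerically.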
The wide use of SCAD is mainly due to the fact that it satisfies the "oracle property." This means that, assuming that the true model is in the set of candidate models and subject to certain regularity assumptions, there exists a sequence $\lambda_n$ such that if $\lambda_n \to 0$ and $\sqrt{n}\lambda_n \to \infty$, then with probability tending to one the SCAD-estimated regression based on the full model will correctly zero out any zero coefficients and have the same asymptotic distribution as the least squares regression based on the correct model. This result was proven originally for fixed $p$ by Fan and Li (2001) and was extended to the case where $p_n \to \infty$ but $p_n^5/n \to 0$ by Fan and Peng (2004). These results are for an unknown deterministic sequence $\lambda_n$ that needs to be estimated in practice.
The second penalty function that we study is the Lasso penalty proposed by Tibshirani (1996). The Lasso penalty is the $L_1$ norm of the coefficients. Necessary and sufficient conditions have been established for the Lasso to perform consistent model selection (Zhao and Yu, 2006), but in general the Lasso produces biased estimates and does not satisfy the oracle property (Zou, 2006). However, in the non-true model world, the oracle property has no meaning, since there is no true model. Further, even in the true model world, the oracle property is an asymptotic property.
It is important to note that although ZLT only studied nonconcave penalty functions, if the nonzero estimated coefficients, $\hat\beta^{(1)}_\lambda$, satisfy a relationship of the form

$\hat\beta^{(1)}_\lambda = \hat\beta_{\alpha_\lambda} - n\,(X_{\alpha_\lambda}'X_{\alpha_\lambda})^{-1} c_\lambda,$

for some vector of penalty-derivative terms $c_\lambda$, with probability tending to 1, and (A4) is satisfied, then the efficiency proofs will hold for any penalty function. In the above, $\hat\beta^{(1)}_\lambda$ are the elements of $\hat\beta_\lambda$ that correspond to $\alpha_\lambda$. In particular, based on Lemma 2 of Zou et al. (2007), the Lasso satisfies this relationship, and the same sufficient conditions provided by ZLT for (A4) can be used. Therefore, the efficiency proofs will hold for the Lasso, so it is interesting to compare the performance of the two penalty functions.
The Lasso regressions are fit using the R lars package (Hastie and Efron, 2011) and the SCAD regressions are fit using the R ncvreg package (Breheny and Huang, 2011). The lars package computes the entire regularization path for the Lasso, and for SCAD the models are fit over a grid of 200 $\lambda$ values from $\lambda_{\max}$ to $\lambda_{\min}$, where the first 100 values of $\lambda$ are fit on a log scale and the last 100 values of $\lambda$ are equally spaced. Breheny and Huang (2011) considered a grid of 100 values in their simulation studies. We have chosen a grid that is twice as fine in order to remain closer to the theoretical assumption that all possible values of $\lambda$ are considered. In all simulations, $\lambda_{\max}$ is specified so that all of the estimated coefficients are zero and $\lambda_{\min}$ is chosen to effectively produce the least squares estimate on the full model.
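A grid of this style can be constructed as follows (a sketch; the endpoints and the breakpoint between the log-scale and equally spaced segments are illustrative assumptions, since in practice they depend on the data):

```python
import numpy as np

def lambda_grid(lam_max, lam_mid, lam_min, n_log=100, n_lin=100):
    """Tuning-parameter grid of the style described above: the first n_log
    values decrease from lam_max to lam_mid on a log scale and the last
    n_lin values decrease from lam_mid to lam_min in equal steps.
    """
    log_part = np.exp(np.linspace(np.log(lam_max), np.log(lam_mid), n_log))
    # Drop the first linear value so lam_mid is not repeated.
    lin_part = np.linspace(lam_mid, lam_min, n_lin + 1)[1:]
    return np.concatenate([log_part, lin_part])
```

The log-scale segment concentrates grid points near the sparse end of the path, while the linear segment covers the transition toward the full least squares fit.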
4.1.2 Exponential model
Here we consider a trigonometric example based on an example studied in Hurvich and Tsai (1991). The true model is

$y_i = \mu_i + e_i$

for $i = 1, \ldots, n$, where the $e_i$ are iid $N(0, \sigma^2)$ and the mean function $\mu_i$ is of the exponential form used in that paper. The estimated models are SCAD and Lasso penalized regressions where the matrix of predictors, $X$, is an $n \times p_n$ matrix with components defined by

$x_{i,2j-1} = \cos(jt_i)$

and

$x_{i,2j} = \sin(jt_i)$

for $i = 1, \ldots, n$ and $j = 1, \ldots, p_n/2$, where $t_i = 2\pi i/n$. The maximum number of predictors is allowed to vary by letting the dimension $p_n = n^c$. It is shown in the appendix that for this example condition (2) of Lemma 2.1 holds for some positive constant $\delta$. Therefore, by Lemma 2.1, assumption (A3′) will hold so long as $c < \delta$. In the simulations we take $c = 0.3$ and $0.5$, and for comparison we also consider $c = 0.8$ and $c = 0.98$. Note that examining $c$ close to 1 allows for the study of high-dimensional data problems, and is in the spirit of simulations performed in Tibshirani (1996) and Zou and Hastie (2005). Since the predictor variables are orthogonal in this example, setting $a = 3.7$ for SCAD satisfies the convexity constraint for all values of $\lambda$.

As in Hurvich and Tsai (1991), we examine two error variances, but the patterns for the two are similar, so only one set of results is reported. The median loss efficiency is presented in Table 1 for both SCAD and Lasso. For all values of $c$, the median loss efficiencies of $\mathrm{AIC}_c$ and 10-fold CV tend to one as the sample size increases, while the median loss efficiency of BIC does not show signs of convergence. These patterns are consistent with the theoretical efficiency results. When the number of predictor variables is small relative to the sample size, the loss efficiency of AIC also tends to one; however, as the number of candidate predictors is increased, the performance of AIC deteriorates. Figure 1 displays boxplots of the selected number of nonzero coefficients when $n = 100$, $200$, and $400$. From this plot we see that AIC often selects a model that is close to the full model when $p_n$ is large. As the sample size is increased the full model becomes less desirable and AIC suffers as a result. For SCAD, GCV appears to suffer from a similar problem, but to a lesser extent than AIC. The difference in performance for varying values of $c$ suggests that the good asymptotic performance of AIC and GCV is strongly dependent on the fact that $p_n/n \to 0$, and these selectors may not perform well in finite samples when this ratio is close to 1.
Overall, the sensitivity to the value of $p_n/n$ clearly hurts the performance of AIC, and can also negatively impact the performance of $C_p$ and GCV. The impact on the latter two is more noticeable when looking at SCAD, but in both cases the extreme variability in the size of the selected model is undesirable. As a result, we recommend the use of $\mathrm{AIC}_c$ or 10-fold CV, which are less sensitive to the closeness of $p_n$ to $n$.
Table 1: Median Loss Efficiency
SCAD  Lasso  
Info. Crit.  n  c=.3  c=.5  c=.8  c=.98  c=.3  c=.5  c=.8  c=.98 
10-fold CV  100  1.00  1.05  1.07  1.08  1.00  1.01  1.05  1.12
200  1.00  1.03  1.06  1.05  1.00  1.01  1.03  1.07  
400  1.00  1.03  1.03  1.04  1.00  1.01  1.02  1.04  
100  1.00  1.04  1.18  2.43  1.00  1.01  1.07  2.13  
200  1.01  1.02  1.20  3.08  1.00  1.01  1.06  2.57  
400  1.00  1.02  1.23  4.05  1.00  1.01  1.05  3.29  
100  1.00  1.04  1.09  1.13  1.00  1.02  1.10  1.21  
200  1.01  1.03  1.07  1.08  1.00  1.01  1.06  1.11  
400  1.00  1.02  1.05  1.06  1.00  1.01  1.04  1.08  
100  1.00  1.07  1.32  1.64  1.00  1.05  1.60  1.64  
200  1.02  1.06  1.47  1.51  1.00  1.06  1.74  1.62  
400  1.01  1.07  1.60  1.51  1.00  1.08  1.80  1.60  
100  1.00  1.04  1.10  1.22  1.00  1.01  1.05  1.15  
200  1.01  1.02  1.09  1.15  1.00  1.01  1.03  1.09  
400  1.00  1.02  1.08  1.09  1.00  1.01  1.03  1.05  
100  1.00  1.04  1.10  1.69  1.00  1.01  1.06  1.16  
200  1.01  1.02  1.10  1.73  1.00  1.01  1.04  1.09  
400  1.00  1.02  1.08  1.82  1.00  1.01  1.03  1.05 
Figure 2 presents boxplots of the loss for the 1000 realizations. From this we can compare the optimal performance of SCAD and the Lasso. Based on minimum loss, the predictive accuracies of the two methods are similar. This reinforces that the existence of an oracle property is not relevant in the non-true model world, and an estimator that does not possess the oracle property can still be effective from a predictive point of view.
4.1.3 Omitted Predictor
Here we study an omitted predictor example similar to example 2 in ZLT. The true model is defined as

$y = X\beta + e,$

where the $e_i$ are iid $N(0, \sigma^2)$ for $i = 1, \ldots, n$. We let $X$ be a matrix of predictors whose rows are simulated from a multivariate normal distribution with mean 0 and variance-covariance matrix $\Sigma$, where $\Sigma_{jk} = \rho^{|j-k|}$ for $j \neq k$ and $\Sigma_{jj} = 1$. In the simulations $X$ is simulated once and is used for every simulation run in order to resemble a fixed design setting. The estimated models are SCAD and Lasso penalized regressions based on the first $n$ observations of $X$, except with one relevant column removed so that the true model is never included in the set of candidate models. In order to compare predictive performance, we treat the remaining observations as a holdout sample and use it to compute the loss for each estimated model.
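A sketch of this data-generating scheme follows; the correlation value, coefficient vector, and dropped column are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

def omitted_predictor_data(n, p, rho=0.5, sigma=1.0, drop=0, seed=0):
    """Simulate an omitted-predictor design: rows of X are multivariate
    normal with cov[j, k] = rho**|j - k|, y = X @ beta + noise, and one
    relevant column is withheld from the candidate design so that the
    true model is never among the candidate models.
    """
    rng = np.random.default_rng(seed)
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.ones(p)                         # illustrative coefficients
    y = X @ beta + sigma * rng.normal(size=n)
    X_candidate = np.delete(X, drop, axis=1)  # omit one true predictor
    return X_candidate, y
```

Fitting the penalized regressions to `X_candidate` then reproduces the non-true-model-world setting: every candidate model omits a predictor that genuinely contributes to the mean.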
In both examples the number of superfluous variables included in the candidate models is allowed to vary by letting the dimension $p_n = n^c$. Under deterministic $X$, it is shown in the supplementary material that condition (2) of Lemma 2.1 holds for some positive constant $\delta$ if the excluded predictor is orthogonal to the included predictors. By Lemma 2.1, assumption (A3′) will then hold if $c < \delta$. This suggests that when the excluded predictor is uncorrelated or only moderately correlated with the included predictors, it is reasonable to compare the behavior of the selectors across the same values of $c$ as before.
In this example setting $a = 3.7$ will not satisfy the convexity constraint for all values of $\lambda$. Therefore, we further compare the case where $a = 3.7$ (SCAD, 3.7) to the case where $a$ is chosen to satisfy the convexity constraint (SCAD).
The patterns for the two error variances and two values of $c$ are similar, so only one set of results is reported. We first consider Figure 3, which presents boxplots comparing the three estimators based on loss. From these plots it is immediately clear that all of the information criteria perform better when $a$ is allowed to be data-dependent, while 10-fold CV performs well regardless of the choice of $a$. One possible explanation for this is that all of the information criteria under consideration were derived for use in classical least squares regression, so they should perform well assuming that the estimated models are close to the corresponding OLS models. When the second tuning parameter of SCAD is fixed at 3.7, the objective function is not necessarily convex, so the SCAD-estimated models may be very far from the OLS models. On the other hand, 10-fold CV is a general model selection procedure that should work in a variety of settings. In general, we recommend using a data-dependent choice of $a$, since it requires little additional cost and can greatly improve the performance of all of the information criteria.
Focusing only on the data-dependent choice of , we see that the performance of the model selection procedures is similar for both SCAD and Lasso when and when , but the performance of SCAD is noticeably worse when . A possible explanation is that when is small, the performance of the SCAD estimators is more sensitive to the choice of the second tuning parameter. Although taking guarantees that the penalized loss function is convex, it may not be the optimal choice for this parameter, and more investigation into its choice is needed. Of course, this implies an advantage of the Lasso over SCAD, since the Lasso does not require this second parameter.
Comparing the model selection procedures, we again see that , , and are sensitive to the number of predictor variables, while and 10-fold CV maintain good performance. The boxplots of the selected numbers of nonzero coefficients are omitted since the patterns are similar to those seen in the exponential model. In Figure 3 it is clear that this sensitivity to the value of impacts the performance of the model selection procedures, and as a result 10-fold CV and outperform the other procedures. 10-fold CV outperforms in some scenarios but, in general, the performance of the two methods appears to be comparable.
Figure 3: In order to make it easier to compare the procedures, the limits of the vertical axis are specified so that all of the boxes and whiskers appear but some of the outliers are not shown.
In order to study the asymptotic behavior of the selection procedures, Table 2 presents the median loss efficiencies. With the exception of SCAD with , the loss efficiencies of , , and tend to one, while the loss efficiency of does not show signs of convergence. The results again show that performs poorly when the number of predictor variables is large relative to the sample size. For SCAD with , the loss efficiencies of the efficient methods do not show signs of converging to one, which further suggests that the second tuning parameter may not be optimally selected. Overall, the results corroborate the theoretical findings but reinforce that the finite-sample performance of asymptotically equivalent methods may vary greatly.
Table 2: Median loss efficiency

                          SCAD                    SCAD, a = 3.7              Lasso
Selector      n      c=.5   c=.8   c=.98     c=.5   c=.8   c=.98     c=.5   c=.8   c=.98
10-fold CV    100    1.32   1.10   1.09      1.72   1.23   1.20      1.08   1.09   1.08
              200    1.19   1.07   1.07      1.35   1.08   1.10      1.05   1.06   1.05
              400    1.14   1.05   1.05      1.26   1.02   1.04      1.04   1.04   1.04

              100    1.49   1.44   41.44     2.78   2.57   37.24     1.08   1.19   37.64
              200    1.57   1.24   51.80     3.03   3.30   59.94     1.06   1.12   49.77
              400    1.84   1.11   67.73     4.13   3.16   76.07     1.04   1.07   64.94

              100    1.36   1.13   1.10      2.19   1.45   4.27      1.07   1.09   1.08
              200    1.41   1.08   1.07      2.45   1.27   10.10     1.06   1.07   1.06
              400    1.68   1.06   1.05      3.31   1.10   17.28     1.04   1.04   1.05

              100    1.12   1.26   1.40      1.26   1.41   1.62      1.11   1.24   1.31
              200    1.07   1.39   1.40      1.13   1.31   1.38      1.21   1.34   1.33
              400    1.05   1.31   1.32      1.07   1.16   1.25      1.21   1.28   1.27

              100    1.42   1.17   1.22      2.40   1.65   3.04      1.08   1.12   1.24
              200    1.46   1.10   1.16      2.69   1.42   5.27      1.06   1.08   1.14
              400    1.75   1.07   1.08      3.75   1.14   10.93     1.04   1.05   1.08

              100    1.43   1.20   1.13      2.49   2.01   14.24     1.07   1.11   1.12
              200    1.48   1.11   1.10      2.75   2.10   25.71     1.06   1.08   1.09
              400    1.78   1.07   1.06      3.86   1.26   35.72     1.04   1.05   1.06
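The loss-efficiency measure tabulated above can be computed as follows. This is a minimal sketch that assumes the loss of each candidate model along the regularization path has already been evaluated in every simulation run:

```python
import numpy as np

def median_loss_efficiency(path_losses, chosen_idx):
    """Median, over simulation runs, of L(selected model) / min over the path of L.

    path_losses: (n_runs, n_lambda) array of losses along the path in each run.
    chosen_idx:  (n_runs,) index of the path point picked by the selector.
    A value near one indicates the selector nearly matches the best candidate."""
    runs = np.arange(path_losses.shape[0])
    selected = path_losses[runs, chosen_idx]
    best = path_losses.min(axis=1)
    return float(np.median(selected / best))
```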
4.2 Poisson Regression
In this section we present simulation results for GLMs with no dispersion parameter. For GLMs it is less clear how to handle the second tuning parameter of SCAD. Breheny and Huang (2011) recommended an adaptive rescaling technique, but it is unclear how such a procedure affects the performance of the model selection procedures, and initial simulations for Bernoulli data resulted in convergence issues. As a result, we study only the Lasso in this section. The lars package is designed only for linear regression, so we instead work with the R glmpath package (Park and Hastie, 2011), which fits the entire Lasso regularization path for GLMs.
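glmpath computes the exact path by a predictor-corrector method. As a hedged illustration of what fitting a single point on an L1-penalized GLM path involves, the following sketch uses plain proximal gradient (ISTA) for the Poisson case with log link; this is our own illustrative solver, not the glmpath algorithm:

```python
import numpy as np

def poisson_lasso(X, y, lam, step=0.02, n_iter=2000):
    """Proximal gradient (ISTA) for L1-penalized Poisson regression, log link:
    minimize (1/n) * sum_i [exp(x_i'b) - y_i * x_i'b] + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # fitted Poisson means
        grad = X.T @ (mu - y) / n        # gradient of the negative loglikelihood
        z = beta - step * grad           # gradient step ...
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # ... then prox
    return beta
```

The fixed step size is an assumption that works for standardized predictors and moderate counts; a production solver would use a line search or, as glmpath does, follow the whole path.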
We consider a trigonometric example based on one studied in Hurvich and Tsai (1991). We take for and simulate from a Poisson distribution with . The estimated models are Lasso-penalized Poisson regressions where the matrix of predictors, , is a matrix with components defined as in the exponential model. As before, we vary the maximum number of predictors by letting the dimension , and we compare , and . The case with is omitted due to convergence problems with the package.

Although AICc was originally derived for linear regression, its use is commonly recommended in more general settings when the number of predictor variables is large relative to the sample size (Burnham and Anderson, 2002, p. 66). We therefore compare the performance of AICc to 10-fold CV, and , where
and
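For reference, the standard small-sample corrected criterion (see Burnham and Anderson, 2002) takes the following form, where $\ell(\hat\beta)$ is the maximized loglikelihood, $k$ is the number of estimated parameters, and $n$ is the sample size:

$$
\mathrm{AIC} = -2\,\ell(\hat\beta) + 2k,
\qquad
\mathrm{AIC}_c = \mathrm{AIC} + \frac{2k(k+1)}{n - k - 1}.
$$

The correction term vanishes as $n \to \infty$ for fixed $k$, so $\mathrm{AIC}_c$ and $\mathrm{AIC}$ agree asymptotically but can differ substantially when $k$ is large relative to $n$.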
Table 3 reports the median KL loss efficiencies over the 1000 simulations. In all three cases, , , and 10-fold CV show signs of converging to one and have comparable performance, whereas performs noticeably worse and does not show signs of convergence. Figure 4 presents boxplots of the selected numbers of nonzero coefficients. This figure suggests that the poor performance of is due to its tendency to select models that are too sparse. In comparison, the other procedures select models with dimensions closer to the optimal dimension. Overall, these results are consistent with the theoretical findings.
Table 3: Median KL loss efficiency (Lasso)

Selector      n      c=.3   c=.5   c=.8
10-fold CV    100    1.08   1.30   1.17
              200    1.00   1.19   1.17
              400    1.00   1.08   1.09

              100    1.01   1.19   1.15
              200    1.01   1.13   1.10
              400    1.01   1.08   1.08

              100    1.02   1.18   1.10
              200    1.01   1.13   1.07
              400    1.01   1.08   1.06

              100    1.38   1.38   1.14
              200    1.48   1.63   1.27
              400    1.30   1.85   1.45
5 Analysis of a Real Data Set
We now consider the transaction data set from Sela and Simonoff (2012) in order to compare the candidate models chosen by the regularization parameter selectors when applied to a real-world data set. The data contain transactions for third-party sellers on Amazon Web Services, and the goal is to predict the prices at which software titles are sold based on the characteristics of the competing sellers. The target variable is the price premium that a seller can command (the difference between the price at which the good is sold and the average price of all of the competing goods in the marketplace). There are 24 potential predictors, which include the seller's reputation (the total number of comments and the numbers of positive and negative comments received from buyers over different time periods), the length of time that the seller has been in the marketplace, the number of competitors, the quality of competing goods in the marketplace, the average reputation of the competitors, and the average prices of the competing goods. The data set contains 100 observations.
Table 4 reports the results for the information criteria as well as 10-fold CV based on two different runs (and hence two different random divisions of the data), which are referred to as 10-fold CV (1) and 10-fold CV (2). Only six predictor variables were ever selected, so the remaining variables are omitted from the table. It is clear that the selected variables depend heavily on the selection procedure and on the penalty function chosen. In particular, there is a noticeable difference between the variables selected by and , and in all three cases selected a model with no predictors, suggesting that it may be selecting an underfitted model. If we approach this problem from a predictive point of view, we know that there is little advantage to using SCAD over the Lasso, but that the choice of the second tuning parameter can greatly impact the performance of the former. Therefore, we recommend focusing on the Lasso. From the simulations we know that 10-fold CV maintains good performance in a variety of settings. However, it is ten times more expensive to implement than an information criterion, its asymptotic properties are not fully understood in this context, and the randomness involved in the procedure makes it difficult for data analysts to reproduce results. In the case of the Lasso, this last point is reinforced by the change in the selected variables between the two runs of 10-fold CV: in the first run four nonzero coefficients were estimated, while in the second run none were. We recommend proceeding with as the selector of the tuning parameter for the Lasso, as an alternative that avoids these issues.
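A reproducible information-criterion selector of the kind recommended here can be sketched as follows, using the result of Zou et al. (2007) that the number of nonzero lasso coefficients is an unbiased estimate of the degrees of freedom. The BIC-style penalty shown is one illustrative choice of criterion, not necessarily the one used in the paper:

```python
import numpy as np

def ic_select(X, y, path_betas, penalty="bic"):
    """Select a point on a lasso path by an information criterion.

    path_betas: (n_lambda, p) array of coefficient vectors along the path.
    Degrees of freedom = number of nonzero coefficients (Zou et al., 2007).
    Returns the index of the path point minimizing the criterion."""
    n = X.shape[0]
    rss = ((y[None, :] - path_betas @ X.T) ** 2).sum(axis=1)
    df = (path_betas != 0).sum(axis=1)
    mult = np.log(n) if penalty == "bic" else 2.0   # "aic"-style otherwise
    crit = n * np.log(rss / n) + mult * df
    return int(np.argmin(crit))
```

Unlike k-fold CV, this selector involves no random splitting, so rerunning it on the same data always returns the same model.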
Table 4: Variables selected by each procedure (X = selected). The six columns are:
Ave. Comp. Price, Ave. Comp. Condition, Ave. Comp. Rating, Seller Condition,
Negative Comments (30 days), Negative Comments (Lifetime).

SCAD
  10-fold CV (1)    X
  10-fold CV (2)    X
                    X  X  X  X  X
                    X
                    X
                    X  X  X  X  X

SCAD (a = 3.7)
  10-fold CV (1)    X
  10-fold CV (2)    X
                    X  X  X  X  X  X
                    X  X  X  X  X  X
                    X  X  X  X  X  X
                    X  X  X  X  X  X

LASSO
  10-fold CV (1)    X  X  X  X
  10-fold CV (2)
                    X  X  X  X  X
                    X
                    X
                    X
6 Concluding Remarks
This paper studied the asymptotic and finite-sample performance of classical model selection procedures in the context of penalized likelihood estimators, without the assumption that the true model is included amongst the candidate models. We proved that , , , and are efficient selectors of the regularization parameter for regularized regression, and the numerical studies for regularized regression yielded several interesting observations. As anticipated, we found that is outperformed by the efficient model selection procedures, and we demonstrated that , , , and are all sensitive to the number of predictor variables included in the full model, so that their performance can suffer as a result. In light of this issue, we recommend that researchers use a method that is insensitive to the number of variables included in the model. In the simulations, 10-fold CV had the best overall performance. However, the discussion in Section 5 noted some disadvantages of this method, including its computational cost and the variability of its results due to the inherent randomness of the procedure. As an alternative, data analysts can consider using , which was shown here to be an efficient selection procedure for the tuning parameter and which the simulations suggest has performance comparable to that of 10-fold CV. Lastly, the simulations suggest that there is no clear advantage to using SCAD in a world where the "oracle property" does not apply. Combining this with the facts that the Lasso can be fitted using the efficient Lars algorithm and does not involve a second tuning parameter that can greatly impact results, researchers may prefer the Lasso if they believe that they are in the non-true-model world.
To further generalize our results, we also proved that is an efficient selector of the regularization parameter for regularized GLMs with no dispersion parameter, and we used numerical studies to compare its performance to that of , and 10-fold CV. Again, the performance of was noticeably worse than that of the other procedures, and the performances of , and 10-fold CV were comparable to each other, supporting our recommendation of . Extending these results to GLMs with an unknown dispersion parameter is an interesting open problem. In this setting it is necessary to work with extended quasilikelihood methods. Although model selection criteria such as have been proposed in such settings (Hurvich and Tsai, 1995), the extended quasilikelihood is not a true likelihood, so the results of White (1982) and Nishii (1988) do not apply. Investigating the properties of model selection procedures in this setting is an area for future research.
As a final remark, this paper dealt with the case when , and the theoretical results cannot be directly extended to the case when converges to something other than zero. The latter setting has received a great deal of attention in recent literature (in particular ) and is an area for future investigation.
Appendix A
Proof of Lemma 2.2.
By definition