On the use of information criteria for subset selection in least squares regression

11/22/2019
by Sen Tian, et al.

Least squares (LS) based subset selection methods are popular in linear regression modeling when the number of predictors is less than the number of observations. Best subset selection (BS) is known to be NP-hard and has a computational cost that grows exponentially with the number of predictors. Forward stepwise selection (FS) is a greedy heuristic for BS. Both methods rely on cross-validation (CV) to select the subset size k, which requires fitting the procedures multiple times and yields a selected k that is random across replications. Compared to CV, information criteria only require fitting the procedures once, and we show that for LS-based methods they can result in better predictive performance while providing a non-random choice of k. However, information criteria require knowledge of the effective degrees of freedom of the fitting procedure, which is generally not available analytically for complex methods. In this paper, we propose a novel LS-based method, best orthogonalized subset selection (BOSS), which performs BS on an orthogonalized basis of ordered predictors. Assuming orthogonal predictors, we build a connection between BS and its Lagrangian formulation (i.e., minimization of the residual sum of squares plus the product of a regularization parameter and k), and based on this connection introduce a heuristic degrees of freedom (hdf) for BOSS that can be estimated via an analytically based expression. We show in both simulations and real data analysis that BOSS using the Kullback-Leibler based information criterion AICc-hdf has the strongest performance of all the LS-based methods considered and is competitive with regularization methods, with the computational effort of a single ordinary LS fit.
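To fix ideas, here is a sketch of the Lagrangian formulation of BS referred to in the abstract, together with the general form of the corrected AIC into which an effective degrees of freedom is plugged; the paper's exact constants and notation may differ. The Lagrangian form trades off fit against subset size:

\[
\hat{\beta}_{\lambda} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}} \; \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_0,
\]

where \(\lVert \beta \rVert_0\) counts the nonzero coefficients, i.e., the subset size k. A Kullback-Leibler based criterion of AICc type, with effective degrees of freedom df, is commonly written as

\[
\mathrm{AICc} = n \log\!\left(\frac{\mathrm{RSS}}{n}\right) + \frac{n\,(n + \mathrm{df})}{n - \mathrm{df} - 2}.
\]

AICc-hdf is then obtained by substituting the analytically estimable heuristic degrees of freedom (hdf) of BOSS for df, so that the subset size k can be chosen from a single fit of the procedure rather than repeated CV fits.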
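The following is a minimal Python sketch of the BOSS idea described above; it is not the authors' implementation, and the function name boss_path, the forward-stepwise ordering, and the plain NumPy QR orthogonalization are expository choices made here. It exploits the fact that, on an orthogonal basis, the best size-k subset simply keeps the k components with the largest squared projections of y.

```python
import numpy as np

def boss_path(X, y):
    """Sketch of best orthogonalized subset selection (BOSS).

    Orders the predictors greedily (here, by forward stepwise),
    orthogonalizes the ordered predictors via QR, and performs best
    subset selection on the orthogonal basis, where the best size-k
    subset keeps the k components with the largest squared projections
    of y. Returns a (p+1) x p array whose k-th row holds the size-k
    fit expressed in the original X coordinates.
    """
    n, p = X.shape
    # Step 1: greedy forward-stepwise ordering of the predictors.
    remaining, order = list(range(p)), []
    r = y.copy()
    for _ in range(p):
        # Pick the predictor most correlated with the current residual.
        scores = [abs(X[:, j] @ r) / np.linalg.norm(X[:, j]) for j in remaining]
        j = remaining[int(np.argmax(scores))]
        order.append(j)
        remaining.remove(j)
        # Update the residual by projecting y off the chosen columns.
        Q, _ = np.linalg.qr(X[:, order])
        r = y - Q @ (Q.T @ y)
    # Step 2: orthogonalize the ordered predictors.
    Q, R = np.linalg.qr(X[:, order])
    z = Q.T @ y                    # projections onto the orthogonal basis
    ranked = np.argsort(-z ** 2)   # BS on an orthogonal basis = keep the largest
    # Step 3: best subset of each size k on the orthogonal basis.
    coefs = np.zeros((p + 1, p))
    for k in range(1, p + 1):
        gamma = np.zeros(p)
        gamma[ranked[:k]] = z[ranked[:k]]
        # Map back to original coordinates: X[:, order] @ beta = Q @ gamma
        # whenever R @ beta = gamma.
        coefs[k, np.array(order)] = np.linalg.solve(R, gamma)
    return coefs

# Hypothetical usage on simulated data:
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[:3] = [3.0, 2.0, 1.0]
y = X @ beta_true + rng.standard_normal(100)
coef_path = boss_path(X, y)  # row k holds the size-k BOSS fit
```

In a full implementation, one would then evaluate AICc-hdf at each row of the returned path and keep the k that minimizes it, which is what makes the whole procedure cost roughly a single ordinary LS fit.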


