Model Selection and Estimation via the Multi Screen Penalty
We propose a multi-step method, called the Multi Screen Penalty (MSP), for estimating high-dimensional sparse linear models. MSP applies a sequence of small, adaptive penalties to estimate the regression coefficients iteratively. This structure substantially improves model selection and estimation accuracy: MSP selects the true model exactly even when the irrepresentable condition fails, and under mild regularity conditions the MSP estimator attains the rate √(q_n/n) as an upper bound on the ℓ_2-norm error. At each step, MSP restricts selection to the reduced parameter space obtained from the previous step, which keeps its computational complexity roughly comparable to that of the Lasso. The algorithm is stable and achieves high accuracy over a range of small tuning parameters, eliminating the need for cross-validation. Numerical comparisons show that the method is effective in both model selection and estimation and nearly uniformly outperforms competing methods. We apply MSP and competing methods to financial data; MSP succeeds in asset selection and yields more stable fits with lower fitted and prediction errors.
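The abstract does not spell out the algorithm, but the core idea it describes — run a penalized fit with a small penalty, keep only the surviving coordinates, and repeat on that reduced parameter space — can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' MSP procedure: it uses plain ISTA (proximal gradient with soft-thresholding) as the penalized solver, and the penalty sequence, iteration count, and screening threshold are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: prox of the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def multi_screen_sketch(X, y, penalties=(0.5, 0.1, 0.02), n_iter=200):
    """Hypothetical multi-step screening estimator (NOT the paper's exact MSP).

    Each step solves an l1-penalized least-squares problem by ISTA with a
    small penalty, restricted to the coordinates that survived the
    previous step.
    """
    n, p = X.shape
    active = np.arange(p)          # start from the full parameter space
    beta = np.zeros(p)
    for lam in penalties:          # a series of small, shrinking penalties
        Xa = X[:, active]
        b = np.zeros(active.size)
        # Step size 1/L, with L the Lipschitz constant of the gradient.
        step = n / (np.linalg.norm(Xa, 2) ** 2)
        for _ in range(n_iter):
            grad = Xa.T @ (Xa @ b - y) / n
            b = soft_threshold(b - step * grad, step * lam)
        beta = np.zeros(p)
        beta[active] = b
        keep = np.abs(b) > 1e-8    # screen: keep the nonzero coordinates
        active = active[keep]      # reduced parameter space for the next step
        if active.size == 0:
            break
    return beta, active
```

On well-separated synthetic data this kind of two-stage screening keeps the strong signals while the shrinking penalty reduces the l1 bias on the final refit; the specific penalty schedule here is illustrative only.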