
An analysis of the cost of hyper-parameter selection via split-sample validation, with applications to penalized regression

by   Jean Feng, et al.

In the regression setting, given a set of hyper-parameters, a model-estimation procedure constructs a model from training data. The optimal hyper-parameters that minimize the generalization error of the model are usually unknown, so in practice they are often estimated by split-sample validation. It remains an open question how the generalization error of the selected model grows with the number of hyper-parameters to be estimated. To answer this question, we establish finite-sample oracle inequalities for selection based on a single training/test split and on cross-validation. We show that if the model-estimation procedures are smoothly parameterized by the hyper-parameters, the error incurred from tuning hyper-parameters shrinks at nearly a parametric rate. Hence for semi- and non-parametric model-estimation procedures with a fixed number of hyper-parameters, this additional error is negligible. For parametric model-estimation procedures, adding a hyper-parameter is roughly equivalent to adding a parameter to the model itself. In addition, we specialize these ideas to penalized regression problems with multiple penalty parameters. We show that the fitted models are Lipschitz in the penalty parameters, so our oracle inequalities apply. This result encourages the development of regularization methods with many penalty parameters.
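As a concrete illustration of the setting the abstract describes, the sketch below tunes two penalty parameters of a ridge-style estimator via a single training/validation split. The data, feature-group sizes, and penalty grid are invented for the example and are not taken from the paper; the point is only the mechanics of split-sample selection over multiple penalty parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two feature groups with different signal strengths
# (illustrative setup only, not from the paper).
n, p1, p2 = 200, 5, 5
X = rng.normal(size=(n, p1 + p2))
beta = np.concatenate([np.full(p1, 2.0), np.full(p2, 0.1)])
y = X @ beta + rng.normal(size=n)

# Single training/validation split (split-sample validation).
n_train = n // 2
X_tr, y_tr = X[:n_train], y[:n_train]
X_val, y_val = X[n_train:], y[n_train:]

def fit_ridge(X, y, lam1, lam2):
    """Ridge fit with separate penalties on the two feature groups."""
    D = np.diag([lam1] * p1 + [lam2] * p2)
    return np.linalg.solve(X.T @ X + D, X.T @ y)

def val_mse(lam1, lam2):
    """Validation error of the model fitted on the training half."""
    b = fit_ridge(X_tr, y_tr, lam1, lam2)
    return np.mean((y_val - X_val @ b) ** 2)

# Grid search over the two penalty parameters; keep the pair that
# minimizes validation error.
grid = [10.0 ** k for k in range(-3, 3)]
best = min(
    ((l1, l2) for l1 in grid for l2 in grid),
    key=lambda lams: val_mse(*lams),
)
print("selected penalties:", best)
```

Here each penalty pair indexes one model-estimation procedure, and the validation half of the data plays the role of the test split in the oracle inequalities; swapping the single split for K-fold averaging gives the cross-validated variant.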
