Does generalization performance of l^q regularization learning depend on q? A negative example

07/25/2013
by Shaobo Lin, et al.

l^q-regularization has been demonstrated to be an attractive technique in machine learning and statistical modeling. It attempts to improve the generalization (prediction) capability of a machine (model) by appropriately shrinking its coefficients. The shape of an l^q estimator varies with the choice of the regularization order q. In particular, l^1 leads to the LASSO estimate, while l^2 corresponds to smooth ridge regression. This makes the order q a potential tuning parameter in applications. To facilitate the use of l^q-regularization, we seek a modeling strategy in which an elaborate selection of q can be avoided. In this spirit, we place our investigation within a general framework of l^q-regularized kernel learning under a sample dependent hypothesis space (SDHS). For a designated class of kernel functions, we show that all l^q estimators for 0 < q < ∞ attain similar generalization error bounds. These bounds are almost optimal in the sense that, up to a logarithmic factor, the upper and lower bounds are asymptotically identical. This finding tentatively reveals that, in some modeling contexts, the choice of q may not have a strong impact on the generalization capability. From this perspective, q can be specified arbitrarily, or chosen according to other, non-generalization criteria such as smoothness, computational complexity, or sparsity.
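
As a concrete illustration of the setting described above, the sketch below fits an l^q coefficient-regularized kernel estimator over a sample dependent hypothesis space, i.e. it minimizes (1/m) Σ_j (f(x_j) - y_j)^2 + λ Σ_i |a_i|^q over functions f(x) = Σ_i a_i K(x, x_i) built on the sample itself. The Gaussian kernel, the synthetic data, and the derivative-free Powell solver are illustrative assumptions for this sketch, not the paper's construction or its designated kernel class.

```python
# Minimal sketch of l^q coefficient regularization in a sample dependent
# hypothesis space (SDHS): the estimator is f(x) = sum_i a_i K(x, x_i),
# with coefficients a chosen to minimize
#     (1/m) * sum_j (f(x_j) - y_j)^2 + lam * sum_i |a_i|^q .
# Kernel choice, data, and solver below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Z, width=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def lq_kernel_estimator(X, y, q=0.5, lam=1e-2, width=1.0):
    """Return coefficients a of the l^q-regularized kernel estimator."""
    m = len(y)
    K = gaussian_kernel(X, X, width)

    def objective(a):
        residual = K @ a - y
        return residual @ residual / m + lam * np.sum(np.abs(a) ** q)

    # For q < 1 the penalty is non-convex and non-smooth at zero, so a
    # derivative-free search started from the ridge (q = 2) solution is used.
    a0 = np.linalg.solve(K + lam * m * np.eye(m), y)
    res = minimize(objective, a0, method="Powell")
    return res.x

# Usage: fit a noisy 1-d regression sample and predict on new inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
a = lq_kernel_estimator(X, y, q=0.5)
X_new = np.linspace(-1, 1, 5).reshape(-1, 1)
y_hat = gaussian_kernel(X_new, X, 1.0) @ a
```

Changing q here (e.g. q = 1 for a LASSO-type penalty or q = 2 for ridge) alters the shape and sparsity of the coefficient vector, which is precisely the sense in which the paper asks whether the resulting generalization error actually depends on q.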
