
Does generalization performance of l^q regularization learning depend on q? A negative example

by Shaobo Lin et al.

l^q-regularization has been demonstrated to be an attractive technique in machine learning and statistical modeling. It attempts to improve the generalization (prediction) capability of a machine (model) by appropriately shrinking its coefficients. The shape of an l^q estimator differs with the choice of the regularization order q. In particular, l^1 leads to the LASSO estimate, while l^2 corresponds to smooth ridge regression. This makes the order q a potential tuning parameter in applications. To facilitate the use of l^q-regularization, we seek a modeling strategy in which an elaborate selection of q is avoidable. In this spirit, we place our investigation within a general framework of l^q-regularized kernel learning under a sample dependent hypothesis space (SDHS). For a designated class of kernel functions, we show that all l^q estimators for 0 < q < ∞ attain similar generalization error bounds. These bounds are almost optimal in the sense that, up to a logarithmic factor, the upper and lower bounds are asymptotically identical. This finding tentatively reveals that, in some modeling contexts, the choice of q might not have a strong impact on generalization capability. From this perspective, q can be specified arbitrarily, or chosen by other, non-generalization criteria such as smoothness, computational complexity, or sparsity.
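To make the setting concrete, here is a minimal sketch (not the paper's actual estimator) of l^q coefficient regularization in a sample dependent hypothesis space: the hypothesis is a kernel expansion over the sample points, and the coefficients are fit by minimizing empirical error plus an l^q penalty. The toy data, Gaussian kernel width, regularization strength, and use of a generic derivative-free optimizer are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(2*pi*x) + noise (illustrative choice)
n = 30
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

def gauss_kernel(a, b, width=0.2):
    """Gaussian kernel matrix; the SDHS is spanned by k(., x_i)."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

K = gauss_kernel(x, x)

def lq_objective(c, q, lam):
    """Empirical squared error plus l^q penalty on the coefficients."""
    resid = y - K @ c
    return resid @ resid / n + lam * np.sum(np.abs(c) ** q)

def fit(q, lam=1e-3):
    # Derivative-free optimizer, since the penalty is nonsmooth for q <= 1
    res = minimize(lq_objective, np.zeros(n), args=(q, lam),
                   method="Powell", options={"maxiter": 200})
    return res.x

# Compare held-out error of the fitted expansions across several orders q
xt = np.linspace(0, 1, 200)
Kt = gauss_kernel(xt, x)
errors = {}
for q in (0.5, 1.0, 2.0):
    c = fit(q)
    errors[q] = np.mean((Kt @ c - np.sin(2 * np.pi * xt)) ** 2)
```

In the spirit of the paper's result, the held-out errors in `errors` for the different orders q tend to be of a similar magnitude on such smooth targets, though this sketch is a numerical illustration rather than evidence for the theoretical bounds.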
