Learning rates of l^q coefficient regularization learning with Gaussian kernel

12/19/2013
by Shaobo Lin, et al.

Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and l^q regularization schemes with 0<q<∞ are among the most widely used. It is known that different values of q lead to estimators with different properties: for example, l^2 regularization yields smooth estimators, while l^1 regularization yields sparse estimators. How, then, does the generalization capability of l^q regularization learning vary with q? In this paper, we study this problem within the framework of statistical learning theory and show that implementing l^q coefficient regularization schemes in the sample-dependent hypothesis space associated with the Gaussian kernel attains the same almost optimal learning rates for all 0<q<∞. That is, the upper and lower bounds on the learning rates of l^q regularization learning are asymptotically identical for all 0<q<∞. Our finding tentatively reveals that, in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be specified arbitrarily, or chosen according to other, non-generalization criteria such as smoothness, computational complexity, or sparsity.
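To make the scheme concrete, the sketch below illustrates l^q coefficient regularization in the sample-dependent hypothesis space spanned by Gaussian kernels centered at the training inputs: the estimator f(x) = Σ_j a_j exp(-||x - x_j||²/(2σ²)) is fit by minimizing the empirical squared loss plus λ Σ_j |a_j|^q. This is only a minimal illustration under stated assumptions; the helper names (gaussian_gram, fit_lq_coefficients), the hyperparameter values, and the use of SciPy's derivative-free Powell optimizer are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of l^q coefficient regularization with a Gaussian kernel.
# Assumptions: helper names, hyperparameters, and the Powell optimizer are
# illustrative choices, not taken from the paper.
import numpy as np
from scipy.optimize import minimize


def gaussian_gram(X, centers, sigma):
    """Gram matrix K[i, j] = exp(-||X_i - centers_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def fit_lq_coefficients(X, y, q=1.0, lam=1e-2, sigma=1.0):
    """Minimize (1/m) * ||K a - y||^2 + lam * sum_j |a_j|^q over a."""
    m = len(y)
    K = gaussian_gram(X, X, sigma)

    def objective(a):
        resid = K @ a - y
        return resid @ resid / m + lam * np.sum(np.abs(a) ** q)

    # Powell is derivative-free, so it tolerates the non-smooth penalty (q <= 1).
    res = minimize(objective, np.zeros(m), method="Powell")
    return res.x


def predict(X_new, X_train, a, sigma=1.0):
    return gaussian_gram(X_new, X_train, sigma) @ a


# Toy 1-D regression example comparing a few values of q.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)
for q in (0.5, 1.0, 2.0):
    a = fit_lq_coefficients(X, y, q=q)
    mse = np.mean((predict(X, X, a) - y) ** 2)
    print(f"q={q}: training MSE = {mse:.4f}, |a_j| > 1e-6: {np.sum(np.abs(a) > 1e-6)}")
```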


