M-Power Regularized Least Squares Regression

10/09/2013
by Julien Audiffren, et al.

Regularization is used to find a solution that both fits the data and is sufficiently smooth, which makes it very effective for designing and refining learning algorithms. Yet the influence of the regularization exponent remains poorly understood. In particular, it is unclear how the exponent of the reproducing kernel Hilbert space (RKHS) regularization term affects the accuracy and efficiency of kernel-based learning algorithms. Here we consider regularized least squares regression (RLSR) with an RKHS regularization raised to the power m, where m is a variable real exponent. We design an efficient algorithm for solving the associated minimization problem, provide a theoretical analysis of its stability, and compare it, in terms of computational complexity, speed of convergence, and prediction accuracy, to the classical kernel ridge regression algorithm, in which the regularization exponent m is fixed at 2. Our results show that the m-power RLSR problem can be solved efficiently, and they support the suggestion that one can use a regularization term that grows significantly more slowly than the standard quadratic growth in the RKHS norm.
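The abstract does not spell out the optimization problem, but the m-power RLSR objective it describes is min_f (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^m. Below is a minimal NumPy sketch, assuming a representer-theorem expansion f = sum_j a_j K(., x_j), so that ||f||_H^2 = a^T K a. The plain gradient-descent solver, the heuristic step size, and all parameter values are illustrative assumptions, not the efficient algorithm the paper designs; the m = 2 case recovers kernel ridge regression, whose closed form is included for comparison.

```python
import numpy as np

def mpower_rlsr(K, y, lam, m, n_iter=20000, tol=1e-9):
    """Gradient-descent sketch (not the paper's algorithm) for m-power RLSR.

    Minimizes J(a) = (1/n) * ||K a - y||^2 + lam * (a^T K a)^(m/2),
    i.e. empirical squared loss plus lam * ||f||_H^m for
    f = sum_j a_j K(., x_j). Convex for m >= 1.
    """
    n = K.shape[0]
    # Heuristic step size from the Lipschitz constant of the smooth loss term.
    lr = 1.0 / ((2.0 / n) * np.linalg.eigvalsh(K)[-1] ** 2 + lam * m + 1e-12)
    a = np.zeros(n)
    for _ in range(n_iter):
        Ka = K @ a
        norm_sq = max(float(a @ Ka), 1e-12)  # ||f||_H^2, guarded at zero
        grad = (2.0 / n) * (K @ (Ka - y)) \
            + lam * m * norm_sq ** ((m - 2.0) / 2.0) * Ka
        a_next = a - lr * grad
        if np.linalg.norm(a_next - a) < tol:
            return a_next
        a = a_next
    return a

def kernel_ridge(K, y, lam):
    """m = 2 special case: closed-form kernel ridge regression."""
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

# Toy usage: Gaussian kernel on 1-d data, m = 1.5 versus the classical m = 2.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 60)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(60)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.5)
a_m = mpower_rlsr(K, y, lam=0.05, m=1.5)
a_2 = kernel_ridge(K, y, lam=0.05)
```

Taking m < 2 in this sketch penalizes large RKHS norms more gently than the quadratic ridge term, which is the regime the abstract suggests can be advantageous; the gradient of the regularizer vanishes at a = 0, so the iteration is well defined from a zero start even for m < 2.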


Related research

- 06/01/2020, Analysis of Least Squares Regularized Regression in Reproducing Kernel Krein Spaces: "In this paper, we study the asymptotical properties of least squares reg..."
- 03/12/2018, Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces: "We investigate regularized algorithms combining with projection for leas..."
- 05/09/2012, L2 Regularization for Learning Kernels: "The choice of the kernel is critical to the success of many learning alg..."
- 07/03/2017, Generalization Properties of Doubly Online Learning Algorithms: "Doubly online learning algorithms are scalable kernel methods that perfo..."
- 08/11/2016, Distributed learning with regularized least squares: "We study distributed learning with the least squares regularization sche..."
- 07/19/2023, Weighted inhomogeneous regularization for inverse problems with indirect and incomplete measurement data: "Regularization promotes well-posedness in solving an inverse problem wit..."
- 10/24/2016, Parallelizing Spectral Algorithms for Kernel Learning: "We consider a distributed learning approach in supervised learning for a..."
