Reproducing Kernel Banach Spaces with the l1 Norm II: Error Analysis for Regularized Least Square Regression

01/24/2011
by Guohui Song et al.

A typical approach to estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error, and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem has the advantage of automatically eliminating the hypothesis error from this sum. Following this direction, we illustrate how reproducing kernel Banach spaces with the l1 norm can be applied to improve the learning rate estimate for l1-regularization in machine learning.
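As a schematic reminder of the decomposition the abstract refers to (the notation below is illustrative, not taken from the paper): writing $f_z$ for the regularized estimator and $f_\rho$ for the target regression function, the excess risk is typically bounded as

$$\mathcal{E}(f_z) - \mathcal{E}(f_\rho) \le S(z, \lambda) + P(\lambda) + D(\lambda),$$

where $S$ is the sampling error, $P$ the hypothesis error, and $D$ the regularization error. A hypothesis space satisfying the linear representer theorem lets the middle term $P(\lambda)$ be dropped.

The paper's contribution is theoretical, but the learning scheme it analyzes is l1-regularized least squares, i.e. the lasso objective $\min_w \frac{1}{2n}\|Xw - y\|_2^2 + \lambda \|w\|_1$ in its finite-dimensional form. The sketch below is a minimal, generic ISTA (iterative soft-thresholding) solver for that objective; it is not code from the paper, and the function names, step-size choice, and synthetic data are our own assumptions.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrinks each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, n_iters=500):
    # Minimizes (1/2n)||Xw - y||^2 + lam*||w||_1 by iterative
    # soft-thresholding (ISTA) with a fixed step size 1/L.
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Usage on synthetic data with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat = ista_lasso(X, y, lam=0.1)
print("recovered support:", np.flatnonzero(np.abs(w_hat) > 1e-3))

The soft-thresholding step is where the l1 norm induces the sparsity that motivates the reproducing kernel Banach space construction in the first place.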


