Distributed learning with regularized least squares

08/11/2016
by Shao-Bo Lin, et al.

We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS). Following a divide-and-conquer approach, the algorithm partitions a data set into disjoint subsets, applies the least squares regularization scheme to each subset to produce an output function, and then averages the individual output functions into a final global estimator or predictor. We show, via error bounds in expectation in both the L^2-metric and the RKHS-metric, that the global output function of this distributed learning scheme is a good approximation to the one obtained by processing the whole data set on a single machine. Our error bounds are sharp and hold in a general setting without any eigenfunction assumption. The analysis rests on a novel second-order decomposition of operator differences in our integral operator approach. Even for the classical least squares regularization scheme in the RKHS associated with a general kernel, we obtain the best learning rate in the literature.
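The divide-and-conquer scheme described above can be sketched numerically. Below is a minimal Python sketch, assuming a Gaussian kernel and illustrative values for the regularization parameter lam, the kernel width sigma, and the number of subsets m; these choices, and all function names, are ours for illustration and do not come from the paper. Each local machine solves the regularized least squares system (K + lam*n*I) alpha = y on its own subset, and the global predictor is the plain average of the local predictors.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def krr_fit(X, y, lam, sigma=1.0):
    """Least squares regularization scheme on one data subset:
    solve (K + lam * n * I) alpha = y for the RKHS of the Gaussian kernel."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, alpha

def krr_predict(model, X_test, sigma=1.0):
    """Evaluate a fitted local output function at the test points."""
    X_train, alpha = model
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

def distributed_krr(X, y, m, lam, sigma=1.0):
    """Divide-and-conquer: partition the data into m disjoint subsets,
    fit the regularization scheme on each, and average the predictors."""
    models = [krr_fit(Xs, ys, lam, sigma)
              for Xs, ys in zip(np.array_split(X, m), np.array_split(y, m))]
    return lambda X_test: np.mean(
        [krr_predict(mod, X_test, sigma) for mod in models], axis=0)

# Toy usage: regress a noisy sine on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(1000)
predictor = distributed_krr(X, y, m=10, lam=1e-3, sigma=0.2)
X_test = np.linspace(0, 1, 5)[:, None]
print(predictor(X_test))
```

Note that averaging is the only communication step: each subset's estimator is fitted independently, which is what makes the scheme attractive when the full data set is too large to process on a single machine.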
