Kernel Ridge Regression via Partitioning

08/05/2016
by Rashish Tandon, et al.

In this paper, we investigate a divide-and-conquer approach to Kernel Ridge Regression (KRR). Given n samples, the division step separates the points according to an underlying disjoint partition of the input space (possibly obtained via clustering) and then computes a KRR estimate for each partition. The conquering step is simple: each partition uses only its own local estimate for prediction. We establish conditions under which we can give generalization bounds for this estimator, as well as achieve optimal minimax rates. We also show that the approximation-error component of the generalization error is smaller than when a single KRR estimate is fit to the data, thus providing both statistical and computational advantages over a single KRR estimate on the entire data (or an average over random partitions, as in other recent work [30]). Lastly, we provide experimental validation for our proposed estimator and our assumptions.
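The divide-and-conquer recipe described in the abstract (cluster the inputs, fit one KRR estimate per partition, predict with the local estimate) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the class name `PartitionedKRR` and the hyperparameters (`n_partitions`, `alpha`, `gamma`), as well as the choice of k-means for the partitioning, are assumptions for the sake of the example.

```python
# Hypothetical sketch of a partitioned KRR estimator: the division step
# clusters the inputs and fits one kernel ridge regressor per cluster;
# the conquering step predicts each point with its own cluster's model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge


class PartitionedKRR:
    def __init__(self, n_partitions=4, alpha=1.0, gamma=1.0):
        self.n_partitions = n_partitions  # number of cells in the partition
        self.alpha = alpha                # ridge regularization strength
        self.gamma = gamma                # RBF kernel bandwidth parameter

    def fit(self, X, y):
        # Division step: partition the input space via clustering.
        self.clusterer = KMeans(
            n_clusters=self.n_partitions, n_init=10, random_state=0
        ).fit(X)
        labels = self.clusterer.labels_
        # Fit a local KRR estimate on each partition's points only.
        self.models = {}
        for k in range(self.n_partitions):
            mask = labels == k
            self.models[k] = KernelRidge(
                alpha=self.alpha, kernel="rbf", gamma=self.gamma
            ).fit(X[mask], y[mask])
        return self

    def predict(self, X):
        # Conquering step: route each point to its partition's estimate.
        labels = self.clusterer.predict(X)
        yhat = np.empty(len(X))
        for k in range(self.n_partitions):
            mask = labels == k
            if mask.any():
                yhat[mask] = self.models[k].predict(X[mask])
        return yhat


# Toy usage on a 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=400)
model = PartitionedKRR(n_partitions=4, alpha=0.1, gamma=2.0).fit(X, y)
pred = model.predict(X)
```

Since each local solve involves only its partition's points, the per-partition kernel systems are much smaller than the full n-by-n system, which is the source of the computational advantage the abstract refers to.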


Related research

05/22/2013  Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
  We establish optimal convergence rates for a decomposition-based scalabl...

05/24/2021  Uncertainty quantification for distributed regression
  The ever-growing size of the datasets renders well-studied learning tech...

07/13/2021  Oversampling Divide-and-conquer for Response-skewed Kernel Ridge Regression
  The divide-and-conquer method has been widely used for estimating large-...

07/16/2021  Intrinsic Dimension Adaptive Partitioning for Kernel Methods
  We prove minimax optimal learning rates for kernel ridge regression, res...

06/23/2021  ParK: Sound and Efficient Kernel Ridge Regression by Feature Space Partitions
  We introduce ParK, a new large-scale solver for kernel ridge regression....

05/27/2021  Lattice partition recovery with dyadic CART
  We study piece-wise constant signals corrupted by additive Gaussian nois...

10/24/2016  Parallelizing Spectral Algorithms for Kernel Learning
  We consider a distributed learning approach in supervised learning for a...
