Distributed Learning with Dependent Samples

02/10/2020
by Shao-Bo Lin, et al.

This paper focuses on the learning rate analysis of distributed kernel ridge regression for strong mixing sequences. Using a recently developed integral operator approach together with a classical covariance inequality for Banach-valued strong mixing sequences, we derive the optimal learning rate for distributed kernel ridge regression. As a byproduct, we also deduce a sufficient condition on the mixing property that guarantees the optimal learning rate for kernel ridge regression. Our results extend the applicable range of distributed learning from i.i.d. samples to non-i.i.d. sequences.
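The distributed (divide-and-conquer) kernel ridge regression scheme analyzed in papers of this kind can be sketched as follows: split the sample into blocks, fit a local KRR estimator on each block, then average the local predictions. This is a minimal illustrative sketch, not the paper's code; the Gaussian kernel, the regularization scaling, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def krr_fit(X, y, lam, gamma):
    # Kernel ridge regression on one local block with a Gaussian kernel:
    # solve (K + lam * m * I) alpha = y for the local coefficient vector.
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    m = len(y)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, alpha, X_test, gamma):
    # Evaluate the local estimator at the test points.
    K = np.exp(-gamma * ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1))
    return K @ alpha

def distributed_krr(X, y, n_machines, lam, gamma, X_test):
    # Divide-and-conquer KRR: each "machine" sees one block of the
    # sequence; the global estimator is the average of local predictions.
    preds = []
    for Xb, yb in zip(np.array_split(X, n_machines),
                      np.array_split(y, n_machines)):
        alpha = krr_fit(Xb, yb, lam, gamma)
        preds.append(krr_predict(Xb, alpha, X_test, gamma))
    return np.mean(preds, axis=0)
```

For i.i.d. samples this averaging is known to retain the optimal rate of a single KRR estimator trained on the full sample; the abstract's contribution is extending such guarantees to strong mixing (dependent) sequences.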


