Distributed Gradient Descent for Functional Learning

05/12/2023
by Zhan Yu, et al.

In recent years, distributed learning schemes have received increasing attention for their advantages in handling large-scale data. To address the big-data challenges that arise in functional data analysis, we propose a novel distributed gradient descent functional learning (DGDFL) algorithm that processes functional data across numerous local machines (processors) in the framework of reproducing kernel Hilbert spaces. Based on integral operator approaches, we provide the first theoretical analysis of the DGDFL algorithm from several perspectives. As a first step toward understanding DGDFL, we propose and comprehensively study a data-based gradient descent functional learning (GDFL) algorithm for the single-machine model. Under mild conditions, we obtain confidence-based optimal learning rates for DGDFL without the saturation restriction on the regularity index suffered by previous works on functional regression. We further provide a semi-supervised DGDFL approach that weakens the restriction on the maximal number of local machines required to ensure optimal rates. To the best of our knowledge, DGDFL provides the first distributed iterative training approach to functional learning and enriches the methodology of functional data analysis.
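To illustrate the divide-and-conquer structure the abstract describes, the sketch below runs a toy version of kernel gradient descent for functional regression on each local subset of data and then averages the local estimators. This is a minimal illustration, not the authors' algorithm or code: the Gaussian kernel on L2 distances between discretized curves, the step size, the iteration count, and all names (l2_gram, gdfl, dgdfl_predict) are illustrative assumptions.

```python
# Minimal sketch of distributed kernel gradient descent for functional
# regression. Illustrative only; not the paper's DGDFL implementation.
import numpy as np

def l2_gram(X, Z, grid, sigma=1.0):
    """Gaussian-kernel Gram matrix on L2(grid) distances between curves."""
    # Squared L2 distance via trapezoidal integration over the common grid.
    d2 = np.array([[np.trapz((x - z) ** 2, grid) for z in Z] for x in X])
    return np.exp(-d2 / (2.0 * sigma ** 2))

def gdfl(X, y, grid, step=0.5, iters=200, sigma=1.0):
    """Single-machine gradient descent functional learning (GDFL) analogue.

    The iterate stays in the span of {K(x_i, .)}, so we track its
    coefficient vector a and update it by kernel gradient descent on
    the least-squares risk.
    """
    n = len(y)
    G = l2_gram(X, X, grid, sigma)
    a = np.zeros(n)
    for _ in range(iters):
        residual = G @ a - y          # f_t(x_i) - y_i for all samples i
        a -= (step / n) * residual    # gradient step in coefficient form
    return a

def dgdfl_predict(X, y, X_test, grid, n_machines=4, **kw):
    """Distributed variant: run GDFL per local subset, average predictions."""
    preds = []
    for Xj, yj in zip(np.array_split(X, n_machines),
                      np.array_split(y, n_machines)):
        a = gdfl(Xj, yj, grid, **kw)
        preds.append(l2_gram(X_test, Xj, grid, kw.get("sigma", 1.0)) @ a)
    return np.mean(preds, axis=0)     # simple unweighted averaging

# Toy functional regression: y = <beta, X>_{L2} + noise.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)

def sample_curves(n):
    # Random curves from a small Fourier basis with Gaussian coefficients.
    c = rng.normal(size=(n, 3))
    return (c[:, [0]] * np.sin(2 * np.pi * grid)
            + c[:, [1]] * np.cos(2 * np.pi * grid)
            + c[:, [2]] * np.ones_like(grid))

beta = np.sin(2 * np.pi * grid)
X_train, X_test = sample_curves(200), sample_curves(50)
y_train = np.trapz(X_train * beta, grid, axis=1) + 0.1 * rng.normal(size=200)
y_true = np.trapz(X_test * beta, grid, axis=1)
y_hat = dgdfl_predict(X_train, y_train, X_test, grid, n_machines=4)
print("test RMSE:", np.sqrt(np.mean((y_hat - y_true) ** 2)))
```

Averaging local estimators is the standard divide-and-conquer device; the paper's analysis concerns how large the number of local machines can be while this averaging still attains optimal learning rates.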


Related research

06/16/2020 · Estimates on Learning Rates for Multi-Penalty Distribution Regression
This paper is concerned with functional learning by utilizing two-stage ...

06/08/2022 · Unified RKHS Methodology and Analysis for Functional Linear and Single-Index Models
Functional linear and single-index models are core regression methods in...

08/26/2022 · Coefficient-based Regularized Distribution Regression
In this paper, we consider the coefficient-based regularized distributio...

09/25/2022 · Capacity dependent analysis for functional online learning algorithms
This article provides convergence analysis of online stochastic gradient...

01/12/2013 · Functional Regularized Least Squares Classification with Operator-valued Kernels
Although operator-valued kernels have recently received increasing inter...

02/07/2020 · On the Effectiveness of Richardson Extrapolation in Machine Learning
Richardson extrapolation is a classical technique from numerical analysi...

10/14/2022 · Latent process models for functional network data
Network data are often sampled with auxiliary information or collected t...
