Online Regularized Learning Algorithm for Functional Data

11/24/2022
by Yuan Mao, et al.

In recent years, functional linear models have attracted growing attention in statistics and machine learning, where the goal is to recover the slope function or its functional predictor. This paper considers an online regularized learning algorithm for functional linear models in reproducing kernel Hilbert spaces. Convergence analyses of the excess prediction error and the estimation error are provided for polynomially decaying step-sizes and constant step-sizes, respectively. Fast convergence rates can be derived via a capacity-dependent analysis. By introducing an explicit regularization term, we raise the saturation boundary of unregularized online learning algorithms when the step-size decays polynomially, and establish fast convergence rates for the estimation error without any capacity assumption. In contrast, obtaining capacity-independent convergence rates for the estimation error of the unregularized online learning algorithm with a decaying step-size remains an open problem. We also show that, with a constant step-size, the convergence rates of both the prediction error and the estimation error are competitive with those in the literature.
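The algorithm in question is an online (stochastic-gradient) update on the least-squares loss with an explicit Tikhonov regularization term, carried out in the reproducing kernel Hilbert space assumed to contain the slope function. The sketch below is only a minimal illustration of such an update for a functional linear model with covariate curves discretized on a grid; the Gaussian kernel, the quadrature rule, the function name `online_regularized_fda`, and the step-size schedule eta_t = eta0 * t^(-theta) are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def gaussian_kernel(s, t, gamma=10.0):
    # Gaussian reproducing kernel on [0, 1] (an assumed choice, not prescribed by the paper)
    return np.exp(-gamma * (s - t) ** 2)

def online_regularized_fda(X, y, lam=1e-3, eta0=1.0, theta=0.5):
    """One pass of regularized online least squares for Y = <beta, X>_{L^2} + noise,
    with the slope estimate kept in the RKHS of the Gaussian kernel.

    X:   (n, m) covariate curves observed on a uniform grid of m points in [0, 1]
    y:   (n,) scalar responses
    lam: regularization parameter; eta_t = eta0 * t**(-theta) is the decaying step-size.
    """
    n, m = X.shape
    grid = np.linspace(0.0, 1.0, m)
    Kmat = gaussian_kernel(grid[:, None], grid[None, :])  # kernel matrix on the grid
    w = 1.0 / m                                           # quadrature weight for L^2 inner products
    f = np.zeros(m)                                       # current slope estimate f_t on the grid
    for t in range(1, n + 1):
        x_t, y_t = X[t - 1], y[t - 1]
        pred = w * f @ x_t              # <f_t, X_t>_{L^2}, approximated by the quadrature rule
        grad_dir = w * (Kmat @ x_t)     # (L_K X_t)(grid): RKHS gradient direction of the squared loss
        eta_t = eta0 * t ** (-theta)
        # regularized online update: f_{t+1} = (1 - eta_t*lam) f_t - eta_t (pred - y_t) L_K X_t
        f = (1.0 - eta_t * lam) * f - eta_t * (pred - y_t) * grad_dir
    return grid, f

# Toy usage: recover beta*(s) = sin(2*pi*s) from noisy integral responses.
rng = np.random.default_rng(0)
n, m = 5000, 100
s = np.linspace(0.0, 1.0, m)
X = np.cumsum(rng.standard_normal((n, m)), axis=1) / np.sqrt(m)  # rough Brownian-motion-like curves
beta_star = np.sin(2.0 * np.pi * s)
y = X @ beta_star / m + 0.1 * rng.standard_normal(n)
grid, f_hat = online_regularized_fda(X, y, lam=1e-3, eta0=2.0, theta=0.5)
```

A constant step-size variant, also analyzed in the paper, would simply fix eta_t at a constant value instead of letting it decay with t.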


Related research

10/10/2017  Fast and Strong Convergence of Online Learning Algorithms
In this paper, we study the online learning algorithm without explicit r...

09/25/2022  Capacity dependent analysis for functional online learning algorithms
This article provides convergence analysis of online stochastic gradient...

04/20/2023  Optimality of Robust Online Learning
In this paper, we study an online learning algorithm with a robust loss ...

02/18/2018  Convergence of Online Mirror Descent Algorithms
In this paper we consider online mirror descent (OMD) algorithms, a clas...

03/02/2015  Unregularized Online Learning Algorithms with General Loss Functions
In this paper, we consider unregularized online learning algorithms in a...

06/15/2020  Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model
In the context of statistical supervised learning, the noiseless linear ...

09/04/2021  Nonmonotone Local Minimax Methods for Finding Multiple Saddle Points
In this paper, combining normalized nonmonotone search strategies with t...
