Fast and Strong Convergence of Online Learning Algorithms

10/10/2017
by   Zheng-Chu Guo, et al.

In this paper, we study an online learning algorithm without explicit regularization terms. The algorithm is essentially a stochastic gradient descent scheme in a reproducing kernel Hilbert space (RKHS). The polynomially decaying step size used in each iteration plays the role of a regularizer, ensuring the generalization ability of the algorithm. We develop a novel capacity-dependent analysis of the performance of the last iterate of the online learning algorithm. The contribution of this paper is two-fold. First, our refined analysis yields a convergence rate in the standard mean square distance that is the best known so far. Second, we establish, for the first time, the strong convergence of the last iterate in the RKHS norm with polynomially decaying step sizes. We demonstrate that the theoretical analysis established in this paper fully exploits the fine structure of the underlying RKHS, and thus leads to sharp error estimates for the online learning algorithm.
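To make the scheme concrete, here is a minimal sketch of unregularized online kernel SGD with a polynomially decaying step size eta_t = eta1 * t^(-theta), for the least-squares loss. The Gaussian kernel, the values of eta1, theta, and sigma, and the toy target function are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=0.5):
    """RBF kernel inducing the RKHS; sigma is an illustrative choice."""
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def online_kernel_sgd(stream, eta1=0.5, theta=0.5, kernel=gaussian_kernel):
    """Unregularized online SGD in an RKHS, step size eta_t = eta1 * t**(-theta).

    The iterate f_t is stored via its kernel expansion
        f_t = sum_i a_i * K(x_i, .),
    and each sample (x_t, y_t) appends one coefficient via
        f_t = f_{t-1} - eta_t * (f_{t-1}(x_t) - y_t) * K(x_t, .).
    Returns the support points and coefficients of the last iterate.
    """
    xs, coefs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        # Evaluate the current iterate at the new input.
        f_x = sum(a * kernel(xi, x) for a, xi in zip(coefs, xs))
        eta_t = eta1 * t ** (-theta)  # polynomially decaying step size
        # Gradient step for the squared loss; the decaying eta_t supplies
        # the implicit regularization, no penalty term is added.
        coefs.append(-eta_t * (f_x - y))
        xs.append(x)
    return xs, coefs

# Toy usage: learn f(x) = sin(2*pi*x) from noisy samples (hypothetical target).
rng = np.random.default_rng(0)
stream = ((x, np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal())
          for x in rng.uniform(0, 1, size=500))
xs, coefs = online_kernel_sgd(stream)
f_last = lambda z: sum(a * gaussian_kernel(xi, z) for a, xi in zip(coefs, xs))
print(f_last(0.25))  # should be close to sin(pi/2) = 1
```

Note that the analysis in the paper concerns exactly this kind of last iterate f_T: the decay exponent theta governs the trade-off between optimization error and implicit regularization, which is why the step-size schedule can substitute for an explicit penalty term.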



Related research

04/20/2023  Optimality of Robust Online Learning
In this paper, we study an online learning algorithm with a robust loss ...

11/24/2022  Online Regularized Learning Algorithm for Functional Data
In recent years, functional linear models have attracted growing attenti...

07/03/2017  Generalization Properties of Doubly Online Learning Algorithms
Doubly online learning algorithms are scalable kernel methods that perfo...

07/23/2020  Online Robust and Adaptive Learning from Data Streams
In online learning from non-stationary data streams, it is both necessar...

03/20/2023  Random Inverse Problems Over Graphs: Decentralized Online Learning
We establish a framework of random inverse problems with real-time obser...

09/25/2022  Capacity dependent analysis for functional online learning algorithms
This article provides convergence analysis of online stochastic gradient...

03/02/2015  Unregularized Online Learning Algorithms with General Loss Functions
In this paper, we consider unregularized online learning algorithms in a...
