Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient

Online averaged stochastic gradient algorithms are increasingly studied because (i) they can quickly handle large samples taking values in high-dimensional spaces, (ii) they allow data to be processed sequentially, and (iii) they are known to be asymptotically efficient. In this paper, we focus on giving explicit bounds on the quadratic mean error of the estimates, under very weak assumptions, i.e., without assuming that the function we wish to minimize is strongly convex or admits a bounded gradient.
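As a point of reference, the averaging referred to in the title is presumably the classical Polyak-Ruppert scheme: run a stochastic gradient recursion with slowly decaying steps and report the running average of the iterates. Below is a minimal sketch on a toy least-squares problem. The step-size schedule gamma_n = c * n^(-alpha), the constants c and alpha, and the helper sample_gradient are illustrative assumptions, not the paper's setting; note also that this toy objective happens to be strongly convex, whereas the paper's bounds are derived without that assumption.

```python
import numpy as np

# Minimal sketch of averaged (Polyak-Ruppert) stochastic gradient descent
# on a toy linear least-squares problem. Step sizes gamma_n = c * n**(-alpha)
# with alpha in (1/2, 1) are a standard illustrative choice, not taken from
# the paper.

rng = np.random.default_rng(0)
d = 5
theta_star = rng.normal(size=d)          # true parameter of the toy model

def sample_gradient(theta):
    """Unbiased stochastic gradient of f(theta) = 0.5 * E[(X'theta - Y)^2]."""
    x = rng.normal(size=d)
    y = x @ theta_star + rng.normal()    # noisy linear observation
    return (x @ theta - y) * x

n_iter = 100_000
c, alpha = 1.0, 0.75                     # step-size constants (assumed)
theta = np.zeros(d)                      # SGD iterate
theta_bar = np.zeros(d)                  # running average of the iterates

for n in range(1, n_iter + 1):
    theta -= c * n ** (-alpha) * sample_gradient(theta)
    theta_bar += (theta - theta_bar) / n # theta_bar = mean of first n iterates

print("||theta_bar - theta*||^2 =", np.sum((theta_bar - theta_star) ** 2))
```

The quantity printed at the end, E||theta_bar_n - theta*||^2, is exactly the quadratic mean error that bounds of the kind announced in the abstract control.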

Related research

- 10/22/2017, On the rates of convergence of Parallelized Averaged Stochastic Gradient Algorithms: "The growing interest for high dimensional and functional data analysis l..."
- 03/01/2023, Non asymptotic analysis of Adaptive stochastic gradient algorithms and applications: "In stochastic optimization, a common tool to deal sequentially with larg..."
- 11/03/2017, Analysis of Approximate Stochastic Gradient Using Quadratic Constraints and Sequential Semidefinite Programs: "We present convergence rate analysis for the approximate stochastic grad..."
- 08/23/2020, Multi-kernel Passive Stochastic Gradient Algorithms: "This paper develops a novel passive stochastic gradient algorithm. In pa..."
- 01/21/2011, A fast and recursive algorithm for clustering large datasets with k-medians: "Clustering with fast algorithms large samples of high dimensional data i..."
- 09/23/2019, Necessary and Sufficient Conditions for Adaptive, Mirror, and Standard Gradient Methods: "We study the impact of the constraint set and gradient geometry on the c..."
- 05/24/2021, Robust learning with anytime-guaranteed feedback: "Under data distributions which may be heavy-tailed, many stochastic grad..."