Polynomial convergence of iterations of certain random operators in Hilbert space

02/04/2022
by Soumyadip Ghosh, et al.

We study the convergence of a random iterative sequence generated by a family of operators on an infinite-dimensional Hilbert space, inspired by the Stochastic Gradient Descent (SGD) algorithm for noiseless regression studied in [1]. We show that the polynomial convergence rate is determined by the initial state, while the randomness affects only the choice of the best constant factor, and we close the gap between the upper and lower bounds.
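For intuition, here is a minimal numerical sketch (our own illustration, not the paper's construction) of such a random operator iteration: noiseless SGD for linear regression, where each step applies a random rank-one perturbation of the identity to the current error. The dimension, step size, and sampling distribution below are assumptions chosen for readability; the paper's analysis takes place in an infinite-dimensional Hilbert space, where the decay of the error is polynomial and depends on the initial state, whereas this finite-dimensional proxy decays geometrically.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact one):
# noiseless SGD for linear regression viewed as iterated random
# operators acting on the error e_k = x_k - x_star. Each step applies
# the random rank-one map
#     e_{k+1} = (I - eta * a_k a_k^T) e_k,
# where a_k is a random sample direction.

rng = np.random.default_rng(0)
d = 200           # finite-dimensional truncation of the Hilbert space (assumed)
eta = 0.5         # step size (assumed)
n_steps = 10_000

x_star = rng.standard_normal(d)   # noiseless regression target
x = np.zeros(d)                   # initial state

errors = []
for k in range(n_steps):
    a = rng.standard_normal(d) / np.sqrt(d)   # random sample direction
    # Noiseless SGD update: gradient of 0.5 * (<a, x> - <a, x_star>)^2
    x = x - eta * (a @ (x - x_star)) * a
    errors.append(np.linalg.norm(x - x_star))

print(f"initial error: {errors[0]:.4f}, final error: {errors[-1]:.4f}")
```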


Related research

06/22/2020 · Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime
We analyze the convergence of the averaged stochastic gradient descent f...

06/18/2020 · Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping
Stochastic Gradient Descent (SGD) has become the method of choice for so...

03/13/2023 · Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond
We study convergence lower bounds of without-replacement stochastic grad...

01/17/2020 · Chebyshev Inertial Landweber Algorithm for Linear Inverse Problems
The Landweber algorithm defined on complex/real Hilbert spaces is a grad...

10/23/2020 · Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
We consider a stochastic version of the proximal point algorithm for opt...

08/11/2023 · The Stochastic Steepest Descent Method for Robust Optimization in Banach Spaces
Stochastic gradient methods have been a popular and powerful choice of o...
