On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems

10/21/2020
by Bangti Jin et al.

Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to the data size. The current mathematical theory, viewed through the lens of regularization theory, predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon: the convergence rate does not improve further with the regularity index of the exact solution once that index exceeds a certain range. In this work, we present a refined convergence rate analysis of SGD and prove that saturation does not in fact occur if the initial stepsize of the schedule is sufficiently small. Several numerical experiments are provided to complement the analysis.
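
To make the algorithmic setting concrete, the following is a minimal sketch (not the authors' implementation) of SGD with a polynomially decaying stepsize eta_j = eta0 * j^(-alpha) applied to a toy linear inverse problem A x = y: at each iteration one equation (row of A) is sampled uniformly at random and a gradient step is taken on the corresponding squared residual. The toy matrix, noise level, alpha, iteration count, and the choice of eta0 below are illustrative assumptions, not values from the paper.

import numpy as np

def sgd_linear_inverse(A, y, eta0, alpha=0.5, n_iter=50_000, seed=0):
    """SGD for min_x (1/(2n)) * ||A x - y||^2, sampling one row per iteration
    and using the polynomially decaying stepsize eta_j = eta0 * j**(-alpha)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)                      # zero initial guess
    for j in range(1, n_iter + 1):
        i = rng.integers(n)              # sample one equation uniformly at random
        eta = eta0 * j ** (-alpha)       # polynomially decaying stepsize
        residual = A[i] @ x - y[i]       # scalar residual of the sampled equation
        x -= eta * residual * A[i]       # stochastic gradient step
    return x

if __name__ == "__main__":
    # Toy, mildly ill-conditioned system with noisy data (illustrative values).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50)) @ np.diag(np.linspace(1.0, 0.01, 50))
    x_true = rng.standard_normal(50)
    y = A @ x_true + 1e-3 * rng.standard_normal(200)
    # A deliberately small initial stepsize, in the spirit of the paper's condition.
    eta0 = 1.0 / np.max((A ** 2).sum(axis=1))
    x_sgd = sgd_linear_inverse(A, y, eta0=eta0, alpha=0.5)
    print("relative error:", np.linalg.norm(x_sgd - x_true) / np.linalg.norm(x_true))

The saturation question concerns how the attainable convergence rate depends on the regularity of the exact solution; the sketch only fixes the iteration and stepsize schedule in which the paper's analysis takes place.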

Related research

08/10/2021 · An Analysis of Stochastic Variance Reduced Gradient for Linear Inverse Problems
04/30/2020 · On the Discrepancy Principle for Stochastic Gradient Descent
09/29/2022 · Statistical Learning and Inverse Problems: A Stochastic Gradient Approach
03/09/2015 · Learning Co-Sparse Analysis Operators with Separable Structures
10/22/2019 · The Practicality of Stochastic Optimization in Imaging Inverse Problems
07/09/2020 · Stochastic gradient descent for linear least squares problems with partially observed data
