
On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems

by Bangti Jin, et al.

Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. Current mathematical theory, viewed through the lens of regularization theory, predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon: the convergence rate does not improve further with the regularity index of the solution once that index exceeds a certain range. In this work, we present a refined convergence rate analysis of SGD and prove that saturation actually does not occur if the initial stepsize of the schedule is sufficiently small. Several numerical experiments are provided to complement the analysis.
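To make the setting concrete, the following is a minimal sketch of SGD for a linear least-squares problem with the polynomially decaying stepsize schedule the abstract refers to. The function name, the parameter values, and the toy data are illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
import numpy as np

def sgd_linear_inverse(A, b, eta0=0.1, alpha=0.5, n_iter=20000, seed=0):
    """Illustrative SGD for min_x 0.5 * ||Ax - b||_2^2.

    At step t, one row a_i of A is sampled uniformly at random and the
    iterate is updated with the decaying stepsize eta_t = eta0 / t**alpha.
    (A sketch only; stepsize constants here are assumptions.)
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)
        eta = eta0 / t**alpha
        # stochastic gradient of 0.5 * (a_i @ x - b_i)^2 w.r.t. x
        x -= eta * (A[i] @ x - b[i]) * A[i]
    return x

# Tiny well-posed example: recover x_true from noiseless data.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_hat = sgd_linear_inverse(A, b)
print(np.linalg.norm(x_hat - x_true))
```

The saturation question concerns how fast the reconstruction error decays as the stepsize schedule and the smoothness of the true solution vary; the snippet only shows the iteration itself, on a well-posed toy problem.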




An Analysis of Stochastic Variance Reduced Gradient for Linear Inverse Problems

Stochastic variance reduced gradient (SVRG) is a popular variance reduct...

On the Discrepancy Principle for Stochastic Gradient Descent

Stochastic gradient descent (SGD) is a promising numerical method for so...

Statistical Learning and Inverse Problems: A Stochastic Gradient Approach

Inverse problems are paramount in Science and Engineering. In this paper...

Learning Co-Sparse Analysis Operators with Separable Structures

In the co-sparse analysis model a set of filters is applied to a signal ...

On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective

Ill-posed linear inverse problems appear in many fields of imaging scien...

Stochastic Gradient Descent applied to Least Squares regularizes in Sobolev spaces

We study the behavior of stochastic gradient descent applied to ‖Ax − b‖₂²...