
On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems

10/21/2020
by Bangti Jin, et al.

Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. The current mathematical theory, viewed through the lens of regularization theory, predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon: the convergence rate no longer improves with the regularity index of the exact solution once that index exceeds a certain range. In this work, we present a refined convergence rate analysis of SGD and prove that saturation actually does not occur if the initial stepsize of the schedule is sufficiently small. Several numerical experiments are provided to complement the analysis.
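The abstract refers to SGD with a polynomially decaying stepsize schedule applied to a linear inverse problem A x = y. The snippet below is a minimal sketch of that setup, assuming a row-sampling variant on the least-squares functional and a schedule eta_t = eta0 * t^(-alpha); the function name, parameter values, and synthetic data are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sgd_linear_inverse(A, y, n_iter=10000, eta0=0.1, alpha=0.5, seed=None):
    """Row-sampling SGD for the least-squares functional (1/2n) * ||A x - y||^2.

    Hypothetical helper for illustration; eta0 and alpha set the
    polynomially decaying stepsize eta_t = eta0 * t**(-alpha).
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)              # sample one data row uniformly at random
        eta_t = eta0 * t ** (-alpha)     # polynomially decaying stepsize schedule
        residual = A[i] @ x - y[i]       # scalar residual of the sampled equation
        x = x - eta_t * residual * A[i]  # stochastic gradient step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50)
    y = A @ x_true + 0.01 * rng.standard_normal(200)  # mildly noisy synthetic data
    x_hat = sgd_linear_inverse(A, y, n_iter=20000, eta0=0.1, alpha=0.5, seed=1)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this sketch a smaller initial stepsize eta0 slows the early iterations, but according to the paper's analysis this is the regime in which saturation is avoided; in practice the number of iterations also acts as a regularization parameter and is typically chosen by an early-stopping rule.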


Related research:
08/10/2021  An Analysis of Stochastic Variance Reduced Gradient for Linear Inverse Problems
04/30/2020  On the Discrepancy Principle for Stochastic Gradient Descent
09/29/2022  Statistical Learning and Inverse Problems: An Stochastic Gradient Approach
03/09/2015  Learning Co-Sparse Analysis Operators with Separable Structures
05/03/2020  On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective
07/27/2020  Stochastic Gradient Descent applied to Least Squares regularizes in Sobolev spaces