Convergence rates for the stochastic gradient descent method for non-convex objective functions

04/02/2019
by Benjamin Fehrman, et al.

We prove local convergence to minima, together with estimates on the rate of convergence, for the stochastic gradient descent method applied to objective functions that are not necessarily globally convex or contracting. In particular, the results apply to simple objective functions arising in machine learning.
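As a rough illustration of the setting (not the authors' construction), stochastic gradient descent with diminishing step sizes can find a local minimum of a non-convex objective. The toy double-well objective, the step-size schedule, and the Gaussian noise model below are all illustrative assumptions:

```python
import random

def sgd(grad, x0, steps=20000, lr0=0.5, noise=0.1, seed=0):
    """Plain SGD with diminishing step sizes lr_n = lr0 / (n + 1).
    `grad` is the exact gradient; additive Gaussian noise stands in
    for the stochastic gradient estimate (illustrative assumption)."""
    rng = random.Random(seed)
    x = x0
    for n in range(steps):
        g = grad(x) + noise * rng.gauss(0.0, 1.0)  # noisy gradient sample
        x -= lr0 / (n + 1) * g
    return x

def double_well_grad(x):
    # Gradient of the non-convex double well f(x) = (x^2 - 1)^2,
    # which has two local (and global) minima at x = +1 and x = -1.
    return 4.0 * x * (x * x - 1.0)

x_star = sgd(double_well_grad, x0=0.5)
# x_star settles near one of the minima x = +/-1 (local convergence).
```

The step-size schedule lr_n = lr0 / (n + 1) satisfies the classical Robbins-Monro conditions (the steps sum to infinity while their squares are summable), which is why the noise is averaged out while the iterate can still reach a minimum.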

Related research

05/21/2018: On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes
Stochastic gradient descent is the method of choice for large scale opti...

10/09/2018: Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
We study Stochastic Gradient Descent (SGD) with diminishing step sizes f...

02/03/2019: Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions
Stochastic gradient descent (SGD) is a popular and efficient method with...

02/07/2023: Convergence rates for momentum stochastic gradient descent with noise of machine learning type
We consider the momentum stochastic gradient descent scheme (MSGD) and i...

04/18/2023: Convergence of stochastic gradient descent under a local Łojasiewicz condition for deep neural networks
We extend the global convergence result of Chatterjee <cit.> by consider...

03/21/2022: A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima
Non-convex loss functions arise frequently in modern machine learning, a...

06/17/2021: Sub-linear convergence of a tamed stochastic gradient descent method in Hilbert space
In this paper, we introduce the tamed stochastic gradient descent method...
