A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima

03/21/2022
by   Taehee Ko, et al.

Non-convex loss functions arise frequently in modern machine learning, and the presence of non-isolated minima poses a unique, under-explored challenge for the theoretical analysis of stochastic optimization methods. In this paper, we study the local convergence of the stochastic gradient descent method to non-isolated global minima. Under mild assumptions, we estimate the probability that the iterates remain near the set of minima by adopting the notion of stochastic stability. Having established such stability, we derive lower complexity bounds under various error criteria for a given error tolerance ϵ and failure probability γ.
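As a toy illustration of the setting (a hypothetical sketch, not the paper's analysis), consider SGD on a loss whose global minima are non-isolated: every point on the unit circle minimizes (x² + y² − 1)². Iterates started near the circle stay in its neighborhood despite gradient noise:

```python
import random

def loss(x, y):
    # Non-convex loss whose global minima are non-isolated:
    # every point on the unit circle x^2 + y^2 = 1 attains loss 0.
    return (x * x + y * y - 1.0) ** 2

def grad(x, y):
    # Exact gradient of the loss above.
    g = 4.0 * (x * x + y * y - 1.0)
    return g * x, g * y

def sgd(x, y, lr=0.05, noise=0.01, steps=2000, seed=0):
    # Stochastic gradient = true gradient + zero-mean Gaussian noise.
    rng = random.Random(seed)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= lr * (gx + rng.gauss(0.0, noise))
        y -= lr * (gy + rng.gauss(0.0, noise))
    return x, y

# Start near the set of minima; the iterate stays in its neighborhood,
# ending close to (a different point on) the unit circle.
x, y = sgd(0.9, 0.1)
print(x * x + y * y, loss(x, y))
```

The step sizes and noise level here are illustrative choices; the paper's results quantify, with probability at least 1 − γ, how long such iterates remain near the minima and how fast the error criteria fall below ϵ.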

Related research

04/02/2019 — Convergence rates for the stochastic gradient descent method for non-convex objective functions
We prove the local convergence to minima and estimates on the rate of co...

06/17/2018 — Laplacian Smoothing Gradient Descent
We propose a very simple modification of gradient descent and stochastic...

07/07/2023 — Smoothing the Edges: A General Framework for Smooth Optimization in Sparse Regularization using Hadamard Overparametrization
This paper introduces a smooth method for (structured) sparsity in ℓ_q a...

12/20/2017 — Statistical Inference for the Population Landscape via Moment Adjusted Stochastic Gradients
Modern statistical inference tasks often require iterative optimization ...

09/03/2015 — Train faster, generalize better: Stability of stochastic gradient descent
We show that parametric models trained by a stochastic gradient method (...

06/05/2019 — Global Optimality Guarantees For Policy Gradient Methods
Policy gradient methods are perhaps the most widely used class of reinf...

06/11/2018 — Swarming for Faster Convergence in Stochastic Optimization
We study a distributed framework for stochastic optimization which is in...
