On the Convergence of mSGD and AdaGrad for Stochastic Optimization

01/26/2022
by Ruinan Jin, et al.

As one of the most fundamental stochastic optimization algorithms, stochastic gradient descent (SGD) has been intensively developed and extensively applied in machine learning over the past decade. Several modified SGD-type algorithms, such as momentum-based SGD (mSGD) and the adaptive gradient algorithm (AdaGrad), outperform SGD in many competitions and applications in terms of convergence rate and accuracy. Despite these empirical successes, the theoretical properties of these algorithms have not been well established, owing to technical difficulties. Motivated by this gap, we focus on the convergence analysis of mSGD and AdaGrad for any smooth (possibly non-convex) loss function in stochastic optimization. First, we prove that the iterates of mSGD converge asymptotically to a connected set of stationary points with probability one, which is more general than existing results on subsequence convergence or convergence of time averages. Moreover, we prove that the loss function along the mSGD iterates decays at a certain rate faster than that of SGD. In addition, we prove that the iterates of AdaGrad also converge asymptotically to a connected set of stationary points with probability one, which likewise extends results in the literature on subsequence convergence and convergence of time averages. Despite the generality of the above convergence results, they are established under relaxed assumptions on the gradient noise, the convexity of the loss function, and the boundedness of the iterates.
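For readers unfamiliar with the two algorithms, the sketch below shows textbook forms of the mSGD (heavy-ball momentum) and AdaGrad update rules in Python. The step sizes, momentum parameter, epsilon smoothing term, and toy quadratic loss are illustrative choices only and may differ from the exact formulations and step-size schedules analyzed in the paper.

import numpy as np

def msgd_step(x, v, grad, lr=0.01, beta=0.9):
    # Momentum buffer: exponentially weighted accumulation of past gradients.
    v = beta * v + grad
    # Descend along the momentum direction instead of the raw gradient.
    x = x - lr * v
    return x, v

def adagrad_step(x, G, grad, lr=0.1, eps=1e-8):
    # Accumulate squared gradients coordinate-wise.
    G = G + grad ** 2
    # Per-coordinate step sizes shrink as squared gradients accumulate.
    x = x - lr * grad / (np.sqrt(G) + eps)
    return x, G

# Toy usage on the quadratic loss f(x) = 0.5 * ||x||^2 with additive gradient noise.
rng = np.random.default_rng(0)
x_m, v = np.ones(2), np.zeros(2)
x_a, G = np.ones(2), np.zeros(2)
for _ in range(1000):
    noise = rng.normal(scale=0.1, size=2)
    x_m, v = msgd_step(x_m, v, x_m + noise)   # stochastic gradient of f at x_m
    x_a, G = adagrad_step(x_a, G, x_a + noise)
print(x_m, x_a)  # both iterates should end up near the stationary point at the origin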

Related research

02/01/2019
Sharp Analysis for Nonconvex SGD Escaping from Saddle Points
In this paper, we prove that the simplest Stochastic Gradient Descent (S...

10/16/2018
Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
Modern machine learning focuses on highly expressive models that are abl...

05/21/2012
Stochastic Smoothing for Nonsmooth Minimizations: Accelerating SGD by Exploiting Structure
In this work we consider the stochastic minimization of nonsmooth convex...

05/12/2020
Convergence of Online Adaptive and Recurrent Optimization Algorithms
We prove local convergence of several notable gradient descent algorithms...

02/06/2023
Target-based Surrogates for Stochastic Optimization
We consider minimizing functions for which it is expensive to compute th...

02/22/2022
Convergence of online k-means
We prove asymptotic convergence for a general class of k-means algorithm...

07/23/2018
Particle Filtering Methods for Stochastic Optimization with Application to Large-Scale Empirical Risk Minimization
There is a recent interest in developing statistical filtering methods f...
