Unified Convergence Analysis of Stochastic Momentum Methods for Convex and Non-convex Optimization

04/12/2016
by Tianbao Yang, et al.

Recently, stochastic momentum methods have been widely adopted in training deep neural networks. However, their convergence analysis remains underexplored, in particular for non-convex optimization. This paper fills the gap between practice and theory by developing a basic convergence analysis of two stochastic momentum methods, namely the stochastic heavy-ball method and the stochastic variant of Nesterov's accelerated gradient method. We hope that the basic convergence results developed in this paper can serve as a reference for the convergence of stochastic momentum methods and as baselines for comparison in the future development of stochastic momentum methods. The novelty of the convergence analysis presented in this paper lies in a unified framework, which reveals insights into the similarities and differences among the stochastic momentum methods and the stochastic gradient method. The unified framework exhibits a continuous transition, governed by a free parameter, from the gradient method to Nesterov's accelerated gradient method and finally to the heavy-ball method, which can help explain a similar transition observed in the testing-error convergence behavior for deep learning. Furthermore, our empirical results for optimizing deep neural networks demonstrate that, among the three stochastic methods, the stochastic variant of Nesterov's accelerated gradient method achieves a good tradeoff between the speed of convergence in training error and the robustness of convergence in testing error.
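To make the role of the free parameter concrete, the following NumPy sketch shows one way such a unified momentum update can be written. The parameterization below (the auxiliary sequence and the interpolation parameter s) is an illustrative reconstruction rather than the exact scheme from the paper: setting s = 0 yields the standard stochastic heavy-ball update, while s = 1 yields the stochastic Nesterov-style update.

```python
import numpy as np

def unified_momentum_step(x, y_s_prev, grad, lr=0.01, beta=0.9, s=1.0):
    """One step of a unified stochastic momentum update (illustrative sketch).

    g_t       : stochastic gradient at x_t
    y_{t+1}   = x_t - lr * g_t         (plain stochastic gradient step)
    y^s_{t+1} = x_t - s * lr * g_t     (auxiliary step scaled by s)
    x_{t+1}   = y_{t+1} + beta * (y^s_{t+1} - y^s_t)

    s = 0 reduces to the stochastic heavy-ball update; s = 1 reduces to the
    stochastic Nesterov accelerated gradient update. (Hypothetical
    parameterization, not quoted from the paper.)
    """
    g = grad(x)
    y_next = x - lr * g
    y_s_next = x - s * lr * g
    x_next = y_next + beta * (y_s_next - y_s_prev)
    return x_next, y_s_next

# Toy usage: noisy gradients of f(x) = 0.5 * ||x||^2
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)

for s in (0.0, 1.0):              # 0.0 -> heavy-ball, 1.0 -> Nesterov-style
    x = y_s = np.ones(5)          # y^s_0 initialized at x_0
    for _ in range(200):
        x, y_s = unified_momentum_step(x, y_s, noisy_grad, s=s)
    print(f"s={s}: ||x|| = {np.linalg.norm(x):.4f}")
```

In this form, the only difference between the two momentum updates is whether the extrapolation term is built from the raw iterates or from the gradient-corrected iterates, which is what allows a single scalar parameter to interpolate between them.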
