On the Stability and Convergence of Stochastic Gradient Descent with Momentum

09/12/2018
by Ali Ramezani-Kebrya, et al.

While momentum-based methods, in conjunction with stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods. In practice, the momentum parameter is often chosen heuristically, with little theoretical guidance. In the first part of this paper, for the case of general loss functions, we analyze a modified momentum-based update rule, the method of early momentum, and develop an upper bound on its generalization error using the framework of algorithmic stability. Our results show that machine learning models can be trained for multiple epochs of this method while their generalization errors remain bounded. We also study the convergence of the method of early momentum by establishing an upper bound on the expected norm of the gradient. In the second part of the paper, we focus on the case of strongly convex loss functions and the classical heavy-ball momentum update rule. We use the framework of algorithmic stability to provide an upper bound on the generalization error of the stochastic gradient method with momentum. We also develop an upper bound on the expected true risk in terms of the number of training steps, the size of the training set, and the momentum parameter. Experimental evaluations verify that the numerical results are consistent with our theoretical bounds and that the method of early momentum is effective for non-convex loss functions.
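For reference, the classical heavy-ball update analyzed in the second part is w_{t+1} = w_t - α ∇f_{i_t}(w_t) + μ (w_t - w_{t-1}), where α is the learning rate, μ is the momentum parameter, and f_{i_t} is the loss on a sampled training example. The Python sketch below is a minimal illustration, not the paper's exact algorithm: it implements the heavy-ball step in its equivalent velocity form and, as one plausible reading of "early momentum", switches momentum off after an initial number of epochs. The function name, the epoch-based schedule, and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sgd_early_momentum(grad, w0, lr=0.01, mu=0.9,
                       momentum_epochs=5, total_epochs=20,
                       steps_per_epoch=100):
    """Heavy-ball SGD with momentum applied only in an early phase.

    Heavy-ball step: w_{t+1} = w_t - lr * g_t + mu * (w_t - w_{t-1}),
    written below in velocity form. The epoch-based switch-off is an
    illustrative reading of "early momentum", not the paper's schedule.
    """
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)                           # velocity: w_t - w_{t-1}
    for epoch in range(total_epochs):
        beta = mu if epoch < momentum_epochs else 0.0  # drop momentum later
        for t in range(steps_per_epoch):
            g = grad(w, epoch * steps_per_epoch + t)   # stochastic gradient
            v = beta * v - lr * g                  # heavy-ball velocity update
            w = w + v                              # parameter step
    return w

# Illustrative use on a synthetic least-squares problem:
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)

def grad(w, t):
    i = rng.integers(200)                          # sample one training example
    return (A[i] @ w - b[i]) * A[i]                # per-example gradient

w_hat = sgd_early_momentum(grad, np.zeros(10))
```

Setting beta to zero after the early phase reduces the update to plain SGD, which matches the intuition in the abstract that momentum is most useful, and its stability cost most controllable, early in training.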


Related research:

02/26/2021: On the Generalization of Stochastic Gradient Descent with Momentum
While momentum-based methods, in conjunction with stochastic gradient de...

08/30/2018: A Unified Analysis of Stochastic Momentum Methods for Deep Learning
Stochastic momentum methods have been widely adopted in training deep ne...

05/30/2019: Exploiting Uncertainty of Loss Landscape for Stochastic Optimization
We introduce novel variants of momentum by incorporating the variance of...

12/01/2020: Convergence of Gradient Algorithms for Nonconvex C^1+α Cost Functions
This paper is concerned with convergence of stochastic gradient algorith...

12/31/2021: High Dimensional Optimization through the Lens of Machine Learning
This thesis reviews numerical optimization methods with machine learning...

12/14/2020: Noisy Linear Convergence of Stochastic Gradient Descent for CV@R Statistical Learning under Polyak-Łojasiewicz Conditions
Conditional Value-at-Risk (CV@R) is one of the most popular measures of...

06/10/2019: Analysis Of Momentum Methods
Gradient descent-based optimization methods underpin the parameter traini...
