
Time-Delay Momentum: A Regularization Perspective on the Convergence and Generalization of Stochastic Momentum for Deep Learning

03/02/2019
by Ziming Zhang, et al.

In this paper we study the convergence and generalization error bounds of stochastic momentum for deep learning from the perspective of regularization. To do so, we first interpret momentum as solving an ℓ_2-regularized minimization problem that learns the offset between any two successive model parameters. We call this time-delay momentum because the model parameter is updated only after a few iterations towards finding the minimizer. We then propose our learning algorithm, stochastic gradient descent (SGD) with time-delay momentum, and show that it can be interpreted as solving a sequence of strongly convex optimization problems using SGD. We prove that under mild conditions our algorithm converges to a stationary point at a rate of O(1/√K) and achieves a generalization error bound of O(1/√(nδ)) with probability at least 1-δ, where K and n are the numbers of model updates and training samples, respectively. We demonstrate the empirical superiority of our algorithm for deep learning in comparison with state-of-the-art deep learning solvers.
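The abstract's description suggests a simple two-loop structure, sketched below. This is a minimal illustration inferred from the abstract, not the authors' implementation: it assumes the inner subproblem has the form min_v f(x + v) + (λ/2)·‖v‖², solved by a fixed number of SGD steps on the offset v before the model parameter x is updated (the "time delay"). All names and default values (grad_fn, lam, inner_steps) are hypothetical.

    import numpy as np

    def sgd_time_delay_momentum(grad_fn, x0, lr=0.01, lam=0.1,
                                inner_steps=5, outer_steps=100):
        # grad_fn(x) returns a stochastic gradient estimate of the loss at x.
        x = x0.copy()
        for _ in range(outer_steps):
            v = np.zeros_like(x)  # offset between two successive model parameters
            for _ in range(inner_steps):
                # SGD step on the strongly convex subproblem
                #   min_v f(x + v) + (lam / 2) * ||v||^2
                g = grad_fn(x + v) + lam * v
                v -= lr * g
            x = x + v  # parameter updated only after the inner loop (time delay)
        return x

    # Toy usage: minimize f(x) = ||x||^2 / 2 with noisy gradients.
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: x + 0.01 * rng.standard_normal(x.shape)
    x_min = sgd_time_delay_momentum(noisy_grad, np.ones(10))

With λ > 0 the inner objective is λ-strongly convex in v, which matches the abstract's claim that the method amounts to solving a sequence of strongly convex optimization problems with SGD.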


Related Research

02/24/2020 · Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
Stochastic gradient descent (SGD) with constant momentum and its variant...

06/05/2021 · Escaping Saddle Points Faster with Stochastic Momentum
Stochastic gradient descent (SGD) with stochastic momentum is popular in...

06/04/2020 · Robust Sampling in Deep Learning
Deep learning requires regularization mechanisms to reduce overfitting a...

03/31/2021 · Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization
It is well-known that stochastic gradient noise (SGN) acts as implicit r...

10/21/2019 · Momentum in Reinforcement Learning
We adapt the optimization's concept of momentum to reinforcement learnin...

07/25/2019 · DEAM: Accumulated Momentum with Discriminative Weight for Stochastic Optimization
Optimization algorithms with momentum, e.g., Nesterov Accelerated Gradie...

05/31/2016 · Asynchrony begets Momentum, with an Application to Deep Learning
Asynchronous methods are widely used in deep learning, but have limited ...