Momentum Improves Normalized SGD

02/09/2020
by Ashok Cutkosky et al.

We provide an improved analysis of normalized SGD showing that adding momentum provably removes the need for large batch sizes on non-convex objectives. We then consider objectives with bounded second derivative and show that in this case a small tweak to the momentum formula allows normalized SGD with momentum to find an ϵ-critical point in O(1/ϵ^3.5) iterations, matching the best-known rates without accruing any logarithmic factors or dependence on dimension. We also provide an adaptive method that automatically improves convergence rates when the variance in the gradients is small. Finally, we show that our method is effective on popular large-scale tasks such as ResNet-50 and BERT pretraining, matching the performance of the disparate methods used to obtain state-of-the-art results on both tasks.
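For concreteness, here is a minimal sketch of the core update the abstract describes: SGD where the step direction is an exponential moving average of stochastic gradients, normalized to unit length. The function name, the NumPy implementation, and the step-size/momentum values are illustrative assumptions, and the second-order "tweak" and the adaptive variant mentioned above are not reproduced here.

```python
# Minimal sketch of normalized SGD with momentum (not the paper's exact
# implementation): the gradient oracle `grad_fn`, step size `lr`, and
# momentum parameter `beta` below are illustrative assumptions.
import numpy as np

def normalized_sgd_with_momentum(x0, grad_fn, lr=0.01, beta=0.9, steps=1000, tiny=1e-12):
    """Iterate  m_t = beta*m_{t-1} + (1-beta)*g_t,  x_{t+1} = x_t - lr*m_t/||m_t||."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)                               # stochastic gradient at the current point
        m = beta * m + (1.0 - beta) * g              # momentum: exponential average of gradients
        x = x - lr * m / (np.linalg.norm(m) + tiny)  # fixed-length step along the momentum direction
    return x

# Toy usage: a noisy quadratic, f(x) = 0.5*||x||^2 with Gaussian gradient noise.
rng = np.random.default_rng(0)
x_final = normalized_sgd_with_momentum(
    np.ones(10),
    grad_fn=lambda x: x + 0.1 * rng.standard_normal(x.shape),
    lr=0.05,
    beta=0.9,
    steps=2000,
)
```

Normalizing by ||m_t|| makes the step length independent of the gradient scale, so it is the momentum average, rather than a large batch, that controls the effect of gradient noise.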


Related research

10/01/2020 · Understanding the Role of Momentum in Non-Convex Optimization: Practical Insights from a Lyapunov Analysis
Momentum methods are now used pervasively within the machine learning co...

04/24/2021 · DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
The scale of deep learning nowadays calls for efficient distributed trai...

07/31/2022 · Formal guarantees for heuristic optimization algorithms used in machine learning
Recently, Stochastic Gradient Descent (SGD) and its variants have become...

03/04/2021 · Correcting Momentum with Second-order Information
We develop a new algorithm for non-convex stochastic optimization that f...

06/28/2021 · High-probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails
We consider non-convex stochastic optimization using first-order algorit...

11/01/2021 · STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization
In this work we investigate stochastic non-convex optimization problems ...

06/04/2020 · Scaling Distributed Training with Adaptive Summation
Stochastic gradient descent (SGD) is an inherently sequential training a...
