DEAM: Accumulated Momentum with Discriminative Weight for Stochastic Optimization

07/25/2019
by   Jiyang Bai, et al.

Optimization algorithms with momentum, e.g., Nesterov Accelerated Gradient and ADAM, have been widely used for training deep learning models because they converge faster than stochastic gradient descent (SGD). Momentum accelerates SGD along the relevant directions of the variable updates and dampens oscillations along the update trajectory. Momentum-based optimizers usually assign the momentum term a fixed weight (e.g., β_1). However, a fixed weight is not suitable for every situation, and such a hyperparameter can be extremely hard to tune in practice. In this paper, we introduce a new optimization algorithm, DEAM (Discriminative wEight on Accumulated Momentum). Instead of assigning the momentum term a fixed weight, DEAM computes the momentum weight automatically during the learning process. DEAM also incorporates a "backtrack" term, which accelerates convergence by restricting redundant updates. Extensive experiments have been conducted on several real-world datasets. The results demonstrate that DEAM achieves a faster convergence rate than existing optimization algorithms when training both classic machine learning models and recent deep learning models.
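The abstract does not give DEAM's actual update rules, so the sketch below only illustrates the structural difference it describes: a classic momentum update with a fixed weight versus one whose momentum weight is recomputed at every step. The adaptive_momentum_step function, its cosine-similarity heuristic for choosing the weight, and the toy quadratic are assumptions made for illustration, not the method from the paper (which also adds a "backtrack" term omitted here).

```python
import numpy as np

def fixed_momentum_step(w, grad, m, lr=0.05, beta=0.9):
    """Classic momentum update: beta is a fixed hyperparameter."""
    m = beta * m + (1.0 - beta) * grad
    return w - lr * m, m

def adaptive_momentum_step(w, grad, m, lr=0.05):
    """Momentum update whose weight is recomputed at every step.

    The weight here comes from the cosine similarity between the
    accumulated momentum and the current gradient; this heuristic is
    purely illustrative and is NOT DEAM's actual weighting rule.
    """
    denom = np.linalg.norm(m) * np.linalg.norm(grad)
    cos = float(np.dot(m, grad) / denom) if denom > 0 else 0.0
    beta = 0.5 * (1.0 + cos)  # maps alignment in [-1, 1] to a weight in [0, 1]
    m = beta * m + (1.0 - beta) * grad
    return w - lr * m, m

# Toy quadratic f(w) = 0.5 * w^T A w to exercise both update rules.
A = np.diag([1.0, 4.0])
w_fix = w_ada = np.array([1.0, 1.0])
m_fix = m_ada = np.zeros(2)
for _ in range(100):
    w_fix, m_fix = fixed_momentum_step(w_fix, A @ w_fix, m_fix)
    w_ada, m_ada = adaptive_momentum_step(w_ada, A @ w_ada, m_ada)
print("fixed beta   :", w_fix)
print("adaptive beta:", w_ada)
```

The point of the contrast is only that the adaptive variant has no β_1 to tune by hand; how DEAM actually chooses the weight, and how its backtrack term restricts redundant updates, is specified in the full paper.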


