Dyna: A Method of Momentum for Stochastic Optimization

05/13/2018
by Zhidong Han, et al.

An algorithm is presented for momentum gradient descent optimization based on the first-order differential equation of Newtonian dynamics. A fictitious mass is introduced into the momentum dynamics to regularize the adaptive stepsize of each individual parameter. Dynamic relaxation is adapted to the stochastic optimization of nonlinear objective functions through explicit time integration with a varying damping ratio. The adaptive stepsize is optimized for each neural network layer based on its number of inputs, and the stepsize of every parameter across the entire network is uniformly bounded by a single upper limit, independent of sparsity, for a better overall convergence rate. The numerical implementation is similar to the Adam optimizer, with comparable computational efficiency and memory requirements. The algorithm has three hyper-parameters, each with a clear physical interpretation. Preliminary trials show promising performance and convergence.
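The abstract does not reproduce the update equations, but dynamic relaxation with explicit time integration usually refers to a central-difference scheme for the damped Newtonian system m x'' + c x' = -grad f(x). Below is a minimal sketch under that assumption; the function `dyna_step` and its constant mass `m`, damping `c`, and time step `h` are illustrative stand-ins, not the paper's per-parameter adaptive stepsize or varying damping ratio.

```python
import numpy as np

def dyna_step(x, v, grad, m=1.0, c=0.5, h=0.3):
    """One explicit dynamic-relaxation step for m*x'' + c*x' = -grad f(x).

    Central-difference time integration: the velocity is decayed by the
    damping factor `a`, and the gradient enters with effective stepsize `b`.
    The scalar mass `m`, damping `c`, and time step `h` are illustrative
    constants, not the paper's adaptive per-parameter quantities.
    """
    a = (2.0 * m - c * h) / (2.0 * m + c * h)  # damping decay on velocity
    b = 2.0 * h / (2.0 * m + c * h)            # effective gradient stepsize
    v = a * v - b * grad
    x = x + h * v
    return x, v

# Toy usage: minimize f(x) = 0.5 * x^T A x, whose minimizer is the origin.
A = np.diag([1.0, 10.0])
x, v = np.array([3.0, -2.0]), np.zeros(2)
for _ in range(300):
    x, v = dyna_step(x, v, A @ x)
print(x)  # close to [0., 0.]
```

With the varying damping ratio and per-parameter mass the abstract describes, `a` and `b` would become per-parameter vectors updated at each step rather than fixed constants.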

Related research

05/29/2020 - CoolMomentum: A Method for Stochastic Optimization by Langevin Dynamics with Simulated Annealing
Deep learning applications require optimization of nonconvex objective f...

12/29/2018 - SPI-Optimizer: an integral-Separated PI Controller for Stochastic Optimization
To overcome the oscillation problem in the classical momentum-based opti...

12/07/2020 - Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance
In this paper, a general stochastic optimization procedure is studied, u...

09/03/2020 - Distributed Online Optimization via Gradient Tracking with Adaptive Momentum
This paper deals with a network of computing agents aiming to solve an o...

07/27/2022 - FASFA: A Novel Next-Generation Backpropagation Optimizer
This paper introduces the fast adaptive stochastic function accelerator ...

07/02/2021 - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization
In this paper, we study stochastic optimization of areas under precision...

07/07/2021 - KaFiStO: A Kalman Filtering Framework for Stochastic Optimization
Optimization is often cast as a deterministic problem, where the solutio...
