SADAM: Stochastic Adam, A Stochastic Operator for First-Order Gradient-based Optimizer

05/20/2022
by Wei Zhang, et al.

In this work, to help efficiently escape stationary and saddle points, we propose, analyze, and generalize a stochastic strategy, applied as an operator on a first-order gradient descent algorithm, that improves target accuracy and reduces time consumption. Unlike existing algorithms, the proposed stochastic strategy does not require batching or sampling techniques, which enables efficient implementation and preserves the convergence rate of the underlying first-order optimizer, while providing a substantial improvement in target accuracy when optimizing the target functions. In short, the proposed strategy is generalized, applied to Adam, and validated on the decomposition of biomedical signals using Deep Matrix Fitting, in comparison with four peer optimizers. The validation results show that the proposed stochastic strategy can be easily generalized to other first-order optimizers and efficiently improves target accuracy.
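The abstract does not specify the form of the stochastic operator itself. Purely as an illustration of the general idea, the sketch below wraps a plain NumPy Adam step with a hypothetical perturbation operator that fires only when the gradient norm is near zero (i.e., near a stationary or saddle point). The function names adam_step and stochastic_operator, and the parameters sigma and tol, are assumptions for this sketch, not the definitions from the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def stochastic_operator(theta, grad, rng, sigma=1e-3, tol=1e-6):
    """Hypothetical stochastic operator: when the gradient is nearly zero
    (a stationary or saddle point), perturb the iterate with isotropic noise
    so the optimizer can escape. Illustrative stand-in only; the actual
    SADAM operator is defined in the paper."""
    if np.linalg.norm(grad) < tol:
        theta = theta + sigma * rng.standard_normal(theta.shape)
    return theta

# Toy usage on f(x, y) = x^2 - y^2, which has a saddle point at the origin.
rng = np.random.default_rng(0)
theta, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
for t in range(1, 1001):
    grad = np.array([2.0 * theta[0], -2.0 * theta[1]])
    theta, m, v = adam_step(theta, grad, m, v, t)
    theta = stochastic_operator(theta, grad, rng)
```

Note that the perturbation requires no extra batches or sampling of the objective, which is consistent with the batch-free property the abstract emphasizes; everything else about the Adam iteration is left untouched.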


Related research:
- Human Body Model Fitting by Learned Gradient Descent (08/19/2020): We propose a novel algorithm for the fitting of 3D human shape to images...
- Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime (06/22/2020): We analyze the convergence of the averaged stochastic gradient descent f...
- Battle royale optimizer with a new movement strategy (01/19/2022): Gamed-based is a new stochastic metaheuristics optimization category tha...
- Competitive Gradient Optimization (05/27/2022): We study the problem of convergence to a stationary point in zero-sum ga...
- Curvature Injected Adaptive Momentum Optimizer for Convolutional Neural Networks (09/26/2021): In this paper, we propose a new approach, hereafter referred as AdaInjec...
- Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method (09/12/2016): We develop and analyze a procedure for gradient-based optimization that ...
- Ranger21: a synergistic deep learning optimizer (06/25/2021): As optimizers are critical to the performances of neural networks, every...
