EAdam Optimizer: How ε Impact Adam

11/04/2020
by Wei Yuan, et al.

Many adaptive optimization methods have been proposed and used in deep learning, among which Adam is regarded as the default algorithm and is widely used in many deep learning frameworks. Recently, many variants of Adam, such as AdaBound, RAdam and AdaBelief, have been proposed and shown to outperform Adam. However, these variants mainly focus on changing the stepsize by modifying the gradient or its square. Motivated by the fact that suitable damping is important for the success of powerful second-order optimizers, we discuss the impact of the constant ϵ in Adam in this paper. Surprisingly, we can obtain better performance than Adam simply by changing the position of ϵ. Based on this finding, we propose a new variant of Adam called EAdam, which requires no extra hyper-parameters or computational cost. We also discuss the relationships and differences between our method and Adam. Finally, we conduct extensive experiments on various popular tasks and models. Experimental results show that our method brings significant improvements over Adam. Our code is available at https://github.com/yuanwei2019/EAdam-optimizer.
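
The abstract's central idea, moving the constant ϵ, can be made concrete with a short sketch. The snippet below contrasts a standard Adam step, where ϵ is added to the denominator after the square root, with an EAdam-style step in which ϵ is folded into the second-moment accumulator before the square root. This is one plausible reading of "changing the position of ϵ", not the authors' reference implementation (see their repository for that); all hyper-parameter values are the usual Adam defaults and are assumptions, not numbers from the paper.

```python
# Minimal sketch (assumed form, not the authors' reference implementation):
# a single Adam-style update written two ways to show where eps can enter.
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Standard Adam: eps is added outside the square root, in the denominator."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def eadam_like_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """EAdam-style step (assumed placement): eps is added to the second-moment
    accumulator at every step, i.e. inside the square root, so it accumulates
    through the exponential moving average."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g + eps   # eps moved inside the accumulator
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / np.sqrt(v_hat)  # no extra eps in the denominator
    return theta, m, v
```

Under this placement the ϵ terms compound through the moving average: their stationary contribution inside the square root approaches ϵ/(1-β2) (about 1000ϵ for β2 = 0.999), which is one way such a variant can behave differently from adding ϵ a single time outside the root, even without new hyper-parameters.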

Related research:

03/06/2023
Judging Adam: Studying the Performance of Optimization Methods on ML4SE Tasks
Solving a problem with a deep learning model requires researchers to opt...

02/26/2019
Adaptive Gradient Methods with Dynamic Bound of Learning Rate
Adaptive optimization methods such as AdaGrad, RMSprop and Adam have bee...

11/16/2020
Mixing ADAM and SGD: a Combined Optimization Method
Optimization methods (optimizers) get special attention for the efficien...

07/03/2020
Descending through a Crowded Valley – Benchmarking Deep Learning Optimizers
Choosing the optimizer is among the most crucial decisions of deep learn...

01/22/2021
Gravity Optimizer: a Kinematic Approach on Optimization in Deep Learning
We introduce Gravity, another algorithm for gradient-based optimization....

09/05/2023
AdaPlus: Integrating Nesterov Momentum and Precise Stepsize Adjustment on AdamW Basis
This paper proposes an efficient optimizer called AdaPlus which integrat...

04/17/2023
Bridging Discrete and Backpropagation: Straight-Through and Beyond
Backpropagation, the cornerstone of deep learning, is limited to computi...
