Exploring the Optimized Value of Each Hyperparameter in Various Gradient Descent Algorithms

12/23/2022
by Abel C. H. Chen, et al.

In recent years, various gradient descent algorithms, including plain gradient descent, gradient descent with momentum, adaptive gradient (AdaGrad), root-mean-square propagation (RMSProp), and adaptive moment estimation (Adam), have been applied to the parameter optimization of deep learning models to achieve higher accuracy or lower error. These optimization algorithms require setting the values of several hyperparameters, including the learning rate, momentum coefficients, etc. Furthermore, the convergence speed and solution accuracy may be influenced by these hyperparameter values. Therefore, this study proposes an analytical framework that uses mathematical models to analyze the mean error of each objective function under the various gradient descent algorithms. The suitable value of each hyperparameter can then be determined by minimizing the mean error. Principles for setting hyperparameter values are generalized from the analysis results for model optimization. The experimental results show that the proposed method achieves faster convergence and lower errors.
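To illustrate where the hyperparameters mentioned in the abstract enter the update rules, the following is a minimal sketch of the standard Adam update (not the paper's analytical framework): the learning rate lr, the momentum coefficients beta1 and beta2, and the numerical-stability constant eps are the kinds of hyperparameters whose values the study aims to determine.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step; lr, beta1, beta2, and eps are the hyperparameters."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (RMSProp-style) estimate
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2, whose gradient is 2x.
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 201):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # converges toward 0
```

In this sketch, different choices of lr, beta1, and beta2 change both the convergence speed and the final error of the toy problem, which is the trade-off the paper's mean-error analysis is designed to quantify.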


