Adaptive Learning Rate and Momentum for Training Deep Neural Networks

06/22/2021
by   Zhiyong Hao, et al.

Recent progress in deep learning relies heavily on the quality and efficiency of training algorithms. In this paper, we develop a fast training method motivated by the nonlinear Conjugate Gradient (CG) framework. We propose the Conjugate Gradient with Quadratic line-search (CGQ) method. On the one hand, a quadratic line-search determines the step size according to the current loss landscape. On the other hand, the momentum factor is dynamically updated by computing the conjugate gradient parameter (like Polak-Ribiere). Theoretical results ensuring the convergence of our method in strongly convex settings are developed, and experiments on image classification datasets show that our method yields faster convergence than other local solvers and has better generalization capability (test set accuracy). One major advantage of the proposed method is that tedious hand tuning of hyperparameters like the learning rate and momentum is avoided.
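The abstract's two ingredients can be illustrated with a hedged sketch: a quadratic line search that fits a 1-D parabola to the loss along the search direction, and a Polak-Ribiere parameter that plays the role of an adaptive momentum factor. This is a toy illustration on a strongly convex quadratic, not the paper's implementation; the function names (`quadratic_line_search`, `polak_ribiere`) and the three-point fitting scheme are assumptions for demonstration.

```python
import numpy as np

# Toy strongly convex loss f(w) = 0.5 * w^T A w - b^T w, whose exact
# minimizer is w* = A^{-1} b. Illustrative only; not the paper's setup.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])

def loss(w):
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

def quadratic_line_search(w, d, eps=1e-2):
    """Fit a quadratic q(t) to f(w + t*d) through t = 0, eps, 2*eps and
    return the minimizing step size (an assumed, simple fitting scheme)."""
    f0, f1, f2 = loss(w), loss(w + eps * d), loss(w + 2 * eps * d)
    c = (f2 - 2 * f1 + f0) / eps**2        # curvature of q along d
    s = (f1 - f0) / eps - 0.5 * c * eps    # slope of q at t = 0
    if c <= 0:                             # non-convex slice: fall back
        return eps
    return -s / c                          # minimizer of q(t)

def polak_ribiere(g_new, g_old):
    """Polak-Ribiere conjugate-gradient parameter, acting as momentum."""
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)
    return max(beta, 0.0)                  # PR+ restart: never negative

w = np.zeros(2)
g = grad(w)
d = -g
for _ in range(20):
    t = quadratic_line_search(w, d)        # adaptive step size
    w = w + t * d
    g_new = grad(w)
    if g_new @ g_new < 1e-20:              # converged
        break
    d = -g_new + polak_ribiere(g_new, g) * d  # adaptive momentum
    g = g_new

w_star = np.linalg.solve(A, b)
print(np.allclose(w, w_star, atol=1e-6))
```

Because the toy loss is exactly quadratic, the three-point fit recovers the exact 1-D minimizer, so neither a learning rate nor a momentum coefficient has to be hand tuned, which is the property the abstract highlights.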


Related research

- Adaptive Gradient Method with Resilience and Momentum (10/21/2020)
  Several variants of stochastic gradient descent (SGD) have been proposed...
- AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks (03/01/2023)
  Sharpness aware minimization (SAM) optimizer has been extensively explor...
- Statistical Adaptive Stochastic Gradient Methods (02/25/2020)
  We propose a statistical adaptive procedure called SALSA for automatical...
- Non-Convergence and Limit Cycles in the Adam optimizer (10/05/2022)
  One of the most popular training algorithms for deep neural networks is ...
- We Don't Need No Adam, All We Need Is EVE: On The Variance of Dual Learning Rate And Beyond (08/21/2023)
  In the rapidly advancing field of deep learning, optimising deep neural ...
- Symbolic Discovery of Optimization Algorithms (02/13/2023)
  We present a method to formulate algorithm discovery as program search, ...
- Fast Line Search for Multi-Task Learning (10/02/2021)
  Multi-task learning is a powerful method for solving several tasks joint...
