Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks

08/18/2019
by Yanzhao Wu, et al.

Learning Rate (LR) is an important hyper-parameter to tune for effective training of deep neural networks (DNNs). Even for the baseline of a constant learning rate, choosing a good constant value for training a DNN is non-trivial. Dynamic learning rates adjust the LR value in multiple steps at different stages of the training process and can deliver high accuracy and fast convergence, but they are much harder to tune. In this paper, we present a comprehensive study of 13 learning rate functions and their associated LR policies by examining their range parameters, step parameters, and value update parameters. We propose a set of metrics for evaluating and selecting LR policies, including classification confidence, variance, cost, and robustness, and implement them in LRBench, an LR benchmarking system. LRBench can assist end-users and DNN developers in selecting good LR policies and avoiding bad ones when training their DNNs. We tested LRBench on Caffe, an open-source deep learning framework, to showcase the tuning and optimization of LR policies. Through extensive experiments, we attempt to demystify the tuning of LR policies by identifying good LR policies with effective LR value ranges and step sizes for LR update schedules.
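To make the parameter taxonomy concrete, below is a minimal Python sketch of three representative LR functions. This is not the LRBench API; the function names and default values are illustrative assumptions, and the formulas follow common published definitions (e.g., Caffe's fixed/step policies and the triangular cyclic LR).

    import math

    def fixed(k, base_lr=0.01):
        """Fixed LR: a single constant value at every iteration k."""
        return base_lr

    def step_decay(k, base_lr=0.01, gamma=0.1, step_size=10000):
        """Step decay: shrink the LR by a factor gamma every step_size
        iterations (the step parameter sets the update schedule)."""
        return base_lr * gamma ** (k // step_size)

    def triangular(k, min_lr=0.001, max_lr=0.01, step_size=2000):
        """Cyclic (triangular) LR: oscillate linearly between min_lr and
        max_lr (the range parameters), completing one full cycle every
        2 * step_size iterations."""
        cycle = math.floor(1 + k / (2 * step_size))
        x = abs(k / step_size - 2 * cycle + 1)
        return min_lr + (max_lr - min_lr) * max(0.0, 1.0 - x)

    if __name__ == "__main__":
        # Inspect each policy's LR value at a few training iterations.
        for k in (0, 1000, 2000, 5000, 10000, 20000):
            print(f"k={k:>6}  fixed={fixed(k):.4f}  "
                  f"step={step_decay(k):.5f}  tri={triangular(k):.4f}")

Each function maps a training iteration k to an LR value; a concrete LR policy is then a choice of one function together with its range (min_lr, max_lr), step (step_size), and value update (gamma) parameters.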


