AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly

05/22/2021
by Yuchen Jin, et al.

The learning rate (LR) schedule is one of the most important hyperparameters to tune carefully when training DNNs. However, it is also one of the least automated parts of machine learning systems, and tuning it usually costs significant manual effort and compute. Although pre-defined LR schedules and optimizers with adaptive LRs exist, they introduce new hyperparameters that must be tuned separately for each task and dataset. In this paper, we consider the question: can we automatically tune the LR over the course of training, without human involvement? We propose an efficient method, AutoLRS, which automatically optimizes the LR for each training stage by modeling training dynamics. AutoLRS aims to find, for every stage of τ steps, the LR that minimizes the resulting validation loss. We solve this black-box optimization on the fly by Bayesian optimization (BO). However, collecting training instances for BO would require evaluating each LR queried by BO's acquisition function for τ steps, which is prohibitively expensive in practice. Instead, we apply each candidate LR for only τ' ≪ τ steps and train an exponential model to predict the validation loss after τ steps. This mutual-training process between BO and the loss-prediction model allows us to limit the training steps invested in the BO search. We demonstrate the advantages and generality of AutoLRS through extensive experiments training DNNs on tasks from diverse domains with different optimizers. The LR schedules auto-generated by AutoLRS yield speedups of 1.22×, 1.43×, and 1.5× when training ResNet-50, Transformer, and BERT, respectively, compared to the LR schedules in their original papers, and an average speedup of 1.31× over state-of-the-art heavily-tuned LR schedules.
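To make the two moving parts concrete, here is a minimal sketch (not the authors' implementation) of the core loop the abstract describes: a candidate LR is run for only τ' steps, an exponential curve fitted to the short validation-loss trace forecasts the loss at τ, and that forecast is fed back into a BO step that proposes the next LR. The function names, the Matérn-kernel/expected-improvement choices, and the numbers in the example are illustrative assumptions; the paper's actual BO and loss-model details may differ.

```python
# Sketch of the AutoLRS idea under stated assumptions (hypothetical names).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def exp_model(t, a, b, c):
    # Exponential decay model of the validation loss over training steps.
    return a * np.exp(-b * t) + c

def forecast_loss_at_tau(steps, val_losses, tau):
    """Fit the exponential model on tau' observed steps, extrapolate to tau."""
    p0 = (val_losses[0] - val_losses[-1], 1e-3, val_losses[-1])
    (a, b, c), _ = curve_fit(exp_model, steps, val_losses, p0=p0, maxfev=10000)
    return exp_model(tau, a, b, c)

def propose_next_lr(lrs_tried, losses_forecast, lr_bounds, n_grid=256):
    """One BO step: GP posterior over log10(LR), expected-improvement acquisition."""
    X = np.log10(np.asarray(lrs_tried)).reshape(-1, 1)
    y = np.asarray(losses_forecast)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    grid = np.linspace(np.log10(lr_bounds[0]), np.log10(lr_bounds[1]),
                       n_grid).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    return 10 ** grid[np.argmax(ei), 0]

# Example with synthetic numbers: forecast from tau' = 50 short steps,
# then ask BO for the next candidate LR to evaluate.
steps = np.arange(1, 51)
losses = 2.0 * np.exp(-0.01 * steps) + 0.5
pred = forecast_loss_at_tau(steps, losses, tau=1000)
next_lr = propose_next_lr([1e-4, 1e-3, 1e-2], [0.9, 0.6, 0.8],
                          lr_bounds=(1e-5, 1e-1))
```

Searching in log10(LR) space is an assumption of this sketch, but a natural one, since useful LRs span several orders of magnitude; once BO converges for a stage, the best LR would be committed for the full τ steps before the next stage's search begins.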


