AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop

11/30/2021
by Yunfei Teng, et al.

Modern deep learning (DL) architectures are trained using variants of the SGD algorithm run with a manually defined learning rate schedule, i.e., the learning rate is dropped at pre-defined epochs, typically when the training loss is expected to saturate. In this paper we develop an algorithm that performs the learning rate drop automatically. The proposed method, which we refer to as AutoDrop, is motivated by the observation that the angular velocity of the model parameters, i.e., the velocity of the changes of the convergence direction, for a fixed learning rate initially increases rapidly and then progresses towards soft saturation. At saturation the optimizer slows down, so the saturation of the angular velocity is a good indicator for dropping the learning rate. After the drop, the angular velocity "resets" and follows the previously described pattern: it increases again until saturation. We show that our method improves over SOTA training approaches: it accelerates the training of DL models and leads to better generalization. We also show that our method does not require any extra hyperparameter tuning. AutoDrop is furthermore extremely simple to implement and computationally cheap. Finally, we develop a theoretical framework for analyzing our algorithm and provide convergence guarantees.
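To make the mechanism concrete, below is a minimal Python/NumPy sketch of an angular-velocity-based learning rate drop. It is an illustration under stated assumptions, not the paper's implementation: the class name AutoDropScheduler and the parameters window, drop_factor, and tol are hypothetical. The logic follows the abstract: estimate the direction of parameter change over a window of optimizer steps, measure the angular velocity as the angle between successive directions, and drop the learning rate once that velocity saturates.

import numpy as np

class AutoDropScheduler:
    # Illustrative sketch of an angular-velocity-based learning rate drop.
    # Hypothetical interface; the paper's exact detection rule may differ.

    def __init__(self, lr, drop_factor=0.1, window=50, tol=1e-3):
        self.lr = lr
        self.drop_factor = drop_factor  # multiplicative drop, e.g. 10x smaller
        self.window = window            # optimizer steps between direction snapshots
        self.tol = tol                  # saturation tolerance on the angular velocity
        self._prev_params = None
        self._prev_dir = None
        self._prev_omega = None
        self._step = 0

    def step(self, params):
        # Call once per optimizer step with the flattened parameter vector;
        # returns the (possibly dropped) learning rate.
        self._step += 1
        if self._step % self.window != 0:
            return self.lr
        params = np.asarray(params, dtype=np.float64)
        if self._prev_params is not None:
            delta = params - self._prev_params
            norm = np.linalg.norm(delta)
            if norm > 0.0:
                direction = delta / norm
                if self._prev_dir is not None:
                    # Angle between successive convergence directions, per step.
                    cos = np.clip(direction @ self._prev_dir, -1.0, 1.0)
                    omega = np.arccos(cos) / self.window
                    if (self._prev_omega is not None
                            and abs(omega - self._prev_omega) < self.tol):
                        # Angular velocity has saturated: drop the learning rate.
                        self.lr *= self.drop_factor
                        self._prev_omega = None  # the angular velocity "resets"
                    else:
                        self._prev_omega = omega
                self._prev_dir = direction
        self._prev_params = params.copy()
        return self.lr

A training loop would call step(flat_params) after each optimizer update and write the returned value back into the optimizer's learning rate. The window-based snapshots keep the overhead to one dot product and two norms every window steps, which is consistent with the abstract's claim that the method is simple and computationally cheap.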

Related research

08/25/2022 · Learning Rate Perturbation: A Generic Plugin of Learning Rate Schedule towards Flatter Local Minima
Learning rate is one of the most important hyper-parameters that has a s...

06/13/2019 · Training Neural Networks for and by Interpolation
The majority of modern deep learning models are able to interpolate the ...

05/30/2021 · LRTuner: A Learning Rate Tuner for Deep Neural Networks
One very important hyperparameter for training deep neural networks is t...

10/16/2019 · An Exponential Learning Rate Schedule for Deep Learning
Intriguing empirical evidence exists that deep learning can work well wi...

06/13/2018 · Boosted Training of Convolutional Neural Networks for Multi-Class Segmentation
Training deep neural networks on large and sparse datasets is still chal...

08/21/2023 · We Don't Need No Adam, All We Need Is EVE: On The Variance of Dual Learning Rate And Beyond
In the rapidly advancing field of deep learning, optimising deep neural ...

02/24/2020 · The Two Regimes of Deep Network Training
Learning rate schedule has a major impact on the performance of deep lea...
