Online Learning Rate Adaptation with Hypergradient Descent

03/14/2017
by Atilim Gunes Baydin, et al.

We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We analyze the effectiveness of the method by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it improves upon these commonly used algorithms on a range of optimization problems; in particular, on the kinds of objective functions that arise frequently in deep neural network training. Our method works by dynamically updating the learning rate during optimization, using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" requires little additional computation, needs only one extra copy of the original gradient to be stored in memory, and relies on nothing more than what reverse-mode automatic differentiation already provides.
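To make the idea concrete, below is a minimal NumPy sketch of the method applied to plain stochastic gradient descent. For the update θ_t = θ_{t-1} − α ∇f(θ_{t-1}), the derivative of the loss with respect to α involves only the current and previous gradients, which is why one extra stored gradient copy suffices. The function name `sgd_hd` and the parameters `grad_fn`, `alpha`, `beta`, and `steps` are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def sgd_hd(grad_fn, theta, alpha=0.01, beta=1e-4, steps=1000):
    """Sketch of SGD with hypergradient-based learning rate adaptation.

    grad_fn : callable returning the gradient of the objective at theta
    alpha   : initial learning rate, adapted online
    beta    : step size for the learning-rate update (hypergradient step)
    """
    prev_grad = np.zeros_like(theta)  # the single extra stored gradient copy
    for _ in range(steps):
        grad = grad_fn(theta)
        # The hypergradient df(theta_t)/dalpha = -grad_t . grad_{t-1}, so
        # descending on it increases alpha when successive gradients agree
        # and decreases it when they point in conflicting directions.
        alpha += beta * np.dot(grad, prev_grad)
        theta = theta - alpha * grad
        prev_grad = grad
    return theta, alpha

# Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is x itself.
theta, alpha = sgd_hd(lambda x: x, theta=np.ones(10), alpha=0.001)
```

The same scheme applies to any optimizer whose update is differentiable in the learning rate; the paper derives the corresponding rules for SGD with Nesterov momentum and for Adam.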
