Related research:
- The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure
  There is a stark disparity between the step size schedules used in pract...
- On the adequacy of untuned warmup for adaptive optimization
  Adaptive optimization algorithms such as Adam (Kingma & Ba, 2014) are ...
- Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
  Learning rate schedule can significantly affect generalization performan...
- Disentangling Adaptive Gradient Methods from Learning Rates
  We investigate several confounding factors in the evaluation of optimiza...
- Learning an Adaptive Learning Rate Schedule
  The learning rate is one of the most important hyper-parameters for mode...
- The Two Regimes of Deep Network Training
  Learning rate schedule has a major impact on the performance of deep lea...
- A comparison of learning rate selection methods in generalized Bayesian inference
  Generalized Bayes posterior distributions are formed by putting a fracti...
Acceleration via Fractal Learning Rate Schedules
When balancing the practical tradeoffs of iterative methods for large-scale optimization, the learning rate schedule remains notoriously difficult to understand and expensive to tune. We demonstrate the presence of these subtleties even in the innocuous case when the objective is a convex quadratic. We reinterpret an iterative algorithm from the numerical analysis literature as what we call the Chebyshev learning rate schedule for accelerating vanilla gradient descent, and show that the problem of mitigating instability leads to a fractal ordering of step sizes. We provide some experiments and discussion to challenge current understandings of the "edge of stability" in deep learning: even in simple settings, provable acceleration can be obtained by making negative local progress on the objective.
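To make the construction described above concrete, here is a minimal sketch, in Python with NumPy, of the Chebyshev step sizes for vanilla gradient descent on a convex quadratic, together with a bit-reversal-style interleaving of those steps. The helper names (chebyshev_steps, bit_reversal_order, run_gd), the synthetic quadratic, and the choice of permutation are illustrative assumptions rather than the paper's exact fractal construction; the point is only that the cycle contains steps far larger than the classical 2/L threshold, so the order in which they are applied governs whether the intermediate iterates stay bounded in finite precision.

# Sketch only: Chebyshev step-size cycle for gradient descent on a convex
# quadratic, with a bit-reversal-style interleaving as an illustrative
# stand-in for the paper's fractal ordering.
import numpy as np


def chebyshev_steps(mu, L, n):
    # Step sizes 1/r_k, where r_k are the roots of the degree-n Chebyshev
    # polynomial rescaled to the eigenvalue interval [mu, L]. After a full
    # cycle of n steps (in some order), the error contracts at the
    # accelerated Chebyshev rate.
    k = np.arange(1, n + 1)
    roots = (L + mu) / 2 + (L - mu) / 2 * np.cos((2 * k - 1) * np.pi / (2 * n))
    return 1.0 / roots  # some of these exceed 2/L when L/mu is large


def bit_reversal_order(n):
    # Bit-reversal permutation of 0..n-1 (n a power of two): a self-similar
    # interleaving of small and large steps, used here only as an
    # illustrative stabilizing ordering.
    bits = max(n.bit_length() - 1, 1)
    return [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]


def run_gd(A, b, x0, step_sizes):
    # Vanilla gradient descent on f(x) = 0.5 x^T A x - b^T x with a fixed
    # cycle of step sizes.
    x = x0.copy()
    for eta in step_sizes:
        x = x - eta * (A @ x - b)
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, mu, L, n = 50, 1e-3, 1.0, 64
    # Synthetic quadratic whose Hessian spectrum lies in [mu, L].
    eigs = np.linspace(mu, L, d)
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    A = Q @ np.diag(eigs) @ Q.T
    b = rng.standard_normal(d)
    x_star = np.linalg.solve(A, b)
    x0 = np.zeros(d)

    steps = chebyshev_steps(mu, L, n)
    monotone = np.sort(steps)                   # small-to-large ordering
    interleaved = steps[bit_reversal_order(n)]  # fractal-style interleaving

    for name, sched in [("monotone", monotone), ("interleaved", interleaved)]:
        err = np.linalg.norm(run_gd(A, b, x0, sched) - x_star)
        print(f"{name:12s} final error after one cycle: {err:.3e}")

Running the script compares the final error after one cycle of n steps under the two orderings of the same step sizes; with a monotone ordering the intermediate iterates can grow enormously before the cycle completes, which is the kind of instability the fractal ordering is designed to mitigate.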