A Probabilistically Motivated Learning Rate Adaptation for Stochastic Optimization

02/22/2021
by   Filip de Roos, et al.

Machine learning practitioners invest significant manual and computational resources in finding suitable learning rates for optimization algorithms. We provide a probabilistic motivation, in terms of Gaussian inference, for popular stochastic first-order methods. As an important special case, this framework recovers the Polyak step with a general metric. The inference allows us to relate the learning rate to a dimensionless quantity that can be automatically adapted during training by a control algorithm. The resulting meta-algorithm is shown to adapt learning rates in a robust manner across a large range of initial values when applied to deep learning benchmark problems.
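To make the special case concrete, here is a minimal sketch of the classical Polyak step in the Euclidean metric, eta = (f(x) - f*) / ||grad f(x)||^2, which needs the optimal value f* but no hand-tuned learning rate. The function names and the quadratic example are illustrative assumptions, not code from the paper, and the paper's general-metric version and probabilistic adaptation are not reproduced here.

```python
def polyak_step(x, f, grad_f, f_star=0.0):
    """One gradient step with the Polyak step size (Euclidean metric):
    eta = (f(x) - f_star) / ||grad_f(x)||^2."""
    g = grad_f(x)
    sq_norm = sum(gi * gi for gi in g) + 1e-12  # guard against a zero gradient
    eta = (f(x) - f_star) / sq_norm
    return [xi - eta * gi for xi, gi in zip(x, g)]

# Illustrative problem: minimize f(x) = 0.5 * ||x||^2, whose optimum f* is 0.
f = lambda x: 0.5 * sum(xi * xi for xi in x)
grad_f = lambda x: list(x)

x = [3.0, -4.0]
for _ in range(20):
    x = polyak_step(x, f, grad_f, f_star=0.0)
# For this quadratic, eta is exactly 1/2, so each coordinate halves per step.
```

On this quadratic the step size is constant and the iterate contracts geometrically toward the minimizer; in practice f* is unknown, which is one motivation for the adaptive scheme the abstract describes.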

Related research

10/20/2021
Stochastic Learning Rate Optimization in the Stochastic Approximation and Online Learning Settings
In this work, multiplicative stochasticity is applied to the learning ra...

02/20/2019
Active Probabilistic Inference on Matrices for Pre-Conditioning in Stochastic Optimization
Pre-conditioning is a well-known concept that can significantly improve ...

10/18/2019
Robust Learning Rate Selection for Stochastic Optimization via Splitting Diagnostic
This paper proposes SplitSGD, a new stochastic optimization algorithm wi...

05/22/2017
Training Deep Networks without Learning Rates Through Coin Betting
Deep learning methods achieve state-of-the-art performance in many appli...

03/29/2018
A Stochastic Large-scale Machine Learning Algorithm for Distributed Features and Observations
As the size of modern data sets exceeds the disk and memory capacities o...

08/06/2023
Learning-Rate-Free Learning: Dissecting D-Adaptation and Probabilistic Line Search
This paper explores two recent methods for learning rate optimisation in...

02/28/2022
Amortized Proximal Optimization
We propose a framework for online meta-optimization of parameters that g...
