
LaProp: a Better Way to Combine Momentum with Adaptive Gradient

02/12/2020
by Liu Ziyin, et al.

Identifying a divergence problem in Adam, we propose a new optimizer, LaProp, which belongs to the family of adaptive gradient descent methods. This method allows for greater flexibility in choosing its hyperparameters, reduces the effort of fine-tuning, and permits straightforward interpolation between signed gradient methods and adaptive gradient methods. We bound the regret of LaProp on a convex problem and show that our bound differs from that of previous methods by a key factor, which demonstrates its advantage. We experimentally show that LaProp outperforms previous methods on a toy task with noisy gradients, optimization of extremely deep fully-connected networks, neural art style transfer, natural language processing using transformers, and reinforcement learning with deep Q-networks. The performance improvement of LaProp is consistent, and sometimes dramatic and qualitative.
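The abstract does not spell out the update rule, but the core idea of the paper is to normalize the gradient by the adaptive second-moment estimate first and accumulate momentum over that normalized gradient, whereas Adam accumulates momentum over the raw gradient and divides afterwards. The sketch below illustrates that kind of update in plain NumPy; it is not the authors' implementation, the function name, bias-correction details, and default hyperparameters are simplifying assumptions of mine, and the official repository listed at the bottom of this page is the reference implementation.

```python
import numpy as np

def laprop_update(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-15):
    """One LaProp-style step (illustrative sketch, not the authors' code).

    Key difference from Adam: the gradient is divided by the adaptive
    second-moment estimate *before* momentum is accumulated. As beta2 -> 0
    the normalized gradient approaches sign(grad), which is the
    interpolation towards signed-gradient methods mentioned in the abstract.
    """
    v = beta2 * v + (1.0 - beta2) * grad ** 2                       # EMA of squared gradients
    v_hat = v / (1.0 - beta2 ** t)                                  # bias correction (simplified)
    m = beta1 * m + (1.0 - beta1) * grad / (np.sqrt(v_hat) + eps)   # momentum of the normalized gradient
    m_hat = m / (1.0 - beta1 ** t)                                  # bias correction (simplified)
    return theta - lr * m_hat, m, v

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.ones(3)
m, v = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 501):
    x, m, v = laprop_update(x, 2.0 * x, m, v, t, lr=0.05)
print(x)  # ends up near the origin, within a neighborhood set by the learning rate
```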



Code Repositories

LaProp-Optimizer

Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient"

