Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate in Gradient Descent

04/12/2021
by   Guangzeng Xie, et al.

We propose Meta-Regularization, a novel approach for adaptively choosing the learning rate in first-order gradient descent methods. Our approach modifies the objective function by adding a regularization term on the learning rate, which casts the joint update of the parameters and the learning rate as a max-min problem. Any choice of regularization term then yields a practical algorithm. When Meta-Regularization uses a φ-divergence as the regularizer, the resulting algorithms enjoy theoretical convergence guarantees comparable to those of other first-order gradient-based algorithms. Furthermore, we theoretically prove that some well-designed regularizers can improve convergence when the objective function is strongly convex. Numerical experiments on benchmark problems demonstrate the effectiveness of the algorithms derived from several common φ-divergences, in both full-batch and online learning settings.
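The abstract leaves the max-min formulation implicit. As a rough illustration of the general idea, choosing each step's learning rate by trading one-step progress against a φ-divergence penalty that anchors the rate to a reference value, here is a minimal Python sketch. The KL-type φ, the grid search, and the parameters eta0 and lam are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def phi_kl(t):
    """A KL-type phi-divergence, phi(t) = t*log(t) - t + 1 (one common choice)."""
    return t * np.log(t) - t + 1.0

def regularized_step_size(f, x, g, eta0=0.1, lam=1.0, grid=None):
    """Pick a per-step learning rate by trading off one-step descent against
    a phi-divergence penalty keeping eta near a reference rate eta0.
    Toy grid search for illustration, not the paper's max-min update."""
    if grid is None:
        grid = eta0 * np.logspace(-2, 2, 81)  # candidate rates around eta0
    # Penalized one-step objective: loss after the step plus regularizer.
    scores = [f(x - eta * g) + lam * phi_kl(eta / eta0) for eta in grid]
    return grid[int(np.argmin(scores))]

def gradient_descent(f, grad, x0, steps=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        eta = regularized_step_size(f, x, g)  # adaptive, regularized rate
        x = x - eta * g
    return x

# Example: a simple strongly convex quadratic.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(gradient_descent(f, grad, x0=[5.0, 5.0]))
```

In the paper's setting the inner learning-rate problem is presumably solved analytically rather than by grid search; the sketch only shows how a regularizer on the learning rate induces an adaptive schedule.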
