Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate in Gradient Descent

04/12/2021
by Guangzeng Xie et al.

We propose Meta-Regularization, a novel approach for adaptively choosing the learning rate in first-order gradient descent methods. Our approach modifies the objective function by adding a regularization term on the learning rate, and casts the joint updating process of the parameters and the learning rate as a max-min problem. Given any valid regularization term, our approach facilitates the generation of practical algorithms. When Meta-Regularization takes the φ-divergence as the regularizer, the resulting algorithms enjoy theoretical convergence guarantees comparable to those of other first-order gradient-based algorithms. Furthermore, we theoretically prove that well-designed regularizers can improve convergence when the objective function is strongly convex. Numerical experiments on benchmark problems demonstrate the effectiveness of the algorithms derived from common φ-divergences in both full-batch and online learning settings.
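
To make the max-min update concrete, below is a minimal sketch of a gradient-descent loop in which each step's learning rate is obtained by solving the inner maximization in closed form. The quadratic regularizer r(η) = (β/2)η² and the resulting step size η_t = ‖g_t‖²/β are illustrative assumptions for this sketch, not the paper's φ-divergence construction, which would take their place in the actual algorithms.

```python
import numpy as np

def meta_regularized_gd(grad, x0, beta=10.0, steps=100):
    """Gradient descent with a per-step learning rate chosen by an inner
    maximization over eta, in the spirit of a Meta-Regularization-style
    max-min update. Illustrative sketch only: the quadratic regularizer
    r(eta) = (beta / 2) * eta**2 is an assumption, not the paper's choice.

    Linearizing f(x - eta * g) around x gives the inner problem
        max_eta  eta * ||g||^2 - r(eta),
    whose solution under the quadratic r is  eta* = ||g||^2 / beta.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        eta = (g @ g) / beta  # closed-form argmax of eta*||g||^2 - (beta/2)*eta**2
        x = x - eta * g       # standard first-order update with the adapted eta
    return x

# Usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x itself.
x_star = meta_regularized_gd(lambda x: x, x0=np.ones(3), beta=10.0)
print(x_star)  # approaches the zero vector
```

Here β controls how strongly large learning rates are penalized, so larger β yields smaller steps; a φ-divergence regularizer plays the analogous role in the adaptive schemes the abstract describes.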

Related research

01/27/2018 · Gradient descent revisited via an adaptive online learning rate
Any gradient descent optimization requires to choose a learning rate. Wi...

08/17/2020 · Adaptive Multi-level Hyper-gradient Descent
Adaptive learning rates can lead to faster convergence and better final ...

09/04/2021 · On Faster Convergence of Scaled Sign Gradient Descent
Communication has been seen as a significant bottleneck in industrial ap...

09/14/2020 · A Qualitative Study of the Dynamic Behavior of Adaptive Gradient Algorithms
The dynamic behavior of RMSprop and Adam algorithms is studied through a...

02/14/2020 · Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
This article suggests that deterministic Gradient Descent, which does no...

07/02/2016 · A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning
One of the main obstacles to broad application of reinforcement learning...

02/28/2022 · Amortized Proximal Optimization
We propose a framework for online meta-optimization of parameters that g...