Convergence Analysis of Gradient Descent Algorithms with Proportional Updates

01/09/2018
by Igor Gitman, et al.

The rise of deep learning in recent years has brought with it increasingly clever optimization methods for dealing with complex, non-linear loss functions. These methods are often designed with convex optimization in mind, but have been shown to work well in practice even for the highly non-convex optimization associated with neural networks. However, one significant drawback of these methods when applied to deep learning is that the magnitude of the update step is sometimes disproportionate to the magnitude of the weights (much smaller or larger), leading to training instabilities such as vanishing and exploding gradients. One idea for combating this issue is gradient descent with proportional updates, introduced in 2017 independently by You et al. (the Layer-wise Adaptive Rate Scaling (LARS) algorithm) and by Abu-El-Haija (the PercentDelta algorithm). The basic idea of both algorithms is to make each step of gradient descent proportional to the current weight norm and independent of the gradient magnitude. It is common in the context of new optimization methods to prove convergence or derive regret bounds under the assumptions of Lipschitz continuity and convexity. However, even though LARS and PercentDelta have been shown to work well in practice, there is no theoretical analysis of the convergence properties of these algorithms. It is therefore not clear whether the idea of gradient descent with proportional updates is applied in an optimal way, or whether it could be improved, for example by using a different norm or a specific learning-rate schedule. Moreover, it is not clear whether these algorithms can be extended to problems other than neural networks. We attempt to answer these questions by providing a theoretical analysis of gradient descent with proportional updates and verifying this analysis with empirical examples.
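To make the "proportional update" idea concrete, here is a minimal NumPy sketch of a LARS-style step for a single weight vector under the Euclidean norm. The function name, learning rate, and epsilon are illustrative assumptions, not the authors' code: the raw gradient is rescaled by ||w||/||g||, so the resulting step has length lr * ||w|| regardless of how large or small the gradient is.

import numpy as np

def proportional_update(w, grad, lr=0.01, eps=1e-8):
    """LARS-style proportional step (illustrative sketch, not the authors' code).

    The raw gradient is rescaled by ||w|| / ||grad||, so the update has length
    lr * ||w||: proportional to the current weight norm and independent of the
    gradient magnitude.
    """
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(grad)
    trust_ratio = w_norm / (g_norm + eps)  # eps guards against a zero gradient
    return w - lr * trust_ratio * grad

# Example: the step length stays at lr * ||w|| even for a huge gradient.
w = np.array([3.0, 4.0])        # ||w|| = 5
g = np.array([1000.0, 0.0])     # very large gradient
w_new = proportional_update(w, g, lr=0.01)
print(np.linalg.norm(w_new - w))  # ~0.05 == 0.01 * 5

In a neural network this rescaling would typically be applied per layer (or per weight tensor), which is where the "layer-wise" in LARS comes from.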
