Concavifiability and convergence: necessary and sufficient conditions for gradient descent analysis

05/28/2019
by Thulasi Tholeti, et al.

Convergence of the gradient descent algorithm has been attracting renewed interest due to its utility in deep learning applications. Although multiple variants of gradient descent have been proposed, the assumption that the gradient of the objective is Lipschitz continuous remained an integral part of the analysis until recently. In this work, we analyse convergence by focusing on a property that we term concavifiability, instead of Lipschitz continuity of gradients. We show that concavifiability is a necessary and sufficient condition for the upper quadratic approximation, which is key in proving that the objective function decreases after every gradient descent update. We also show that any gradient Lipschitz function is concavifiable. We derive a constant, termed the concavifier, analogous to the gradient Lipschitz constant, which is indicative of the optimal step size. As an application, we demonstrate the utility of finding the concavifier in the convergence of gradient descent through an example inspired by neural networks. We derive bounds on the concavifier to obtain a fixed step size for a single hidden layer ReLU network.
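The role of the concavifier in fixing the step size can be illustrated numerically. The sketch below is not from the paper: it uses a toy smooth objective, assumes rho = 2 is a valid concavifier for that objective, and checks the upper quadratic approximation along gradient descent iterates with fixed step size 1/rho.

```python
import numpy as np

# Minimal sketch (hypothetical toy example, not the paper's construction).
# Concavifiability with concavifier rho means g(x) = (rho/2)*||x||^2 - f(x) is concave,
# which is equivalent to the upper quadratic approximation
#   f(y) <= f(x) + <grad f(x), y - x> + (rho/2) * ||y - x||^2,
# so gradient descent with step size 1/rho decreases f at every update.

def f(x):
    # toy smooth objective; its Hessian eigenvalues lie in [0, 2]
    return 0.5 * np.dot(x, x) + np.sin(x).sum()

def grad_f(x):
    return x + np.cos(x)

rho = 2.0  # assumed concavifier for this toy objective

def upper_quadratic_holds(x, y):
    lhs = f(y)
    rhs = f(x) + grad_f(x) @ (y - x) + 0.5 * rho * np.linalg.norm(y - x) ** 2
    return lhs <= rhs + 1e-12

rng = np.random.default_rng(0)
x = rng.normal(size=5)
for _ in range(100):
    y = x - (1.0 / rho) * grad_f(x)            # fixed step size 1/rho
    assert upper_quadratic_holds(x, y)          # descent-lemma style check
    assert f(y) <= f(x) + 1e-12                 # objective does not increase
    x = y
print("final objective:", f(x))
```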


Related research

02/20/2019 · LOSSGRAD: automatic learning rate in gradient descent
In this paper, we propose a simple, fast and easy to implement algorithm...

06/08/2022 · On Gradient Descent Convergence beyond the Edge of Stability
Gradient Descent (GD) is a powerful workhorse of modern machine learning...

12/13/2022 · Self-adaptive algorithms for quasiconvex programming and applications to machine learning
For solving a broad class of nonconvex programming problems on an unboun...

02/18/2018 · Convergence of Online Mirror Descent Algorithms
In this paper we consider online mirror descent (OMD) algorithms, a clas...

06/03/2021 · Robust Learning via Persistency of Excitation
Improving adversarial robustness of neural networks remains a major chal...

06/07/2023 · Achieving Consensus over Compact Submanifolds
We consider the consensus problem in a decentralized network, focusing o...

09/28/2020 · Learning Deep ReLU Networks Is Fixed-Parameter Tractable
We consider the problem of learning an unknown ReLU network with respect...
