On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization

08/16/2018
by Dongruo Zhou, et al.

Adaptive gradient methods are workhorses in deep learning. However, the convergence guarantees of adaptive gradient methods for nonconvex optimization have not been sufficiently studied. In this paper, we provide a sharp analysis of a recently proposed adaptive gradient method, namely the partially adaptive momentum estimation method (Padam) (Chen and Gu, 2018), which admits many existing adaptive gradient methods, such as AdaGrad, RMSProp and AMSGrad, as special cases. Our analysis shows that, for smooth nonconvex functions, Padam converges to a first-order stationary point at the rate of $O\big(\big(\sum_{i=1}^{d}\|g_{1:T,i}\|_2\big)^{1/2}/T^{3/4} + d/T\big)$, where $T$ is the number of iterations, $d$ is the dimension, $g_1,\ldots,g_T$ are the stochastic gradients, and $g_{1:T,i} = [g_{1,i}, g_{2,i}, \ldots, g_{T,i}]^\top$. Our theoretical result also suggests that, in order to achieve a faster convergence rate, it is necessary to use Padam instead of AMSGrad. This is well aligned with the empirical results of deep learning reported in Chen and Gu (2018).
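For readers unfamiliar with Padam, below is a minimal sketch of one partially adaptive update in the style described by Chen and Gu (2018), using standard exponential-moving-average notation; the hyperparameter names and default values are illustrative assumptions, not taken from the paper. The partial adaptivity parameter p interpolates between AMSGrad (p = 1/2) and SGD with momentum (p = 0).

```python
import numpy as np

def padam_step(theta, grad, m, v, v_hat, lr=0.1,
               beta1=0.9, beta2=0.999, p=0.125, eps=1e-8):
    """One illustrative Padam update (sketch, not the authors' code).

    p in (0, 1/2] is the partial adaptivity parameter:
    p = 1/2 recovers AMSGrad, smaller p behaves closer to
    SGD with momentum.
    """
    m = beta1 * m + (1 - beta1) * grad            # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    v_hat = np.maximum(v_hat, v)                  # AMSGrad-style running maximum
    theta = theta - lr * m / (v_hat + eps) ** p   # partially adaptive step size
    return theta, m, v, v_hat
```

In this sketch, setting the exponent p strictly below 1/2 damps the per-coordinate adaptivity of the step size, which is the mechanism the analysis ties to the faster convergence rate compared with AMSGrad.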
