Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs

06/12/2020 ∙ by Xunpeng Huang, et al.

Adaptive gradient methods have attracted much attention in the machine learning community due to their high efficiency. However, their acceleration effect in practice, especially in neural network training, is hard to analyze theoretically. The large gap between theoretical convergence results and practical performance prevents further understanding of existing optimizers and the development of more advanced optimization methods. In this paper, we provide a novel analysis of adaptive gradient methods under an additional mild assumption, and revise AdaGrad to match a better provable convergence rate. To find an ϵ-approximate first-order stationary point of a non-convex objective, we prove that the revised method with random shuffling achieves a Õ(T^{-1/2}) convergence rate, an improvement by factors of Õ(T^{-1/4}) and Õ(T^{-1/6}) over existing adaptive gradient methods and random-shuffling SGD, respectively. To the best of our knowledge, this is the first demonstration that adaptive gradient methods can deterministically be faster than SGD after finitely many epochs. Furthermore, we conduct comprehensive experiments to validate the additional mild assumption and the acceleration effect brought by second moments and random shuffling.
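
The abstract combines two ingredients: AdaGrad-style second-moment accumulation (a coordinate-wise adaptive step size built from the running sum of squared gradients) and random shuffling, i.e. visiting each of the n component functions exactly once per epoch in a fresh random order rather than sampling i.i.d. An ϵ-approximate first-order stationary point is a point x with ‖∇f(x)‖ ≤ ϵ. As a point of reference only, here is a minimal sketch of vanilla AdaGrad with per-epoch shuffling; it is not the paper's revised algorithm, and the names `grad_fn`, `lr`, `eps`, and the least-squares usage at the bottom are illustrative assumptions.

```python
import numpy as np

def adagrad_random_shuffling(grad_fn, x0, n_samples, lr=0.1, eps=1e-8, epochs=10):
    """Sketch: AdaGrad where each epoch visits the n_samples component
    functions in a freshly shuffled order (without-replacement sampling),
    instead of the i.i.d. sampling used in standard SGD analyses."""
    x = np.asarray(x0, dtype=float)
    accum = np.zeros_like(x)  # running sum of squared gradients (second moments)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n_samples):       # random shuffling: each sample once per epoch
            g = grad_fn(x, i)                      # gradient of the i-th component function
            accum += g * g
            x -= lr * g / (np.sqrt(accum) + eps)   # coordinate-wise adaptive step size
    return x

# Illustrative usage on least squares: f_i(x) = 0.5 * (a_i . x - b_i)^2,
# whose per-sample gradient is (a_i . x - b_i) * a_i.
A = np.random.default_rng(1).normal(size=(100, 5))
b = A @ np.ones(5)
x_hat = adagrad_random_shuffling(lambda x, i: (A[i] @ x - b[i]) * A[i],
                                 np.zeros(5), n_samples=100, epochs=50)
```

The per-epoch permutation is the only difference from the textbook stochastic AdaGrad loop; the paper's contribution concerns the provable convergence rate of its revised variant under this sampling scheme, which the sketch above does not reproduce.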


