Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization

02/13/2020 ∙ by Ömer Deniz Akyıldız, et al.

We provide a nonasymptotic analysis of the convergence of stochastic gradient Hamiltonian Monte Carlo (SGHMC) to a target measure in Wasserstein-2 distance without assuming log-concavity. Making the dimension dependence explicit, we obtain a uniform convergence rate of order O(η^{1/4}), where η is the step-size. Our results shed light on the performance of SGHMC methods compared to their overdamped counterparts, e.g., stochastic gradient Langevin dynamics (SGLD). Furthermore, our results also imply that SGHMC, when viewed as a nonconvex optimizer, converges to a global minimum at the best known rates.
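The abstract does not spell out the exact SGHMC iteration being analyzed, so the following is only a minimal sketch of one standard Euler discretization of the underdamped (kinetic) Langevin dynamics with stochastic gradients; the step-size eta, friction gamma, inverse temperature beta, and the function names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sghmc(stoch_grad, theta0, eta=1e-3, gamma=1.0, beta=1.0, n_iters=10_000, rng=None):
    """Sketch of a generic SGHMC loop (illustrative parameters, not the paper's scheme).

    stoch_grad(theta) should return an unbiased noisy estimate of the gradient
    of the potential U(theta), e.g., a minibatch gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    v = np.zeros_like(theta)                          # momentum / velocity variable
    noise_scale = np.sqrt(2.0 * gamma * eta / beta)   # injected Gaussian noise level
    for _ in range(n_iters):
        theta = theta + eta * v                       # position update
        g = stoch_grad(theta)                         # noisy gradient of U
        v = (v - eta * gamma * v - eta * g
             + noise_scale * rng.standard_normal(theta.shape))  # momentum update
    return theta

# Illustrative usage on a nonconvex double-well potential U(x) = (x^2 - 1)^2,
# with artificial gradient noise standing in for minibatching.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: 4.0 * x * (x**2 - 1.0) + 0.1 * rng.standard_normal(x.shape)
    print(sghmc(noisy_grad, theta0=np.array([2.0]), beta=10.0, rng=rng))
```

For small eta and moderate beta, the iterates concentrate near the global minimizers of U, which is the optimization viewpoint the abstract refers to.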
