Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning in the Big Data Regime

03/25/2019
by Huy N. Chau et al.

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) is a momentum version of stochastic gradient descent with properly injected Gaussian noise, used to find a global minimum. In this paper, a non-asymptotic convergence analysis of SGHMC is given in the context of non-convex optimization, where subsampling techniques are used over an i.i.d. dataset for gradient updates. Our results complement those of [RRT17] and improve on those of [GGZ18].
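The dynamics described above (momentum SGD with injected Gaussian noise) can be sketched as follows. This is a minimal illustration, not the paper's analyzed scheme: the function name `sghmc_step`, the constant inverse temperature, the toy quadratic objective, and the omission of the minibatch-noise correction from the SGHMC literature are all simplifying assumptions.

```python
import numpy as np

def sghmc_step(theta, v, stoch_grad, step_size=0.01, friction=1.0,
               inv_temp=1e4, rng=None):
    """One discretized SGHMC update (underdamped Langevin sketch).

    theta: parameters, v: momentum, stoch_grad: stochastic gradient
    of the objective at theta (e.g. computed from a subsampled minibatch).
    """
    rng = rng or np.random.default_rng()
    # Gaussian noise scaled to balance the friction term at inverse
    # temperature `inv_temp` (assumed constant here for simplicity).
    noise = (np.sqrt(2.0 * friction * step_size / inv_temp)
             * rng.standard_normal(np.shape(theta)))
    # Momentum update: friction drag, gradient force, injected noise.
    v = v - step_size * friction * v - step_size * stoch_grad + noise
    # Position update driven by the momentum.
    theta = theta + step_size * v
    return theta, v

# Toy usage: noisy gradients of U(theta) = 0.5 * theta**2 drive the
# iterate toward the global minimum at theta = 0.
rng = np.random.default_rng(0)
theta, v = 5.0, 0.0
for _ in range(20000):
    grad = theta + 0.01 * rng.standard_normal()  # stochastic gradient
    theta, v = sghmc_step(theta, v, grad, rng=rng)
```

At low temperature (large `inv_temp`) the stationary distribution concentrates near the global minimum, which is why properly scaled noise helps escape poor local minima in the non-convex setting.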
