Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning in the Big Data Regime

03/25/2019
by   Huy N. Chau, et al.

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) is a momentum version of stochastic gradient descent with properly injected Gaussian noise, used to find a global minimum. In this paper, we give a non-asymptotic convergence analysis of SGHMC in the context of non-convex optimization, where subsampling techniques are applied over an i.i.d. dataset for the gradient updates. Our results complement those of [RRT17] and improve on those of [GGZ18].
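For intuition, the sketch below shows one common form of the SGHMC iterate the abstract describes: momentum SGD driven by subsampled (minibatch) gradients, with Gaussian noise injected at a scale set by the friction and the inverse temperature, i.e. an Euler discretisation of underdamped Langevin dynamics. All names and default values here (grad_sample, step, friction, beta, batch_size) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sghmc(grad_sample, data, x0, n_iter=10_000, step=1e-3,
          friction=1.0, beta=1.0, batch_size=32, seed=0):
    """Minimal SGHMC sketch: momentum SGD with injected Gaussian noise.

    grad_sample(x, batch) should return an unbiased minibatch estimate of the
    gradient of the empirical risk at x; data is assumed to be a NumPy array
    of i.i.d. samples.  Parameter names are hypothetical, for illustration.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)                        # momentum variable
    noise_scale = np.sqrt(2.0 * friction * step / beta)
    for _ in range(n_iter):
        # Subsample the dataset for this gradient update.
        batch = rng.choice(len(data), size=batch_size, replace=False)
        g = grad_sample(x, data[batch])
        # Euler step of the underdamped Langevin dynamics:
        # friction damps the momentum, the stochastic gradient drives it,
        # and Gaussian noise is injected at the matching scale.
        v = v - step * friction * v - step * g \
            + noise_scale * rng.standard_normal(x.shape)
        x = x + step * v
    return x
```

With the noise removed this reduces to plain momentum SGD; the injected Gaussian noise is what lets the iterates escape shallow local minima and (for large beta) concentrate near a global minimum of the non-convex objective, which is the regime the paper's non-asymptotic bounds address.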


Related research

02/13/2020
Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization
We provide a nonasymptotic analysis of the convergence of the stochastic...

03/21/2023
Convergence of stochastic gradient descent on parameterized sphere with applications to variational Monte Carlo simulation
We analyze stochastic gradient descent (SGD) type algorithms on a high-d...

02/26/2018
Analysis of Langevin Monte Carlo via convex optimization
In this paper, we provide new insights on the Unadjusted Langevin Algori...

01/22/2019
Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization
Recent studies on diffusion-based sampling methods have shown that Lange...

12/22/2020
Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning
Langevin algorithms are gradient descent methods with additive noise. Th...

02/28/2023
Non-convex shape optimization by dissipative Hamiltonian flows
Shape optimization with constraints given by partial differential equati...
