HMC and Langevin united in the unadjusted and convex case

02/02/2022
by Pierre Monmarché, et al.

We consider a family of unadjusted HMC samplers, which includes standard position HMC samplers and discretizations of the underdamped Langevin process. A detailed analysis and optimization of the parameters are conducted in the Gaussian case. A stochastic gradient version of the samplers is then considered, for which dimension-free convergence rates are established for log-concave smooth targets, unifying previous results on both processes in a single framework. Both results indicate that partial refreshments of the velocity are more efficient than standard full refreshments.
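For intuition, here is a minimal sketch of one member of such a family: an unadjusted HMC iteration with partial velocity refreshment. This is an illustrative reconstruction, not the paper's algorithm or notation; the names grad_V, eta, n_leapfrog, and h are assumptions. The refreshment parameter eta interpolates between standard full-refreshment HMC (eta = 0) and schemes resembling a discretized underdamped Langevin process (eta close to 1 with a single leapfrog step per iteration). No Metropolis correction is applied, matching the "unadjusted" setting, so the step size h induces a bias.

```python
import numpy as np

def unadjusted_hmc(grad_V, x0, n_iters=1000, h=0.1, n_leapfrog=10,
                   eta=0.0, rng=None):
    """Unadjusted HMC with partial velocity refreshment (illustrative sketch).

    eta = 0 recovers standard position HMC with full refreshment;
    eta near 1 with n_leapfrog = 1 resembles a discretization of the
    underdamped Langevin process. No accept/reject step is performed.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    v = rng.standard_normal(x.shape)
    samples = np.empty((n_iters,) + x.shape)
    for k in range(n_iters):
        # Partial refreshment: an autoregressive update that leaves
        # the standard Gaussian velocity distribution invariant.
        v = eta * v + np.sqrt(1.0 - eta**2) * rng.standard_normal(x.shape)
        # Leapfrog integration of the Hamiltonian dynamics.
        v -= 0.5 * h * grad_V(x)
        for _ in range(n_leapfrog - 1):
            x += h * v
            v -= h * grad_V(x)
        x += h * v
        v -= 0.5 * h * grad_V(x)
        samples[k] = x
    return samples
```

On a standard Gaussian target V(x) = ||x||^2 / 2, for instance, one can pass grad_V = lambda x: x. Per the abstract, partial refreshment (eta > 0) is more efficient than full refreshment, and a stochastic gradient variant would replace grad_V with an unbiased estimator of the gradient.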


Related research

06/14/2023 · Contraction Rate Estimates of Stochastic Gradient Kinetic Langevin Integrators
In previous work, we introduced a method for determining convergence rat...

07/15/2020 · A General Family of Stochastic Proximal Gradient Methods for Deep Learning
We study the training of regularized neural networks where the regulariz...

06/07/2022 · Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization
We analyze the convergence rates of stochastic gradient algorithms for s...

02/12/2021 · Stability and Convergence of Stochastic Gradient Clipping: Beyond Lipschitz Continuity and Smoothness
Stochastic gradient algorithms are often unstable when applied to functi...

02/29/2020 · Dimension-free convergence rates for gradient Langevin dynamics in RKHS
Gradient Langevin dynamics (GLD) and stochastic GLD (SGLD) have attracte...

03/06/2023 · Convergence Rates for Non-Log-Concave Sampling and Log-Partition Estimation
Sampling from Gibbs distributions p(x) ∝ exp(-V(x)/ε) and computing their...
