Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction

10/02/2020
by Wei Deng, et al.

Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating convergence in non-convex learning; however, the excessively large correction needed to avoid bias from noisy energy estimators has limited the potential of this acceleration. To address this issue, we study variance reduction for the noisy energy estimators, which promotes much more effective swaps. Theoretically, we provide a non-asymptotic analysis of the exponential acceleration for the underlying continuous-time Markov jump process; moreover, we consider a generalized Girsanov theorem that includes the change of Poisson measure to overcome the crude discretization based on Grönwall's inequality, yielding a much tighter error bound in the 2-Wasserstein (𝒲_2) distance. Numerically, we conduct extensive experiments and obtain state-of-the-art results in optimization and uncertainty estimation on synthetic experiments and image data.
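The abstract's core idea can be illustrated with a toy sketch: two SGLD chains run at a low and a high temperature, mini-batch energy estimates are variance-reduced with SVRG-style control variates at periodically refreshed anchor points, and swaps use the bias-corrected acceptance rule. All names, the one-dimensional toy target, and the assumed variance bound `sigma2` below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset and energy U(x) = 0.5 * sum_i (x - d_i)^2 (illustrative only)
N, B = 1000, 32                      # dataset size, mini-batch size
data = rng.normal(1.0, 0.5, size=N)

def full_energy(x):
    return 0.5 * np.sum((x - data) ** 2)

def batch_energy(x, idx):
    # unbiased mini-batch estimate of the full energy
    return (N / len(idx)) * 0.5 * np.sum((x - data[idx]) ** 2)

def batch_grad(x, idx):
    # unbiased mini-batch estimate of the energy gradient
    return (N / len(idx)) * np.sum(x - data[idx])

# Two chains: low temperature (exploitation), high temperature (exploration)
tau_lo, tau_hi = 0.1, 1.0
lr = 1e-4
x_lo, x_hi = 2.0, -2.0

# Control-variate anchors for variance-reduced energy estimates
anchor_lo, anchor_hi = x_lo, x_hi
F_lo, F_hi = full_energy(anchor_lo), full_energy(anchor_hi)

sigma2 = 1.0   # assumed bound on the (reduced) estimator variance
swaps = 0

for t in range(2000):
    idx = rng.choice(N, B, replace=False)

    # SGLD step for each chain at its own temperature
    x_lo = x_lo - lr * batch_grad(x_lo, idx) + np.sqrt(2 * lr * tau_lo) * rng.normal()
    x_hi = x_hi - lr * batch_grad(x_hi, idx) + np.sqrt(2 * lr * tau_hi) * rng.normal()

    # Variance-reduced energy estimates: control variate at the anchor
    e_lo = batch_energy(x_lo, idx) - batch_energy(anchor_lo, idx) + F_lo
    e_hi = batch_energy(x_hi, idx) - batch_energy(anchor_hi, idx) + F_hi

    # Bias-corrected swap: the correction shrinks as the estimator variance
    # shrinks, so variance reduction permits more effective swaps
    dT = 1.0 / tau_lo - 1.0 / tau_hi
    log_s = dT * (e_lo - e_hi - dT * sigma2 / 2.0)
    if np.log(rng.random() + 1e-300) < log_s:
        x_lo, x_hi = x_hi, x_lo
        swaps += 1

    # Refresh anchors periodically (SVRG-style epochs)
    if t % 100 == 0:
        anchor_lo, anchor_hi = x_lo, x_hi
        F_lo, F_hi = full_energy(anchor_lo), full_energy(anchor_hi)

print(f"swaps: {swaps}, low-temperature chain at x = {x_lo:.2f}")
```

The key contrast with plain reSGLD is in the two marked lines: the control variates lower the variance of `e_lo` and `e_hi`, so the subtracted correction `dT * sigma2 / 2` can be much smaller and the swap rate stays high.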


Related research

- 08/12/2020, Non-convex Learning via Replica Exchange Stochastic Gradient MCMC: Replica exchange Monte Carlo (reMC), also known as parallel tempering, i...
- 02/13/2017, Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis: Stochastic Gradient Langevin Dynamics (SGLD) is a popular variant of Sto...
- 03/18/2016, Katyusha: The First Direct Acceleration of Stochastic Gradient Methods: Nesterov's momentum trick is famously known for accelerating gradient de...
- 05/30/2019, On stochastic gradient Langevin dynamics with dependent data streams: the fully non-convex case: We consider the problem of sampling from a target distribution which is ...
- 07/04/2020, Accelerating Nonconvex Learning via Replica Exchange Langevin Diffusion: Langevin diffusion is a powerful method for nonconvex optimization, whic...
- 05/30/2023, Non-convex Bayesian Learning via Stochastic Gradient Markov Chain Monte Carlo: The rise of artificial intelligence (AI) hinges on the efficient trainin...
- 10/25/2022, A Dynamical System View of Langevin-Based Non-Convex Sampling: Non-convex sampling is a key challenge in machine learning, central to n...
