The Wang-Landau Algorithm as Stochastic Optimization and its Acceleration

07/27/2019
by Chenguang Dai et al.

We show that the Wang-Landau algorithm can be formulated as stochastic gradient descent on a smooth, convex objective function whose gradient is estimated by Markov chain Monte Carlo iterations. This optimization formulation provides a new perspective for improving the efficiency of the Wang-Landau algorithm with tools from optimization. We propose one such improvement, based on the momentum method and adaptive learning rates, and demonstrate it on a two-dimensional Ising model and a two-dimensional ten-state Potts model.
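As a rough illustration of this optimization viewpoint, the sketch below writes the Wang-Landau update for a small two-dimensional Ising model in a stochastic-approximation / SGD-style form: the update direction is estimated from a single Metropolis move, and an optional heavy-ball momentum buffer stands in for where optimization-style acceleration would enter. All names (wl_sgd, ising_energy), the lattice size, the learning-rate schedule, and the momentum parameter are illustrative assumptions, not the authors' implementation.

import numpy as np

L = 8                                   # lattice side length (assumption)
rng = np.random.default_rng(0)
N = L * L

def ising_energy(spins):
    # Nearest-neighbour Ising energy with periodic boundaries.
    return -int(np.sum(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1))))

# Possible energy levels of the L x L Ising model: -2N, -2N+4, ..., 2N.
levels = np.arange(-2 * N, 2 * N + 1, 4)
index = {int(E): i for i, E in enumerate(levels)}

def wl_sgd(n_iter=100_000, eta=1.0, beta=0.9, use_momentum=True):
    # Wang-Landau as a stochastic-approximation recursion over the
    # log density of states, with an optional heavy-ball momentum term.
    theta = np.zeros(len(levels))                  # running estimate of log g(E)
    velocity = np.zeros_like(theta)                # momentum buffer
    pi = np.full(len(levels), 1.0 / len(levels))   # flat target over energy bins
    spins = rng.choice([-1, 1], size=(L, L))
    E = ising_energy(spins)

    for t in range(n_iter):
        # One Metropolis move in the Wang-Landau ensemble:
        # accept a spin flip with probability min(1, exp(theta[old] - theta[new])).
        i, j = rng.integers(L, size=2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        if np.log(rng.random()) < theta[index[E]] - theta[index[E + dE]]:
            spins[i, j] *= -1
            E += dE

        # Stochastic update direction: visited-bin indicator minus the flat target.
        direction = -pi.copy()
        direction[index[E]] += 1.0

        lr = eta / (1.0 + t / 1000.0)   # illustrative decaying step size
        if use_momentum:
            velocity = beta * velocity + direction
            theta += lr * velocity
        else:
            theta += lr * direction

    return theta - theta.max()          # log g(E) up to an additive constant

log_g = wl_sgd()

Setting use_momentum=False recovers the plain flat-histogram recursion; the momentum branch only illustrates how an accelerated optimizer could replace the vanilla update, not the specific scheme proposed in the paper.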


Related research

07/09/2022 · Improved Binary Forward Exploration: Learning Rate Scheduling Method for Stochastic Optimization
A new gradient-based optimization approach by automatically scheduling t...

02/07/2023 · Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy
We present an algorithm for minimizing an objective with hard-to-compute...

02/05/2020 · AdaGeo: Adaptive Geometric Learning for Optimization and Sampling
Gradient-based optimization and Markov Chain Monte Carlo sampling can be...

03/29/2017 · Probabilistic Line Searches for Stochastic Optimization
In deterministic optimization, line searches are a standard tool ensurin...

03/14/2019 · Deep Switch Networks for Generating Discrete Data and Language
Multilayer switch networks are proposed as artificial generators of high...

02/04/2019 · Is There an Analog of Nesterov Acceleration for MCMC?
We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as ...

10/16/2015 · SGD with Variance Reduction beyond Empirical Risk Minimization
We introduce a doubly stochastic proximal gradient algorithm for optimiz...
