Rayleigh-Gauss-Newton optimization with enhanced sampling for variational Monte Carlo

06/19/2021
by Robert J. Webber, et al.

Variational Monte Carlo (VMC) is an approach for computing ground-state wavefunctions that has recently become more powerful due to the introduction of neural network-based wavefunction parametrizations. However, efficiently training neural wavefunctions to converge to an energy minimum remains a difficult problem. In this work, we analyze optimization and sampling methods used in VMC and introduce alterations to improve their performance. First, based on theoretical convergence analysis in a noiseless setting, we motivate a new optimizer that we call the Rayleigh-Gauss-Newton (RGN) method, which can improve upon gradient descent and natural gradient descent to achieve superlinear convergence with little added computational cost. Second, in order to realize this favorable comparison in the presence of stochastic noise, we analyze the effect of sampling error on VMC parameter updates and experimentally demonstrate that it can be reduced by the parallel tempering method. In particular, we demonstrate that RGN can be made robust to energy spikes that occur when new regions of configuration space become available to the sampler over the course of optimization. Finally, putting theory into practice, we apply our enhanced optimization and sampling methods to the transverse-field Ising and XXZ models on large lattices, yielding ground-state energy estimates with remarkably high accuracy after just 200-500 parameter updates.
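
For readers who want the shape of the computation, here is a minimal NumPy sketch of the generic VMC machinery the abstract builds on: Monte Carlo estimates of the energy gradient and of a curvature matrix, followed by a regularized linear solve, together with a standard parallel-tempering swap test for replicas sampling the tempered distributions |psi|^(2*beta). Everything below is an illustrative assumption rather than the paper's implementation: the function names (vmc_update, pt_swap_log_ratio) are hypothetical, the stochastic-reconfiguration overlap matrix S stands in for the paper's RGN curvature matrix (a Gauss-Newton approximation of the Rayleigh-quotient Hessian, not reproduced here), and tempering |psi|^(2*beta) is one common choice of replica family.

```python
import numpy as np

def vmc_update(log_psi_grad, e_loc, theta, eps=1e-3, lr=1.0):
    """One natural-gradient-style VMC parameter update from MCMC samples.

    log_psi_grad : (n_samples, n_params) samples of O_k(x) = d log psi(x) / d theta_k
    e_loc        : (n_samples,) local energies (H psi)(x) / psi(x)

    NOTE: this uses the stochastic-reconfiguration overlap matrix S as the
    curvature; the paper's RGN method instead uses a Gauss-Newton
    approximation of the Rayleigh-quotient Hessian (not reproduced here).
    """
    n = len(e_loc)
    O = log_psi_grad - log_psi_grad.mean(axis=0)   # centered log-derivatives
    de = e_loc - e_loc.mean()                      # centered local energies
    grad = 2.0 * np.real(O.conj().T @ de) / n      # estimate of dE/dtheta
    S = np.real(O.conj().T @ O) / n                # overlap (Fisher) matrix
    # Shift by eps*I so the solve stays well conditioned under sampling noise.
    delta = np.linalg.solve(S + eps * np.eye(S.shape[0]), grad)
    return theta - lr * delta

def pt_swap_log_ratio(log_abs_psi_1, log_abs_psi_2, beta_1, beta_2):
    """Log Metropolis ratio for swapping configurations between two
    parallel-tempering replicas sampling pi_beta(x) ~ |psi(x)|^(2*beta).
    Accept the swap when log(uniform(0, 1)) < this value."""
    return 2.0 * (beta_1 - beta_2) * (log_abs_psi_2 - log_abs_psi_1)

if __name__ == "__main__":
    # Synthetic stand-in samples, just to show the call signatures.
    rng = np.random.default_rng(0)
    n_samples, n_params = 1000, 5
    O_samples = rng.normal(size=(n_samples, n_params))
    e_loc = rng.normal(size=n_samples) - 1.0
    theta = vmc_update(O_samples, e_loc, np.zeros(n_params))
    print("updated parameters:", theta)
```

The eps*I shift is the usual device for stabilizing the linear solve when the sampled curvature matrix is nearly singular. The abstract's "little added computational cost" is consistent with this structure: like the sketch above, each update amounts to assembling sampled moment matrices and solving a single linear system.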

Related research

Quasi-Newton Quasi-Monte Carlo for variational Bayes (04/07/2021)
Many machine learning problems optimize an objective that must be measur...

A simple geometric method for navigating the energy landscape of centroidal Voronoi tessellations (04/30/2020)
Finding optimal centroidal Voronoi tessellations (CVTs) of a 2D domain p...

Convergence of stochastic gradient descent on parameterized sphere with applications to variational Monte Carlo simulation (03/21/2023)
We analyze stochastic gradient descent (SGD) type algorithms on a high-d...

Neural network quantum state with proximal optimization: a ground-state searching scheme based on variational Monte Carlo (10/29/2022)
Neural network quantum states (NQS), combined with variational Mont...

Training neural networks using Metropolis Monte Carlo and an adaptive variant (05/16/2022)
We examine the zero-temperature Metropolis Monte Carlo algorithm as a to...

Provable Convergence of Variational Monte Carlo Methods (03/19/2023)
Variational Monte Carlo (VMC) is a promising approach for computing ...

A stochastic Stein Variational Newton method (04/19/2022)
Stein variational gradient descent (SVGD) is a general-purpose optimizat...
