Convergence of stochastic gradient descent on parameterized sphere with applications to variational Monte Carlo simulation

03/21/2023
by Nilin Abrahamsen, et al.

We analyze stochastic gradient descent (SGD)-type algorithms on a high-dimensional sphere that is parameterized by a neural network up to a normalization constant. We provide a new algorithm for the supervised-learning setting and show its convergence both theoretically and numerically. We also provide the first proof of convergence for the unsupervised setting, which corresponds to the widely used variational Monte Carlo (VMC) method in quantum physics.
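To make the unsupervised (VMC) setting concrete, the sketch below runs plain SGD on the variational energy of a toy one-dimensional harmonic oscillator with a single-parameter Gaussian ansatz. It uses the standard VMC gradient estimator 2 E[(E_loc − Ē) ∂_θ log ψ_θ], with samples drawn from |ψ_θ|². This is a minimal illustration only, not the paper's algorithm: the Hamiltonian, ansatz, learning rate, and sample size are illustrative assumptions, whereas the paper considers neural-network parameterizations of the normalized state.

```python
import numpy as np

# Toy VMC: minimize E(theta) = <psi|H|psi> / <psi|psi> by SGD
# for H = -1/2 d^2/dx^2 + 1/2 x^2 (1D harmonic oscillator, ground energy 0.5),
# with the single-parameter Gaussian ansatz psi_theta(x) = exp(-theta * x^2 / 2).
# Illustrative sketch of the VMC setting, not the paper's algorithm.

rng = np.random.default_rng(0)

def local_energy(x, theta):
    # E_loc(x) = (H psi_theta)(x) / psi_theta(x) for the Gaussian ansatz above.
    return 0.5 * theta + 0.5 * (1.0 - theta**2) * x**2

def grad_log_psi(x, theta):
    # d/dtheta log psi_theta(x) = -x^2 / 2
    return -0.5 * x**2

theta, lr, n_samples = 2.0, 0.1, 512  # illustrative hyperparameters

for step in range(200):
    # Sample x ~ |psi_theta|^2 = exp(-theta x^2): Gaussian with variance 1/(2 theta).
    x = rng.normal(scale=np.sqrt(0.5 / theta), size=n_samples)
    e_loc = local_energy(x, theta)
    # Standard VMC gradient estimator: 2 E[(E_loc - mean(E_loc)) * d/dtheta log psi].
    grad = 2.0 * np.mean((e_loc - e_loc.mean()) * grad_log_psi(x, theta))
    theta -= lr * grad

print(f"theta = {theta:.3f}  (exact optimum: theta = 1.0, ground-state energy 0.5)")
```

Because the local energy is constant at an exact eigenstate, the estimator's variance vanishes at the optimum (the zero-variance property), which is part of what makes convergence analyses of SGD-type VMC tractable in simple cases like this one.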


