LeapfrogLayers: A Trainable Framework for Effective Topological Sampling

12/02/2021
by Sam Foreman, et al.

We introduce LeapfrogLayers, an invertible neural network architecture that can be trained to efficiently sample the topology of a 2D U(1) lattice gauge theory. We show an improvement in the integrated autocorrelation time of the topological charge when compared with traditional HMC, and propose methods for scaling our model to larger lattice volumes. Our implementation is open source and publicly available on GitHub at https://github.com/saforem2/l2hmc-qcd
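For context, the classical leapfrog integrator that LeapfrogLayers generalizes can be sketched as below. This is a generic HMC-style molecular-dynamics trajectory for a toy Gaussian target, not the paper's implementation; the function names and parameters (`grad_logp`, `eps`, `n_steps`) are illustrative assumptions.

```python
import numpy as np

def leapfrog(x, v, grad_logp, eps, n_steps):
    """One molecular-dynamics trajectory of the leapfrog integrator.

    grad_logp is the gradient of the log target density (i.e. minus
    the force of the action); eps is the step size. The update is
    volume-preserving and reversible, which is what makes it usable
    inside an HMC accept/reject step.
    """
    v = v + 0.5 * eps * grad_logp(x)      # initial half-step for momentum
    for _ in range(n_steps - 1):
        x = x + eps * v                   # full-step position update
        v = v + eps * grad_logp(x)        # full-step momentum update
    x = x + eps * v                       # final position update
    v = v + 0.5 * eps * grad_logp(x)      # final half-step for momentum
    return x, v

# Toy example: standard normal target, so grad log p(x) = -x.
x, v = np.array([1.0]), np.array([0.5])
x_new, v_new = leapfrog(x, v, lambda q: -q, eps=0.1, n_steps=10)
```

For a symplectic integrator like this, the Hamiltonian H = v²/2 + x²/2 is conserved up to O(eps²) along the trajectory; LeapfrogLayers replaces the fixed scaling of these updates with trainable, invertible networks while keeping the update's tractable Jacobian.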

Related research

- 12/02/2021 · HMC with Normalizing Flows
  We propose using Normalizing Flows as a trainable kernel within the mole...

- 08/04/2022 · Neural-network preconditioners for solving the Dirac equation in lattice gauge theory
  This work develops neural-network–based preconditioners to accelerate so...

- 02/03/2022 · RipsNet: a general architecture for fast and robust estimation of the persistent homology of point clouds
  The use of topological descriptors in modern machine learning applicatio...

- 04/14/2022 · GPT-NeoX-20B: An Open-Source Autoregressive Language Model
  We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive languag...

- 09/27/2019 · A Topological Nomenclature for 3D Shape Analysis in Connectomics
  An essential task in nano-scale connectomics is the morphology analysis ...

- 07/28/2021 · TEDS-Net: Enforcing Diffeomorphisms in Spatial Transformers to Guarantee Topology Preservation in Segmentations
  Accurate topology is key when performing meaningful anatomical segmentat...

- 03/30/2021 · Continuous Weight Balancing
  We propose a simple method by which to choose sample weights for problem...
