
Manifold lifting: scaling MCMC to the vanishing noise regime
Standard Markov chain Monte Carlo methods struggle to explore distributi...

Stratified stochastic variational inference for high-dimensional network factor model
There has been considerable recent interest in Bayesian modeling of high...

Semi-supervised deep learning for high-dimensional uncertainty quantification
Conventional uncertainty quantification methods usually lack the capabi...

Variational Auto-Decoder
Learning a generative model from partial data (data with missingness) is...

Geometry-Aware Hamiltonian Variational Auto-Encoder
Variational autoencoders (VAEs) have proven to be a well-suited tool fo...

High-dimensional Stochastic Inversion via Adjoint Models and Machine Learning
Performing stochastic inversion on a computationally expensive forward s...

Sampling constrained probability distributions using Spherical Augmentation
Statistical models with constrained probability distributions are abunda...
Deep Markov Chain Monte Carlo
We propose a new computationally efficient sampling scheme for Bayesian inference involving high-dimensional probability distributions. Our method maps the original parameter space into a low-dimensional latent space, explores the latent space to generate samples, and maps these samples back to the original space for inference. While our method can be used in conjunction with any dimension reduction technique to obtain the latent space, and any standard sampling algorithm to explore the low-dimensional space, here we specifically use a combination of autoencoders (for dimensionality reduction) and Hamiltonian Monte Carlo (HMC, for sampling). To this end, we first run HMC to generate some initial samples from the original parameter space, and then use these samples to train an autoencoder. Next, starting with an initial state, we use the encoding part of the autoencoder to map the initial state to a point in the low-dimensional latent space. Using another HMC, this point is then treated as an initial state in the latent space to generate a new state, which is then mapped to the original space using the decoding part of the autoencoder. The resulting point can be treated as a Metropolis-Hastings (MH) proposal, which is either accepted or rejected. While the induced dynamics in the parameter space is no longer Hamiltonian, it remains time-reversible, and the Markov chain can still converge to the canonical distribution using a volume correction term. Dropping the volume correction step results in convergence to an approximate but reasonably accurate distribution. Empirical results on several high-dimensional problems show that our method can substantially reduce the computational cost of Bayesian inference.
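The pipeline in the abstract (initial HMC run, dimension reduction, latent-space HMC, decoding back to the original space) can be sketched in a few dozen lines. The sketch below is illustrative, not the paper's implementation: it targets a made-up 50-dimensional Gaussian whose mass concentrates near a 2-dimensional subspace, and it substitutes PCA for the trained autoencoder so the example stays self-contained. With this linear, orthonormal decoder the Jacobian volume term is constant and cancels in the accept ratio, so the code corresponds to the abstract's simpler variant; the general scheme adds an explicit volume correction for nonlinear decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target (stand-in for an expensive posterior): a 50-d
# Gaussian concentrated near a 2-d subspace spanned by the columns of A.
D, d = 50, 2
A = rng.standard_normal((D, d))
P = np.linalg.inv(A @ A.T + 0.05 * np.eye(D))  # precision matrix

def log_target(x):
    return -0.5 * x @ P @ x

def grad_log_target(x):
    return -P @ x

def hmc_step(x, logp, grad, eps, L=10):
    """One HMC transition: leapfrog integration + MH accept/reject."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    for _ in range(L):
        p_new = p_new + 0.5 * eps * grad(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new + 0.5 * eps * grad(x_new)
    log_alpha = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
    return (x_new, True) if np.log(rng.uniform()) < log_alpha else (x, False)

# Step 1: initial HMC run in the original parameter space.
x = np.zeros(D)
warmup = []
for _ in range(500):
    x, _ = hmc_step(x, log_target, grad_log_target, eps=0.1)
    warmup.append(x)
X = np.array(warmup)

# Step 2: fit the dimension reduction on the warm-up samples.
# PCA via SVD stands in for training an autoencoder on X.
mu = X.mean(axis=0)
W = np.linalg.svd(X - mu, full_matrices=False)[2][:d].T  # (D, d) basis
encode = lambda x: W.T @ (x - mu)
decode = lambda z: mu + W @ z

# Step 3: run HMC in the latent space and decode each state.
latent_logp = lambda z: log_target(decode(z))
latent_grad = lambda z: W.T @ grad_log_target(decode(z))

z = encode(x)
samples, accepts = [], 0
for _ in range(500):
    z, acc = hmc_step(z, latent_logp, latent_grad, eps=0.3)
    accepts += acc
    samples.append(decode(z))  # map back to the original space
samples = np.array(samples)
```

Each latent HMC transition here needs only gradients of a 2-dimensional function, which is where the computational savings would come from when the target density is expensive to evaluate in the full space.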