
Informed Proposal Monte Carlo

by Sarouyeh Khoshkholgh, et al.

Any search or sampling algorithm for the solution of inverse problems needs guidance to be efficient. Many algorithms collect and apply information about the problem on the fly, and much improvement has been made in this way. However, as a consequence of the No-Free-Lunch Theorem, the only way we can ensure significantly better performance of search and sampling algorithms is to build in as much information about the problem as possible. In the special case of Markov chain Monte Carlo (MCMC) sampling, we review how this is done through the choice of proposal distribution, and we show how this way of adding more information about the problem can be made particularly efficient when based on an approximate physics model of the problem. A highly nonlinear inverse scattering problem with a high-dimensional model space serves as an illustration of the gain in efficiency achieved through this approach.
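The idea of encoding an approximate physics model in the proposal distribution can be illustrated with a minimal, hypothetical 1-D sketch (not the paper's algorithm or example): the exact forward model `g`, its linear approximation `g_approx`, and all numbers below are invented for illustration. The approximate model yields a cheap Gaussian approximation to the posterior, which is then used as an independence proposal inside Metropolis-Hastings, with the exact posterior correcting the approximation through the acceptance ratio:

```python
import math
import random

random.seed(0)

# Toy 1-D inverse problem: observe d_obs = g(m) + noise, with nonlinear g.
def g(m):
    """Exact (expensive) forward model -- hypothetical."""
    return m + 0.2 * math.sin(3.0 * m)

d_obs, sigma = 1.0, 0.1  # datum and noise standard deviation

def log_posterior(m):
    """Gaussian likelihood around g(m), standard-normal prior on m."""
    r = d_obs - g(m)
    return -0.5 * (r / sigma) ** 2 - 0.5 * m * m

# Approximate physics model g_approx(m) = m is linear, so the approximate
# posterior is Gaussian with known mean and standard deviation:
mu_q = (d_obs / sigma**2) / (1.0 / sigma**2 + 1.0)
sd_q = (1.0 / sigma**2 + 1.0) ** -0.5

def log_q(m):
    """Log density (up to a constant) of the informed proposal."""
    return -0.5 * ((m - mu_q) / sd_q) ** 2

def mcmc(n=20000):
    m, lp = 0.0, log_posterior(0.0)
    samples, accepted = [], 0
    for _ in range(n):
        m_new = random.gauss(mu_q, sd_q)   # independence proposal
        lp_new = log_posterior(m_new)
        # Metropolis-Hastings ratio with the proposal correction q(m)/q(m')
        if math.log(random.random()) < (lp_new - lp) + (log_q(m) - log_q(m_new)):
            m, lp = m_new, lp_new
            accepted += 1
        samples.append(m)
    return samples, accepted / n

samples, rate = mcmc()
print(f"acceptance rate: {rate:.2f}")
print(f"posterior mean estimate: {sum(samples) / len(samples):.3f}")
```

Because the approximate model is close to the exact one, most proposals land in high-posterior regions and the acceptance rate stays high; an uninformed random-walk proposal would need far more forward-model evaluations to explore the same posterior, and the gap widens sharply with model-space dimension.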



