Regeneration-enriched Markov processes with application to Monte Carlo

10/11/2019
by Andi Q. Wang, et al.

We study a class of Markov processes whose local dynamics are governed by a fixed Markov process and which are enriched with regenerations from a fixed distribution at a state-dependent rate. We give conditions under which such processes possess a given target distribution as their invariant measure, thus making them amenable for use within Monte Carlo methodologies. Enrichment imparts a number of desirable theoretical and methodological properties: since the regeneration mechanism can compensate for the choice of local dynamics while retaining the same invariant distribution, great flexibility is available in selecting the local dynamics, and the mathematical analysis is simplified. In addition, we give straightforward conditions for the process to be uniformly ergodic and to possess a coupling-from-the-past construction that enables exact sampling from the invariant distribution. More broadly, the sampler can also be used as a recipe for introducing rejection-free moves into existing Markov chain Monte Carlo samplers in continuous time.
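To make the construction above concrete, the sketch below (Python with NumPy) simulates one illustrative instance of a regeneration-enriched process: Brownian motion as the local dynamics, enriched with regenerations from a fixed distribution mu at a state-dependent rate kappa(x). The particular kappa, mu, step size and time horizon are assumptions chosen only for illustration and are not the paper's construction; the paper's results concern conditions on the regeneration rate under which a chosen target distribution is invariant, which this sketch does not attempt to verify.

import numpy as np

rng = np.random.default_rng(0)

def kappa(x):
    # Illustrative state-dependent regeneration rate (assumed, not from the paper).
    return 1.0 + 0.5 * x**2

def sample_mu():
    # Illustrative fixed regeneration distribution mu: a standard normal draw.
    return rng.standard_normal()

def simulate(T=1000.0, dt=1e-2, x0=0.0):
    """Euler discretisation of Brownian local dynamics enriched with regenerations:
    in each step of length dt the process regenerates (jumps to a fresh draw from mu)
    with probability approximately kappa(x) * dt, otherwise it takes a Brownian increment."""
    x, t = x0, 0.0
    path = [x]
    while t < T:
        if rng.random() < kappa(x) * dt:
            x = sample_mu()                               # regeneration event
        else:
            x = x + np.sqrt(dt) * rng.standard_normal()   # local (Brownian) move
        t += dt
        path.append(x)
    return np.array(path)

samples = simulate()

The regeneration events are approximated here by thinning on a fixed time grid; an exact simulation would instead draw the event times of an inhomogeneous Poisson process with rate kappa along the local trajectory.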


Related research

01/13/2022  Boost your favorite Markov Chain Monte Carlo sampler using Kac's theorem: the Kick-Kac teleportation algorithm
The present paper focuses on the problem of sampling from a given target...

12/17/2020  A fresh take on 'Barker dynamics' for MCMC
We study a recently introduced gradient-based Markov chain Monte Carlo m...

10/18/2022  Sampling using Adaptive Regenerative Processes
Enriching Brownian Motion with regenerations from a fixed regeneration d...

02/25/2019  Sampling Sup-Normalized Spectral Functions for Brown-Resnick Processes
Sup-normalized spectral functions form building blocks of max-stable and...

06/17/2021  Towards sampling complex actions
Path integrals with complex actions are encountered for many physical sy...

06/12/2019  Conditional Monte Carlo for Reaction Networks
Reaction networks are often used to model interacting species in fields ...

11/04/2021  Ex^2MCMC: Sampling through Exploration Exploitation
We develop an Explore-Exploit Markov chain Monte Carlo algorithm (Ex^2MC...
