
Nonreversible jump algorithms for Bayesian nested model selection
Nonreversible Markov chain Monte Carlo methods often outperform their reversible counterparts in terms of asymptotic variance of ergodic averages and mixing properties. Lifting the state space (Chen et al., 1999; Diaconis et al., 2000) is a generic technique for constructing such samplers. The idea is to think of the random variables we want to generate as position variables and to associate to them direction variables so as to design Markov chains which do not exhibit the diffusive behaviour often seen in reversible schemes. In this paper, we explore the benefits of using such ideas in the context of Bayesian model choice for nested models, a class of models for which the model indicator variable is an ordinal random variable. By lifting this model indicator variable, we obtain nonreversible jump algorithms, a nonreversible version of the popular reversible jump algorithms introduced by Green (1995). This simple algorithmic modification provides samplers which can empirically outperform their reversible counterparts at no extra computational cost. The code to reproduce all experiments is available online.
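To illustrate the lifting idea the abstract describes, here is a minimal sketch of a lifted Metropolis sampler on an ordinal state space: the position (e.g. a model indicator) is augmented with a direction variable, the chain keeps moving in the current direction while proposals are accepted, and it flips direction on rejection. The target `weights` and all names are hypothetical, and this toy univariate chain is only a stand-in for the paper's nonreversible jump algorithms, not an implementation of them.

```python
import random

# Hypothetical unnormalised target over ordinal states k = 0..K-1,
# standing in for a posterior over a nested-model indicator.
weights = [1.0, 2.0, 4.0, 2.0, 1.0]
K = len(weights)

def lifted_step(k, v):
    """One step of a lifted Metropolis chain on {0,...,K-1} x {-1,+1}.

    Propose the deterministic move k -> k + v; accept with probability
    min(1, pi(k+v)/pi(k)).  On rejection (including out-of-range
    proposals) the position stays put and the direction flips.  This
    satisfies skew detailed balance with respect to the flip map, so the
    chain leaves pi x Uniform{-1,+1} invariant while suppressing the
    diffusive back-and-forth behaviour of reversible samplers.
    """
    k_new = k + v
    if 0 <= k_new < K and random.random() < min(1.0, weights[k_new] / weights[k]):
        return k_new, v   # accepted: persist in the same direction
    return k, -v          # rejected: reverse direction

random.seed(0)
k, v = 0, 1
n = 200_000
counts = [0] * K
for _ in range(n):
    k, v = lifted_step(k, v)
    counts[k] += 1

# Empirical occupancies should approach the normalised weights.
print([round(c / n, 3) for c in counts])
```

The acceptance ratio is the usual Metropolis one; only the rejection behaviour (flip instead of resample) differs, which mirrors the paper's point that nonreversibility can be obtained at no extra computational cost.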