Discussion of "Unbiased Markov chain Monte Carlo with couplings" by Pierre E. Jacob, John O'Leary and Yves F. Atchadé

12/22/2019
by Leah F. South, et al. (Lancaster)

This is a contribution to the discussion of "Unbiased Markov chain Monte Carlo with couplings" by Pierre E. Jacob, John O'Leary and Yves F. Atchadé, to appear in the Journal of the Royal Statistical Society: Series B.


References

Appendix A Derivation of the Upper Bound

The aim in what follows is to reproduce the proof of Proposition 1 in [5] whilst explicitly tracking the terms that are $h$-dependent. To avoid reproducing large amounts of [5], we assume familiarity with the notation and quantities defined in that work.

The first part of the argument in [5] uses Assumption 1 to deduce that for some and all . Our first task is to explicitly compute the constant in terms of the quantities and in Assumption 1. To this end, we reproduce the argument alluded to in the paper:
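
As an illustrative sketch, suppose that Assumption 1 supplies constants $\eta > 0$ and $D < \infty$ such that $\mathbb{E}[|h(X_t)|^{2+\eta}] \leq D$ for all $t \geq 0$; this is one standard reading, and the labels $\eta$, $D$ need not match the notation of [5]. Lyapunov's inequality then gives a uniform second-moment bound with an explicit constant:

% Sketch only: eta and D are our labels for the constants in Assumption 1 of [5].
\[
\mathbb{E}\big[h(X_t)^2\big]
\;\leq\; \Big(\mathbb{E}\big[|h(X_t)|^{2+\eta}\big]\Big)^{\frac{2}{2+\eta}}
\;\leq\; D^{\frac{2}{2+\eta}},
\qquad t \geq 0,
\]
% so D^{2/(2+eta)} is an admissible choice of the second-moment constant.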

It is then stated in the proof of Proposition 1 in [5] that where for some and all with ; we reproduce the implied argument to explicitly represent in terms of and next:
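
To indicate the shape of that argument, again under assumed labels: write $\Delta_t = h(X_t) - h(Y_{t-1})$, let $\tau$ denote the meeting time, assume the chains coincide after $\tau$ so that $\Delta_t = 0$ for $t \geq \tau$, and suppose $\mathbb{P}(\tau > t) \leq C\delta^{t}$ for some $C < \infty$ and $\delta \in (0,1)$. Hölder's inequality then combines the moment bound with the tail bound:

% Sketch only: Delta_t, tau, C and delta are our labels for quantities in the
% proof of Proposition 1 of [5]; Delta_t vanishes for t >= tau by assumption.
\[
\mathbb{E}\big[\Delta_t^{2}\big]
\;=\; \mathbb{E}\big[\Delta_t^{2}\,\mathbf{1}\{\tau > t\}\big]
\;\leq\; \Big(\mathbb{E}\big[|\Delta_t|^{2+\eta}\big]\Big)^{\frac{2}{2+\eta}}
\big(\mathbb{P}(\tau > t)\big)^{\frac{\eta}{2+\eta}}
\;\leq\; \big(2^{2+\eta} D\big)^{\frac{2}{2+\eta}} \big(C\delta^{t}\big)^{\frac{\eta}{2+\eta}},
\]
% where the last step uses |a - b|^{2+eta} <= 2^{1+eta}(|a|^{2+eta} + |b|^{2+eta}).

On this reading the bound decays geometrically in $t$, with the $h$-dependence isolated in the factor $\big(2^{2+\eta} D\big)^{2/(2+\eta)} = 4\,D^{2/(2+\eta)}$.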

so we may take

(2)

where is a -independent constant that depends only on the law of the meeting time for the Markov chains. The constant is finite since .
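
For completeness, a finite constant depending only on the law of the meeting time arises in this sketch from summing the geometric factor over $t$:

% Sketch only: C, delta and eta are the labels assumed above.
\[
\sum_{t \geq 0} \big(C\delta^{t}\big)^{\frac{\eta}{2+\eta}}
\;=\; \frac{C^{\frac{\eta}{2+\eta}}}{1 - \delta^{\frac{\eta}{2+\eta}}}
\;<\; \infty,
\]
% finite because delta in (0,1) and eta > 0 imply delta^{eta/(2+eta)} < 1.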

The stylised bound that we present is rooted in the concept of the maximum mean discrepancy associated to the reproducing kernel Hilbert space $\mathcal{H}$, defined for probability measures $\mu$ and $\nu$ as
\[
\mathrm{MMD}(\mu, \nu) \;=\; \sup_{h \in \mathcal{H}\,:\,\|h\|_{\mathcal{H}} \leq 1} \left|\int h \,\mathrm{d}\mu - \int h \,\mathrm{d}\nu\right| .
\]

If $\|h\|_{\mathcal{H}} \leq 1$ then we have from the definition of the maximum mean discrepancy that
\[
\left|\int h \,\mathrm{d}\mu - \int h \,\mathrm{d}\nu\right| \;\leq\; \mathrm{MMD}(\mu, \nu).
\]

Taking to be the law of thus gives that

Thus we may take the constant in Assumption 1 to be

(3)
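
One way to read this step, purely as a sketch in assumed notation, is to take the two measures in the bound above to be the target $\pi$ and the marginal law $\mathcal{L}(X_t)$ of $X_t$, giving, for every $h$ with $\|h\|_{\mathcal{H}} \leq 1$,

% Sketch only: the choice of pi and L(X_t) as the two measures is our reading,
% not a quotation of [5].
\[
\big|\,\mathbb{E}[h(X_t)] - \pi(h)\,\big|
\;\leq\; \mathrm{MMD}\big(\mathcal{L}(X_t), \pi\big),
\]

so that quantities of this form are controlled uniformly over the unit ball of $\mathcal{H}$.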

In what follows we let be a -independent constant that depends on the law of the Markov chain used. It is necessary to check that is finite. Let be the inner product in . The assumption that is a reproducing kernel Hilbert space means that , from the reproducing property and Cauchy-Schwarz. Since the kernel was assumed to satisfy , it follows that . Thus

Thus as required.
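
The reproducing-property step is standard; writing $k$ for the kernel of $\mathcal{H}$ and taking the boundedness condition to be $\sup_{x} k(x,x) \leq \kappa < \infty$ (labels of our choosing, which need not match those used above), it reads:

% Sketch only: k and kappa are our labels for the kernel and its assumed bound.
\[
|h(x)|
\;=\; \big|\langle h, k(\cdot, x)\rangle_{\mathcal{H}}\big|
\;\leq\; \|h\|_{\mathcal{H}} \sqrt{k(x,x)}
\;\leq\; \|h\|_{\mathcal{H}} \sqrt{\kappa}
\qquad \text{for all } x,
\]
% by the reproducing property h(x) = <h, k(., x)>_H and the Cauchy-Schwarz
% inequality, since ||k(., x)||_H^2 = k(x, x).

In particular, every $h$ in the unit ball of $\mathcal{H}$ is bounded uniformly by $\sqrt{\kappa}$.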

To complete the argument we proceed as follows:

where the final line follows from (3) and the fact that .