Weighted batch means estimators in Markov chain Monte Carlo

05/21/2018
by Ying Liu, et al.

This paper proposes a family of weighted batch means variance estimators that are computationally efficient and convenient to apply in practice. The focus is on Markov chain Monte Carlo simulations and on estimating the asymptotic covariance matrix in the Markov chain central limit theorem; conditions ensuring strong consistency of the estimators are provided. Finite-sample performance is evaluated on autoregressive, Bayesian spatial-temporal, and Bayesian logistic regression examples, in which the new estimators achieve significant computational gains with only a minor sacrifice in variance relative to existing methods.
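
For context, the sketch below shows the standard (unweighted) multivariate batch means estimator of the asymptotic covariance matrix that the paper's weighted family builds on. It is a minimal illustration, not the paper's method: the function name batch_means_cov, the NumPy implementation, and the batch-size rule b = floor(sqrt(n)) are illustrative assumptions.

```python
import numpy as np

def batch_means_cov(chain, batch_size=None):
    """Standard multivariate batch means estimate of the asymptotic
    covariance matrix in the Markov chain CLT.

    chain      : (n, d) array of MCMC draws
    batch_size : draws per batch; defaults to floor(sqrt(n)),
                 a common rule of thumb (illustrative choice).
    """
    chain = np.asarray(chain, dtype=float)
    n, d = chain.shape
    b = int(np.floor(np.sqrt(n))) if batch_size is None else batch_size
    a = n // b                      # number of full batches
    if a < 2:
        raise ValueError("need at least two batches")
    trimmed = chain[:a * b]         # drop leftover draws

    # Average the draws within each of the a batches of size b.
    batch_means = trimmed.reshape(a, b, d).mean(axis=1)
    grand_mean = trimmed.mean(axis=0)
    centered = batch_means - grand_mean

    # Sigma_hat = b / (a - 1) * sum_k (Ybar_k - Xbar)(Ybar_k - Xbar)^T
    return (b / (a - 1)) * centered.T @ centered


if __name__ == "__main__":
    # Toy check on an AR(1) chain with unit-variance innovations,
    # echoing the paper's autoregressive example. The true asymptotic
    # variance of the sample mean is 1 / (1 - rho)^2.
    rng = np.random.default_rng(0)
    rho, n = 0.7, 100_000
    x = np.zeros((n, 1))
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    print(batch_means_cov(x))   # should be near 1 / (1 - 0.7)**2 ≈ 11.1
```

The paper's weighted estimators modify this construction by weighting the batch contributions; the exact weighting scheme is not reproduced here.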

Related research

10/13/2021 · A Short Review of Ergodicity and Convergence of Markov chain Monte Carlo Estimators
This short note reviews the basic theory for quantifying both the asympt...

09/12/2018 · Lugsail lag windows and their application to MCMC
Lag windows are commonly used in the time series, steady state simulatio...

09/12/2022 · On Nonparametric Estimation in Online Problems
Offline estimators are often inadequate for real-time applications. Neve...

09/03/2020 · Globally-centered autocovariances in MCMC
Autocovariances are a fundamental quantity of interest in Markov chain M...

09/19/2018 · Efficient sampling of conditioned Markov jump processes
We consider the task of generating draws from a Markov jump process (MJP...

03/15/2012 · A Family of Computationally Efficient and Simple Estimators for Unnormalized Statistical Models
We introduce a new family of estimators for unnormalized statistical mod...

11/13/2022 · Multivariate strong invariance principles in Markov chain Monte Carlo
Strong invariance principles in Markov chain Monte Carlo are crucial to ...