Making mean-estimation more efficient using an MCMC trace variance approach: DynaMITE

11/22/2020 ∙ by Cyrus Cousins, et al.

The Markov-chain Monte Carlo (MCMC) method is widely used to estimate the expectation 𝔼_π[f] of a function f: Ω → [a, b] over a distribution π on Ω (mean estimation) to within ε additive error, with high probability. Letting R ≐ b − a, standard variance-agnostic MCMC mean estimators run the chain for Õ(TR²/ε²) steps, given as input an (often loose) upper bound T on the relaxation time τ_rel. When an upper bound V on the stationary variance v_π ≐ 𝕍_π[f] is known, Õ(TR/ε + TV/ε²) steps suffice. We introduce the DYNAmic Mcmc Inter-Trace variance Estimation (DynaMITE) algorithm for mean estimation. We define the inter-trace variance v_T for any trace length T, and show that, with high probability, DynaMITE estimates the mean to within ε additive error in Õ(TR/ε + τ_rel·v_{τ_rel}/ε²) steps, without a priori bounds on v_π, the variance of f, or the trace variance v_T. When ε is small, the dominant term is τ_rel·v_{τ_rel}, so the complexity of DynaMITE depends principally on the a priori unknown τ_rel and v_{τ_rel}. We believe that in many situations v_T = o(v_π), and we identify two cases that demonstrate this. Furthermore, it always holds that v_{τ_rel} ≤ 2v_π, so the worst-case complexity of DynaMITE is Õ(TR/ε + τ_rel·v_π/ε²), improving on the dependence of classical methods on the loose bounds T and V.
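To make the trace-based estimation idea concrete, here is a minimal sketch of mean estimation from an MCMC run, splitting the chain's output into disjoint traces of a fixed length and computing the empirical variance across trace means (the quantity the abstract calls the inter-trace variance v_T). This is an illustration only: the example chain (a lazy reflecting random walk with uniform stationary distribution), the function names, and the fixed trace length are all assumptions for demonstration, not the adaptive DynaMITE algorithm itself, which chooses trace lengths dynamically.

```python
import random

def lazy_walk_step(x, n):
    """One step of a lazy reflecting random walk on {0, ..., n-1}.
    Its stationary distribution is uniform (hypothetical example chain)."""
    if random.random() < 0.5:          # laziness ensures aperiodicity
        return x
    y = x + random.choice([-1, 1])
    return y if 0 <= y < n else x      # reflect at the boundary

def trace_mean_estimate(f, step, x0, trace_len, num_traces, burn_in=1000):
    """Estimate E_pi[f] by averaging disjoint traces of length trace_len;
    also return the empirical inter-trace variance of the trace means."""
    x = x0
    for _ in range(burn_in):           # discard burn-in to approach pi
        x = step(x)
    trace_means = []
    for _ in range(num_traces):
        total = 0.0
        for _ in range(trace_len):
            x = step(x)
            total += f(x)
        trace_means.append(total / trace_len)
    mean = sum(trace_means) / num_traces
    var = sum((m - mean) ** 2 for m in trace_means) / num_traces
    return mean, var

# Usage: estimate the mean of f(x) = x under the uniform distribution
# on {0, ..., 9}; the true value is 4.5.
random.seed(0)
est, v_T = trace_mean_estimate(
    lambda v: float(v), lambda x: lazy_walk_step(x, 10),
    x0=0, trace_len=200, num_traces=200)
```

Because successive states are correlated, the variance of a trace mean decays with trace length only once traces are longer than the relaxation time, which is why bounds in terms of τ_rel and v_{τ_rel} can beat bounds stated in terms of the stationary variance v_π alone.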


