Conditional Monte Carlo revisited

10/14/2020 ∙ by Bo Henry Lindqvist, et al.

Conditional Monte Carlo refers to sampling from the conditional distribution of a random vector X given the value T(X) = t for a function T(X). Classical conditional Monte Carlo methods were designed for estimating conditional expectations of functions of X by sampling from unconditional distributions obtained by certain weighting schemes. The basic ingredients were importance sampling and changes of variables. In the present paper we reformulate the problem by introducing an artificial parametric model and representing the conditional distribution of X given T(X) = t within this new model. The key step is to endow the parameter of the artificial model with a distribution. The approach is illustrated by several examples, chosen in particular to demonstrate conditional sampling in cases where such sampling is not straightforward. A simulation study and an application to goodness-of-fit testing of real data are also given.
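To make the basic setting concrete, here is a minimal sketch (not the paper's method) of conditional sampling in one of the rare cases where it is straightforward: X_1, ..., X_n i.i.d. exponential, conditioned on T(X) = sum(X) = t. It relies on the classical fact that X / sum(X) follows a flat Dirichlet distribution independently of sum(X), so the conditional law of X given sum(X) = t is t times a Dirichlet(1, ..., 1) vector, regardless of the rate parameter. The function name and arguments are illustrative only.

```python
import numpy as np

def conditional_exponential_sample(n, t, size, rng):
    """Draw `size` samples of X = (X_1, ..., X_n), a priori i.i.d.
    exponential, conditionally on T(X) = sum(X) = t.

    Since X / sum(X) ~ Dirichlet(1, ..., 1) independently of sum(X),
    the conditional distribution of X given sum(X) = t is t times a
    flat Dirichlet vector; the exponential rate cancels out.
    """
    e = rng.exponential(size=(size, n))            # unconditional draws
    return t * e / e.sum(axis=1, keepdims=True)    # rescale onto the simplex

rng = np.random.default_rng(0)
samples = conditional_exponential_sample(n=5, t=10.0, size=10_000, rng=rng)
# every draw satisfies the constraint T(X) = t exactly
assert np.allclose(samples.sum(axis=1), 10.0)
```

For general statistics T(X), no such closed-form reduction exists, which is the difficulty the paper's artificial-parametric-model construction addresses.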


