Multilevel Monte Carlo estimation of expected information gains

11/19/2018
by Takashi Goda, et al.

In this paper we develop an efficient Monte Carlo algorithm for estimating the expected information gain, which measures how much the information entropy about an uncertain quantity of interest θ is reduced on average by collecting relevant data Y. The expected information gain is expressed as a nested expectation, with an outer expectation with respect to Y and an inner expectation with respect to θ. The standard nested Monte Carlo method requires a total computational cost of O(ε^-3) to achieve a root-mean-square accuracy of ε. We reduce this cost to the optimal O(ε^-2) by applying a multilevel Monte Carlo (MLMC) method. More precisely, we introduce an antithetic MLMC estimator for the expected information gain and provide a sufficient condition on the data model under which the antithetic property of the estimator is well exploited, so that the optimal complexity of O(ε^-2) is achieved. Furthermore, we discuss how to incorporate importance sampling techniques within the MLMC estimator to avoid arithmetic underflow. Numerical experiments demonstrate considerable computational savings over the nested Monte Carlo method for a simple test case and a more realistic pharmacokinetic model.
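
To make the nested structure and the antithetic coupling concrete, the following is a minimal Python sketch of both estimators on a toy conjugate-Gaussian data model. The model, the per-level sample allocation, and all names here are illustrative assumptions, not the paper's setup, and the importance sampling refinement is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy conjugate-Gaussian model (an illustrative assumption, not the paper's
    # pharmacokinetic example): theta ~ N(0, 1), Y | theta ~ N(theta, sigma^2),
    # for which the exact expected information gain is 0.5 * log(1 + 1/sigma^2).
    sigma = 0.5

    def log_likelihood(y, theta):
        return -0.5 * ((y - theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

    def nested_mc_eig(n_outer, n_inner):
        """Standard nested MC estimator: cost O(n_outer * n_inner)."""
        theta = rng.standard_normal(n_outer)
        y = theta + sigma * rng.standard_normal(n_outer)
        # Inner MC estimate of the evidence p(Y) = E_theta[p(Y | theta)],
        # computed via log-sum-exp to guard against arithmetic underflow.
        ll_in = log_likelihood(y[:, None], rng.standard_normal((n_outer, n_inner)))
        m = ll_in.max(axis=1)
        log_evidence = m + np.log(np.exp(ll_in - m[:, None]).mean(axis=1))
        return np.mean(log_likelihood(y, theta) - log_evidence)

    def mlmc_correction(n_outer, level):
        """Antithetic MLMC correction at one level, using 2**level inner samples.

        The fine evidence estimate (all inner samples) is coupled with the
        average of two coarse estimates, each built from one half of the SAME
        samples; the outer log-likelihood cancels in the difference for level >= 1.
        """
        theta = rng.standard_normal(n_outer)
        y = theta + sigma * rng.standard_normal(n_outer)
        n_inner = 2 ** level
        ll_in = log_likelihood(y[:, None], rng.standard_normal((n_outer, n_inner)))
        m = ll_in.max(axis=1, keepdims=True)
        p = np.exp(ll_in - m)                      # rescaled likelihoods, <= 1
        fine = np.log(p.mean(axis=1))
        if level == 0:                             # base level: one inner sample
            return np.mean(log_likelihood(y, theta) - m[:, 0] - fine)
        half = n_inner // 2
        coarse = 0.5 * (np.log(p[:, :half].mean(axis=1))
                        + np.log(p[:, half:].mean(axis=1)))
        return np.mean(coarse - fine)              # sign: the EIG carries -log p(Y)

    # Telescoping MLMC sum; halving the outer samples per level is a rough
    # allocation for illustration, not the paper's variance-optimal choice.
    eig_mlmc = sum(mlmc_correction(50_000 >> l, l) for l in range(8))
    print("nested MC:", nested_mc_eig(2_000, 2_000))
    print("MLMC     :", eig_mlmc)
    print("exact    :", 0.5 * np.log1p(1.0 / sigma ** 2))

The essential design choice is that each fine-level evidence estimate shares its inner samples with the two coarse half-estimates; under the paper's sufficient condition on the data model, this antithetic coupling makes the per-level variance decay fast enough for the overall O(ε^-2) complexity.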
