Double-loop quasi-Monte Carlo estimator for nested integration

02/27/2023
by Arved Bartuska, et al.

Nested integration arises when a nonlinear function is applied to an inner integral and the result is integrated again. Such problems are common in engineering applications, such as optimal experimental design, where typically neither integral has a closed-form expression. Approximating both integrals with the Monte Carlo method leads to a double-loop Monte Carlo estimator, which is often prohibitively expensive, since the estimate of the outer integral carries a bias that depends on the variance of the inner integrand. If the inner integrand is only known approximately, further bias is introduced into the estimate of the outer integral. Variance reduction methods, such as importance sampling, have been used successfully to make these computations more affordable. Moreover, the random samples can be replaced with deterministic low-discrepancy sequences, leading to quasi-Monte Carlo techniques, and randomizing the low-discrepancy sequences simplifies the error analysis of the proposed double-loop quasi-Monte Carlo estimator. To our knowledge, no comprehensive error analysis exists yet for truly nested randomized quasi-Monte Carlo estimation, i.e., for estimators that use low-discrepancy sequences in both the inner and the outer approximation. We derive asymptotic error bounds and a method for obtaining the optimal number of samples for both integral approximations. We then demonstrate the computational savings of this approach over standard nested (i.e., double-loop) Monte Carlo integration when estimating the expected information gain in two examples from Bayesian optimal experimental design, the second of which involves an experiment from solid mechanics.
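
As a concrete illustration of the estimator structure described above (not the authors' implementation), the following Python sketch builds a double-loop randomized quasi-Monte Carlo estimate of the expected information gain for a toy linear-Gaussian model, theta ~ N(0, 1) and y = theta + noise, for which the exact value 0.5*log(1 + 1/sigma^2) serves as a check. Scrambled Sobol' points from scipy.stats.qmc drive both loops; the sample sizes, the toy model, and the use of a single shared inner point set are illustrative assumptions rather than the paper's setup.

# Minimal sketch of a double-loop randomized QMC estimator of the
# expected information gain (EIG) for a toy linear-Gaussian model:
# theta ~ N(0, 1), y = theta + eps, eps ~ N(0, sigma^2).
# Exact EIG = 0.5 * log(1 + 1/sigma^2) is used only to check the estimate.
import numpy as np
from scipy.stats import norm, qmc

def dl_rqmc_eig(N=2**10, M=2**7, sigma=0.5, seed=0):
    """Double-loop randomized QMC estimate of the EIG.

    Outer loop: N scrambled Sobol' points drive (theta_n, y_n) ~ p(theta, y).
    Inner loop: M scrambled Sobol' points drive prior samples used to
    approximate the evidence p(y_n) = E_theta[p(y_n | theta)].
    """
    rng = np.random.default_rng(seed)
    # Outer points: 2 dimensions (one for theta, one for the observation noise).
    outer = qmc.Sobol(d=2, scramble=True, seed=rng).random(N)
    theta = norm.ppf(outer[:, 0])               # prior samples theta_n
    y = theta + sigma * norm.ppf(outer[:, 1])   # simulated observations y_n

    # Inner points: 1 dimension, independent scrambling, shared across all n
    # (a simplification; fresh randomizations per outer sample also work).
    inner = qmc.Sobol(d=1, scramble=True, seed=rng).random(M)
    theta_in = norm.ppf(inner[:, 0])            # prior samples for the evidence

    loglik = norm.logpdf(y, loc=theta, scale=sigma)   # log p(y_n | theta_n)
    # Inner estimate of the evidence: (1/M) sum_m p(y_n | theta_m).
    lik_inner = norm.pdf(y[:, None], loc=theta_in[None, :], scale=sigma)
    log_evidence = np.log(lik_inner.mean(axis=1))
    return np.mean(loglik - log_evidence)

if __name__ == "__main__":
    sigma = 0.5
    est = dl_rqmc_eig(sigma=sigma)
    exact = 0.5 * np.log(1.0 + 1.0 / sigma**2)
    print(f"DLQMC estimate: {est:.4f}, exact EIG: {exact:.4f}")

In the paper's setting, the randomization and the split between inner and outer sample sizes follow from the derived error bounds; in this sketch they are simply fixed by hand.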

