
Multilevel Adaptive Sparse Grid Quadrature for Monte Carlo models
Many problems require approximating an expected value by some kind of M...
10/01/2018 ∙ by Sandra Döpking, et al.

Density estimation by Randomized Quasi-Monte Carlo
We consider the problem of estimating the density of a random variable X...
07/16/2018 ∙ by Amal Ben Abdellah, et al.

A randomized Halton algorithm in R
Randomized quasi-Monte Carlo (RQMC) sampling can bring orders of magnitu...
06/09/2017 ∙ by Art B. Owen, et al.

Optimal fidelity multilevel Monte Carlo for quantification of uncertainty in simulations of cloud cavitation collapse
We quantify uncertainties in the location and magnitude of extreme press...
05/11/2017 ∙ by Jonas Šukys, et al.

Reducing Reparameterization Gradient Variance
Optimization with noisy gradients has become ubiquitous in statistics an...
05/22/2017 ∙ by Andrew C. Miller, et al.

Monte Carlo Sampling Bias in the Microwave Uncertainty Framework
Uncertainty propagation software can have unknown, inadvertent biases in...
02/15/2019 ∙ by Michael Frey, et al.

Noise contrastive estimation: asymptotics, comparison with MC-MLE
A statistical model is said to be unnormalised when its likelihood func...
01/31/2018 ∙ by Lionel Riou-Durand, et al.

A Generalized Framework for Approximate Control Variates
We describe and analyze a Monte Carlo (MC) sampling framework for accelerating the estimation of statistics of computationally expensive simulation models using an ensemble of models with lower cost. Our approach uses control variates, with unknown means that must be estimated from data, to reduce the variance in statistical estimators relative to MC. Our framework unifies existing multilevel, multi-index, and multifidelity MC algorithms and leads to new and more efficient sampling schemes. Our results indicate that the variance reduction achieved by existing algorithms that explicitly or implicitly estimate control means, such as multilevel MC and multifidelity MC, is limited to that of a single linear control variate with known mean, regardless of the number of control variates. We show how to circumvent this limitation and derive a new family of schemes that make full use of all available information sources. In particular, we demonstrate that a significant gap can exist, of orders of magnitude in some cases, between the variance reduction achievable by current approaches and that of our generalized schemes. We also present initial sample allocation approaches for exploiting this gap, which yield the greatest benefit when augmenting the high-fidelity model evaluations is impractical because, for instance, they arise from a legacy database. Several analytic examples and two PDE problems (viscous Burgers' and steady-state diffusion) are considered to demonstrate the methodology.
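To make the baseline concrete: the abstract's benchmark is a single linear control variate with known mean. The sketch below (illustrative only, not the paper's method; the target `E[exp(U)]` and control `g(U) = U` are our own choices) shows the variance reduction such a control variate delivers relative to plain MC.

```python
import numpy as np

# Illustrative sketch, not taken from the paper: a single linear control
# variate with KNOWN mean -- the baseline the abstract compares against.
# Target: E[exp(U)] for U ~ Uniform(0, 1), whose exact value is e - 1.
# Control: g(U) = U, with known mean 1/2, highly correlated with exp(U).
rng = np.random.default_rng(42)

def mc_and_cv_estimates(n):
    u = rng.uniform(size=n)
    f = np.exp(u)   # "expensive" model evaluations
    g = u           # cheap control variate
    # Near-optimal coefficient beta = Cov(f, g) / Var(g), fit from the sample.
    beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)
    mc = f.mean()                          # plain Monte Carlo estimate
    cv = (f - beta * (g - 0.5)).mean()     # control-variate estimate
    return mc, cv

# Compare estimator spread over repeated independent trials.
trials = np.array([mc_and_cv_estimates(1000) for _ in range(200)])
mc_std, cv_std = trials.std(axis=0)
print(f"plain MC std: {mc_std:.4f}, CV std: {cv_std:.4f}")
```

For this target the control-variate estimator's spread is several times smaller than plain MC at the same sample count; the paper's point is that estimating the control means from data caps the reduction at this single-known-mean level unless the allocation is redesigned.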