The square root rule for adaptive importance sampling

01/10/2019
by   Art B. Owen, et al.

In adaptive importance sampling, and other contexts, we have unbiased and uncorrelated estimates of a common quantity μ, and the variance of the k-th estimate is thought to decay like k^(-y) for an unknown rate parameter y ∈ [0, 1]. If we combine the estimates as though y = 1/2, then the resulting estimate attains the optimal variance rate with a constant that is too large by a factor of at most 9/8, for any 0 < y < 1 and any number K of estimates.
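The combining step in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: if the variance of the k-th estimate were exactly k^(-y), the optimal (inverse-variance) weights would be proportional to k^y, and the square root rule fixes y = 1/2, i.e. weights proportional to sqrt(k). The values of mu, K, and the decay rate y_true below are hypothetical, chosen only to make the example runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K unbiased, uncorrelated estimates of a common
# quantity mu, where the k-th estimate has variance roughly k^(-y)
# for an unknown rate y in [0, 1].
mu = 1.0
K = 50
y_true = 0.8  # unknown to the method; used only to simulate the data
k = np.arange(1, K + 1)
estimates = mu + rng.standard_normal(K) * k ** (-y_true / 2)

# Square root rule: combine as though y = 1/2, i.e. use the
# inverse-variance weights for that assumed rate, w_k ∝ sqrt(k).
w = np.sqrt(k)
combined = np.sum(w * estimates) / np.sum(w)
print(combined)
```

Even though the weights are tuned for y = 1/2 rather than the true y, the combined estimate still achieves the optimal variance rate, with the constant inflated by at most 9/8.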


Related research

Adaptive Importance Sampling meets Mirror Descent: a Bias-variance tradeoff (10/29/2021)
Adaptive importance sampling is a widely spread Monte Carlo technique th...

DISCount: Counting in Large Image Collections with Detector-Based Importance Sampling (06/05/2023)
Many modern applications use computer vision to detect and count objects...

Low-Shot Validation: Active Importance Sampling for Estimating Classifier Performance on Rare Categories (09/13/2021)
For machine learning models trained with limited labeled training data, ...

Adaptive Bayesian Inference of Markov Transition Rates (05/20/2022)
Optimal designs minimize the number of experimental runs (samples) neede...

A layered multiple importance sampling scheme for focused optimal Bayesian experimental design (03/26/2019)
We develop a new computational approach for "focused" optimal Bayesian e...

Adaptive importance sampling for seismic fragility curve estimation (09/09/2021)
As part of Probabilistic Risk Assessment studies, it is necessary to stu...
