A useful rule for weighting uncorrelated estimates appears in an unpublished technical report (Owen and Zhou, 1999). This paper states and proves that result and discusses some mild generalizations.
The motivating context for the problem is adaptive importance sampling. There is an unknown quantity $\mu$ and we have unbiased and uncorrelated measurements of it, denoted $\hat\mu_k$ for $k=1,\dots,K$. In adaptive importance sampling, $\hat\mu_k$ could be the result of an importance sampler chosen on the basis of the data that produced $\hat\mu_1,\dots,\hat\mu_{k-1}$. The estimates are uncorrelated by construction but are not independent. The variance of $\hat\mu_k$ is affected by the prior sample values. We have in mind a setting where each $\hat\mu_k$ is based on the same number $n$ of function evaluations but we do not need to use that in our analysis. Our setup could also be reasonable in settings where $n_k$ evaluations are used to construct $\hat\mu_k$.
In an adaptive method with one pilot estimate and one final estimate, $K=2$ and $n$ is usually large. In other settings, each $\hat\mu_k$ could be based on just one evaluation of the integrand (e.g., Ryu and Boyd (2014)) and then $K$ would typically be very large. In intermediate settings we might have a handful of estimates based on perhaps thousands of evaluations each. For instance, the cross-entropy method (De Boer et al., 2005) might be used this way.
We estimate $\mu$ by $\hat\mu_\lambda=\sum_{k=1}^K\lambda_k\hat\mu_k$ where $\sum_{k=1}^K\lambda_k=1$. Then $E(\hat\mu_\lambda)=\mu$ and $\mathrm{Var}(\hat\mu_\lambda)=\sum_{k=1}^K\lambda_k^2\sigma_k^2$ where $\sigma_k^2=\mathrm{Var}(\hat\mu_k)$, and to minimize the variance we should take $\lambda_k\propto1/\sigma_k^2$. The problem is that we do not know $\sigma_k^2$. We might have unbiased estimates $\hat\sigma_k^2$ of $\sigma_k^2$, but taking $\lambda_k\propto1/\hat\sigma_k^2$ will yield an estimate that is no longer unbiased. The bias is potentially serious in rare event problems. When the $k$'th sample fails to include the rare event much or at all, then $\hat\mu_k$ and $\hat\sigma_k^2$ are both likely to be small, whereas obtaining more than the expected number of rare events makes it more likely that both are large. That is, we anticipate a negative correlation between $\hat\mu_k$ and $\lambda_k$. Then the small values of $\hat\mu_k$ will be upweighted while the large ones will be downweighted, resulting in a downward bias for $\hat\mu_\lambda$. In estimating the probability of a rare event, we might even obtain $\hat\sigma_k^2=0$, making sample variances completely unusable. For background on importance sampling for rare events, see L'Ecuyer et al. (2009). The AMIS algorithm of Cornuet et al. (2012) uses a weighted combination of estimates with weights that are correlated with those estimates and that complicates even the task of proving consistency.
Suppose that $\sigma_k^2=\tau^2k^{-y}$ for some unknown $y$ with $0\le y\le1$. The value $y=0$ is a model for importance sampling where adaptation brings no benefit. The value $y=1$ is a model for a setting where importance sampling is working very well, roughly as well as quasi-Monte Carlo sampling (Dick and Pillichshammer, 2010). The optimal choice is $\lambda_k\propto k^y$. Not knowing the true $y$ we might take $\lambda_k\propto k^z$ for some $0\le z\le1$. The square root rule from the appendix of Owen and Zhou (1999) takes $z=1/2$. It never has variance more than $9/8$ times that of the unknown best weighting rule when $\sigma_k^2\propto k^{-y}$ for some $0\le y\le1$.
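As a concrete sketch (our own illustration, assuming NumPy; the helper name is hypothetical), the rule weights the $k$'th estimate proportionally to $\sqrt{k}$:

```python
import numpy as np

def sqrt_rule_combine(estimates):
    """Combine uncorrelated, unbiased stage estimates mu_hat_1..mu_hat_K
    using the square root rule: weight lambda_k proportional to sqrt(k)."""
    estimates = np.asarray(estimates, dtype=float)
    k = np.arange(1, len(estimates) + 1)
    lam = np.sqrt(k)
    lam /= lam.sum()          # weights sum to one, preserving unbiasedness
    return float(lam @ estimates)

# later (presumably better) stages get more weight: lambda_K / lambda_1 = sqrt(K)
print(sqrt_rule_combine([3.1, 2.9, 3.0, 3.02]))
```

Because the weights are deterministic, the combination stays unbiased, avoiding the bias problem from estimated variances described above.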
This paper is organized as follows. Section 2 presents our notation and some of the context from adaptive importance sampling. Section 3 states and proves our main result using two lemmas. Lemma 2 there has an inequality that we had originally described as ‘straightforward algebra’. Revisiting the problem now we find the inequality to be quite delicate. Section 4 includes some remarks on generalizations to settings where even better convergence rates, including exponential convergence, apply.
2 Notation

Step $k$ of our adaptive importance sampler generates data $X_k$. We let $\mathcal{D}_{k-1}=(X_1,\dots,X_{k-1})$ denote the data from all prior steps, with $\mathcal{D}_0$ being empty. We assume that our estimates satisfy

$$E(\hat\mu_k\mid\mathcal{D}_{k-1})=\mu,\qquad k=1,\dots,K. \tag{1}$$
In Owen and Zhou (1999), $\hat\mu_k$ was an importance sampled estimate of an integral over the unit cube, sampling from a mixture of products of beta distributions whose parameters were tuned to the prior data in $\mathcal{D}_{k-1}$.
We combine the estimates via

$$\hat\mu_\lambda=\sum_{k=1}^K\lambda_k\hat\mu_k \tag{2}$$

where $\lambda_k\ge0$ and $\sum_{k=1}^K\lambda_k=1$. Then $E(\hat\mu_\lambda)=\sum_{k=1}^K\lambda_kE(\hat\mu_k)=\mu$ so $\hat\mu_\lambda$ is unbiased. For $j<k$,

$$\mathrm{Cov}(\hat\mu_j,\hat\mu_k)=E\bigl((\hat\mu_j-\mu)(\hat\mu_k-\mu)\bigr)=E\bigl((\hat\mu_j-\mu)\,E(\hat\mu_k-\mu\mid\mathcal{D}_{k-1})\bigr)=0,$$

so the estimates are uncorrelated and $\mathrm{Var}(\hat\mu_\lambda)=\sum_{k=1}^K\lambda_k^2\sigma_k^2$ with $\sigma_k^2=\mathrm{Var}(\hat\mu_k)$.
The conditional variances $\mathrm{Var}(\hat\mu_k\mid\mathcal{D}_{k-1})$ are random. If stage $k$ of importance sampling has $2$ or more observations in it, then we can ordinarily construct conditionally unbiased variance estimates $\hat\sigma_k^2$. That is,

$$E(\hat\sigma_k^2\mid\mathcal{D}_{k-1})=\mathrm{Var}(\hat\mu_k\mid\mathcal{D}_{k-1}).$$
We can thus combine results from the $K$ stages of the AIS algorithm and get an unbiased estimate $\sum_{k=1}^K\lambda_k^2\hat\sigma_k^2$ of the variance of $\hat\mu_\lambda$. The underlying idea here is that $\sum_{k=1}^K\lambda_k(\hat\mu_k-\mu)$ is a martingale in $K$ (Williams, 1991). We will not make formal use of martingale arguments.
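A minimal simulation sketch of this pipeline (a hypothetical stand-in sampler, not the beta-mixture sampler of Owen and Zhou (1999)): each stage produces a conditionally unbiased mean and a conditionally unbiased variance estimate, and the stages are then pooled:

```python
import numpy as np

# Hypothetical stand-in for K stages of adaptive importance sampling:
# stage k draws n values with mean mu; the spread shrinks with k,
# mimicking adaptation.  Real AIS would tune a proposal to prior data.
rng = np.random.default_rng(0)
mu, K, n = 2.0, 5, 100

lam = np.ones(K) / K                     # any fixed weights summing to 1
mu_hat = np.empty(K)
var_hat = np.empty(K)
for k in range(1, K + 1):
    x = rng.normal(mu, k**-0.5, size=n)  # stage k data
    mu_hat[k - 1] = x.mean()             # conditionally unbiased for mu
    var_hat[k - 1] = x.var(ddof=1) / n   # conditionally unbiased for Var(mu_hat_k)

mu_lam = lam @ mu_hat                    # unbiased combined estimate
var_lam = lam**2 @ var_hat               # unbiased estimate of Var(mu_lam)
print(mu_lam, var_lam)
```

The key point is that the weights `lam` are fixed in advance, so both `mu_lam` and `var_lam` remain unbiased.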
Now we introduce a model

$$\sigma_k^2=\tau^2k^{-y}$$

for $0\le y\le1$ and a constant of proportionality $0<\tau^2<\infty$. Our estimate is

$$\hat\mu_{(z)}=\sum_{k=1}^K\lambda_k(z)\hat\mu_k,\qquad \lambda_k(z)=\frac{k^z}{\sum_{j=1}^Kj^z}.$$

The best choice is $z=y$ but $y$ is unknown. Our variance using $z$ is

$$\mathrm{Var}(\hat\mu_{(z)})=\tau^2\frac{\sum_{k=1}^Kk^{2z-y}}{\bigl(\sum_{k=1}^Kk^z\bigr)^2}$$

and we measure the inefficiency of our choice $z$ by

$$I(z,y)=\frac{\mathrm{Var}(\hat\mu_{(z)})}{\mathrm{Var}(\hat\mu_{(y)})}=\frac{\sum_{k=1}^Kk^{2z-y}\sum_{k=1}^Kk^{y}}{\bigl(\sum_{k=1}^Kk^z\bigr)^2}. \tag{3}$$
If the variance of $\hat\mu_k$ decays as $k^{-y}$ and, knowing that, we use $\lambda_k\propto k^y$, then $\mathrm{Var}(\hat\mu_{(y)})=\tau^2/\sum_{k=1}^Kk^y\approx\tau^2(y+1)K^{-(y+1)}$. For the pessimistic value $y=0$, the variance decays at the usual Monte Carlo rate $K^{-1}$ in the number of steps. For the optimistic value $y=1$, the variance decays at the rate $K^{-2}$, slightly better than the rate $O(K^{-2+\epsilon})$ (any $\epsilon>0$) which holds for randomly shifted lattice rules applied to functions of bounded variation in the sense of Hardy and Krause. See L'Ecuyer and Lemieux (2000) for background on randomized lattice rules.
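The rate approximation above is easy to check numerically; this is our own sketch, taking $\tau^2=1$:

```python
import numpy as np

# Oracle-weight variance is 1 / sum_k k^y, which behaves like
# (y + 1) * K**-(y + 1) for large K.
K = 10_000
k = np.arange(1, K + 1, dtype=float)
ratios = {}
for y in (0.0, 0.5, 1.0):
    exact = 1.0 / (k**y).sum()
    approx = (y + 1.0) * K**-(y + 1.0)
    ratios[y] = exact / approx
print(ratios)  # all ratios are close to 1
```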
3 Square root rule
Owen and Zhou (1999) proposed to split the difference between the optimistic estimate $y=1$ and the pessimistic one $y=0$ by using $z=1/2$. The result is an unbiased estimate that attains the same convergence rate as the unknown best estimate with only a modest penalty in the constant factor.
Theorem 1. For $I(z,y)$ given by equation (3),

$$I(1/2,y)\le\frac98\quad\text{for all integers }K\ge1\text{ and all }0\le y\le1. \tag{4}$$

If $z\ge0$ and $z\ne1/2$ then

$$\sup_{K\ge1}\,\sup_{0\le y\le1}I(z,y)>\frac98. \tag{5}$$

Equation (4) shows that the square root rule has at most $12.5\%$ more variance than the unknown best rule. Equation (5) shows that the choice $z=1/2$ is the unique minimax optimal one, when $K\ge2$. When $K=1$ then $\hat\mu_{(z)}$ does not depend on $z$ and in that case, $I(z,y)=1$. Before proving the theorem, we establish two lemmas.
Lemma 1. For $I(z,y)$ given by equation (3), with $K\ge1$ and $z\ge0$,

$$\sup_{0\le y\le1}I(z,y)=\max\bigl(I(z,0),I(z,1)\bigr),$$

and the maximum is attained at $y=0$ when $z\ge1/2$ and at $y=1$ when $z\le1/2$.

Proof. The result holds trivially for $K=1$ because then $I(z,y)=1$. Now suppose that $K\ge2$. Then

$$\frac{\partial^2}{\partial y^2}\log\sum_{k=1}^Kk^{y}=\frac{\sum_{k=1}^K(\log k)^2k^{y}\sum_{k=1}^Kk^{y}-\bigl(\sum_{k=1}^K(\log k)k^{y}\bigr)^2}{\bigl(\sum_{k=1}^Kk^{y}\bigr)^2}>0$$

for all $y$, by the Cauchy–Schwarz inequality, and the same holds for $\sum_{k=1}^Kk^{2z-y}$. Thus $(\sum_{k=1}^Kk^z)^2I(z,y)=\sum_{k=1}^Kk^{2z-y}\sum_{k=1}^Kk^y$ is a product of strictly log-convex functions of $y$, so $I(z,y)$ is strictly convex in $y$ for any $z$ when $K\ge2$, and so $\sup_{0\le y\le1}I(z,y)=\max(I(z,0),I(z,1))$.

By symmetry, $I(z,y)=I(z,2z-y)$. So if $z\ge1/2$, then $I(z,1)=I(z,2z-1)\le I(z,0)$. The last step follows because $|2z-1-z|=|1-z|\le z=|0-z|$ and $I(z,\cdot)$ is a convex function of $y$ with its minimum at $y=z$. This establishes the result for $z\ge1/2$ and a similar argument holds for $z\le1/2$. For $z=1/2$, $I(1/2,0)=I(1/2,1)$. ∎
We will use the following integral bounds, for integers $K\ge1$:

$$\sum_{k=1}^Kk^{1/2}\ge\int_{1/2}^{K+1/2}x^{1/2}\,\mathrm{d}x=\frac23\Bigl((K+1/2)^{3/2}-(1/2)^{3/2}\Bigr) \tag{6}$$

and

$$\sum_{k=1}^Kk^{1/2}\le\int_{1}^{K+1}x^{1/2}\,\mathrm{d}x=\frac23\bigl((K+1)^{3/2}-1\bigr). \tag{7}$$

Equation (6) uses concavity of $x^{1/2}$ and is much sharper than the bound $\sum_{k=1}^Kk^{1/2}\ge(2/3)K^{3/2}$ one gets by integrating over $[0,K]$.
Lemma 2. For $I(z,y)$ given by equation (3), $I(1/2,0)\le9/8$ holds for any integer $K\ge1$.

Proof. Let $S=\sum_{k=1}^Kk^{1/2}$. Then

$$I(1/2,0)=\frac{\sum_{k=1}^Kk\sum_{k=1}^K1}{S^2}=\frac{K^2(K+1)}{2S^2}.$$

Now $f(x)=x^{1/2}$ has $f''(x)=-x^{-3/2}/4<0$ and so the concavity bound (6) applies. Therefore, if $K\ge1$,

$$\frac1{I(1/2,0)}=\frac{2S^2}{K^2(K+1)}\ge\frac89\times\frac{\bigl((K+1/2)^{3/2}-(1/2)^{3/2}\bigr)^2}{K^2(K+1)}. \tag{8}$$

The numerator in (8) is

$$\bigl((K+1/2)^{3/2}-(1/2)^{3/2}\bigr)^2=K^2(K+1)+\frac{K+1/2}{2}\Bigl((K+1/2)^{1/2}-(1/2)^{1/2}\Bigr)^2$$

while the denominator is $K^2(K+1)$. The numerator is larger than the denominator, so $1/I(1/2,0)\ge8/9$, establishing the lemma for all $K\ge1$. For $K=1$ we can also compute directly that $I(1/2,0)=1$. ∎
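Both the integral bound (6) and the conclusion of Lemma 2 are easy to confirm numerically; this is our own check, not part of the proof:

```python
import numpy as np

# Check bound (6) and Lemma 2 for K = 1, ..., 1000.
worst_I = 0.0
for K in range(1, 1001):
    S = np.sqrt(np.arange(1.0, K + 1.0)).sum()
    lower = (2.0 / 3.0) * ((K + 0.5)**1.5 - 0.5**1.5)   # right side of (6)
    assert S >= lower                                    # bound (6) holds
    worst_I = max(worst_I, K**2 * (K + 1) / (2.0 * S**2))  # I(1/2, 0)
print(worst_I)  # stays below 9/8 = 1.125, approached as K grows
```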
Proof of Theorem 1. By Lemma 1, $\sup_{0\le y\le1}I(z,y)=\max(I(z,0),I(z,1))$, and by symmetry $I(1/2,0)=I(1/2,1)$, so Lemma 2 establishes (4). For (5), the approximation $\sum_{k=1}^Kk^a\approx K^{a+1}/(a+1)$ gives the large $K$ limit

$$\lim_{K\to\infty}I(z,y)=I_\infty(z,y)=\frac{(1+z)^2}{(1+2z-y)(1+y)},$$

so it suffices to show that $\max(I_\infty(z,0),I_\infty(z,1))>9/8$ when $z\ne1/2$. First take $0<z<1/2$ and $y=1$, where $I_\infty(z,1)=(1+z)^2/(4z)$. Ignoring the factor $1/4$ and taking the derivative yields

$$\frac{\mathrm{d}}{\mathrm{d}z}\,\frac{(1+z)^2}{z}=\frac{2z(1+z)-(1+z)^2}{z^2}=\frac{z^2-1}{z^2}. \tag{9}$$

The numerator in (9) is negative for $0<z<1$ and so the expression in (9) is negative. Therefore $I_\infty(z,1)$ is a decreasing function of $z$, making $I_\infty(z,1)>I_\infty(1/2,1)=9/8$ for $0<z<1/2$; for $z=0$, $I(0,1)$ diverges as $K\to\infty$. This establishes (5) for $z<1/2$ and the case of $z>1/2$ is similar, using that $I_\infty(z,0)=(1+z)^2/(1+2z)$ is increasing in $z$. ∎
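Both claims of Theorem 1 can be illustrated numerically using the inefficiency from equation (3); the grids and the sample values of $z$ below are our own choices:

```python
import numpy as np

def inefficiency(z, y, K):
    """I(z, y) from equation (3)."""
    k = np.arange(1, K + 1, dtype=float)
    return (k**(2*z - y)).sum() * (k**y).sum() / (k**z).sum()**2

# Claim (4): the square root rule never exceeds 9/8.
ys = np.linspace(0.0, 1.0, 21)
worst_half = max(inefficiency(0.5, y, K) for K in range(1, 201) for y in ys)

# Claim (5): another z eventually does worse at one endpoint of y.
worst_other = max(inefficiency(0.3, 1.0, 500), inefficiency(0.8, 0.0, 500))
print(worst_half, worst_other)
```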
4 Generalizations

We can generalize some aspects of the square root rule to other power laws. Suppose that the true rate parameter is known to satisfy $0\le y\le a$ for some $a>0$. We can then work with $\lambda_k\propto k^z$ for $z=a/2$. First we recall the definition of $I(z,y)$ from (3):

$$I(z,y)=\frac{\sum_{k=1}^Kk^{2z-y}\sum_{k=1}^Kk^{y}}{\bigl(\sum_{k=1}^Kk^z\bigr)^2}. \tag{10}$$

Lemma 3. For $I(z,y)$ given by equation (3), with $K\ge1$, $z\ge0$ and $a>0$,

$$\sup_{0\le y\le a}I(z,y)=\max\bigl(I(z,0),I(z,a)\bigr).$$

Proof. The proof is the same as for Lemma 1 because $I(z,y)$ is convex in $y$ for any $z\ge0$ and any $K\ge2$, and $I(z,y)=1$ when $K=1$. ∎
The inequalities in Lemma 2 are rather delicate and we have not extended them to the more general setting. Equation (10) is easy to evaluate for integer values of $y$, $z$, and $2z-y$. For more general values, some sharper tools than the integral bounds in this paper are given by Burrows and Talbot (1984), who make a detailed study of sums of powers of the first $K$ natural numbers. We do see numerically that $I(z,y)$ is nondecreasing in $K$ in every instance we have inspected. We can easily find the asymptotic inefficiency

$$I_\infty(z,y)=\lim_{K\to\infty}I(z,y)=\frac{(1+z)^2}{(1+2z-y)(1+y)}.$$
In cases with $0\le y\le a$ and hence $z=a/2$ we get $I_\infty(a/2,0)=I_\infty(a/2,a)=(1+a/2)^2/(1+a)$. For instance, an upper bound $y\le3$, at the rate corresponding roughly to asymptotic accuracy of scrambled net integration (Owen, 1997), leads to $z=3/2$ and an asymptotic inefficiency of at most

$$\frac{(1+3/2)^2}{1+3}=\frac{25}{16}.$$
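The asymptotic inefficiency is simple to tabulate; `asymptotic_inefficiency` is a name of our choosing:

```python
def asymptotic_inefficiency(z, y):
    """Limit of I(z, y) as K -> infinity: (1+z)^2 / ((1+2z-y)(1+y))."""
    return (1 + z)**2 / ((1 + 2*z - y) * (1 + y))

print(asymptotic_inefficiency(0.5, 0.0))   # 9/8 for the square root rule
print(asymptotic_inefficiency(1.5, 0.0))   # 25/16 when rates up to k^-3 are allowed
print(asymptotic_inefficiency(1.5, 3.0))   # same value at the other endpoint
```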
There are problems in which the adaptive importance sampling error converges exponentially to zero. See for instance Kollman et al. (1999) as well as Kong and Spanier (2011). These examples involve particle transport problems through complicated media. It is reasonable to expect that each estimate $\hat\mu_k$ will require a large number of observations and that $K$ will then be not too large.
Suppose that $\sigma_k^2\propto\rho^{-k}$ for some $\rho>1$. Then the desired combination is

$$\lambda_k=\frac{\rho^k}{\sum_{j=1}^K\rho^j}.$$
Not knowing $\rho$ we use

$$\lambda_k=\frac{r^k}{\sum_{j=1}^Kr^j}$$

for some $r>1$. If $\sigma_k^2=\tau^2\rho^{-k}$, then our inefficiency is

$$I=\frac{\sum_{k=1}^K(r^2/\rho)^k\sum_{k=1}^K\rho^k}{\bigl(\sum_{k=1}^Kr^k\bigr)^2}.$$
If $r<\rho^{1/2}$ then the first factor in the numerator is bounded as $K\to\infty$, while $\sum_{k=1}^K\rho^k/(\sum_{k=1}^Kr^k)^2$ grows like $(\rho/r^2)^K$. It can be disastrously inefficient to use $r<\rho^{1/2}$. Some safety is obtained in the limit $r\to\infty$ where $\lambda_K\to1$. That is, in that limit, one simply uses the final and presumably best estimate. Then

$$I=\frac{\sum_{k=1}^K\rho^k}{\rho^K}=\frac{\rho-\rho^{1-K}}{\rho-1}<\frac{\rho}{\rho-1}.$$
For instance, if the variance is halving at each iteration, then $\rho=2$ and then taking only the final estimate is inefficient by at most a factor of $2$. Repeated ten-fold variance reductions correspond to $\rho=10$ and a limiting inefficiency of at most $10/9$. The greatest inefficiency from using only the final iteration arises in the limit $\rho\to1$ where the factor is $(\rho-\rho^{1-K})/(\rho-1)\to K$. In this setting, the user is not getting a meaningful exponential convergence and even there the loss factor is at most $K$ and, as remarked above, $K$ is not likely to be large.
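A quick check of the final-estimate-only combination in the exponential setting (our own sketch; the helper name is hypothetical):

```python
def final_only_inefficiency(rho, K):
    """Variance ratio I from using only the K'th estimate when
    Var(mu_hat_k) is proportional to rho**(-k): sum_k rho^k / rho^K."""
    return sum(rho**k for k in range(1, K + 1)) / rho**K

for rho in (2.0, 10.0):
    # approaches rho / (rho - 1) from below as K grows
    print(rho, final_only_inefficiency(rho, 20), rho / (rho - 1))
```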
This work was supported by the US NSF under grants DMS-1521145 and IIS-1837931.
- Burrows and Talbot (1984) Burrows, B. L. and Talbot, R. F. (1984). Sums of powers of integers. The American Mathematical Monthly, 91(7):394–403.
- Cornuet et al. (2012) Cornuet, J., Marin, J.-M., Mira, A., and Robert, C. P. (2012). Adaptive multiple importance sampling. Scandinavian Journal of Statistics, 39(4):798–812.
- De Boer et al. (2005) De Boer, P.-T., Kroese, D. P., Mannor, S., and Rubinstein, R. Y. (2005). A tutorial on the cross-entropy method. Annals of Operations Research, 134(1):19–67.
- Dick and Pillichshammer (2010) Dick, J. and Pillichshammer, F. (2010). Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, Cambridge.
- Kollman et al. (1999) Kollman, C., Baggerly, K., Cox, D., and Picard, R. (1999). Adaptive importance sampling on discrete Markov chains. Annals of Applied Probability, pages 391–412.
- Kong and Spanier (2011) Kong, R. and Spanier, J. (2011). Geometric convergence of adaptive Monte Carlo algorithms for radiative transport problems based on importance sampling methods. Nuclear Science and Engineering, 168(3):197–225.
- L’Ecuyer and Lemieux (2000) L’Ecuyer, P. and Lemieux, C. (2000). Variance reduction via lattice rules. Management Science, 46(9):1214–1235.
- L’Ecuyer et al. (2009) L’Ecuyer, P., Mandjes, M., and Tuffin, B. (2009). Importance sampling and rare event simulation. In Rubino, G. and Tuffin, B., editors, Rare event simulation using Monte Carlo methods, pages 17–38. John Wiley & Sons, Chichester, UK.
- Owen (1997) Owen, A. B. (1997). Scrambled net variance for integrals of smooth functions. Annals of Statistics, 25(4):1541–1562.
- Owen and Zhou (1999) Owen, A. B. and Zhou, Y. (1999). Adaptive importance sampling by mixtures of products of beta distributions. Technical report, Stanford University.
- Ryu and Boyd (2014) Ryu, E. K. and Boyd, S. P. (2014). Adaptive importance sampling via stochastic convex programming. Technical report, arXiv:1412.4845.
- Williams (1991) Williams, D. (1991). Probability with martingales. Cambridge University Press, Cambridge.