1 Introduction
Monte Carlo methods are a pervasive approach to approximating complex integrals and are deployed in virtually every area of science. Their widespread adoption has led to the development of dozens of specialized Monte Carlo methods for any given task, each with its own tunable parameters. Consequently, it is usually difficult for a practitioner to know which approach, and which corresponding parameter setting, will be most effective for a given problem.
In this paper we develop algorithms for sequentially allocating calls between a set of unbiased estimators to minimize the expected squared error (MSE) of a combined estimate. In particular, we formalize a new class of adaptive estimation problem:
learning to combine Monte Carlo estimators. In this scenario, one is given a set of Monte Carlo estimators that can each approximate the expectation of some function of interest. We assume initially that each estimator is unbiased but has unknown variance. In practice, such estimators could include any unbiased method and/or variance reduction technique, such as unique instantiations of importance, stratified, or rejection sampling; antithetic variates; or control variates (Robert & Casella, 2005). The problem is to design a sequential allocation procedure that can interleave calls to the estimators and combine their outputs to produce a combined estimate whose MSE decreases as quickly as possible. To analyze the performance of such a meta-strategy we formalize the notion of MSE-regret: the time-normalized excess MSE of the combined estimate compared to the best estimator selected in hindsight, i.e., with knowledge of the distribution of estimates produced by each base estimator.

Our first main contribution is to show that this meta-task can be reduced to a stochastic multi-armed bandit problem, where bandit arms are identified with base estimators and the payoff of an arm is given by the negative square of its sampled estimate. In particular, we show that the MSE-regret of any meta-strategy is equal to its bandit-regret when the procedure is used to play in the corresponding bandit problem. As a consequence, we conclude that existing bandit algorithms, as well as their bounds on bandit-regret, can be immediately applied to achieve new results for adaptive Monte Carlo estimation. Although the underlying reduction is quite simple, the resulting adaptive allocation strategies provide novel alternatives to traditional adaptive Monte Carlo strategies, while providing strong finite-sample performance guarantees.
Second, we consider a more general case where the alternative estimators require different (possibly random) costs to produce their sampled estimates. Here we develop a suitably designed bandit formulation that yields bounds on the MSE-regret for cost-aware estimation. We develop new algorithms for this generalized form of adaptive Monte Carlo, provide explicit bounds on their MSE-regret, and compare their performance to a state-of-the-art adaptive Monte Carlo method. By instantiating a set of viable base estimators and selecting among them algorithmically, rather than tuning parameters manually, we find that both computation and experimentation time can be reduced.
This work is closely related, and complementary, to work on adaptive stratified sampling (Carpentier & Munos, 2011), where a strategy is designed to allocate samples between fixed strata to achieve MSE-regret bounds relative to the best allocation proportion chosen in hindsight. Such work has since been extended to optimizing the number (Carpentier & Munos, 2012a) and structure (Carpentier & Munos, 2012b) of strata for differentiable functions. The method proposed in this paper, however, can be applied more broadly to any set of base estimation strategies, and potentially even in combination with these approaches.
2 Background on Bandit Problems
The multi-armed bandit (MAB) problem is a sequential allocation task where an agent must choose an action at each step to maximize its long-term payoff, when only the payoff of the selected action can be observed (Cesa-Bianchi & Lugosi, 2006; Bubeck & Cesa-Bianchi, 2012b). In the stochastic MAB problem (Robbins, 1952) the payoff of each action is assumed to be generated independently and identically (i.i.d.) from a fixed but unknown distribution. The performance of an allocation policy can then be analyzed by defining the cumulative regret for the sequence of actions taken, given by
$$R_n \;=\; n\,\mu^* \,-\, \mathbb{E}\Big[\sum_{t=1}^{n} X_{I_t,\,T_{I_t}(t)}\Big], \qquad (1)$$

where $X_{k,i}$ is the random variable giving the $i$-th payoff of action $k$, $I_t$ denotes the action taken by the policy at timestep $t$, and $T_k(n) = \sum_{t=1}^{n} \mathbb{I}\{I_t = k\}$ denotes the number of times action $k$ is chosen by the policy up to time $n$. Here, $\mathbb{I}\{\cdot\}$ is the indicator function, equal to 1 if the predicate is true and 0 otherwise. The objective of the agent is to maximize the total payoff, or equivalently to minimize the cumulative regret. By rearranging (1) and conditioning, the regret can be rewritten as

$$R_n \;=\; \sum_{k=1}^{K} \Delta_k\, \mathbb{E}[T_k(n)], \qquad (2)$$

where $\mu_k = \mathbb{E}[X_{k,1}]$, $\mu^* = \max_k \mu_k$, and $\Delta_k = \mu^* - \mu_k$.
The analysis of the stochastic MAB problem was pioneered by Lai & Robbins (1985), who showed that, when the payoff distributions are defined by a single parameter, the asymptotic regret of any sub-polynomially consistent policy (i.e., a policy that selects non-optimal actions only sub-polynomially many times in the time horizon) is lower bounded by $\Omega(\log n)$. In particular, for Bernoulli payoffs,

$$\liminf_{n \to \infty} \frac{R_n}{\log n} \;\ge\; \sum_{k:\,\Delta_k > 0} \frac{\Delta_k}{d(\mu_k, \mu^*)}, \qquad (3)$$

where $d(p, q) = p \log\frac{p}{q} + (1-p) \log\frac{1-p}{1-q}$ denotes the binary Kullback-Leibler divergence for $p, q \in (0, 1)$. Lai & Robbins (1985) also presented an algorithm based on upper confidence bounds (UCB), which achieves a regret asymptotically matching the lower bound (for certain parametric distributions).
Later, Auer et al. (2002a) proposed UCB1 (Algorithm 1), which broadens the practical scope of UCB by dropping the requirement that payoff distributions fit a particular parametric form. Instead, one need only make the much weaker assumption that the rewards are bounded; in particular, we let $X_{k,t} \in [0,1]$. Auer et al. (2002a) proved that, for any finite number of actions $K$, UCB1's regret is bounded by

$$\mathbb{E}[R_n] \;\le\; \sum_{k:\,\Delta_k > 0} \frac{8 \log n}{\Delta_k} \;+\; \Big(1 + \frac{\pi^2}{3}\Big) \sum_{k=1}^{K} \Delta_k. \qquad (4)$$
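As a concrete reference point, UCB1 can be written in a few lines (a minimal sketch, not the paper's Algorithm 1 verbatim; `pull` is a hypothetical callback returning a payoff in [0, 1]):

```python
import math

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then the arm maximizing mean + sqrt(2 ln t / T_k)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for k in range(n_arms):            # initialization: one pull per arm
        sums[k] += pull(k)
        counts[k] = 1
    for t in range(n_arms, horizon):
        scores = [sums[k] / counts[k] + math.sqrt(2.0 * math.log(t) / counts[k])
                  for k in range(n_arms)]
        k = max(range(n_arms), key=lambda i: scores[i])
        sums[k] += pull(k)
        counts[k] += 1
    return counts
```

With two arms of mean payoff 1 and 0, the pull counts concentrate rapidly on the better arm, as the logarithmic bound (4) suggests.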
Various improvements of the UCB1 algorithm have since been proposed. One approach of particular interest is the UCB-V algorithm (Audibert et al., 2009), which takes the empirical variances into account when constructing confidence bounds. Specifically, UCB-V uses the bound

$$\hat{\mu}_{k,s} + \sqrt{\frac{2\,\hat{\sigma}^2_{k,s}\,\mathcal{E}_{s,t}}{s}} + \frac{3 c\,\mathcal{E}_{s,t}}{s},$$

where $\hat{\sigma}^2_{k,s}$ denotes the empirical variance of arm $k$'s payoffs after $s$ pulls, $\hat{\mu}_{k,s}$ their empirical mean, and $\mathcal{E}_{s,t}$ is an exploration function required to be nondecreasing in $s$ or $t$ (typically $\mathcal{E}_{s,t} = \zeta \log t$ for a fixed constant $\zeta > 0$). The UCB-V procedure can then be constructed by substituting the above confidence bound into Algorithm 1, which yields a regret bound that scales with the true variance of each arm:

$$\mathbb{E}[R_n] \;\le\; C \sum_{k:\,\Delta_k > 0} \Big(\frac{\sigma_k^2}{\Delta_k} + 2\Big) \log n. \qquad (5)$$

Here $C$ is a constant depending on $c$ and $\zeta$. In the worst case, this bound is slightly worse than UCB1's bound; however, it is usually better in practice, particularly if some suboptimal arm $k$ has small variance $\sigma_k^2$.
A more recent algorithm is KL-UCB (Cappé et al., 2013), where the confidence bound for arm $k$ after $s$ pulls is obtained by solving

$$\max\big\{ q \in [\hat{\mu}_{k,s}, 1] \,:\, s\, d(\hat{\mu}_{k,s}, q) \le f(t) \big\}$$

for a chosen increasing function $f$, which can be solved efficiently since $d(p, \cdot)$ is smooth and increasing on $[p, 1]$. With a suitable choice of $f$ (such as $f(t) = \log t + 3 \log\log t$), KL-UCB achieves a regret bound of

$$\mathbb{E}[R_n] \;\le\; \sum_{k:\,\Delta_k > 0} \frac{\Delta_k}{d(\mu_k, \mu^*)} \log n \;+\; o(\log n) \qquad (6)$$

with explicit constants for the "higher order" terms (Cappé et al., 2013, Corollary 1). Apart from the higher-order terms, this bound matches the lower bound (3). In general, KL-UCB is expected to be better than UCB-V except for large sample sizes and small variances. Note that, given any set of UCB algorithms, one can apply the tightest upper confidence bound from the set, via the union bound, at the price of a small additional constant in the regret.
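For Bernoulli-type payoffs the KL-UCB index can be computed by bisection, since $d(p,\cdot)$ is increasing on $[p,1]$. A sketch, assuming the common choice $f(t) = \log t + 3 \log\log t$ (function names are ours):

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """Binary KL divergence d(p, q), clamped away from {0, 1} for stability."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, precision=1e-6):
    """Largest q in [mean, 1] with pulls * d(mean, q) <= log t + 3 log log t."""
    budget = math.log(t) + 3 * math.log(max(math.log(t), 1.0001))
    lo, hi = mean, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if pulls * kl_bernoulli(mean, mid) <= budget:
            lo = mid        # constraint satisfied: the index lies further right
        else:
            hi = mid
    return lo
```

As expected, the index shrinks toward the empirical mean as the pull count grows.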
Another approach that has received significant recent interest is Thompson sampling (TS) (Thompson, 1933): a Bayesian method where actions are chosen randomly in proportion to the posterior probability that their mean payoff is optimal. TS is known to outperform UCB variants when payoffs are Bernoulli distributed (Chapelle & Li, 2011; May & Leslie, 2011). Indeed, the finite-time regret of TS under Bernoulli payoff distributions closely matches the lower bound (3) (Kaufmann et al., 2012): for every $\epsilon > 0$,

$$\mathbb{E}[R_n] \;\le\; (1+\epsilon) \sum_{k:\,\Delta_k > 0} \frac{\Delta_k}{d(\mu_k, \mu^*)} \log n \;+\; C(\epsilon),$$

where $C(\epsilon)$ is a problem-dependent constant. However, since it is not possible to have Bernoulli-distributed payoffs with the same mean and different variances, this analysis is not directly applicable to our setting. Instead, we consider a more general version of Thompson sampling (Agrawal & Goyal, 2012) that converts real-valued payoffs to Bernoulli-distributed ones through a resampling step (Algorithm 2), which has been shown to obtain

$$\mathbb{E}[R_n] \;\le\; O\Big(\Big(\sum_{k:\,\Delta_k > 0} \frac{1}{\Delta_k^2}\Big)^2 \log n\Big). \qquad (7)$$
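The resampling version of TS can be sketched as follows (a minimal sketch of the Agrawal & Goyal construction, not their Algorithm 2 verbatim; `pull` is a hypothetical callback returning a payoff in [0, 1]):

```python
import random

def thompson_resampled(pull, n_arms, horizon, rng=random.Random(0)):
    """Thompson sampling for [0,1]-valued rewards via Bernoulli resampling."""
    s = [1] * n_arms   # Beta posterior: 1 + number of resampled successes
    f = [1] * n_arms   # Beta posterior: 1 + number of resampled failures
    counts = [0] * n_arms
    for _ in range(horizon):
        # Sample a mean for each arm from its Beta posterior; play the best.
        k = max(range(n_arms), key=lambda i: rng.betavariate(s[i], f[i]))
        r = pull(k)                    # real-valued reward in [0, 1]
        if rng.random() < r:           # resample: Bernoulli(r) coin flip
            s[k] += 1
        else:
            f[k] += 1
        counts[k] += 1
    return counts
```

The Bernoulli resampling step keeps the conjugate Beta update valid even though the raw rewards are real-valued.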
3 Combining Monte Carlo Estimators
We now formalize the main problem we consider in this paper. Assume we are given a finite number of Monte Carlo estimators, indexed by $k = 1, \dots, K$, where base estimator $k$ produces a sequence of real-valued random variables whose sample means converge to the unknown target quantity, $\mu$. Observations from the different estimators are assumed to be independent of each other. We assume, initially, that drawing a sample from each estimator takes constant time; hence, the estimators differ only in how fast their respective sample means converge to $\mu$. The goal is to design a sequential estimation procedure $\mathcal{A}$ that works in discrete time steps: in each round $t$, based on the previous observations, the procedure selects one estimator $I_t$, whose observation is used by an outer procedure to update the estimate based on the values observed so far.
As is common in the Monte Carlo literature, we evaluate accuracy by the mean-squared error (MSE). That is, we define the loss of the sequential method $\mathcal{A}$ at the end of round $n$ by $L_n(\mathcal{A}) = \mathbb{E}[(\hat{\mu}_n - \mu)^2]$. A reasonable goal is then to compare the loss $L_n(k)$ of each base estimator $k$ to the loss of $\mathcal{A}$. In particular, we propose to evaluate the performance of $\mathcal{A}$ by the (normalized) regret

$$R_n(\mathcal{A}) \;=\; n^2 \Big( L_n(\mathcal{A}) - \min_k L_n(k) \Big),$$

which measures the excess loss of $\mathcal{A}$ due to its initial ignorance of estimator quality. Implicit in this definition is the assumption that $\mathcal{A}$'s time to select the next estimator is negligible compared to the time to draw an observation. Note also that the excess loss is multiplied by $n^2$, which ensures that, in standard settings where $\min_k L_n(k) = \Theta(1/n)$, a sublinear regret (i.e., $R_n(\mathcal{A})/n \to 0$ as $n \to \infty$) implies that the loss of $\mathcal{A}$ asymptotically matches that of the best estimator.
In the next two sections we adopt a simple strategy for combining the values returned by the base estimators: $\mathcal{A}$ simply returns their (unweighted) average as the estimate of $\mu$. A more sophisticated approach would be to weight these samples in inverse proportion to their respective (sample) variances. However, if the adaptive procedure can quickly identify and ignore highly suboptimal arms, the savings from the weighted estimator diminish rapidly. Interestingly, this argument does not immediately translate to the non-uniform cost case considered in Section 5, as will be shown empirically in Section 6.
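The two combination rules can be made concrete as follows (an illustrative sketch; function names are ours, and the weighted variant assumes at least two observations per arm):

```python
def combine_unweighted(samples_per_arm):
    """Pooled mean of all observations, the simple combination rule used here."""
    all_obs = [x for obs in samples_per_arm for x in obs]
    return sum(all_obs) / len(all_obs)

def combine_inverse_variance(samples_per_arm):
    """Weight each arm's sample mean in inverse proportion to the sample
    variance of its mean (a hypothetical alternative, needs >= 2 obs/arm)."""
    total_w, total = 0.0, 0.0
    for obs in samples_per_arm:
        m = sum(obs) / len(obs)
        var = sum((x - m) ** 2 for x in obs) / (len(obs) - 1)
        w = len(obs) / max(var, 1e-12)   # variance of the mean is var / n
        total_w += w
        total += w * m
    return total / total_w
```

The weighted rule essentially ignores a high-variance arm, which is why its advantage shrinks once the allocation itself stops sampling that arm.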
4 Combining Unbiased I.I.D. Estimators
Our main assumption in this section will be the following:
Assumption 4.1.
Each estimator produces a sequence of i.i.d. random observations with common mean and finite variance; values from different estimators are independent.
Let $\nu_k$ denote the distribution of samples from estimator $k$. Note that $\nu = (\nu_1, \dots, \nu_K)$ completely determines the sequential estimation problem. Since the samples coming from estimator $k$ are i.i.d., we have $L_n(k) = \sigma_k^2/n$, where $\sigma_k^2$ denotes the variance of $\nu_k$. Let $k^* = \arg\min_k \sigma_k^2$ and $\sigma_{k^*}^2 = \min_k \sigma_k^2$. Furthermore, let $\Delta_k = \sigma_k^2 - \sigma_{k^*}^2$, hence $\Delta_{k^*} = 0$. We then have the first main result of this section.
Theorem 1 (Regret Identity).
Consider estimators for which Assumption 4.1 holds, and let $\mathcal{A}$ be an arbitrary allocation procedure. Then, for any $n \ge 1$, the MSE-regret of the estimation procedure $\mathcal{A}$, estimating $\mu$ using the sample mean of the observations obtained by $\mathcal{A}$, satisfies

$$R_n(\mathcal{A}) \;=\; \sum_{k=1}^{K} \Delta_k\, \mathbb{E}[T_k(n)]. \qquad (8)$$
The proof follows from a simple calculation given in Appendix A. Essentially, one can rewrite the loss as $n^2 L_n(\mathcal{A}) = \mathbb{E}[(\sum_k S_k)^2]$, where $S_k = \sum_{i=1}^{T_k(n)} (X_{k,i} - \mu)$ is the centered sum of observations for arm $k$. The cross-terms can be shown to cancel by independence, and Wald's second identity with some algebra gives the result.
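Under Assumption 4.1 the calculation can be sketched as follows (a sketch consistent with the notation above, using $n = \sum_k T_k(n)$ and $L_n(k^*) = \sigma_{k^*}^2/n$):

```latex
\begin{aligned}
n^2 L_n(\mathcal{A})
  &= \mathbb{E}\Big[\Big(\textstyle\sum_{k=1}^{K} S_k\Big)^2\Big]
   = \sum_{k=1}^{K} \mathbb{E}\big[S_k^2\big]
   = \sum_{k=1}^{K} \sigma_k^2\,\mathbb{E}[T_k(n)], \\
R_n(\mathcal{A})
  &= n^2 L_n(\mathcal{A}) - n\,\sigma_{k^*}^2
   = \sum_{k=1}^{K} \big(\sigma_k^2 - \sigma_{k^*}^2\big)\,\mathbb{E}[T_k(n)]
   = \sum_{k=1}^{K} \Delta_k\,\mathbb{E}[T_k(n)].
\end{aligned}
```

The first line uses independence (cross-terms vanish) and Wald's second identity; the second substitutes $n = \sum_k \mathbb{E}[T_k(n)]$.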
The tight connection between sequential estimation and bandit problems revealed by (8) allows one to reduce sequential estimation to the design of bandit strategies and vice versa. Furthermore, regret bounds transfer both ways.
Theorem 2 (Reduction).
Let Assumption 4.1 hold for $\nu = (\nu_1, \dots, \nu_K)$. Define a corresponding bandit problem $\tilde{\nu}$ by assigning $\tilde{\nu}_k$ as the distribution of $-X_{k,1}^2$. Given an arbitrary allocation strategy $\mathcal{A}$, let $\mathcal{B}(\mathcal{A})$ be the bandit strategy that consults $\mathcal{A}$ to select the next arm after obtaining reward $-X^2$ (assumed non-positive), based on feeding the observations $X$ to $\mathcal{A}$ and copying $\mathcal{A}$'s choices. Then, the bandit-regret of $\mathcal{B}(\mathcal{A})$ in bandit problem $\tilde{\nu}$ is the same as the MSE-regret of $\mathcal{A}$ in estimation problem $\nu$. Conversely, given an arbitrary bandit strategy $\mathcal{B}$, let $\mathcal{A}(\mathcal{B})$ be the allocation strategy that consults $\mathcal{B}$ to select the next estimator after observing $X$, based on feeding the rewards $-X^2$ to $\mathcal{B}$ and copying $\mathcal{B}$'s choices. Then the MSE-regret of $\mathcal{A}(\mathcal{B})$ in estimation problem $\nu$ is the same as the bandit-regret of $\mathcal{B}$ in bandit problem $\tilde{\nu}$ (where $\mathcal{A}(\mathcal{B})$ uses the average of observations as its estimate).
Proof of Theorem 2.
The result follows from Theorem 1 since $\mathbb{E}[-X_{k,1}^2] = -(\sigma_k^2 + \mu^2)$, hence the optimal arm in $\tilde{\nu}$ is the lowest-variance estimator $k^*$, and the gap of arm $k$ in $\tilde{\nu}$ equals $\sigma_k^2 - \sigma_{k^*}^2 = \Delta_k$. Furthermore, the bandit-regret of a procedure that chooses arm $k$ a total of $T_k(n)$ times is $\sum_k \Delta_k\, \mathbb{E}[T_k(n)]$, which matches (8). ∎
From this theorem one can also derive a lower bound. First, let $\sigma^2(\nu')$ denote the variance of a distribution $\nu'$ and let $\sigma^2_* = \min_k \sigma^2(\nu_k)$. For a family $\mathcal{M}$ of distributions over the reals, let $\mathcal{K}_{\inf}(\nu_k; \nu)$ denote the infimum of the Kullback-Leibler divergence between $\nu_k$ and the distributions in $\mathcal{M}$ with variance smaller than $\sigma^2_*$, where the divergence is taken to be $\infty$ when the required Radon-Nikodym derivative does not exist. Note that $\mathcal{K}_{\inf}$ measures how distinguishable $\nu_k$ is from distributions in $\mathcal{M}$ having smaller variance. Further, we let $R_n(\mathcal{A}; \nu)$ denote the regret of $\mathcal{A}$ on the estimation problem specified by the distributions $\nu$.
Theorem 3 (MSE-Regret Lower Bound).
Let $\mathcal{M}$ be the set of distributions supported on $[0, 1]$ and assume that $\mathcal{A}$ allocates only a sub-polynomial fraction of samples to suboptimal estimators: i.e., $\mathbb{E}[T_k(n)] = o(n^a)$ for all $a > 0$ and $k$ such that $\Delta_k > 0$. Then, for any $\nu \in \mathcal{M}^K$ where not all variances are equal and $\mathcal{K}_{\inf}(\nu_k; \nu) > 0$ holds whenever $\Delta_k > 0$, we have

$$\liminf_{n \to \infty} \frac{R_n(\mathcal{A}; \nu)}{\log n} \;\ge\; \sum_{k:\,\Delta_k > 0} \frac{\Delta_k}{\mathcal{K}_{\inf}(\nu_k; \nu)}.$$
Using Theorem 2 we can also establish bounds on the MSE-regret for the algorithms mentioned in Section 2.
Theorem 4 (MSE-Regret Upper Bounds).
Let Assumption 4.1 hold for $\nu$ where (for simplicity) we assume each $\nu_k$ is supported on $[-1, 1]$. Then, after $n$ rounds, $\mathcal{A}(\mathcal{B})$ achieves the MSE-regret bound of: (4) when using $\mathcal{B} = $ UCB1; (5) when using $\mathcal{B} = $ UCB-V with appropriately chosen parameters; (6) when using $\mathcal{B} = $ KL-UCB; and (7) when using $\mathcal{B} = $ TS.¹

¹ Note that to apply the regret bounds from Section 2, one has to feed the bandit algorithms with $1 - X^2$ instead of $-X^2$ in Theorem 2, so that rewards lie in $[0, 1]$. This modification has no effect on the regret.
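A toy end-to-end instance of the reduction (our sketch, not the paper's implementation): run UCB1 on the payoff $-X^2$ and report the pooled sample mean. The exploration bonus below assumes payoffs roughly in a unit range, which holds for observations supported on $[-1, 1]$:

```python
import math
import random

def bandit_mc(estimators, horizon, seed=0):
    """Theorem 2 reduction: UCB1 on payoff -X^2; returns pooled mean + counts."""
    rng = random.Random(seed)
    n_arms = len(estimators)
    counts, rsums, total = [0] * n_arms, [0.0] * n_arms, 0.0
    for t in range(horizon):
        if t < n_arms:                 # initialization: one pull per arm
            k = t
        else:
            k = max(range(n_arms), key=lambda i: rsums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        x = estimators[k](rng)
        total += x
        rsums[k] += -x * x             # bandit payoff: negative squared estimate
        counts[k] += 1
    return total / horizon, counts

# Two unbiased estimators of mu = 0 with very different variances.
low_var = lambda rng: rng.uniform(-0.1, 0.1)
high_var = lambda rng: rng.uniform(-1.0, 1.0)
```

The allocation quickly concentrates on the low-variance estimator, so the pooled mean behaves almost as if only the best estimator had been used.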
Additionally, due to Theorem 2, one can also obtain bounds on the minimax MSE-regret by exploiting the lower bound for bandits of Auer et al. (2002b). In particular, the UCB-based bandit algorithms above can all be shown to achieve the minimax rate up to logarithmic factors, immediately implying that the minimax MSE-regret of $\mathcal{A}(\mathcal{B})$ is of order $\tilde{O}(\sqrt{Kn})$.²

² $\tilde{O}(\cdot)$ denotes the order up to logarithmic factors. To remove such factors one can exploit MOSS (Audibert & Bubeck, 2010).
The appendix provides a discussion of how alternative ranges of the observations can be handled, and how the above methods can still be applied when the payoff distribution is unbounded but satisfies appropriate moment conditions.
5 Non-uniform Estimator Costs
Next, we consider the case where the base estimators can take different amounts of time to generate observations. A consequence of non-uniform estimation times, which we refer to as non-uniform costs, is that the definitions of the loss and regret must be modified accordingly. Intuitively, if an estimator takes more time to produce an observation, it is less useful than another estimator that produces observations of (say) identical variance in less time.
To develop an appropriate notion of regret for this case, we introduce some additional notation. Let $\tau_{k,i}$ denote the time needed by estimator $k$ to produce its $i$-th observation. As before, we let $I_j$ denote the index of the estimator that $\mathcal{A}$ chooses in round $j$. Let $t_j$ denote the time when $\mathcal{A}$ observes its $j$-th sample, with $t_0 = 0$ for convenience. Note that round $j$ starts at time $t_{j-1}$ with $\mathcal{A}$ choosing an estimator, and finishes at time $t_j$ when the observation is received and $\mathcal{A}$ (instantaneously) updates its estimate. Thus, at each time $t_j$ a new estimate becomes available: the estimate is "renewed". Let $\hat{\mu}_t$ denote the estimate available at time $t$. Assuming $\mathcal{A}$ produces a default estimate before the first observation, $\hat{\mu}_t$ is piecewise constant on the intervals $[t_{j-1}, t_j)$. If $N(t)$ denotes the round index at time $t$, then $\hat{\mu}_t = \hat{\mu}_{t_{N(t)}}$. The MSE of $\mathcal{A}$ at time $t$ is $L_t(\mathcal{A}) = \mathbb{E}[(\hat{\mu}_t - \mu)^2]$.
By comparison, the estimate of a single estimator $k$ at time $t$ is the sample mean of the observations it has completed by time $t$, which is likewise piecewise constant between observation times. We set this to the default estimate before the first observation arrives so that it is well-defined on $[0, \infty)$. Thus, at time $t$ the MSE of estimator $k$ is $L_t(k) = \mathbb{E}[(\hat{\mu}_{k,t} - \mu)^2]$.
Given these definitions, it is natural to define the regret as

$$R_t(\mathcal{A}) \;=\; t^2 \Big( L_t(\mathcal{A}) - \min_k L_t(k) \Big).$$

As before, the scaling is chosen so that, under the condition that $\min_k L_t(k) = \Theta(1/t)$, a sublinear regret implies that $\mathcal{A}$ is "learning". Note that this definition generalizes the previous one: if $\tau_{k,i} \equiv 1$ for all $k$ and $i$, it reduces to the MSE-regret of Section 3.
In this section we make the following assumption.
Assumption 5.1.
For each $k$, $(X_{k,i}, \tau_{k,i})_{i \ge 1}$ is an i.i.d. sequence such that $\mathbb{E}[X_{k,1}] = \mu$, $\sigma_k^2 = \mathbb{V}[X_{k,1}] < \infty$, and $\mathbb{E}[\tau_{k,1}] < \infty$. Furthermore, we assume that the sequences for different $k$ are independent of each other.
Note that Assumption 5.1 allows $\tau_{k,i}$ to be a deterministic value, a case that holds when the estimators use deterministic algorithms to produce observations. Another situation arises when $\tau_{k,i}$ is stochastic (i.e., the estimator uses a randomized algorithm) and $X_{k,i}$ and $\tau_{k,i}$ are correlated, in which case the sample mean at a given time may be a biased estimate of $\mu$. However, if $X_{k,i}$ and $\tau_{k,i}$ are independent, then the sample mean remains unbiased: in such a case, the number of observations completed by time $t$ is independent of the partial sums of the $X_{k,i}$, hence the expectation of the sample mean equals $\mu$.
Using Assumption 5.1, a standard renewal-reward argument gives $L_t(k) \approx \sigma_k^2\,\mathbb{E}[\tau_{k,1}]/t$, where the approximation hides lower-order terms. Intuitively, estimator $k$ will produce approximately $t/\mathbb{E}[\tau_{k,1}]$ independent observations during $[0, t]$; hence, the variance of their average is approximately $\sigma_k^2\,\mathbb{E}[\tau_{k,1}]/t$ (made more precise in the proof of Theorem 5). Thus, any allocation strategy competing with the best estimator must draw most of its observations from the estimator minimizing $\lambda_k = \sigma_k^2\,\mathbb{E}[\tau_{k,1}]$. For simplicity, we assume this optimal estimator $k^*$ is unique.
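The asymptotic criterion can be made concrete (a sketch; under Assumption 5.1 the time-$t$ MSE of estimator $k$ behaves as $\sigma_k^2\,\mathbb{E}[\tau_{k,1}]/t$):

```python
def best_estimator(variances, mean_costs):
    """Rank estimators by lambda_k = sigma_k^2 * E[tau_k]; smaller is better,
    since the MSE of estimator k at time t behaves as lambda_k / t."""
    scores = [v * c for v, c in zip(variances, mean_costs)]
    best = min(range(len(scores)), key=lambda k: scores[k])
    return best, scores
```

For instance, a low-variance but slow estimator can lose to a noisier but much cheaper one, which is exactly what the product criterion captures.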
As before, we consider adaptive strategies that estimate $\mu$ using the sample mean of the observations collected so far. Hence, the estimate at time $t$ is

$$\hat{\mu}_t \;=\; \frac{1}{N(t)} \sum_{j=1}^{N(t)} X_{I_j,\, T_{I_j}(j)}. \qquad (9)$$
Our aim is to bound the regret of the overall algorithm by bounding the number of times the allocation strategy chooses suboptimal estimators. We will do so by generalizing (2) to the non-uniform-cost case; however, unlike the equality obtained in Theorem 1, here we only provide an upper bound.
Theorem 5.
Let Assumption 5.1 hold and assume that are bounded and is unique. Let the estimate of at time be defined by the sample mean . Let be such that and , and assume that for any , for some such that for some and any . Assume furthermore that and for all . Then, for any where , the regret of at time is bounded by
(10) 
for some appropriate constants that depend on the problem parameters , the upper bound on , and the constants , and .
The proof of the theorem is given in Appendix C. Several comments are in order. First, recall that the optimal regret rate in this setting is logarithmic in $t$, up to logarithmic factors. To obtain such a rate, one need only ensure that the expected number of suboptimal selections grows logarithmically, which can be attained by stochastic or even adversarial bandit algorithms (Bubeck & Cesa-Bianchi, 2012b) receiving rewards with expectation $-\sigma_k^2\,\mathbb{E}[\tau_{k,1}]$ and a well-concentrated number of samples. The moment condition on the costs is also not restrictive; for example, if the estimators are rejection samplers, their sampling times will have a geometric distribution, which satisfies the polynomial tail condition. Furthermore, if the costs are bounded, the moment condition holds automatically.

Although it was sufficient to use the negative second moment instead of the variance as the bandit reward under uniform costs, this simplification is no longer possible when costs are non-uniform, since the relevant quantity $\sigma_k^2\,\mathbb{E}[\tau_{k,1}]$ involves the unknown expectation $\mu$. Several strategies can be followed to bypass this difficulty. For example, given independent costs and observations, one can use each bandit algorithm decision twice, feeding rewards whose expectation is $-\sigma_k^2\,\mathbb{E}[\tau_{k,1}]$. Similar constructions using slightly more data can be used for the dependent case.
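One such construction for the independent case can be sketched as follows (our illustration, not necessarily the paper's exact choice): combine two i.i.d. observations into an unbiased variance signal, using $\mathbb{E}[(X - X')^2/2] = \sigma^2$, and multiply by an independently drawn cost:

```python
import random

def paired_reward(x1, x2, cost):
    """E[paired_reward] = -(sigma^2 * E[tau]) when x1, x2 are i.i.d. draws
    independent of cost, since E[(x1 - x2)^2 / 2] = sigma^2."""
    return -0.5 * (x1 - x2) ** 2 * cost

# Monte Carlo sanity check: N(0, 1) observations, deterministic cost 2,
# so the expected reward is -(1 * 2) = -2.
rng = random.Random(0)
n = 100000
avg = sum(paired_reward(rng.gauss(0, 1), rng.gauss(0, 1), 2.0)
          for _ in range(n)) / n
```

Feeding such rewards to a standard stochastic bandit algorithm makes the arm with the smallest $\sigma_k^2\,\mathbb{E}[\tau_{k,1}]$ the optimal arm.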
Note that ensuring the required concentration can be nontrivial. Typical guarantees for UCB-type algorithms ensure that the expected number of pulls of a suboptimal arm in $n$ rounds is bounded by a logarithmic function of $n$. However, because the pull counts and the elapsed time are dependent, such expectation bounds cannot generally be applied directly; high-probability bounds on the number of suboptimal pulls can be used instead.
Finally, we need to ensure that the last two terms in (10) remain small, which follows if the number of optimal selections and the total number of observations concentrate around their means. To ensure such concentration, the allocation strategy must also select the optimal estimator most of the time. For example, it follows from (Audibert et al., 2009) that, by increasing the exploration constant in UCB1 and the exploration parameter in UCB-V, the chance of using any suboptimal arm more than a logarithmic number of times can be made polynomially small in $t$. Outside of this small-probability event, the optimal arm is used a linear number of times, which is sufficient to show the required concentration. In summary, we conclude that logarithmic regret can be achieved in Theorem 5 under reasonable assumptions.
6 Experiments
We conduct experimental investigations in a number of scenarios to better understand the effectiveness of multi-armed bandit algorithms for adaptive Monte Carlo estimation.
6.1 Preliminary Investigation: A 2-Estimator Problem
We first consider the performance of allocation strategies on a simple 2-estimator problem. Note that this evaluation differs from standard evaluations of stochastic bandits through the absence of single-parameter payoff distributions, such as the Bernoulli, which cannot have identical means yet different variances. This is an important detail, since stochastic bandit algorithms such as KL-UCB and TS are often evaluated on single-parameter payoff distributions, but their advantages in such scenarios might not extend to adaptive Monte Carlo estimation.
In particular, we consider problems in which each payoff is a scaled and shifted Bernoulli variable, where a separate scale parameter controls the variance independently of the (common) mean. This design permits the maximum range of variances around a given mean within a bounded interval. We evaluated the four bandit strategies detailed in Section 2: UCB1, UCB-V, KL-UCB, and TS, where for UCB-V we used the same settings as (Audibert et al., 2009), and for TS we used the uniform Beta(1, 1) prior.
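One way to realize a fixed mean with a tunable variance inside $[0,1]$ is the following (an illustrative construction; the exact parameterization used in the experiments may differ):

```python
import random

def scaled_bernoulli(mu, s, rng):
    """Return mu + s*(B - 1/2) with B ~ Bernoulli(1/2): mean mu, variance
    s^2/4, support within [0, 1] whenever s <= 2 * min(mu, 1 - mu)."""
    b = 1.0 if rng.random() < 0.5 else 0.0
    return mu + s * (b - 0.5)

# Empirical check at mu = 0.5, s = 1 (maximal variance 1/4 on [0, 1]).
rng = random.Random(0)
draws = [scaled_bernoulli(0.5, 1.0, rng) for _ in range(100000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
```

Two such arms can share the mean $\mu$ while differing arbitrarily in variance, which is exactly the regime where single-parameter bandit analyses break down.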
The relative performance of these approaches is reported in Figure 1. TS appears best suited to scenarios where either estimator has high variance, whereas UCB-V is more effective when faced with medium- or low-variance estimators. Additionally, KL-UCB outperforms UCB-V in high-variance settings, but in all such cases was eclipsed by TS.
6.2 Option Pricing
We next consider a more practical application of adaptive Monte Carlo estimation: the pricing of financial instruments. In particular, following (Douc et al., 2007; Arouna, 2004), we consider the problem of pricing European call options under the assumption that the interest rate evolves in time according to the Cox-Ingersoll-Ross (CIR) model (Cox et al., 1985), a popular model in mathematical finance (details provided in Appendix F). In a nutshell, this model assumes that the interest rate, as a function of time, follows a square-root diffusion model. The price of a European caplet option is then determined by its "strike price", "nominal amount", and "maturity". The problem is to determine the expected value of the discounted payoff.
A naive approach to estimating this expectation is to simulate independent realizations of the interest rate process. However, any simulation where the interest rate lands below the strike price can be ignored, since the payoff is zero. Therefore, a common estimation strategy is to use importance sampling by introducing a "drift" parameter into the proposal density for the interest rate (a value of zero meaning no drift); this encourages simulations with higher interest rates. The importance weights for these simulations can then be efficiently calculated as a function of the drift (see Appendix F).
Importantly, the task of adaptively allocating trials between different importance sampling estimators has been previously studied on this problem, using an unrelated technique known as d-kernel Population Monte Carlo (PMC) (Douc et al., 2007). Space restrictions prevent us from providing a full description of the PMC method, but, roughly speaking, the method defines the proposal density as a mixture over the set of drift parameters considered. At each time step, PMC samples a new drift value according to this mixture and then simulates an interest rate. After a fixed number of samples (the population size), the mixture coefficient of each drift parameter is adjusted by setting it proportional to the sum of the importance weights of the samples drawn from that parameter. The new proposal is then used to generate the next population.
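The PMC adaptation step described above amounts to the following (a sketch with our variable names):

```python
def pmc_update(n_components, component_of_sample, importance_weights):
    """d-kernel PMC step: the new mixture weight of component d is set
    proportional to the total importance weight of samples drawn from d."""
    totals = [0.0] * n_components
    for d, w in zip(component_of_sample, importance_weights):
        totals[d] += w
    z = sum(totals)
    return [t / z for t in totals]
```

Components whose samples carry large importance weights thus receive more mass in the next population's proposal mixture.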
We approximated the option prices under the same parameter settings as (Douc et al., 2007) for different strike prices (see Appendix F). However, we consider a wider set of drift proposals. The results, averaged over 1000 simulations, are given in Figure 2.
These results generally indicate that the more effective bandit approaches are significantly better suited to this allocation task than the PMC approach, particularly over the longer term. Among the bandit-based strategies, TS is the clear winner which, given the conclusions from the previous experiment, is likely due to the high level of variance introduced by the option pricing formula. Despite this strong showing for the bandit methods, PMC remains surprisingly competitive at this task, doing uniformly better than UCB, and better than all other bandit allocation strategies early on in some settings. However, we believe this advantage of PMC stems from the fact that PMC explores the entire space of mixture distributions (rather than single drift values). Extending the existing bandit-based allocation strategies to such continuously parameterized settings remains an interesting direction for future work.
6.3 Adaptive Annealed Importance Sampling
Many important applications of Monte Carlo estimation occur in Bayesian inference, where a particularly challenging problem is evaluating the
model evidence of a latent variable model. Evaluating such quantities is useful for a variety of purposes, such as Bayesian model comparison and testing/training set evaluation (Robert, 2012). However, the desired quantities are notoriously difficult to estimate in many important settings, due in no small part to the fact that popular highdimensional Monte Carlo strategies, such as Markov Chain Monte Carlo (MCMC) methods, cannot be directly applied
(Neal, 2005). Nevertheless, a popular approach for approximating such values is annealed importance sampling (AIS) (Neal, 2001) (or, more generally, sequential Monte Carlo samplers (Del Moral et al., 2006)). In a nutshell, AIS combines the advantages of importance sampling with MCMC by defining a proposal density through a sequence of MCMC transitions applied to a sequence of annealed distributions, which slowly blend between the proposal (prior) and the target (unnormalized posterior). While such a technique can offer impressive practical advantages, it often requires considerable effort to set its parameters; in particular, the practitioner must specify the number of annealing steps, the annealing rate or "schedule", the underlying MCMC method (and its parameters), and the number of MCMC transitions to execute at each annealing step. Even when these parameters have been appropriately tuned on preliminary data, there is no assurance that these choices will remain effective when deployed on larger or slightly different data sets.
Here we consider the problem of approximating the normalization constant for a Bayesian logistic regression model on different-sized subsets of the 8-dimensional Pima Indian diabetes UCI data set (Bache & Lichman, 2013). We consider the problem of allocating resources between three AIS estimators that differ only in the number of annealing steps they use, namely 400, 2000, and 8000 steps. In each case, we fix the annealing schedule using the power-of-4 heuristic suggested by (Kuss & Rasmussen, 2005), with a single slice sampling MCMC transition used at each step (Neal, 2003) (this entails one "step" in each of the 8 dimensions); see Appendix G for further details.
A key challenge in this scenario is that the computational costs associated with each arm differ substantially and, because slice sampling uses an internal rejection sampler, these costs are stochastic. To account for these costs we directly measure the elapsed CPU time when drawing a sample from each estimator, as reported by the Java VM. This choice reflects the true underlying cost and is particularly convenient since it does not require the practitioner to implement special accounting functionality. Since we do not expect this cost to correlate with the sampled values, we use the independent-costs payoff formulation from Section 5.
The results for the different allocation strategies for training sets of size 5, 10, and 50 are shown in Figure 3. Perhaps the most striking result is the performance improvement achieved by the non-uniformly combined estimators, indicated by the dashed lines. These estimators do not change the underlying allocation; instead, they improve the final combined estimate by weighting each observation in inverse proportion to the sample variance of the estimator that produced it. This improvement is an artifact of the non-uniform cost setting, since arms that are very close in terms of cost-adjusted quality can still have considerably different variances, which is especially true for AIS. Also observe that no single arm is optimal for all three training set sizes; consequently, bandit allocation (Thompson sampling in particular) is able to outperform any static strategy. In practice, this implies that even after exhaustive parameter tuning, automated adaptation can still offer considerable benefits simply due to changes in the data.
7 Conclusion
In this paper we have introduced a new sequential decision making strategy for competing with the best consistent Monte Carlo estimator in a finite pool. When each base estimator produces unbiased values at the same cost, we have shown that the sequential estimation problem maps to a corresponding bandit problem, allowing future improvements in bandit algorithms to be transferred to the problem of combining unbiased estimators. We have also shown a weaker reduction for problems where the different estimators take different (possibly random) amounts of time to produce an observation.
We expect this work to inspire further research in the area. For example, one may consider combining not just finitely many but infinitely many estimators using appropriate bandit techniques (Bubeck et al., 2011), and/or exploit the fact that the observations from one estimator may reveal information about the variance of others. This is the case, for example, when the samplers use importance sampling, leading to a (new) stochastic variant of the problem known as "bandits with side-observations" (Mannor & Shamir, 2011; Alon et al., 2013). However, much work remains to be done, such as studying in detail the use of variance-weighted estimators, dealing with continuous families of estimators, and conducting a more thorough empirical investigation of the available alternatives.
Acknowledgements
This work was supported by Alberta Innovates Technology Futures and NSERC. Part of this work was done while Cs. Sz. was visiting the Technion, Haifa, and Microsoft Research, Redmond, whose support and hospitality are gratefully acknowledged.
References
 Agrawal & Goyal (2012) Agrawal, S. and Goyal, N. Analysis of Thompson sampling for the multi-armed bandit problem. In Conference on Learning Theory, pp. 39.1–39.26, 2012.
 Alon et al. (2013) Alon, N., Cesa-Bianchi, N., Gentile, C., and Mansour, Y. From bandits to experts: A tale of domination and independence. In NIPS, pp. 1610–1618, 2013.
 Arouna (2004) Arouna, B. Adaptive Monte Carlo technique, a variance reduction technique. Monte Carlo Methods and Applications, 2004.

 Audibert & Bubeck (2010) Audibert, J. and Bubeck, S. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2785–2836, 2010.
 Audibert et al. (2009) Audibert, J., Munos, R., and Szepesvári, Cs. Exploration–exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876–1902, 2009.
 Auer et al. (2002a) Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235–256, 2002a.
 Auer et al. (2002b) Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. The nonstochastic multi-armed bandit problem. SIAM Journal on Computing, 32:48–77, 2002b.
 Bache & Lichman (2013) Bache, K. and Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
 Bubeck & Cesa-Bianchi (2012a) Bubeck, S. and Cesa-Bianchi, N. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012a.
 Bubeck & Cesa-Bianchi (2012b) Bubeck, S. and Cesa-Bianchi, N. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012b.
 Bubeck et al. (2011) Bubeck, S., Munos, R., Stoltz, G., and Szepesvári, Cs. X-armed bandits. Journal of Machine Learning Research, 12:1655–1695, June 2011.
 Bubeck et al. (2013) Bubeck, S., Cesa-Bianchi, N., and Lugosi, G. Bandits with heavy tail. IEEE Transactions on Information Theory, 59(11):7711–7717, 2013.
 Burnetas & Katehakis (1996) Burnetas, A. and Katehakis, M. Optimal adaptive policies for sequential allocation problems. Advances in Applied Mathematics, 17:122–142, 1996.
 Cappé et al. (2013) Cappé, O., Garivier, A., Maillard, O., Munos, R., and Stoltz, G. Kullback-Leibler upper confidence bounds for optimal sequential decision making. Annals of Statistics, 41(3):1516–1541, 2013.
 Carpentier & Munos (2011) Carpentier, A. and Munos, R. Finite time analysis of stratified sampling for Monte Carlo. In NIPS-24, pp. 1278–1286, 2011.
 Carpentier & Munos (2012a) Carpentier, A. and Munos, R. Minimax number of strata for online stratified sampling given noisy samples. In Algorithmic Learning Theory, pp. 229–244, 2012a.
 Carpentier & Munos (2012b) Carpentier, A. and Munos, R. Adaptive stratified sampling for Monte Carlo integration of differentiable functions. In NIPS, pp. 251–259, 2012b.
 Cesa-Bianchi & Lugosi (2006) Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006.
 Chapelle & Li (2011) Chapelle, O. and Li, L. An empirical evaluation of Thompson sampling. In NIPS-24, pp. 2249–2257, 2011.
 Cox et al. (1985) Cox, J., Ingersoll Jr, J., and Ross, S. A theory of the term structure of interest rates. Econometrica: Journal of the Econometric Society, pp. 385–407, 1985.
 Del Moral et al. (2006) Del Moral, P., Doucet, A., and Jasra, A. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68:411–436, 2006.
 Doob (1953) Doob, J.L. Stochastic Processes. Wiley, 1953.
 Douc et al. (2007) Douc, R., Guillin, A., Marin, J., and Robert, C. Minimum variance importance sampling via population Monte Carlo. ESAIM: Probability and Statistics, 11:427–447, 2007.
 Garivier (2013) Garivier, A. Informational confidence bounds for self-normalized averages and applications. In IEEE Information Theory Workshop, pp. 489–493, 2013.
 Glasserman (2003) Glasserman, P. Monte Carlo Methods in Financial Engineering. Springer, 2003.
 Gut (2005) Gut, A. Probability: A Graduate Course. Springer, 2005.
 Kaufmann et al. (2012) Kaufmann, E., Korda, N., and Munos, R. Thompson sampling: An asymptotically optimal finite time analysis. In Algorithmic Learning Theory, pp. 199–213, 2012.
 Kuss & Rasmussen (2005) Kuss, M. and Rasmussen, C. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
 Lai & Robbins (1985) Lai, T. and Robbins, H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
 Mannor & Shamir (2011) Mannor, S. and Shamir, O. From bandits to experts: On the value of side-observations. In NIPS, pp. 684–692, 2011.
 May & Leslie (2011) May, B. and Leslie, D. Simulation studies in optimistic Bayesian sampling in contextual-bandit problems. Technical Report 11:02, Statistics Group, Department of Mathematics, University of Bristol, 2011.
 Neal (2001) Neal, R. Annealed importance sampling. Technical report, University of Toronto, 2001.
 Neal (2003) Neal, R. Slice sampling. Annals of Statistics, pp. 705–741, 2003.
 Neal (2005) Neal, R. Estimating ratios of normalizing constants using linked importance sampling. Technical report, University of Toronto, 2005.
 Neal (2011) Neal, R. Handbook of Markov Chain Monte Carlo, chapter MCMC using Hamiltonian dynamics, pp. 113–162. Chapman & Hall / CRC Press, 2011.
 Robbins (1952) Robbins, H. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
 Robert (2012) Robert, C. Bayesian computational methods. Springer, 2012.
 Robert & Casella (2005) Robert, C. and Casella, G. Monte Carlo Statistical Methods. SpringerVerlag New York, 2005.
 Thompson (1933) Thompson, W. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
Appendix A Proofs for Section 4
In this section we provide the proof of Theorem 1. For the proof, we will need the following two lemmas:
Lemma 1 (Optional Sampling).
Let $X_1, X_2, \ldots$ be a sequence of i.i.d. random variables, and let $X_{n_1}, X_{n_2}, \ldots$ be a subsequence of it such that the decision whether to include $X_n$ in the subsequence is independent of the future values in the sequence, i.e., of $X_{n+1}, X_{n+2}, \ldots$ for any $n \ge 1$. Then the sequence $X_{n_1}, X_{n_2}, \ldots$ is an i.i.d. sequence with the same distribution as $X_1$.
Proof.
See Theorem 5.2 in Chapter III on page 145 of (Doob, 1953). ∎
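The content of Lemma 1 can be illustrated numerically. The following is an illustrative sketch, not part of the proof; the particular selection rule (include $X_t$ iff $X_{t-1} > 0$) is an arbitrary choice that depends only on past values.

```python
import random

# Numerical illustration of optional sampling: selecting elements of an
# i.i.d. sequence by a rule that depends only on the *past* leaves the
# distribution of the selected subsequence unchanged.
random.seed(0)
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]

# Include x_t in the subsequence iff the previous value was positive:
# the decision never looks at the current or future values.
sub = [xs[t] for t in range(1, n) if xs[t - 1] > 0.0]

mean = sum(sub) / len(sub)
var = sum((x - mean) ** 2 for x in sub) / len(sub)
print(round(mean, 2), round(var, 2))  # approximately 0.0 and 1.0, matching N(0, 1)
```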
We also need Wald’s second identity:
Lemma 2 (Wald's Second Identity).
Let $X_1, X_2, \ldots$ be an i.i.d. sequence of random variables adapted to a filtration $(\mathcal{F}_t)_{t \ge 1}$ such that $X_{t+1}$ is independent of $\mathcal{F}_t$, $\mathbb{E}[X_t] = \mu$, and $\mathbb{V}[X_t] = \sigma^2 < \infty$ for any $t \ge 1$. Let $S_n = \sum_{t=1}^n X_t$ be the partial sum of the first $n$ random variables ($n \ge 1$) and let $N$ be some stopping time w.r.t. $(\mathcal{F}_t)_{t \ge 1}$ with $\mathbb{E}[N] < \infty$. (Here $N$ is called a stopping time w.r.t. $(\mathcal{F}_t)_{t \ge 1}$ if $\{N \le t\} \in \mathcal{F}_t$ for all $t \ge 1$.) Then,
$$\mathbb{E}\big[(S_N - N\mu)^2\big] = \sigma^2\, \mathbb{E}[N]\,.$$
Proof.
See Theorem 14.3 of (Gut, 2005). ∎
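Wald's second identity is easy to check by simulation. The sketch below is illustrative only; the Gaussian increments and the particular stopping rule (stop when the partial sum exceeds 1, or after 50 steps) are arbitrary choices, chosen so that the stopping time is bounded and hence has finite expectation.

```python
import random

# Monte Carlo check of Wald's second identity:
#   E[(S_N - N*mu)^2] = sigma^2 * E[N]  for a stopping time N.
random.seed(1)
mu, sigma = 0.1, 1.0
runs = 50_000

lhs_sum, n_sum = 0.0, 0
for _ in range(runs):
    s, n = 0.0, 0
    # Stop the first time the partial sum reaches 1, or after 50 steps;
    # the decision to stop at time n depends only on X_1, ..., X_n.
    while True:
        s += random.gauss(mu, sigma)
        n += 1
        if s >= 1.0 or n >= 50:
            break
    lhs_sum += (s - n * mu) ** 2
    n_sum += n

lhs = lhs_sum / runs             # estimate of E[(S_N - N*mu)^2]
rhs = sigma ** 2 * n_sum / runs  # sigma^2 times the estimate of E[N]
print(round(lhs, 1), round(rhs, 1))
```

The two printed quantities agree up to Monte Carlo error, as the identity predicts.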
Now, let us return to the proof of Theorem 1. Let $I_t$ denote the choice that the allocation strategy made at the beginning of round $t$ and recall that $T_k(n) = \sum_{t=1}^n \mathbb{I}\{I_t = k\}$ denotes the number of times arm $k$ is chosen during the first $n$ rounds. The observation at the end of round $t$ is $X_{I_t, T_{I_t}(t)}$ and the cumulative sum of returns after $n$ rounds for arm $k$ is $S_{k, T_k(n)} = \sum_{s=1}^{T_k(n)} X_{k,s}$. Likewise, $S_n = \sum_{t=1}^n X_{I_t, T_{I_t}(t)} = \sum_{k=1}^K S_{k, T_k(n)}$. By definition, the estimate after $n$ rounds is
$$\hat{\mu}_n = \frac{S_n}{n} = \frac{1}{n} \sum_{t=1}^n X_{I_t, T_{I_t}(t)}\,.$$
Since the variance of this estimate does not depend on the mean, without loss of generality we may consider the case when $\mu = 0$, giving
$$n^2\, \mathbb{E}\big[\hat{\mu}_n^2\big] = \mathbb{E}\Big[\Big(\sum_{t=1}^n X_{I_t, T_{I_t}(t)}\Big)^2\Big] = \sum_{t=1}^n \mathbb{E}\big[X_{I_t, T_{I_t}(t)}^2\big] + 2 \sum_{1 \le s < t \le n} \mathbb{E}\big[X_{I_s, T_{I_s}(s)}\, X_{I_t, T_{I_t}(t)}\big]\,. \qquad (11)$$
We may then observe that for any $1 \le s < t \le n$,
$$\mathbb{E}\big[X_{I_s, T_{I_s}(s)}\, X_{I_t, T_{I_t}(t)}\big] = 0\,.$$
By considering the conditional expectation w.r.t. the history up to round $t$ (including $I_t$), i.e., w.r.t. the $\sigma$-field $\mathcal{F}_t = \sigma\big(I_1, X_{I_1, T_{I_1}(1)}, \ldots, I_{t-1}, X_{I_{t-1}, T_{I_{t-1}}(t-1)}, I_t\big),$