Bandits with heavy tail

09/08/2012, by Sébastien Bubeck et al.

The stochastic multi-armed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper we examine the bandit problem under the weaker assumption that the distributions have moments of order 1+ϵ, for some ϵ ∈ (0,1]. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds which show that the best achievable regret deteriorates when ϵ < 1.


1 Introduction

In this paper we investigate the classical stochastic multi-armed bandit problem introduced by Robbins (1952) and described as follows: an agent facing $K$ actions (or bandit arms) selects one arm at every time step. With each arm $i \in \{1,\dots,K\}$ there is an associated probability distribution $\nu_i$ with finite mean $\mu_i$. These distributions are unknown to the agent. At each round $t=1,\dots,n$, the agent chooses an arm $I_t$, and observes a reward $X_{I_t,t}$ drawn from $\nu_{I_t}$ independently from the past given $I_t$. The goal of the agent is to minimize the regret

$$R_n = n\mu^* - \mathbb{E}\sum_{t=1}^{n} X_{I_t,t}, \qquad \text{where } \mu^* = \max_{i=1,\dots,K}\mu_i .$$

We refer the reader to Bubeck and Cesa-Bianchi (2012) for a survey of the extensive literature on this problem and its variations. The vast majority of authors assume that the unknown distributions $\nu_i$ are sub-Gaussian, that is, the moment generating function of each $\nu_i$ is such that if $X$ is a random variable drawn according to the distribution $\nu_i$, then for all $\lambda \in \mathbb{R}$,

$$\mathbb{E}\, e^{\lambda (X-\mu_i)} \le e^{\sigma^2\lambda^2/2}, \tag{1}$$

where $\sigma^2$, the so-called "variance factor", is a parameter that is usually assumed to be known. In particular, if rewards take values in $[0,1]$, then by Hoeffding's lemma, one may take $\sigma^2 = 1/4$. Similarly to the asymptotic bound of (Agrawal, 1995, Theorem 4.10), this moment assumption was generalized in (Bubeck and Cesa-Bianchi, 2012, Chapter 2) by assuming that there exists a convex function $\psi$ such that, for all $\lambda \ge 0$,

$$\ln \mathbb{E}\, e^{\lambda(X-\mu_i)} \le \psi(\lambda) \quad\text{and}\quad \ln \mathbb{E}\, e^{\lambda(\mu_i - X)} \le \psi(\lambda). \tag{2}$$

Then one can show that the so-called $\psi$-UCB strategy (a variant of the basic UCB strategy of Auer et al. (2002)) satisfies the following regret guarantee. Let $\Delta_i = \mu^* - \mu_i$, and let $\psi^*$ be the Legendre-Fenchel transform of $\psi$, defined by

$$\psi^*(x) = \sup_{\lambda \ge 0}\bigl(\lambda x - \psi(\lambda)\bigr).$$

Then $\psi$-UCB (more precisely, $\psi$-UCB run with a suitably chosen exploration parameter) satisfies

$$R_n = O\!\left(\sum_{i:\Delta_i>0}\frac{\Delta_i}{\psi^*(\Delta_i/2)}\,\log n\right).$$

In particular, when the reward distributions are sub-Gaussian, the regret bound is of the order of $\sum_{i:\Delta_i>0}\frac{\sigma^2\log n}{\Delta_i}$, which is known to be optimal even for bounded reward distributions, see Auer et al. (2002).

While this result shows that assumptions weaker than sub-Gaussianity may suffice for a logarithmic regret, it still requires the distributions to have a finite moment generating function. Another disadvantage of the bound above is that the dependence on the gaps $\Delta_i$ deteriorates as the tails of the distributions become heavier. In fact, as we show in this paper, the bound is suboptimal when the tails are heavier than sub-Gaussian.

In this paper we investigate the behavior of the regret when the distributions are heavy-tailed and might not have a finite moment generating function. We show that under significantly weaker assumptions, regret bounds of the same form as in the sub-Gaussian case may be achieved. In fact, the only condition we need is that the reward distributions have a finite variance. Moreover, even if the variance is infinite but the distributions have finite moments of order $1+\epsilon$ for some $\epsilon \in (0,1)$, one may still achieve a regret logarithmic in the number of rounds $n$, though the dependency on the $\Delta_i$'s worsens as $\epsilon$ gets smaller. For instance, for distributions with moment of order $1+\epsilon$ bounded by $1$, we derive a strategy whose regret satisfies a bound of order

$$\sum_{i:\Delta_i>0}\left(\left(\frac{1}{\Delta_i}\right)^{1/\epsilon}\log n + \Delta_i\right).$$

The key to this result is to replace the empirical mean by more refined robust estimators of the mean and construct “upper confidence bound” strategies.

We also prove matching lower bounds showing that the proposed strategies are optimal up to constant factors. In particular, the $(1/\Delta_i)^{1/\epsilon}$ dependency is unavoidable.

In the following we start by defining a general class of sampling strategies that are based on the availability of estimators of the mean with certain performance guarantees. Then we examine various estimators of the mean. For each estimator we describe its performance (in terms of concentration around the mean) and deduce the corresponding regret bound.

2 Robust upper confidence bound strategies

The rough idea behind upper confidence bound (UCB) strategies (see Lai and Robbins (1985), Agrawal (1995), Auer et al. (2002)) is that one should choose an arm for which the sum of its estimated mean and a confidence interval is highest. When the reward distributions all satisfy the sub-Gaussian condition (1) for a common variance factor $\sigma^2$, then such a confidence interval is easy to obtain. Suppose that at a certain time instance arm $i$ has been sampled $s$ times and the observed rewards are $X_{i,1},\dots,X_{i,s}$. Then the $X_{i,j}$, $j=1,\dots,s$, are i.i.d. random variables with mean $\mu_i$ and, by a simple Chernoff bound, for any $\delta \in (0,1)$, the empirical mean $\hat\mu_{i,s}=\frac{1}{s}\sum_{j=1}^{s}X_{i,j}$ satisfies, with probability at least $1-\delta$,

$$\hat\mu_{i,s} \le \mu_i + \sigma\sqrt{\frac{2\log(1/\delta)}{s}},$$

and the same bound holds for the deviation in the other direction.

This property of the empirical mean turns out to be crucial in order to achieve a regret of optimal order. However, when the sub-Gaussian assumption does not hold, one cannot expect the empirical mean to have such an accuracy. In fact, if one only knows, say, that the variance of each $\nu_i$ is bounded, then the best possible confidence intervals are significantly wider, deteriorating the performance of standard UCB strategies. (See Appendix A for properties of the empirical mean under heavy-tailed distributions.)

The key to successfully handling heavy-tailed reward distributions is to replace the empirical mean with other, more robust, estimators of the mean. All we need is a performance guarantee like the one shown above for the empirical mean. More precisely, we need a mean estimator with the following property.

Assumption 1

Let $\epsilon \in (0,1]$ be a positive parameter and let $c, v$ be positive constants. Let $X_1,\dots,X_n$ be i.i.d. random variables with finite mean $\mu$. Suppose that for all $\delta \in (0,1)$ there exists an estimator $\hat\mu = \hat\mu(n,\delta)$ such that, with probability at least $1-\delta$,

$$\hat\mu \le \mu + v^{1/(1+\epsilon)}\left(\frac{c\log(1/\delta)}{n}\right)^{\epsilon/(1+\epsilon)},$$

and also, with probability at least $1-\delta$,

$$\mu \le \hat\mu + v^{1/(1+\epsilon)}\left(\frac{c\log(1/\delta)}{n}\right)^{\epsilon/(1+\epsilon)}.$$

For example, if the distribution of the $X_i$ satisfies the sub-Gaussian condition (1), then Assumption 1 is satisfied for $\epsilon=1$, $c=2$, and $v$ equal to the variance factor $\sigma^2$. Interestingly, the assumption may be satisfied for significantly more general distributions by using more sophisticated mean estimators. We recall some of these estimators in the following subsections, where we also show how they satisfy Assumption 1. As we shall see, the basic requirement for Assumption 1 to be satisfied is that the distribution of the $X_i$ has a finite moment of order $1+\epsilon$.
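To see concretely where the values $\epsilon=1$ and $c=2$ come from in the sub-Gaussian case, here is the short computation (a worked example added for illustration; it only uses the Chernoff bound for the empirical mean $\hat\mu_n$ of $n$ i.i.d. observations satisfying (1)):

$$\mathbb{P}\bigl(\hat\mu_n \ge \mu + x\bigr) \;\le\; \inf_{\lambda>0}\exp\!\Bigl(-\lambda n x + \tfrac{n\sigma^2\lambda^2}{2}\Bigr) \;=\; \exp\!\Bigl(-\tfrac{n x^2}{2\sigma^2}\Bigr),$$

so setting the right-hand side equal to $\delta$ and solving for $x$ gives $x = \sigma\sqrt{\tfrac{2\log(1/\delta)}{n}} = v^{1/(1+\epsilon)}\bigl(\tfrac{c\log(1/\delta)}{n}\bigr)^{\epsilon/(1+\epsilon)}$ with $\epsilon=1$, $c=2$, and $v=\sigma^2$, which is exactly the form required by Assumption 1.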

We are now ready to define our generalized Robust UCB strategy, described in Figure 1. We denote by $T_i(t)$ the (random) number of times arm $i$ is selected during the first $t$ rounds.

Robust UCB: Parameters: $\epsilon \in (0,1]$, constants $c, v > 0$, mean estimator $\hat\mu$. For arm $i$, define $\hat\mu_{i,s,t}$ as the estimate based on the first $s$ observed values of the rewards of arm $i$, computed at confidence level $\delta = t^{-4}$. Define the index

$$B_{i,s,t} = \hat\mu_{i,s,t} + v^{\frac{1}{1+\epsilon}}\left(\frac{4c\log t}{s}\right)^{\frac{\epsilon}{1+\epsilon}}$$

for $s \ge 1$, and $B_{i,0,t} = +\infty$. At time $t$, draw an arm $I_t$ maximizing $B_{i,T_i(t-1),t}$.

Figure 1: Robust UCB policy.
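To make the policy concrete, here is a minimal Python sketch of the loop in Figure 1. It is an illustration only: the confidence schedule $\delta = t^{-4}$ mirrors the index above, while the parameter values, the Pareto arms, and the plain empirical mean used as a placeholder estimator are arbitrary choices of ours (in practice one plugs in one of the robust estimators of Sections 2.1-2.3).

```python
import math
import random
import statistics

def robust_ucb(arms, n, eps, c, v, estimator):
    """Sketch of the Robust UCB loop of Figure 1. `arms` is a list of callables
    returning one reward each; `estimator(rewards, delta)` should return a mean
    estimate satisfying Assumption 1 with constants (eps, c, v)."""
    K = len(arms)
    history = [[] for _ in range(K)]          # rewards observed for each arm
    total = 0.0
    for t in range(1, n + 1):
        def index(i):
            s = len(history[i])
            if s == 0:
                return float("inf")           # B_{i,0,t} = +infinity
            delta = t ** -4                   # confidence level used at time t
            width = v ** (1 / (1 + eps)) * (c * math.log(1 / delta) / s) ** (eps / (1 + eps))
            return estimator(history[i], delta) + width   # index B_{i,s,t}
        i = max(range(K), key=index)          # draw an arm maximizing the index
        x = arms[i]()                         # observe a reward from that arm
        history[i].append(x)
        total += x
    return total

if __name__ == "__main__":
    # Two heavy-tailed arms; the plain empirical mean is only a placeholder
    # estimator, in practice one of the robust estimators below is plugged in.
    arms = [lambda: random.paretovariate(1.5), lambda: random.paretovariate(3.0)]
    print(robust_ucb(arms, n=2000, eps=0.4, c=1.0, v=1.0,
                     estimator=lambda xs, delta: statistics.fmean(xs)))
```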

The following proposition gives a performance bound for the Robust UCB policy, provided that the reward distributions and the mean estimator used by the policy jointly satisfy Assumption 1. Below we exhibit several mean estimators that, under various moment assumptions, lead to regret bounds of optimal order.

Proposition 1

Let $\epsilon \in (0,1]$ and let $\hat\mu$ be a mean estimator. Suppose that the distributions $\nu_1,\dots,\nu_K$ are such that the mean estimator satisfies Assumption 1 (with constants $c$ and $v$) for all of them. Then the regret of the Robust UCB policy satisfies a bound of order

$$R_n = O\!\left(\sum_{i:\Delta_i>0}\left(\left(\frac{v}{\Delta_i}\right)^{1/\epsilon}\log n + \Delta_i\right)\right). \tag{3}$$

Also, if the gaps are such that $\Delta_i \le v^{1/(1+\epsilon)}$ for all $i$, then

$$R_n = O\!\left((K\log n)^{\frac{\epsilon}{1+\epsilon}}\,(v\,n)^{\frac{1}{1+\epsilon}}\right). \tag{4}$$

Note that a regret of at least $\sum_{i:\Delta_i>0}\Delta_i$ is suffered by any strategy that pulls each arm at least once. Thus, the interesting term in (3) is the one of the order of $\left(\frac{v}{\Delta_i}\right)^{1/\epsilon}\log n$. We show below in Theorem 2 that this term is of optimal order under a moment assumption on the reward distributions. We also show in Theorem 2 that the gap-independent inequality (4) is optimal up to a logarithmic factor.

Proof. The proofs of both (3) and (4) rely on bounding the expected number of pulls of a suboptimal arm. More precisely, in the first two steps of the proof we prove that, for any $i$ such that $\Delta_i>0$,

$$\mathbb{E}\,T_i(n) = O\!\left(\left(\frac{v}{\Delta_i^{1+\epsilon}}\right)^{1/\epsilon}\log n + 1\right). \tag{5}$$

To lighten notation, we introduce

$$u = \left\lceil\left(\frac{2\,v^{1/(1+\epsilon)}}{\Delta_i}\right)^{\frac{1+\epsilon}{\epsilon}} 4c\log n\right\rceil,$$

the smallest integer $s$ for which the exploration term $v^{1/(1+\epsilon)}\bigl(4c\log n/s\bigr)^{\epsilon/(1+\epsilon)}$ is at most $\Delta_i/2$. Note that, up to rounding, (5) is equivalent to $\mathbb{E}\,T_i(n) \le u + O(1)$.

First step.
We show that if $I_t = i$, then at least one of the following three inequalities holds, where $i^*$ denotes an optimal arm (so that $\mu_{i^*}=\mu^*$): either

$$\hat\mu_{i^*,T_{i^*}(t-1),t} + v^{\frac{1}{1+\epsilon}}\left(\frac{4c\log t}{T_{i^*}(t-1)}\right)^{\frac{\epsilon}{1+\epsilon}} \le \mu^* \tag{6}$$

or

$$\hat\mu_{i,T_i(t-1),t} > \mu_i + v^{\frac{1}{1+\epsilon}}\left(\frac{4c\log t}{T_i(t-1)}\right)^{\frac{\epsilon}{1+\epsilon}} \tag{7}$$

or

$$2\,v^{\frac{1}{1+\epsilon}}\left(\frac{4c\log t}{T_i(t-1)}\right)^{\frac{\epsilon}{1+\epsilon}} > \Delta_i. \tag{8}$$

Indeed, assume that all three inequalities are false. Then we have

$$B_{i^*,T_{i^*}(t-1),t} > \mu^* = \mu_i + \Delta_i \ge \mu_i + 2\,v^{\frac{1}{1+\epsilon}}\left(\frac{4c\log t}{T_i(t-1)}\right)^{\frac{\epsilon}{1+\epsilon}} \ge \hat\mu_{i,T_i(t-1),t} + v^{\frac{1}{1+\epsilon}}\left(\frac{4c\log t}{T_i(t-1)}\right)^{\frac{\epsilon}{1+\epsilon}} = B_{i,T_i(t-1),t},$$

which implies, in particular, that $I_t \ne i$.

Second step.
Here we first bound the probability that (6) or (7) holds. By Assumption 1 (applied with confidence level $t^{-4}$), together with a union bound over the possible values of $T_{i^*}(t-1)$ and $T_i(t-1)$, we obtain

$$\mathbb{P}\bigl((6)\text{ or }(7)\text{ holds}\bigr) \le 2\,t\cdot t^{-4} = \frac{2}{t^{3}}.$$

Now using the first step, and noting that (8) fails whenever $T_i(t-1)\ge u$ and $t\le n$, we obtain

$$\mathbb{E}\,T_i(n) \le u + \sum_{t=1}^{n}\mathbb{P}\bigl((6)\text{ or }(7)\text{ holds}\bigr) \le u + \sum_{t=1}^{n}\frac{2}{t^{3}} \le u + 3.$$

This concludes the proof of (5).

Third step.
Using that $R_n=\sum_{i:\Delta_i>0}\Delta_i\,\mathbb{E}\,T_i(n)$ and (5), we directly obtain (3). On the other hand, for (4) we write

$$R_n = \sum_{i:\Delta_i>0}\Delta_i\,\mathbb{E}\,T_i(n) \le \left(\sum_{i:\Delta_i>0}\Delta_i^{\frac{1+\epsilon}{\epsilon}}\,\mathbb{E}\,T_i(n)\right)^{\frac{\epsilon}{1+\epsilon}}\left(\sum_{i=1}^{K}\mathbb{E}\,T_i(n)\right)^{\frac{1}{1+\epsilon}} \qquad\text{(by Hölder's inequality)},$$

and, by (5) together with the assumption on the gaps, each term $\Delta_i^{(1+\epsilon)/\epsilon}\,\mathbb{E}\,T_i(n)$ is $O\bigl(v^{1/\epsilon}\log n\bigr)$, which yields (4) since $\sum_{i}\mathbb{E}\,T_i(n)=n$.

In the next sections we show how Proposition 1 may be applied, with different mean estimators, to obtain optimal regret bounds for possibly heavy-tailed reward distributions.

2.1 Truncated empirical mean

In this section we consider the simplest of the proposed mean estimators, a truncated version of the empirical mean. This estimator is similar to the “winsorized mean” and “trimmed mean” of Tukey, see Bickel (1965).

The following lemma shows that if the raw moment of order $1+\epsilon$ is bounded, then the truncated mean satisfies Assumption 1.

Lemma 1

Let $\epsilon \in (0,1]$, $u>0$, and $\delta \in (0,1)$. Consider the truncated empirical mean $\hat\mu_T$ defined as

$$\hat\mu_T = \frac{1}{n}\sum_{t=1}^{n} X_t\,\mathbb{1}\!\left\{|X_t| \le \left(\frac{u\,t}{\log(1/\delta)}\right)^{\frac{1}{1+\epsilon}}\right\}.$$

If $\mathbb{E}|X|^{1+\epsilon}\le u$, then, with probability at least $1-\delta$,

$$\hat\mu_T \le \mu + 4\,u^{\frac{1}{1+\epsilon}}\left(\frac{\log(1/\delta)}{n}\right)^{\frac{\epsilon}{1+\epsilon}}.$$

Proof. Let $B_t = \left(\frac{u\,t}{\log(1/\delta)}\right)^{1/(1+\epsilon)}$ denote the truncation level of the $t$-th observation. From Bernstein's inequality for bounded random variables, noting that $\mathbb{E}\bigl[X_t^2\,\mathbb{1}\{|X_t|\le B_t\}\bigr]\le u\,B_t^{1-\epsilon}$, we obtain, with probability at least $1-\delta$, a bound on the deviation of $\frac1n\sum_{t=1}^n X_t\,\mathbb{1}\{|X_t|\le B_t\}$ from its expectation; combined with the bound $\bigl|\mathbb{E}\,X_t\,\mathbb{1}\{|X_t| > B_t\}\bigr| \le u\,B_t^{-\epsilon}$ on the truncation bias, an easy computation concludes the proof.
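For concreteness, here is a short Python sketch of the truncated empirical mean as defined above (the function name and interface are our own; the truncation level mirrors the one used in Lemma 1):

```python
import math

def truncated_mean(xs, eps, u, delta):
    """Truncated empirical mean of Lemma 1: the t-th observation is kept only if
    |x_t| <= (u * t / log(1/delta))**(1/(1+eps)); requires 0 < delta < 1 and a
    bound u on the raw moment of order 1+eps."""
    n = len(xs)
    log_inv_delta = math.log(1.0 / delta)
    total = 0.0
    for t, x in enumerate(xs, start=1):
        threshold = (u * t / log_inv_delta) ** (1.0 / (1.0 + eps))
        if abs(x) <= threshold:
            total += x
    return total / n
```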

The following is now a straightforward corollary of Proposition 1 and Lemma 1.

Theorem 1

Let $\epsilon \in (0,1]$ and $u>0$. Assume that the reward distributions satisfy, for all $i=1,\dots,K$,

$$\mathbb{E}_{X\sim\nu_i}|X|^{1+\epsilon} \le u. \tag{9}$$

Then the regret of the Robust UCB policy, used with the truncated mean estimator defined above, satisfies

$$R_n = O\!\left(\sum_{i:\Delta_i>0}\left(\left(\frac{u}{\Delta_i}\right)^{1/\epsilon}\log n + \Delta_i\right)\right).$$

When $\epsilon=1$, the only assumption of the theorem above is that each reward distribution has a finite second moment (in particular, a finite variance). In this case the obtained regret bound is of the order of $\sum_{i:\Delta_i>0}\frac{u\log n}{\Delta_i}$, which is known to be not improvable in general, even when the rewards are bounded; note, however, that the KL-UCB algorithm of Garivier and Cappé (2011) is never worse than Robust UCB in the case of bounded rewards. We find it remarkable that a regret of this order may be achieved under the only assumption of a finite variance, and that one cannot improve the order by imposing stronger tail conditions.

When the variance is infinite but moments of order $1+\epsilon$ are finite for some $\epsilon<1$, we still have a regret that depends only logarithmically on $n$. The bound deteriorates slightly, as the dependency on $1/\Delta_i$ is replaced by $(1/\Delta_i)^{1/\epsilon}$. We show next that this dependency is inevitable.

Theorem 2

For any $\Delta \in (0,1)$ and $\epsilon \in (0,1]$, there exist two distributions $\nu_1$ and $\nu_2$ satisfying (9) with $u=1$ and with a gap $\mu_1-\mu_2$ of order $\Delta$, such that the following holds. Consider an algorithm such that, for any two-armed bandit problem satisfying (9) with $u=1$ and with arm 2 being suboptimal, one has $\mathbb{E}\,T_2(n) = o(n^{a})$ for every $a>0$. Then on the two-armed bandit problem with distributions $\nu_1$ and $\nu_2$, the algorithm satisfies, for all $n$ large enough,

$$R_n = \Omega\!\left(\left(\frac{1}{\Delta}\right)^{1/\epsilon}\log n\right). \tag{10}$$

Furthermore, for any fixed $n$ and $K$, there exists a set of $K$ distributions satisfying (9) with $u=1$ and such that, for any algorithm, one has

$$R_n = \Omega\!\left(K^{\frac{\epsilon}{1+\epsilon}}\, n^{\frac{1}{1+\epsilon}}\right). \tag{11}$$

Proof. To prove (10), we take $\nu_1$ and $\nu_2$ to be two-point distributions supported on $\{0,\Delta^{-1/\epsilon}\}$: $\nu_1$ puts mass $\Delta^{(1+\epsilon)/\epsilon}$ on the nonzero value, and $\nu_2$ puts a slightly smaller mass on it, calibrated so that the gap between the two means is of order $\Delta$. It is easy to see that $\nu_1$ and $\nu_2$ are well defined, and that they satisfy (9) with $u=1$. Now clearly, the two-armed bandit problem with these two distributions is equivalent to a two-armed bandit problem with two Bernoulli distributions whose parameters are the respective masses put on the nonzero value. Slightly more formally, we could define a new algorithm that, on the Bernoulli problem, behaves equivalently to the original algorithm on $\nu_1$ and $\nu_2$. Therefore, we can use (Bubeck, 2010, Theorem 2.7), which lower bounds the expected number of pulls of the suboptimal arm in terms of the Kullback-Leibler divergence between the two Bernoulli distributions, to obtain a logarithmic (in $n$) lower bound on the regret of the new algorithm, and hence on the regret of the original algorithm. Equation (10) then follows directly by using a standard upper bound on the Kullback-Leibler divergence between two Bernoulli distributions, along with straightforward computations.

The proof of (11) follows the same scheme. We use the same distributions as above and we consider the multi-armed bandit problem where one arm has distribution $\nu_1$ and the remaining $K-1$ arms have distribution $\nu_2$. Furthermore, we set $\Delta = (K/n)^{\epsilon/(1+\epsilon)}$ for this part of the proof. Now we can use the same proof as for (Bubeck, 2010, Theorem 2.6) on the modified algorithm that runs on the Bernoulli distributions corresponding to $\nu_1$ and $\nu_2$. We leave the straightforward details to the reader.

2.2 Median of means

The truncated mean estimator and the corresponding bandit strategy are not entirely satisfactory, as they are not translation invariant: the arms selected by the strategy may change if all reward distributions are shifted by the same constant amount. The reason for this is that the truncation is centered, quite arbitrarily, around zero. If the raw moments $\mathbb{E}|X|^{1+\epsilon}$ are small, then the strategy has a small regret. However, it would be more desirable to have a regret bound in terms of the centered moments $\mathbb{E}|X-\mu_i|^{1+\epsilon}$. This is indeed possible if one replaces the truncated mean estimator by more sophisticated estimators of the mean. We show one such possibility, the "median-of-means" estimator, in this section. In the next section we discuss Catoni's M-estimator, a quite different alternative.

The median-of-means estimator was proposed by Alon et al. (2002). The simple idea is to divide the data into disjoint blocks, calculate the standard empirical mean within each block, and take the median of these empirical means. The next lemma shows that, for a certain choice of the number of blocks, the estimator has the property required by our Robust UCB strategy.

Lemma 2

Let $\epsilon \in (0,1]$ and $\delta \in (0,1)$. Let $X_1,\dots,X_n$ be i.i.d. random variables with mean $\mu$ and centered $(1+\epsilon)$-th moment $v=\mathbb{E}|X-\mu|^{1+\epsilon}$. Let $k=\bigl\lceil 8\log(1/\delta)\bigr\rceil$ and $N=\lfloor n/k\rfloor$. Let $\hat\mu_1,\dots,\hat\mu_k$ be empirical mean estimates, each one computed on $N$ data points. Consider a median $\hat\mu_M$ of these empirical means. Then, with probability at least $1-\delta$,

$$\hat\mu_M \le \mu + C\,v^{1/(1+\epsilon)}\left(\frac{\log(1/\delta)}{n}\right)^{\epsilon/(1+\epsilon)}$$

for a numerical constant $C$; in particular, Assumption 1 is satisfied with a suitable constant $c$.

Proof. Let $\eta = (12v)^{1/(1+\epsilon)} N^{-\epsilon/(1+\epsilon)}$ and $B_j = \mathbb{1}\{\hat\mu_j > \mu + \eta\}$ for $j=1,\dots,k$. According to equation (12) in the Appendix, $B_j$ has a Bernoulli distribution with parameter

$$p \le \frac{3v}{N^{\epsilon}\eta^{1+\epsilon}} = \frac{1}{4}.$$

Note that $\hat\mu_M > \mu+\eta$ only if $\sum_{j=1}^{k} B_j \ge k/2$. Thus, using Hoeffding's inequality for the tail of a binomial distribution, we get

$$\mathbb{P}\bigl(\hat\mu_M > \mu+\eta\bigr) \le \mathbb{P}\!\left(\sum_{j=1}^{k}B_j \ge \frac{k}{2}\right) \le e^{-k/8} \le \delta,$$

and $\eta$ is of the claimed order since $N \ge n/(2k)$.
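For reference, here is a compact Python sketch of the median-of-means computation itself (the block count follows the choice used in Lemma 2 above; the function name and the handling of leftover samples are our own simplifications):

```python
import math
import statistics

def median_of_means(xs, delta):
    """Median-of-means sketch: split the data into k blocks, average within each
    block, and return the median of the block means. The block count
    k = ceil(8 * log(1/delta)) follows Lemma 2; leftover samples are ignored."""
    k = max(1, min(len(xs), math.ceil(8 * math.log(1.0 / delta))))
    block = len(xs) // k
    means = [statistics.fmean(xs[j * block:(j + 1) * block]) for j in range(k)]
    return statistics.median(means)
```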

The next performance bound is a straightforward consequence of Proposition 1 and Lemma 2. In some situations it significantly improves on Theorem 1, as the bound depends on the centered moments of order $1+\epsilon$ rather than on the raw moments.

Theorem 3

Let $\epsilon \in (0,1]$ and $v>0$. Assume that the reward distributions satisfy

$$\mathbb{E}_{X\sim\nu_i}\,|X-\mu_i|^{1+\epsilon} \le v \quad\text{for all } i=1,\dots,K.$$

Then the regret of the Robust UCB policy, used with the median-of-means estimator defined in Lemma 2, satisfies

$$R_n = O\!\left(\sum_{i:\Delta_i>0}\left(\left(\frac{v}{\Delta_i}\right)^{1/\epsilon}\log n + \Delta_i\right)\right).$$

2.3 Catoni’s estimator

Finally, we consider an elegant mean estimator introduced by Catoni (2010). As we will see, this estimator has similar performance guarantees as the median-of-means estimator, but with better, near-optimal, numerical constants. However, we only have a good guarantee in terms of the variance. Thus, in this section we assume that the variance is finite and we do not consider the case $\epsilon<1$.

Catoni's mean estimator is defined as follows. Let $\psi:\mathbb{R}\to\mathbb{R}$ be a continuous strictly increasing function satisfying

$$-\log\!\left(1-x+\frac{x^2}{2}\right) \le \psi(x) \le \log\!\left(1+x+\frac{x^2}{2}\right).$$

Let $\delta \in (0,1)$ be such that $n > 2\log(1/\delta)$ and introduce

$$\alpha_\delta = \sqrt{\frac{2\log(1/\delta)}{n\left(v+\frac{2v\log(1/\delta)}{n-2\log(1/\delta)}\right)}},$$

where $v$ is an upper bound on the variance. If $X_1,\dots,X_n$ are i.i.d. random variables, then Catoni's estimator is defined as the unique value $\hat\mu_C$ such that

$$\sum_{i=1}^{n}\psi\bigl(\alpha_\delta(X_i-\hat\mu_C)\bigr) = 0.$$
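In practice, the defining equation can be solved numerically: since $\psi$ is increasing, the root is unique and bisection converges quickly. The sketch below is an illustration, not the authors' implementation; it uses the widest admissible choice $\psi(x)=\mathrm{sign}(x)\log(1+|x|+x^2/2)$ and assumes $\alpha$ has already been set as above.

```python
import math

def catoni_estimate(xs, alpha, iters=60):
    """Catoni's M-estimator: the unique mu solving sum_i psi(alpha*(x_i - mu)) = 0.
    Since psi is increasing, the score is decreasing in mu and bisection applies."""
    def psi(x):
        # widest admissible influence function:
        # -log(1 - x + x^2/2) <= psi(x) <= log(1 + x + x^2/2)
        return math.copysign(math.log(1.0 + abs(x) + 0.5 * x * x), x)

    def score(mu):
        return sum(psi(alpha * (x - mu)) for x in xs)

    lo, hi = min(xs), max(xs)   # the root lies between the smallest and largest observation
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```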

Catoni (2010) proves that if $n > 2\log(1/\delta)$ and the $X_i$ have mean $\mu$ and variance at most $v$, then, with probability at least $1-\delta$,

$$\hat\mu_C \le \mu + \sqrt{\frac{2v\log(1/\delta)}{n-2\log(1/\delta)}},$$

and a similar bound holds for the lower tail. This bound has the same form as in Assumption 1 (with $\epsilon=1$), though it only holds under the additional requirement that $n > 2\log(1/\delta)$, and therefore it does not formally fit the framework of the Robust UCB strategy as described in Section 2. However, by a simple modification, one may define a strategy that incorporates such a restriction. In Figure 2 we describe a policy based on Catoni's mean estimator. The policy assumes that a known upper bound $v$ on the largest variance of any reward distribution is available. Then, by a simple modification of the proof of Proposition 1, we obtain the following performance bound.

Modified Robust UCB: For arm $i$, define $\hat\mu_{i,s,t}$ as Catoni's mean estimate based on the first $s$ observed values of the rewards of arm $i$, computed with confidence parameter $\delta = t^{-4}$ and variance bound $v$. Define the index

$$B_{i,s,t} = \hat\mu_{i,s,t} + \sqrt{\frac{8v\log t}{s-8\log t}}$$

for $s$ such that $s > 8\log t$, and $B_{i,s,t}=+\infty$ otherwise. At time $t$, draw an arm $I_t$ maximizing $B_{i,T_i(t-1),t}$.

Figure 2: Modified Robust UCB policy.
Theorem 4

Let $v>0$. Assume that the reward distributions satisfy

$$\mathbb{E}_{X\sim\nu_i}(X-\mu_i)^2 \le v \quad\text{for all } i=1,\dots,K.$$

Then the regret of the modified Robust UCB policy satisfies

$$R_n = O\!\left(\sum_{i:\Delta_i>0}\left(\frac{v\log n}{\Delta_i} + \Delta_i\log n\right)\right).$$

The regret bound has better numerical constants than its analogue based on the median-of-means estimator. However, a term of the order of $\Delta_i\log n$ appears for each arm, due to the restricted range of validity of Catoni's estimator.

3 Discussion and conclusions

In this work we have extended the UCB algorithm to heavy-tailed stochastic multi-armed bandit problems, in which the reward distributions only have moments of order $1+\epsilon$ for some $\epsilon \in (0,1]$. In this setting, we have compared three estimators for the mean reward of the arms: median-of-means, truncated mean, and Catoni's M-estimator. The median-of-means estimator gives a regret bound that depends on the centered $(1+\epsilon)$-moments of the reward distributions, without the need to know bounds on these moments. The truncated mean estimator, instead, delivers a regret bound that depends on the raw $(1+\epsilon)$-moments, and requires the knowledge of a bound on these moments. Finally, Catoni's estimator depends on the centered moments like the median-of-means, but it requires the knowledge of a bound on the central moments, and only works in the special case $\epsilon=1$ (where it gives the best leading constants on the regret). A trade-off in the choice of the estimator appears if we take into account the computational costs involved in updating each estimator as new rewards are observed. Indeed, while the truncated mean requires constant time and space per update, the median-of-means is slightly more difficult to update, requiring space and time that grow logarithmically with the number of observed rewards. Finally, Catoni's M-estimator requires linear space per update, which is an unfortunate feature in this sequential setting.
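For illustration, a constant-memory update for the truncated mean at a fixed confidence level $\delta$ might look as follows (a sketch under our own naming; in the bandit policy $\delta$ varies with $t$, which this simple bookkeeping does not handle):

```python
import math

class OnlineTruncatedMean:
    """Constant-space running truncated mean for a fixed confidence level delta:
    only a counter and a running (truncated) sum are stored."""
    def __init__(self, eps, u, delta):
        self.eps, self.u = eps, u
        self.log_inv_delta = math.log(1.0 / delta)
        self.n = 0
        self.total = 0.0

    def update(self, x):
        self.n += 1
        threshold = (self.u * self.n / self.log_inv_delta) ** (1.0 / (1.0 + self.eps))
        if abs(x) <= threshold:
            self.total += x

    def estimate(self):
        return self.total / self.n if self.n else 0.0
```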

It is an interesting question whether there exists an estimator with the same good concentration properties as the median-of-means, but requiring only constant time and space per update. The truncated mean has good computational properties but the knowledge of raw moment bounds is required. So it is natural to ask whether we may drop this requirement for the truncated mean or some variants of it. Finally, our proof techniques heavily rely on the independence of rewards for each arm. It is unclear whether similar results could be obtained for heavy-tailed bandits with dependent reward processes.

While we focused our attention on bandit problems, the concentration results presented in this work may be naturally applied to other related sequential decision settings. Such examples include the racing algorithms of Maron and Moore (1997), and more generally nonparametric Monte Carlo estimation, see Dagum et al. (2000) and Domingo et al. (2002). These techniques are based on mean estimators, and current results are limited to the application of the empirical mean to bounded reward distributions.

Appendix A Empirical mean

In this appendix we discuss the behavior of the standard empirical mean when only moments of order $1+\epsilon$ are available. We focus on finite-sample guarantees (i.e., non-asymptotic results), as this is the key property needed to obtain finite-time results for the multi-armed bandit problem.

Let $X_1,\dots,X_n$ be a real i.i.d. sequence with finite mean $\mu$. We assume that for some $\epsilon \in (0,1]$ and $v>0$, one has $\mathbb{E}|X-\mu|^{1+\epsilon}\le v$. We also denote by $u$ an upper bound on the raw moment of order $1+\epsilon$, that is, $\mathbb{E}|X|^{1+\epsilon}\le u$.

Lemma 3

Let $\hat\mu_n$ be the empirical mean:

$$\hat\mu_n = \frac{1}{n}\sum_{t=1}^{n}X_t.$$

Then for any $\delta \in (0,1)$, with probability at least $1-\delta$, one has

$$\hat\mu_n \le \mu + \left(\frac{3v}{\delta\, n^{\epsilon}}\right)^{\frac{1}{1+\epsilon}}.$$

Proof. Let $x>0$ and let $B>0$ be a truncation level to be chosen later; decompose the event $\{\hat\mu_n>\mu+x\}$ according to whether some observation deviates from $\mu$ by more than $B$:

$$\mathbb{P}\bigl(\hat\mu_n>\mu+x\bigr) \le \mathbb{P}\bigl(\exists\, t\le n:\ |X_t-\mu|>B\bigr) + \mathbb{P}\!\left(\frac1n\sum_{t=1}^{n}(X_t-\mu)\,\mathbb{1}\{|X_t-\mu|\le B\} > x\right).$$

The first term on the right-hand side can be bounded by using a union bound followed by Chebyshev's inequality (for moments of order $1+\epsilon$):

$$\mathbb{P}\bigl(\exists\, t\le n:\ |X_t-\mu|>B\bigr) \le \frac{n\,v}{B^{1+\epsilon}}.$$

On the other hand, Chebyshev's inequality, together with the fact that $\mathbb{E}\bigl[(X-\mu)^2\,\mathbb{1}\{|X-\mu|\le B\}\bigr]\le v\,B^{1-\epsilon}$, gives for the second term a bound of the order of $\frac{v\,B^{1-\epsilon}}{n x^{2}}$ (up to the truncation bias, which is at most $v/B^{\epsilon}$). By applying a trivial manipulation on the first term, and using Hölder's inequality (with suitable conjugate exponents) for the second term, we obtain, for $B$ of the order of $nx$, that the whole expression is upper bounded by

$$\frac{3v}{n^{\epsilon}x^{1+\epsilon}}$$

whenever this quantity is smaller than 1.

Note that if $\frac{3v}{n^{\epsilon}x^{1+\epsilon}} \ge 1$ then the bound is trivial, and thus we always have

$$\mathbb{P}\bigl(\hat\mu_n>\mu+x\bigr) \le \frac{3v}{n^{\epsilon}\,x^{1+\epsilon}} \qquad\text{for all } x>0. \tag{12}$$

The proof now follows by straightforward computations.

It is easy to see that the order of magnitude of (12) is tight up to a constant factor. Indeed, let $x>0$ and consider a distribution that puts a small probability mass $p$ on a single large value $M$ and the remaining mass at $0$, with $p$ and $M$ calibrated so that the centered moment of order $1+\epsilon$ equals $v$ and $M/n$ is of order $x$. Clearly this distribution satisfies the assumptions of this section, so (12) applies to an i.i.d. sequence drawn from it. We can restrict our attention to the case where the right-hand side of (12) is smaller than a constant, for otherwise the bound is trivial. Now consider the event that the value $M$ appears at least once in the sample: on this event the empirical mean exceeds $\mu+x$, and the probability of this event is of order $np = \frac{v\,n}{M^{1+\epsilon}}$, which is of order $\frac{v}{n^{\epsilon}x^{1+\epsilon}}$. From this and basic computations we get a matching lower bound, which shows that (12) is tight up to a constant factor for this distribution.

Clearly, the concentration properties of the empirical mean are much weaker than those of the truncated empirical mean or the median-of-means. Indeed, while the dependency on $n$ in the confidence term is similar for the three estimators, the dependency on $1/\delta$ is polynomial for the empirical mean and polylogarithmic for the truncated empirical mean and the median-of-means. As we just showed, this is not an artifact of the proof method: the empirical mean indeed has polynomial deviations (as opposed to the exponential deviations of the two other estimators). This remark is at the basis of the theory of robust statistics, and many approaches to fix the above issue have been proposed, see for example Huber (1964, 1981). The empirical mean estimator has been previously applied to heavy-tailed stochastic bandits in Liu and Zhao (2011), obtaining polynomial, rather than logarithmic, regret bounds.
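The contrast between polynomial and exponential deviations is easy to observe numerically. The following small simulation is an illustration we add here (the Pareto rewards, sample size, number of blocks, and quantile are arbitrary choices); it compares the tails of the deviations of the empirical mean and of the median-of-means:

```python
import random
import statistics

def median_of_means(xs, k):
    block = len(xs) // k
    means = [statistics.fmean(xs[j * block:(j + 1) * block]) for j in range(k)]
    return statistics.median(means)

def simulate(n=300, trials=2000, shape=1.5, seed=0):
    """Pareto(shape) rewards have a finite mean and finite moments of any order
    below `shape`, but infinite variance when shape <= 2."""
    rng = random.Random(seed)
    true_mean = shape / (shape - 1.0)          # mean of a Pareto(shape) variable
    dev_emp, dev_mom = [], []
    for _ in range(trials):
        xs = [rng.paretovariate(shape) for _ in range(n)]
        dev_emp.append(abs(statistics.fmean(xs) - true_mean))
        dev_mom.append(abs(median_of_means(xs, k=9) - true_mean))
    q = int(0.99 * trials)                     # compare a high quantile of the deviations
    print("99% deviation, empirical mean :", sorted(dev_emp)[q])
    print("99% deviation, median-of-means:", sorted(dev_mom)[q])

if __name__ == "__main__":
    simulate()
```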

References

  • Agrawal [1995] R. Agrawal. Sample mean based index policies with O(log n) regret for the multi-armed bandit problem. Advances in Applied Probability, 27:1054–1078, 1995.
  • Alon et al. [2002] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences, 58:137–147, 2002.
  • Auer et al. [2002] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning Journal, 47(2-3):235–256, 2002.
  • Bickel [1965] P. J. Bickel. On some robust estimates of location. Annals of Mathematical Statistics, 36:847–858, 1965.
  • Bubeck [2010] S. Bubeck. Bandits Games and Clustering Foundations. PhD thesis, Université Lille 1, 2010.
  • Bubeck and Cesa-Bianchi [2012] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Arxiv preprint arXiv:1204.5721, 2012.
  • Catoni [2010] O. Catoni. Challenging the empirical mean and empirical variance: a deviation study. Arxiv preprint arXiv:1009.2048, 2010.
  • Dagum et al. [2000] P. Dagum, R. Karp, M. Luby, and S. Ross. An optimal algorithm for monte carlo estimation. SIAM Journal on Computing, 29(5):1484–1496, 2000.
  • Domingo et al. [2002] C. Domingo, R. Gavaldà, and O. Watanabe. Adaptive sampling methods for scaling up knowledge discovery algorithms. Data Mining and Knowledge Discovery, 6(2):131–152, 2002.
  • Garivier and Cappé [2011] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceedings of the 24th Annual Conference on Learning Theory (COLT), 2011.
  • Huber [1964] P. J. Huber. Robust estimation of a location parameter. Annals of Mathematical Statistics, 35:73–101, 1964.
  • Huber [1981] P. J. Huber. Robust Statistics. Wiley Interscience, 1981.
  • Lai and Robbins [1985] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
  • Liu and Zhao [2011] K. Liu and Q. Zhao. Deterministic Sequencing of Exploration and Exploitation for Multi-Armed Bandit Problems. ArXiv e-prints, 2011.
  • Maron and Moore [1997] O. Maron and A.W. Moore. The racing algorithm: Model selection for lazy learners. Artificial Intelligence Review, 11(1):193–225, 1997.
  • Robbins [1952] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.