Parallel Contextual Bandits in Wireless Handover Optimization

01/21/2019 · by Igor Colin, et al. · HUAWEI Technologies Co., Ltd.

As cellular networks become denser, a scalable and dynamic tuning of wireless base station parameters can only be achieved through automated optimization. Although the contextual bandit framework arises as a natural candidate for such a task, its extension to a parallel setting is not straightforward: one needs to carefully adapt existing methods to fully leverage the multi-agent structure of this problem. We propose two approaches: one derived from a deterministic UCB-like method and the other relying on Thompson sampling. Thanks to its Bayesian nature, the latter is expected to better preserve the exploration-exploitation balance in the bandit batch. This is verified on toy experiments, where Thompson sampling shows robustness to the variability of the contexts. Finally, we apply both methods on a real base station network dataset and show that Thompson sampling outperforms both manual tuning and contextual UCB.


I Introduction

The land area covered by a cellular wireless network, such as a mobile phone network, is divided into small areas called cells, each cell being covered by the antenna of a fixed base station (see Figure 1). Each base station is configured by a set of parameters that should be tuned so as to provide the best possible network coverage. Although default recommended values can be used, the best values of these parameters are likely to depend on the traffic (i.e., the number of users) and the geographical location of the base stations. Furthermore, these parameters often need to be adjusted on a regular basis in order to adapt to the evolution of the traffic. Manual tuning of base station parameters is thus highly time consuming and tedious, while still needing to preserve some level of quality of service. In addition, recent developments in cellular network standards lean towards a densification of base stations, encouraging operators to find automated solutions for optimal parameter configuration (see [1, 2, 3, 4]).

Fig. 1: Cellular network organization

One possible way of modeling this problem is through contextual bandits [5]: in this framework, one aims at optimizing an objective that depends on contextual features (traffic and environment) while avoiding too much deterioration of the objective function (quality of service). More specifically, at each time $t$, given side information about the current state of the wireless network (the context), the operator wants to choose the values of the parameters (an action) so as to obtain the best user experience (the reward). However, since the relation between an action and its associated reward is initially unknown, one needs to explore the space of available actions in order to gain some knowledge about this relation before being able to exploit it. This exploration-exploitation tradeoff is common to every bandit problem, from multi-armed bandits [6, 7, 8] to lazy optimization through Gaussian processes [9, 10, 11, 12].

Contextual bandit problems have received a lot of attention in the past decade, whether for theoretical guarantees [13, 14], delayed reward frameworks [15] or pure exploration scenarios [16, 17]. Surprisingly, the multi-agent setting, such as the one induced by base station parameter tuning, has hardly been investigated [18]. Efficiently extending existing methods to a parallel setting is not straightforward: a naive implementation of an Upper Confidence Bound algorithm, for instance, may lead to a suboptimal balance between exploration and exploitation when contexts are too similar. The goal of this paper is to formulate methods leveraging the multi-agent structure and to apply them to wireless handover optimization.

The paper is structured as follows. Sections II and III formally define the parallel contextual bandit problem. Section IV reviews related methods in the bandit literature. Section V develops two approaches for parallel contextual bandits. Finally, Section VI presents the empirical performance of our methods, first on a toy example and then on a real wireless base station dataset.

II Definitions and notations

For any integer $n$, we denote by $[n]$ the set $\{1, \dots, n\}$ and by $|S|$ the cardinality of any finite set $S$. For any $T \ge 1$ and any sequence $(x_t)_{t \ge 1}$, $x_{1:T}$ will denote the sequence up to index $T$, that is $x_{1:T} = (x_1, \dots, x_T)$. For any element $x$ of $\mathbb{R}^d$, $\|x\|$ will denote the Euclidean norm and $\|x\|_1$ the $\ell_1$-norm. For a given positive definite matrix $M$, $\|\cdot\|_M$ denotes the norm induced by the scalar product associated to $M$, that is, for any $x \in \mathbb{R}^d$, $\|x\|_M = \sqrt{x^\top M x}$.

III Problem statement

Let $d \ge 1$ and let $\mathcal{X} \subseteq \mathbb{R}^d$ be a context space. For a given function $f$ and a parameter $\theta^\star \in \mathbb{R}^d$, we define the reward of a contextual bandit as: for any context $x \in \mathcal{X}$,

$$y(x) = f(x; \theta^\star) + \varepsilon, \qquad (1)$$

where $\varepsilon$ is a $\sigma$-sub-Gaussian random variable independent of $x$, for some $\sigma > 0$. The function $x \mapsto f(x; \theta^\star)$ is called the expected reward. The contextual bandit problem consists in the following. At each iteration $t \ge 1$, a set of contexts $\mathcal{X}_t \subseteq \mathcal{X}$ is presented; one aims at selecting the context in order to minimize the expected regret. This boils down to finding the sequence $(x_t)_{t \ge 1}$, with $x_t \in \mathcal{X}_t$, maximizing

$$\sum_{t=1}^{T} f(x_t; \theta^\star). \qquad (2)$$

In most settings, the context space $\mathcal{X}$ can be further expanded as a product of a state space $\mathcal{S}$ and an action space $\mathcal{A}$. In other words, at a given iteration, one observes a state $s_t \in \mathcal{S}$ and wants to find the action maximizing the expected reward:

$$a_t \in \operatorname*{arg\,max}_{a \in \mathcal{A}} f\big((s_t, a); \theta^\star\big).$$

When the expected reward is parametrized, a natural strategy is to estimate the parameter $\theta^\star$ while limiting the regret as much as possible. This type of method has been extensively studied in the multi-armed bandit—i.e., context-free bandit—setting [6, 7, 8]. More recently, the contextual bandit setting has been investigated, whether the function $f$ is linear [13, 14], logistic [19, 20] or unknown [21, 9, 22].

One way of addressing the problem efficiently is to use an Upper Confidence Bound (UCB) framework. It is a straightforward adaptation from UCB in the multi-armed bandit (MAB) case: given a state $s_t$ and for each action $a \in \mathcal{A}$, one uses past observations to build an associated confidence region on the expected reward and chooses the action associated with the greatest possible outcome. The method is formally stated in Figure 2.

Input: confidence parameter $\delta \in (0, 1)$
for $t = 1, \dots, T$ do
     Receive state $s_t$
     for $a \in \mathcal{A}$ do
         Build a confidence region $C_{t,a}$ on the expected reward $f((s_t, a); \theta^\star)$
     end for
     Select $a_t \in \arg\max_{a \in \mathcal{A}} \max C_{t,a}$
     Observe reward $y_t$
end for
Fig. 2: UCB for contextual bandit.
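To make the template of Figure 2 concrete, here is a minimal LinUCB-style sketch in Python for a single contextual bandit with a linear expected reward and a finite action set. The interface and default values are ours and purely illustrative; this is a sketch of the general idea, not the paper's implementation.

```python
import numpy as np

def lin_ucb(T, actions, get_state, get_reward, alpha=1.0, lam=1.0):
    """Single-bandit UCB loop with a linear reward model (LinUCB-style sketch).

    actions    : list of action feature vectors (numpy arrays)
    get_state  : callable returning the state feature vector of the round
    get_reward : callable mapping the chosen context to a noisy reward
    alpha      : width of the exploration bonus
    lam        : ridge regularization parameter
    """
    d = len(get_state()) + len(actions[0])
    A = lam * np.eye(d)                 # regularized Gram matrix of chosen contexts
    b = np.zeros(d)                     # sum of reward-weighted contexts
    for _ in range(T):
        s = get_state()
        A_inv = np.linalg.inv(A)
        theta_hat = A_inv @ b           # ridge estimate of the parameter
        # Upper confidence bound for every admissible context (state, action)
        contexts = [np.concatenate([s, a]) for a in actions]
        ucbs = [x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x) for x in contexts]
        x = contexts[int(np.argmax(ucbs))]
        y = get_reward(x)               # observe the noisy reward
        A += np.outer(x, x)             # update the sufficient statistics
        b += y * x
    return np.linalg.inv(A) @ b
```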

In this paper, we focus on the problem where $K$ contextual bandits run in parallel. We consider in addition that they share a similar function $f$, although their parameters $(\theta^\star_k)_{1 \le k \le K}$ are not necessarily identical. The objective (2) is then replaced by the aggregated objective:

$$\sum_{t=1}^{T} \sum_{k=1}^{K} f(x_{t,k}; \theta^\star_k). \qquad (3)$$

Without any additional assumption on the parameters $(\theta^\star_k)$, a straightforward strategy is to run one of the aforementioned policies independently on each bandit. However, if the parameters are selected from a restricted set, then one may wonder whether it is possible to use this structure to improve the regret minimization policy. This question is investigated in [23] for the multi-armed bandit setting and in [24] for the contextual bandit setting, when the arms (resp. the contexts) are pulled (resp. chosen) sequentially, one at a time. In our setting however, we are interested in finding a strategy for choosing $K$ contexts at each iteration, since the bandits run in parallel. In order to emphasize the interest of our approaches when this assumption is verified, we consider in the remainder of this paper a simpler setting, where every bandit shares the same parameter $\theta^\star$ and, consequently, the same expected reward function.

IV Related work

Surprisingly, the parallel bandit setting has not been widely studied in the contextual bandit case. There is, however, a lot of literature about closely related settings; we detail each of them in the following sections.

IV-A Delayed reward

One way to model the parallel bandit problem is to consider bandits with delayed rewards [15]: we assume that the reward is not observed immediately after every action but rather after some delay. If the rewards are received every $K$ iterations, this setting is then equivalent to $K$ contextual bandits with identical expected reward functions running in parallel. In the general online learning setting, the delayed feedback is modeled as follows. At a given iteration $t$, the environment chooses a state and a set of admissible actions, just as in the standard setting. The reward however is only observed after a delay, possibly random, that is usually unknown in advance to the learner. One interesting result, highlighted in [15, Table 1], is the fact that the additional regret induced by the delay is additive in the stochastic feedback case and multiplicative in the adversarial setting. In our setting, the delay is at most $K$, where $K$ is the number of bandits. Therefore, the additional regret is proportional to the number of bandits.

The general online learning with delayed feedback problem has been thoroughly investigated and can be extended to a wide range of applications (MDPs, distributed optimization, etc.); see [15] and references therein for further details.

The more specific problem of multi-armed bandits with delayed feedback has been extensively studied in the past decade [25, 26, 27]. The particular structure of the problem allows for approaches different from those of general online learning, which sometimes lead to improved convergence guarantees or decreased storage costs.

IV-B Piled rewards

Our setting offers more structure than a general contextual bandit with delayed rewards since the rewards are accumulated and then all disclosed simultaneously at a given iteration; to the best of our knowledge, this concept of “piled rewards” is only developed in [18] for the linear contextual bandit, that is, a contextual bandit with a linear reward function $f(x; \theta^\star) = x^\top \theta^\star$.

The proposed algorithm is based on LinUCB, the essential difference being that the covariance matrix is updated as it cycles through the agents. The algorithm is detailed in Figure 3.

Input: confidence parameter $\alpha > 0$
Initialize $A \leftarrow I_d$, $b \leftarrow 0$, $\hat\theta \leftarrow 0$
for $t = 1, \dots, T$ do
     for $k = 1, \dots, K$ do
         Select $x_{t,k} \in \arg\max_{x \in \mathcal{X}_{t,k}} \hat\theta^\top x + \alpha \|x\|_{A^{-1}}$
         Update $A \leftarrow A + x_{t,k} x_{t,k}^\top$
     end for
     Observe rewards $y_{t,1}, \dots, y_{t,K}$
     for $k = 1, \dots, K$ do
         Update $b \leftarrow b + y_{t,k} x_{t,k}$
         Update $\hat\theta \leftarrow A^{-1} b$
     end for
end for
Fig. 3: LinUCB-PR algorithm.
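A possible reading of the piled-rewards update in Python, under our own interface names and with a linear reward model: the Gram matrix is updated right after each agent's selection within a round, while the reward statistics are only updated once the whole batch of rewards is disclosed. This is a hedged sketch of the idea in [18], not the authors' code.

```python
import numpy as np

def lin_ucb_piled_rewards(T, K, get_context_sets, get_rewards, d, alpha=1.0, lam=1.0):
    """LinUCB-PR-style loop: intermediate Gram updates, delayed reward updates.

    get_context_sets : callable (round) -> list of K lists of context vectors
    get_rewards      : callable (list of K chosen contexts) -> list of K rewards
    """
    A = lam * np.eye(d)
    b = np.zeros(d)
    for t in range(T):
        context_sets, chosen = get_context_sets(t), []
        for k in range(K):
            A_inv = np.linalg.inv(A)
            theta_hat = A_inv @ b
            ucbs = [x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x)
                    for x in context_sets[k]]
            x = context_sets[k][int(np.argmax(ucbs))]
            chosen.append(x)
            A += np.outer(x, x)          # covariance updated before the reward arrives
        rewards = get_rewards(chosen)    # all K rewards disclosed at once
        for x, y in zip(chosen, rewards):
            b += y * x                   # reward statistics updated after the batch
    return np.linalg.inv(A) @ b
```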

This update trick is shown to shrink the confidence area as the different bandits are looped over, that is, the exploration bonus does not increase as the Gram matrix is updated, for any state. LinUCB-PR is shown to enjoy a sublinear regret bound with high probability. This is not an improvement over LinUCB applied to the parallel setting, but it behaves empirically better for large numbers of bandits, possibly due to the shrunk confidence region. This lack of improvement may be explained by the overconfidence induced by the intermediate updates. As explained in [11], one should build an overconfidence measure and moderate exploratory redundancy before deriving any regret bound.

This approach is quite similar to the ones we will detail in the next sections. We will not limit our attention to the linear setting though—the logistic setting will be of particular interest—and we will develop several methods to tackle this problem in a more general fashion.

IV-C Gaussian processes

Lazy optimization with Gaussian processes is a particular application of Bayesian optimization where one aims at finding the maximum of a possibly non-convex objective function. The idea is to use a Gaussian process as a prior on the objective and then to sequentially refine the posterior as objective values are observed. A popular method for optimizing in the Gaussian process framework is GP-UCB, which is an extension of UCB to Gaussian processes. Indeed, at each iteration, one queries the point presenting the highest upper confidence bound based on the posterior of the objective. This is illustrated in Figure 4.

Fig. 4: Illustration of the GP-UCB algorithm. The posterior mean of the Gaussian process is the solid blue line and the associated variance is the filled area. The next evaluation candidate is the one maximizing the optimistic prediction of the objective.

A typical application lies in hyperparameter tuning for machine learning algorithms [12]. The objective is the negative empirical risk (or any fitting score), which is typically expensive to evaluate. The need to parallelize Gaussian process algorithms therefore arose naturally: while the objective is expensive to evaluate, one may have access to additional computational power in order to perform several evaluations simultaneously. However, since GP-UCB selects the optimal candidate in a deterministic fashion, the extension of GP-UCB to a parallel setting is not straightforward. This issue has received a lot of attention recently, and many types of approaches have been developed to tackle it. We detail three methods, as they seem to grasp the main ideas of parallelizing, but the reader may find many more approaches in, e.g., [28, 29, 30, 31, 32, 33].

First, a method based on pure exploration techniques, namely GP-UCB-PE, has been proposed in [10]. The idea is to select the GP-UCB candidate and define a confidence region around that candidate. Then, the subsequent queries will be selected in order to “maximize the exploration”, that is, each query will select the point in the confidence region with highest posterior variance. Since the posterior variance does not need the objective value to be updated, this ensures distinct candidates within the batch. With a batch of size $K$, the resulting expected regret is improved in terms of time and is similar for large $K$ in terms of number of queries. This method is somewhat similar to the pure exploration for batched bandits proposed in [27], in the sense that a part of the batch (here $K-1$ agents) is dedicated to exploring as much as possible in order to guarantee an improved behavior of the remainder of the batch (here the first agent).

Another approach for parallelizing Gaussian processes was introduced in [11] as GP-BUCB. As is the case for GP-UCB-PE, this method relies on the fact that the posterior variance only depends on the points selected, not on their associated values. The next elements are then chosen following a twofold criterion: the standard UCB criterion on the updated process and an overconfidence criterion. The purpose of the latter is to avoid the exploratory redundancy already mentioned in Section IV-B—see [11, Section 4.1] for additional details. Regret analysis of this algorithm yields bounds roughly similar to GP-UCB-PE in terms of number of queries.

Finally, the last method we focus on relies on Thompson sampling—see [34, 35]. At each iteration, the posterior Gaussian process is updated. Then, $K$ functions are sampled from this distribution and the candidates are the maximizers of these sampled functions. This class of methods will be of particular interest in the next section because it seems well-suited for global regret minimization: as opposed to GP-UCB-PE and GP-BUCB, it does not necessarily involve improving the performance of only one or a few agents in the batch.

V Parallel bandits with identical parameters

In this section, we develop the two main approaches for contextual bandits and see how they can be adapted to the parallel bandit setting.

V-A UCB contextual bandits

Upper Confidence Bound (UCB) approaches for contextual bandits rely on the following framework, first introduced for the linear case in [36]. At a given iteration $t$, one is able to build a confidence region $\mathcal{C}_t$ for the parameter $\theta^\star$, based on the previously selected contexts $x_{1:t-1}$ and associated rewards $y_{1:t-1}$. A set of contexts $\mathcal{X}_t$ is proposed and one then chooses the context leading to the best possible reward on $\mathcal{C}_t$:

$$x_t \in \operatorname*{arg\,max}_{x \in \mathcal{X}_t} \; \max_{\theta \in \mathcal{C}_t} f(x; \theta). \qquad (4)$$

Methods based on UCB present the advantage of being easy to implement and fast to compute. They obviously require the knowledge of a “good” region $\mathcal{C}_t$, in the sense that $\mathcal{C}_t$ should be as tight as possible with respect to the selected confidence level.

In the specific case of parallel bandits with identical parameters, the confidence region $\mathcal{C}_t$ is based on every bandit's historical contexts and rewards $(x_{s,k}, y_{s,k})$, for $s < t$ and $k \in [K]$. Then, $K$ contexts are selected, one from every context set $\mathcal{X}_{t,k}$, using the UCB rule. Although this method will be preferable to independent policies on each bandit, it may lack the expected exploration/exploitation balance if the context sets are too similar to enforce different choices amongst the bandits. Indeed, in the—extreme—setting where the context sets are identical, the selected contexts will be identical for every bandit: the policy will enforce either a full exploration or a full exploitation scheme, being no different from a setting with only one bandit. The potential regret improvement with respect to independent policies relies solely on the variety of the contexts. This method is formally stated in Figure 5.

Input: confidence parameter $\delta$, update rule for $\mathcal{C}_t$
for $t = 1, \dots, T$ do
     for $k = 1, \dots, K$ do
         Select $x_{t,k}$ according to (4)
     end for
     Observe the rewards $y_{t,1}, \dots, y_{t,K}$
     Update $\mathcal{C}_{t+1}$ with observed rewards and selected contexts
end for
Fig. 5: UCB algorithm for parallel bandits.
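Below is a short Python sketch of Figure 5 for a linear confidence region, under interface names of our own choosing. The same shared statistics drive all $K$ selections within a round, with no intermediate update, so identical context sets lead to identical choices, which is precisely the weakness discussed above.

```python
import numpy as np

def parallel_lin_ucb(T, K, get_context_sets, get_rewards, d, alpha=1.0, lam=1.0):
    """UCB for K parallel bandits sharing one parameter (Figure 5, linear case)."""
    A = lam * np.eye(d)
    b = np.zeros(d)
    for t in range(T):
        A_inv = np.linalg.inv(A)
        theta_hat = A_inv @ b
        context_sets, chosen = get_context_sets(t), []
        for k in range(K):
            # Same theta_hat and A_inv for every bandit of the batch
            ucbs = [x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x)
                    for x in context_sets[k]]
            chosen.append(context_sets[k][int(np.argmax(ucbs))])
        rewards = get_rewards(chosen)            # piled rewards observed together
        for x, y in zip(chosen, rewards):        # batch update of the statistics
            A += np.outer(x, x)
            b += y * x
    return np.linalg.inv(A) @ b
```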

V-B Bayesian contextual bandits

In this section, we focus on Bayesian approaches for contextual bandits, and more specifically Thompson-based approaches. In such a setting, one defines a prior probability $p_0$ on the parameter $\theta$ to estimate. At iteration $t$, the posterior probability is then of the form

$$p_t(\theta) \propto p_0(\theta) \prod_{s=1}^{t-1} \mathcal{L}(y_s \mid x_s, \theta),$$

where $\mathcal{L}$ is the likelihood function. For the sake of simplicity, we use the notation $p_t$ for the posterior. Using this relation, one may sample a parameter $\tilde\theta_t$ from the posterior probability, either using a closed form or an approximation—MCMC or Laplace approximation. Finally, the context selected from $\mathcal{X}_t$ is the context maximizing the expected reward parametrized by $\tilde\theta_t$:

$$x_t \in \operatorname*{arg\,max}_{x \in \mathcal{X}_t} f(x; \tilde\theta_t). \qquad (5)$$

Bayesian approaches may offer more flexibility when confidence bounds are not tight, but they are often much slower to compute, even with rough approximations.

In the specific case of parallel bandits with identical parameters, the posterior is built on every bandit's historical contexts and rewards. Then, there are two ways of adapting the regular Thompson approach. First, one may sample a single parameter and select the $K$ contexts according to the rule (5). Another method is to sample $K$ parameters independently from the same posterior and then to select every context according to its respective parameter. The former is similar to the adaptation of UCB methods to a parallel setting. The latter however benefits from the parallel setting, since it enforces a better balance between exploitation and exploration at a limited cost (sampling being usually cheap when using a Laplace approximation). In the setting where all context sets are identical, the randomness of the sampled parameters will preserve the exploitation/exploration balance, even if no reward is observed until every context is chosen. Both approaches are detailed in Figures 6 and 7.

Input: prior $p_0$
for $t = 1, \dots, T$ do
     Sample a parameter $\tilde\theta_t \sim p_t$
     for $k = 1, \dots, K$ do
         Select $x_{t,k}$ according to (5)
     end for
     Observe the rewards $y_{t,1}, \dots, y_{t,K}$
     Update the posterior $p_{t+1}$ with observed rewards and selected contexts
end for
Fig. 6: Naive Thompson-based algorithm for parallel bandits.
Input: prior $p_0$
for $t = 1, \dots, T$ do
     for $k = 1, \dots, K$ do
         Sample a parameter $\tilde\theta_{t,k} \sim p_t$
         Select $x_{t,k}$ according to (5)
     end for
     Observe the rewards $y_{t,1}, \dots, y_{t,K}$
     Update the posterior $p_{t+1}$ with observed rewards and selected contexts
end for
Fig. 7: Multisampling Thompson-based algorithm for parallel bandits.
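A minimal Python sketch of the multisampling variant of Figure 7 for the linear-Gaussian case, where the posterior has a closed form; the Gaussian prior, noise level and interface names are illustrative assumptions of ours, not the paper's.

```python
import numpy as np

def multisampling_thompson(T, K, get_context_sets, get_rewards, d,
                           sigma2=1.0, lam=1.0, rng=None):
    """Multisampling Thompson sampling for K parallel linear bandits (Figure 7)."""
    rng = rng or np.random.default_rng()
    A = lam * np.eye(d)                  # posterior precision, up to the noise variance
    b = np.zeros(d)
    for t in range(T):
        A_inv = np.linalg.inv(A)
        mean, cov = A_inv @ b, sigma2 * A_inv
        context_sets, chosen = get_context_sets(t), []
        for k in range(K):
            theta_k = rng.multivariate_normal(mean, cov)   # one sample per bandit
            scores = [x @ theta_k for x in context_sets[k]]
            chosen.append(context_sets[k][int(np.argmax(scores))])
        rewards = get_rewards(chosen)
        for x, y in zip(chosen, rewards):
            A += np.outer(x, x)
            b += y * x
    return np.linalg.inv(A) @ b
```

Sampling one parameter per bandit is what keeps the $K$ choices diverse even when the $K$ context sets are identical.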

Previous integrations of Thompson sampling in a parallel setting [34, 35], as well as its well-studied empirical behavior [19], suggest that it will behave better—regret-wise—than UCB in the contextual bandit setting. However, the theoretical analysis of such methods is far from trivial and is out of the scope of this paper: even MAB theoretical bounds were provided only a few years ago [37, 38, 39] and bounds for the contextual case are limited to linear payoffs [14]. Consequently, the next section is devoted to exhibiting the aforementioned differences between the two methods and then applies both algorithms to handover parameter tuning on a real base station dataset.

VI Experiments

We first illustrate on a toy example the advantages, mentioned at the end of the previous section, of the multisampling Thompson-based algorithm over UCB in the case of parallel bandits. We then present the results obtained when applying UCB and Thompson sampling to the online optimization of handover parameters in a wireless cellular network. As explained in the introduction, this problem can be naturally modeled as parallel contextual bandits. In all the experiments, the implemented linear UCB algorithm is the Optimism in the Face of Uncertainty Linear bandit (OFUL) algorithm, described in [36].

VI-A Toy example

To illustrate the benefits of the multisampling Thompson-based algorithm described in Figure 7 over the UCB algorithm described in Figure 5, we consider the toy example of a linear contextual bandit model. In this case, the expected reward is a linear function of the context and we assume that the stochastic reward is given by

$$y = x^\top \theta^\star + \varepsilon, \qquad (6)$$

where $\varepsilon$ is a centered Gaussian noise. We also consider here that the context decomposes as $x = (s, a)$, where $s$ corresponds to the state received at each iteration and $a$ to the action that has to be chosen. This is the case for the wireless cellular network application described in the introduction and Section VI-B. The true parameter $\theta^\star$ is sampled from a multivariate Gaussian distribution and then normalized to a unit norm vector.

At each iteration, a state is sampled from a multivariate Gaussian distribution and the algorithm must choose one of 5 possible actions. The associated reward is generated according to (6), where the variance of the noise term is set to a fixed value. For both strategies, multisampling Thompson and UCB, a penalization term is added to the linear regression.
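For the sake of illustration, the toy environment can be sketched in Python as below. The exact dimensions, variances and action encodings are not specified here, so the values and the one-hot action encoding are placeholders of our own; plugging these callables into the sketches given after Figures 5 and 7 reproduces the flavor of the comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, n_actions, K, T = 5, 5, 10, 200     # illustrative sizes, not the paper's
sigma_noise, sigma_state = 0.1, 1.0          # illustrative variances

# True parameter: Gaussian sample normalized to unit norm, as in the text
d = d_state + n_actions
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

# One-hot encoding of the 5 actions (assumption)
actions = [np.eye(n_actions)[a] for a in range(n_actions)]

def get_context_sets(t):
    """Each of the K bandits receives a Gaussian state and the same 5 candidate actions."""
    states = rng.normal(scale=sigma_state, size=(K, d_state))
    return [[np.concatenate([s, a]) for a in actions] for s in states]

def get_rewards(chosen_contexts):
    """Noisy linear rewards as in (6)."""
    return [x @ theta_star + rng.normal(scale=sigma_noise) for x in chosen_contexts]
```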

VI-A1 Influence of the variance of the states

We run $K$ bandits in parallel and compare the regrets obtained with the multisampling Thompson-based algorithm and the linear UCB algorithm for different variances of the state distribution. The regrets are computed at a fixed time horizon for 100 random repetitions of the algorithm, the randomness coming from the strategies themselves and the generation of the states at each iteration. The results are shown in Figure 8.

Fig. 8: Cumulative regrets obtained at a fixed time horizon when running $K$ bandits in parallel, for different values of the variance of the states.

One may observe that the regret of UCB decreases as the variance increases. This is explained by the fact that when the variance of the contexts is small, UCB will choose the same action for each of the bandits that are run in parallel. This either leads to a full exploration or a full exploitation within each batch of size $K$. On the contrary, the multisampling Thompson-based algorithm is more robust to the change in variance and performs better than UCB. As explained in the previous section, the multisampling Thompson-based algorithm allows for a better balance between exploration and exploitation since $K$ parameters are sampled from the posterior.

VI-A2 Influence of the number of bandits

We now study the impact of the number of bandits $K$ run in parallel. Here, the variance of the states is set to a fixed value. As above, the regrets are computed at a fixed time horizon for 100 random repetitions of the algorithm, the randomness coming from the strategies themselves and the generation of the contexts at each step. The results are shown in Figure 9.

Fig. 9: Cumulative regrets obtained at a fixed time horizon and fixed variance of the states, for different values of the number of bandits $K$ run in parallel.

As the number of bandits increases, the overall regret of UCB degrades whereas the performance of the multisampling Thompson-based algorithm remains stable. This is due to the fact that for large values of the number of bandits $K$, the multisampling Thompson-based algorithm preserves an exploration/exploitation balance, contrary to UCB.

As explained in section IV-A, the parallel bandits problem can be seen as bandits with delayed rewards. For the non-contextual case, it was also found in [19] that Thompson sampling was more robust to the delay than UCB.

VI-B Online optimization of base station parameters

The main motivation of the multisampling Thompson strategy developed in this paper is to tune base station parameters of a cellular wireless network so as to provide a good connectivity for all the users. Here we focus on parameters related to handovers. A handover occurs when the connection between a user and a cell is transferred to a neighboring cell in order to ensure the continuity of the radio network coverage and prevent interruptions of communication. The reader can refer to Chapter 2 of [40] for an account on handovers. We here only present the different steps of a handover procedure.

VI-B1 Handovers

A handover can occur between two cells using the same frequency (intra-frequency) or between two cells using different frequencies (inter-frequency). For both types of handovers, the user triggers the handover when it receives a better signal from a neighboring cell than from the serving cell. To trigger such an event, the user needs to measure the signal received from neighboring cells. This is automatic for an intra-frequency handover. However inter-frequency measurements are only triggered when the signal received by the serving cell is lower than a pre-specified threshold. Such an event is called event A2 and is the event of interest in this paper. If the threshold is too low, this results in a late handover and a bad data rate or throughput between the cell and the user. On the contrary, if the threshold is too high, unnecessary inter-frequency measurements are triggered and this also results in a bad throughput (Figure 10). Indeed, when the user is performing inter-frequency measurements on a neighboring cell, it can no longer exchange information with the serving cell. Tuning the parameter of the base station associated to this threshold can therefore improve the throughput between a cell and a user and this should be done for each cell of a wireless network.

Fig. 10: Impact of event A2 threshold.

Optimization of handover parameters has already been studied in the wireless literature (see e.g. [4, 3, 41, 42, 43]). However, most of this literature focuses on the optimization of handover performance metrics such as early handovers, late handovers or ping-pong handovers, whereas the focus here is on the throughput.

VI-B2 Data

Data coming from 105 cells have been recorded every hour during 5 days. For each hour and each cell, the value of the threshold of event A2 and five traffic data features are available. The traffic features are: downlink average number of active users, average number of users, channel quality index of cell edge users, and two features related to the traffic of small data packets. The goal is to recommend the values of the threshold of event A2 so as to achieve the best possible throughput over the next hour. As we are only concerned about handovers, we aim at maximizing the throughput of cell edge users, i.e., users that are located at the edge of a cell and therefore have a low throughput. We thus define the quantity to optimize as the proportion of users with a throughput lower than 5 MB/s (megabytes per second). Obviously, the lower this proportion, the better the parameters. With the terminology used in this paper and in the contextual bandit literature, the threshold corresponds to the action, the quantity that we want to optimize to the reward, and the traffic data features to the state.

We emphasize that at each hour we should recommend a threshold value for each one of the 105 cells. Assuming that the reward of each cell is parametrized by the same parameter $\theta^\star$, this is therefore equivalent to running $K = 105$ bandits in parallel, as described in Section V.

VI-B3 Parallel logistic contextual bandits

Since the reward corresponds to a proportion of users, it appears natural to use a logistic regression model. We therefore recall here the logistic contextual bandit setting. For any context $x$, the expected reward is given by $f(x; \theta^\star) = \sigma(x^\top \theta^\star)$, where $\sigma(z) = 1/(1 + e^{-z})$ is the sigmoid function, and the reward follows a Bernoulli distribution:

$$y \mid x \sim \mathrm{Bernoulli}\big(\sigma(x^\top \theta^\star)\big).$$

Given past observations $(x_s, y_s)_{s < t}$, the penalized negative log-likelihood estimator is defined by

$$\hat\theta_t \in \operatorname*{arg\,min}_{\theta} \; \ell_t(\theta), \quad \text{with } \ell_t(\theta) = -\sum_{s < t} \Big[ y_s \log \sigma(x_s^\top \theta) + (1 - y_s) \log\big(1 - \sigma(x_s^\top \theta)\big) \Big] + \mathrm{pen}(\theta).$$

From a Bayesian standpoint, if the penalty is the squared Euclidean norm (ridge penalty), the corresponding prior is Gaussian. If the penalty combines the squared Euclidean norm and the $\ell_1$-norm (elastic net penalty), the corresponding prior is a mixture of a Gaussian distribution and a Laplacian distribution.

Compared to the linear contextual bandit framework, the posterior is here intractable. A common way to draw samples from this posterior is to use the Laplace approximation (see Section 4.4 in [44]): the posterior is approximated by a Gaussian distribution $\mathcal{N}(\hat\theta_t, \Sigma_t)$, with parameters:

$$\hat\theta_t \in \operatorname*{arg\,min}_{\theta} \ell_t(\theta), \qquad \Sigma_t = \big(\nabla^2 \ell_t(\hat\theta_t)\big)^{-1}. \qquad (7)$$

In practice, $\Sigma_t$ can be expensive to compute, so it is common to only use the diagonal coefficients of the Hessian, and this is what we do in the experiment.
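A hedged Python sketch of such a diagonal Laplace approximation with a ridge penalty is given below; scipy's L-BFGS is one possible way to obtain the MAP estimate, and the helper names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_posterior(X, y, lam=1.0):
    """Diagonal Laplace approximation of a ridge-penalized logistic posterior.

    Returns the MAP estimate and the diagonal of the Hessian of the
    negative log-posterior at the MAP (i.e., the inverse diagonal covariance).
    """
    d = X.shape[1]

    def neg_log_posterior(theta):
        z = X @ theta
        # log(1 + exp(z)) - y*z is the per-sample negative log-likelihood
        return np.sum(np.logaddexp(0.0, z) - y * z) + 0.5 * lam * theta @ theta

    theta_map = minimize(neg_log_posterior, np.zeros(d), method="L-BFGS-B").x
    p = sigmoid(X @ theta_map)
    hess_diag = (X ** 2).T @ (p * (1 - p)) + lam     # diagonal of the Hessian
    return theta_map, hess_diag

def sample_theta(theta_map, hess_diag, rng=None):
    """Draw one Thompson sample from the diagonal Gaussian approximation."""
    rng = rng or np.random.default_rng()
    return theta_map + rng.normal(size=theta_map.shape) / np.sqrt(hess_diag)
```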

The assumption that the cells share the same parameter $\theta^\star$ may be a bit strong. We show in the next section how to deal with different parameters $\theta^\star_c$, $c \in [K]$.

VI-B4 Different parameters

The parameters of each cell are not necessarily identical in practice. However, the cells should benefit from each other's observations. We thus consider that for each cell $c$, the parameter $\theta^\star_c$ can be decomposed into the sum of a global parameter $\theta^\star$, shared by all the cells, and a local parameter $\tilde\theta^\star_c$: $\theta^\star_c = \theta^\star + \tilde\theta^\star_c$. Let us denote $\Theta = (\theta, \tilde\theta_1, \dots, \tilde\theta_K)$. The new penalized negative log-likelihood minimizer is

$$\hat\Theta_t \in \operatorname*{arg\,min}_{\Theta} \; -\sum_{c=1}^{K} \sum_{s < t} \Big[ y_{s,c} \log \sigma\big(x_{s,c}^\top (\theta + \tilde\theta_c)\big) + (1 - y_{s,c}) \log\Big(1 - \sigma\big(x_{s,c}^\top (\theta + \tilde\theta_c)\big)\Big) \Big] + \mathrm{pen}(\Theta),$$

where again $\mathrm{pen}$ may be a ridge penalty or an elastic net penalty applied to each component of $\Theta$. The diagonal covariance matrix of the Laplace approximation is then obtained by substituting the elements of $\hat\Theta_t$ in (7).
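One simple way to implement this shared-plus-local decomposition is to augment the features so that a single penalized logistic regression estimates the stacked parameter $(\theta, \tilde\theta_1, \dots, \tilde\theta_K)$. The sketch below shows such a feature map, with names of our own choosing; feeding the augmented features to the Laplace-approximation routine above, with a penalty on each block, yields a posterior of the same flavor as in this section.

```python
import numpy as np

def augment(x, cell, K):
    """Map a context x of cell `cell` to the stacked feature vector
    [x, 0, ..., x, ..., 0], so that its inner product with
    (theta, theta_1, ..., theta_K) equals x @ (theta + theta_cell)."""
    d = len(x)
    z = np.zeros(d * (K + 1))
    z[:d] = x                               # global block, shared by all cells
    z[d * (cell + 1): d * (cell + 2)] = x   # local block of this cell only
    return z
```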

VI-B5 Results for wireless handover optimization

The Thompson-based algorithm is applied with the Bayesian logistic regression presented above, and the OFUL algorithm is applied on the logit transform of the rewards. As the true reward function is unknown, we first fit a logistic regression model on a training data set to learn parameters which we then use as surrogates for the true parameters in order to evaluate the different strategies: the multisampling Thompson-based algorithm, OFUL and the strategy used to collect the data. The cumulative expected regret for each strategy is shown in Figure 11, where one can see that Thompson sampling performs better than both OFUL and the strategy used to collect the data.

Fig. 11: Multisampling Thompson-based algorithm versus OFUL for the online optimization of handover parameters. The current strategy denotes the strategy used to collect the data.

VII Conclusion

In this paper, we presented two approaches for handling multi-agent scenarios in the contextual bandit framework. The first one, based on UCB, is a naive extension of the single-agent case; the second one relies on Thompson sampling in order to preserve the exploration-exploitation balance in the bandit batch. Our synthetic experiments highlighted the advantages of Thompson sampling in the parallel setting, as suggested by previous theoretical and empirical studies [19]. Furthermore, its application to wireless handover parameter tuning exhibited a clear superiority of Thompson sampling over both manual tuning and the UCB-like approach.

Extending this framework to a setting where contextual bandits are not identical but rather regrouped in clusters (as in [23, 24]) may be a promising way of generalizing this approach to larger networks. Also, deriving theoretical bounds for Thompson sampling in the parallel setting could lead to additional insights on how to improve existing methods.

References

  • [1] I. Siomina, P. Varbrand, and D. Yuan, “Automated optimization of service coverage and base station antenna configuration in umts networks,” IEEE Wireless Communications, vol. 13, no. 6, pp. 16–25, Dec 2006.
  • [2] A. Awada, B. Wegmann, I. Viering, and A. Klein, “Optimizing the radio network parameters of the long term evolution system using taguchi’s method,” IEEE Transactions on Vehicular Technology, vol. 60, no. 8, pp. 3825–3839, 2011.
  • [3] V. Capdevielle, A. Feki, and A. Fakhreddine, “Self-optimization of handover parameters in lte networks,” in 2013 11th International Symposium and Workshops on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), 2013, pp. 133–139.
  • [4] I. N. M. Isa, M. Dani Baba, A. L. Yusof, and R. A. Rahman, “Handover parameter optimization for self-organizing lte networks,” in 2015 IEEE Symposium on Computer Applications Industrial Electronics (ISCAIE), 2015, pp. 1–6.
  • [5] T. Lu, D. Pál, and M. Pál, “Contextual multi-armed bandits,” in AISTATS, 2010, pp. 485–492.
  • [6] T. L. Lai and H. Robbins, “Asymptotically efficient adaptive allocation rules,” Advances in applied mathematics, vol. 6, no. 1, pp. 4–22, 1985.
  • [7] P. Auer, N. Cesa-Bianchi, and P. Fischer, “Finite-time analysis of the multiarmed bandit problem,” Machine learning, vol. 47, no. 2-3, pp. 235–256, 2002.
  • [8] S. Bubeck, G. Stoltz, C. Szepesvári, and R. Munos, “Online optimization in x-armed bandits,” in NIPS, 2009, pp. 201–208.
  • [9] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger, “Gaussian process optimization in the bandit setting: No regret and experimental design,” in ICML, 2010, pp. 1015–1022.
  • [10] E. Contal, D. Buffoni, A. Robicquet, and N. Vayatis, “Parallel gaussian process optimization with upper confidence bound and pure exploration,” in ECML-PKDD, 2013, pp. 225–240.
  • [11] T. Desautels, A. Krause, and J. W. Burdick, “Parallelizing exploration-exploitation tradeoffs in gaussian process bandit optimization,” JMLR, vol. 15, no. 1, pp. 3873–3923, 2014.
  • [12] F. Pedregosa, “Hyperparameter optimization with approximate gradient,” in ICML, 2016, pp. 737–746.
  • [13] W. Chu, L. Li, L. Reyzin, and R. Schapire, “Contextual bandits with linear payoff functions,” in AISTATS, 2011, pp. 208–214.
  • [14] S. Agrawal and N. Goyal, “Thompson sampling for contextual bandits with linear payoffs,” in ICML, 2013, pp. 127–135.
  • [15] P. Joulani, A. Gyorgy, and C. Szepesvári, “Online learning under delayed feedback,” in ICML, 2013, pp. 1453–1461.
  • [16] J.-Y. Audibert and S. Bubeck, “Best arm identification in multi-armed bandits,” in COLT, 2010.
  • [17] L. Xu, J. Honda, and M. Sugiyama, “Fully adaptive algorithm for pure exploration in linear bandits,” in AISTATS, 2018.
  • [18] K.-H. Huang and H.-T. Lin, “Linear upper confidence bound algorithm for contextual bandit problem with piled rewards,” in PAKDD, 2016, pp. 143–155.
  • [19] O. Chapelle and L. Li, “An empirical evaluation of thompson sampling,” in NIPS, 2011, pp. 2249–2257.
  • [20] O. Chapelle, E. Manavoglu, and R. Rosales, “Simple and scalable response prediction for display advertising,” ACM TIST, vol. 5, no. 4, p. 61, 2015.
  • [21] A. Agarwal, D. Hsu, S. Kale, J. Langford, L. Li, and R. Schapire, “Taming the monster: A fast and simple algorithm for contextual bandits,” in ICML, 2014, pp. 1638–1646.
  • [22] M. Valko, N. Korda, R. Munos, I. Flaounas, and N. Cristianini, “Finite-time analysis of kernelised contextual bandits,” in UAI, 2013, pp. 654–663.
  • [23] O.-A. Maillard and S. Mannor, “Latent bandits,” in ICML, 2014, pp. I–136–I–144.
  • [24] A. Gopalan, O.-A. Maillard, and M. Zaki, “Low-rank bandits with latent mixtures,” CoRR, vol. abs/1609.01508, 2016.
  • [25] S. Guha, K. Munagala, and M. Pal, “Multiarmed bandit problems with delayed feedback,” arXiv preprint arXiv:1011.1161, 2010.
  • [26] C. Vernade, O. Cappé, and V. Perchet, “Stochastic bandit models for delayed conversions,” arXiv preprint arXiv:1706.09186, 2017.
  • [27] V. Perchet, P. Rigollet, S. Chassang, E. Snowberg et al., “Batched bandit problems,” The Annals of Statistics, vol. 44, no. 2, pp. 660–681, 2016.
  • [28] E. A. Daxberger and B. K. H. Low, “Distributed batch Gaussian process optimization,” in ICML, vol. 70, 2017, pp. 951–960.
  • [29] Z. Wang, C. Li, S. Jegelka, and P. Kohli, “Batched high-dimensional bayesian optimization via structural kernel learning,” in ICML, 2017.
  • [30] J. González, Z. Dai, P. Hennig, and N. Lawrence, “Batch bayesian optimization via local penalization,” in AISTATS, 2016, pp. 648–657.
  • [31] A. Shah and Z. Ghahramani, “Parallel predictive entropy search for batch global optimization of expensive objective functions,” in NIPS, 2015, pp. 3330–3338.
  • [32] R. T. Haftka, D. Villanueva, and A. Chaudhuri, “Parallel surrogate-assisted global optimization with expensive functions – a survey,” Structural and Multidisciplinary Optimization, vol. 54, no. 1, pp. 3–13, Jul 2016.
  • [33] J. Wu and P. Frazier, “The parallel knowledge gradient method for batch bayesian optimization,” in NIPS, 2016, pp. 3126–3134.
  • [34] J. M. Hernández-Lobato, J. Requeima, E. O. Pyzer-Knapp, and A. Aspuru-Guzik, “Parallel and distributed thompson sampling for large-scale accelerated exploration of chemical space,” 2017.
  • [35] K. Kandasamy, A. Krishnamurthy, J. Schneider, and B. Poczos, “Parallelised bayesian optimisation via thompson sampling,” 2018.
  • [36] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári, “Improved algorithms for linear stochastic bandits,” in NIPS, 2011, pp. 2312–2320.
  • [37] S. Agrawal and N. Goyal, “Analysis of thompson sampling for the multi-armed bandit problem,” in COLT, 2012, pp. 39–1.
  • [38] ——, “Further optimal regret bounds for thompson sampling,” in AISTATS, 2013, pp. 99–107.
  • [39] E. Kaufmann, N. Korda, and R. Munos, “Thompson sampling: An asymptotically optimal finite-time analysis,” in ALT, 2012, pp. 199–213.
  • [40] A. Karandikar, N. Akhtar, and M. Mehta, Mobility Management in LTE Heterogeneous Networks, 1st ed.   Springer Publishing Company, Incorporated, 2017.
  • [41] P. Muñoz, R. Barco, and I. de la Bandera, “On the potential of handover parameter optimization for self-organizing networks,” IEEE Transactions on Vehicular Technology, vol. 62, no. 5, pp. 1895–1905, 2013.
  • [42] S. S. Mwanje and A. Mitschele-Thiel, “Distributed cooperative q-learning for mobility-sensitive handover optimization in lte son,” in 2014 IEEE Symposium on Computers and Communications (ISCC), vol. Workshops, 2014, pp. 1–6.
  • [43] N. Sinclair, D. Harle, I. A. Glover, J. Irvine, and R. C. Atkinson, “An advanced som algorithm applied to handover management within lte,” IEEE Transactions on Vehicular Technology, vol. 62, no. 5, pp. 1883–1894, 2013.
  • [44] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics).   Berlin, Heidelberg: Springer-Verlag, 2006.