Nonstochastic Multi-Armed Bandits with Graph-Structured Feedback

09/30/2014 · Noga Alon, et al. · Technion · Weizmann Institute of Science · Tel Aviv University · Università degli Studi di Milano

We present and study a partial-information model of online learning, where a decision maker repeatedly chooses from a finite set of actions, and observes some subset of the associated losses. This naturally models several situations where the losses of different actions are related, and knowing the loss of one action provides information on the loss of other actions. Moreover, it generalizes and interpolates between the well studied full-information setting (where all losses are revealed) and the bandit setting (where only the loss of the action chosen by the player is revealed). We provide several algorithms addressing different variants of our setting, and provide tight regret bounds depending on combinatorial properties of the information feedback structure.


1 Introduction

Prediction with expert advice —see, e.g., [8, 9, 15, 19, 23]— is a general abstract framework for studying sequential decision problems. For example, consider a weather forecasting problem, where each day we receive predictions from various experts, and we need to devise our own forecast. At the end of the day, we observe how well each expert did, and we can use this information to improve our forecasting in the future. Our goal is that, over time, our performance converges to that of the best expert in hindsight. More formally, such problems are often modeled as a repeated game between a player and an adversary, where in each round the adversary privately assigns a loss value to each action in a fixed set (in the example above, the discrepancy in the forecast if we follow a given expert's advice). Then the player chooses an action (possibly using randomization) and incurs the corresponding loss. The goal of the player is to control regret, which is defined as the cumulative excess loss incurred by the player as compared to the best fixed action over a sequence of rounds.
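In symbols, and anticipating the notation used later in the paper ($K$ actions, $T$ rounds, losses $\ell_{i,t} \in [0,1]$, and played actions $I_1,\dots,I_T$), this is the standard regret quantity; the display below is a conventional formalization rather than a verbatim quote of the paper's definition:

\[
R_T \;=\; \sum_{t=1}^{T} \ell_{I_t,t} \;-\; \min_{k=1,\dots,K}\,\sum_{t=1}^{T} \ell_{k,t}\,,
\]

and the player's aim is to guarantee that (the expectation of) $R_T$ grows sublinearly in $T$.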

In some situations, however, the player only gets partial feedback on the loss associated with each action. For example, consider a web advertising problem, where every day one can choose an ad to display to a user, out of a fixed set of ads. As in the forecasting problem, we sequentially choose actions from a given set, and may wish to control our regret with respect to the best fixed ad in hindsight. However, while we can observe whether a displayed ad was clicked on, we do not know what would have happened if we chose a different ad to display. In our abstract framework, this corresponds to the player observing the loss of the action picked, but not the losses of other actions. This well-known setting is referred to as the (non-stochastic) multi-armed bandit problem, which in this paper we denote as the bandit setting. In contrast, we refer to the previous setting, where the player observes the losses of all actions, as the expert setting. In this work, our main goal is to bridge between these two feedback settings, and create a spectrum of models in between.

Before continuing, let us first quantify the performance attainable in the expert and the bandit settings. Letting $K$ be the number of available actions and $T$ be the number of played rounds, the best possible regret for the expert setting is of order $\sqrt{T\ln K}$. This optimal rate is achieved by the Hedge algorithm [15] or the Follow the Perturbed Leader algorithm [17]. In the bandit setting, the optimal regret is of order $\sqrt{TK}$, achieved by the INF algorithm [3]. A bandit variant of Hedge, called Exp3 [4], achieves a slightly worse bound of order $\sqrt{TK\ln K}$. Thus, switching from the full-information expert setting to the partial-information bandit setting increases the attainable regret by a multiplicative factor of $\sqrt{K}$, up to extra logarithmic factors. This exponential difference in terms of the dependence on the number of actions can be crucial in problems with large action sets. The intuition for this difference in performance has long been that in the bandit setting we only get $1/K$ of the information obtained in the expert setting (as we observe just a single loss, rather than all $K$, at each round), hence the additional $K$-factor under the square root in the bound.
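As a compact summary (with $\alpha(G)$ denoting the independence number of the feedback graph, introduced below), the known minimax rates and the interpolation studied in this paper can be displayed as

\[
R_T^{\text{expert}} = \Theta\bigl(\sqrt{T\ln K}\bigr), \qquad
R_T^{\text{bandit}} = \Theta\bigl(\sqrt{TK}\bigr), \qquad
R_T^{\text{graph feedback}} = \widetilde{\Theta}\bigl(\sqrt{\alpha(G)\,T}\bigr),
\]

where the last rate is the characterization established in this paper for a fixed feedback graph, up to logarithmic factors.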

While the bandit setting has received much interest, it can be criticized for not capturing additional side-information we often have on the losses of the different actions. As a motivating example, consider the problem of web advertising mentioned earlier. In the standard multi-armed bandit setting, we assume that we have no information whatsoever on whether undisplayed ads would have been clicked on. However, in many relevant cases, the semantic relationship among actions (ads) implies that we do indeed have some side-information. For instance, if two ads $i$ and $j$ are for similar vacation packages in Hawaii, and ad $i$ was displayed and clicked on by some user, it is likely that the other ad $j$ would have been clicked on as well. In contrast, if ad $i$ is for high-end running shoes and ad $j$ is for wheelchair accessories, then a user who clicked on one ad is unlikely to click on the other. This sort of side-information is not captured by the standard bandit setting. A similar type of side-information arises in product recommendation systems hosted on online social networks, in which users can befriend each other. In this case, it has been observed that social relationships reveal similarities in tastes and interests [21]. Hence, a product liked by some user may also be liked by the user's friends. A further example, not in the marketing domain, is route selection: we are given a graph of possible routes connecting cities; when we select a route connecting two cities, we observe the cost (say, driving time or fuel consumption) of the "edges" along that route and, in addition, we have complete information on sub-routes including any subset of those edges. (Though this last example may also be viewed as an instance of combinatorial bandits [10], the model we propose is more general: for example, it does not assume linear losses, which could arise in the routing example from the partial ordering of sub-routes.)

In this paper, we present and study a setting which captures these types of side-information, and in fact interpolates between the bandit setting and the expert setting. This is done by defining a feedback system, under which choosing a given action also reveals the losses of some subset of the other actions. This feedback system can be viewed as a directed and time-changing graph over actions: an arc (directed edge) from action $i$ to action $j$ implies that when playing action $i$ at round $t$ we also obtain information about the loss of action $j$ at round $t$. Thus, the expert setting is obtained by choosing a complete graph over actions (playing any action reveals all losses), and the bandit setting is obtained by choosing an empty edge set (playing an action only reveals the loss of that action). The attainable regret turns out to depend on non-trivial combinatorial properties of this graph. To describe our results, we need to make some distinctions in the setting that we consider.

Directed vs. symmetric setting.

In some situations, the side-information between two actions is symmetric —for example, if we know that both actions will have a similar loss. In that case, we can model our feedback system as an undirected graph. In contrast, there are situations where the side-information is not symmetric. For example, consider the side-information gained from asymmetric social links, such as followers of celebrities. In such cases, followers might be more likely to shape their preferences after the person they follow than the other way around. Hence, a product liked by a celebrity is probably also liked by his/her followers, whereas a preference expressed by a follower is more often specific to that person. Another example, in the context of ad placement, is that a person buying a video game console might also buy a high-def cable to connect it to the TV set, whereas an interest in high-def cables need not indicate an interest in game consoles. In such situations, modeling the feedback system via a directed graph is more suitable. Note that the symmetric setting is a special case of the directed setting, and therefore handling the symmetric case is easier than handling the directed case.

Informed vs. uninformed setting.

In some cases, the feedback system is known to the player before each round, and can be utilized for choosing actions. For example, we may know beforehand which pairs of ads are related, or we may know the users who are friends of another user. We denote this setting as the informed setting. In contrast, there might be cases where the player does not have full knowledge of the feedback system before choosing an action, and we denote this harder setting as the uninformed setting. For example, consider a firm recommending products to users of an online social network. If the network is owned by a third party, and therefore not fully visible, the system may still be able to run its recommendation policy by only accessing small portions of the social graph around each chosen action (i.e., around each user to whom a recommendation is sent).

Generally speaking, our contribution lies both in characterizing the regret bounds that can be achieved in the above settings as a function of combinatorial properties of the feedback systems, and in providing efficient sequential decision algorithms working in those settings. More specifically, our contributions can be summarized as follows (see Section 2 for a brief review of the relevant combinatorial properties of graphs).

Uninformed setting.

We present an algorithm (Exp3-SET) that achieves $\widetilde{O}\bigl(\sqrt{\mathrm{mas}(G)\,T}\bigr)$ regret in expectation, where $\mathrm{mas}(G)$ is the size of a maximum acyclic subgraph of the feedback graph $G$. In the symmetric setting, $\mathrm{mas}(G) = \alpha(G)$, where $\alpha(G)$ is the independence number of $G$, and we prove that the resulting $\widetilde{O}\bigl(\sqrt{\alpha(G)\,T}\bigr)$ regret bound is optimal up to logarithmic factors when $G_t = G$ is fixed for all rounds $t$. Moreover, we show that Exp3-SET attains $\widetilde{O}\bigl(\sqrt{T/r}\bigr)$ regret when the feedback graphs are random graphs generated from a standard Erdős–Rényi model with arc probability $r$.

Informed setting.

We present an algorithm (Exp3-DOM) that achieves an expected regret of $\widetilde{O}\bigl(\sqrt{\alpha(G)\,T}\bigr)$, for both the symmetric and the directed cases. Since our $\Omega\bigl(\sqrt{\alpha(G)\,T}\bigr)$ lower bound also applies to the informed setting, this characterizes the attainable regret in the informed setting, up to logarithmic factors. Moreover, we present another algorithm (ELP.P) that achieves $\widetilde{O}\bigl(\sqrt{\mathrm{mas}(G)\,T}\bigr)$ regret with probability at least $1-\delta$ over the algorithm's internal randomness. Such a high-probability guarantee is stronger than the guarantee for Exp3-DOM, which holds just in expectation, and it turns out to be of the same order in the symmetric case. However, in the directed case the regret bound may be weaker, since $\mathrm{mas}(G)$ may be larger than $\alpha(G)$. Moreover, ELP.P requires us to solve a linear program at each round, whereas Exp3-DOM only requires finding an approximately minimal dominating set, which can be done by a standard greedy set cover algorithm.

Our results interpolate between the bandit and expert settings: when $G_t$ is the complete graph for all $t$ (which means that the player always gets to see all losses, as in the expert setting), then $\alpha(G_t) = 1$, and we recover the standard guarantees for the expert setting, up to logarithmic factors. In contrast, when $G_t$ is the edgeless graph for all $t$ (which means that the player only observes the loss of the action played, as in the bandit setting), then $\alpha(G_t) = K$, and we recover the standard guarantees for the bandit setting, up to logarithmic factors. In between are regret bounds scaling like $\sqrt{\alpha T}$, where $\alpha$ lies between $1$ and $K$, depending on the graph structure (again, up to log factors).

Our results are based on the algorithmic framework for handling the standard bandit setting introduced in [4]. In this framework, the full-information Hedge algorithm is combined with unbiased estimates of the full loss vectors in each round. The key challenge is designing an appropriate randomized scheme for choosing actions, which correctly balances exploration and exploitation or, more specifically, ensures small regret while simultaneously controlling the variance of the loss estimates. In our setting, this variance is subtly intertwined with the structure of the feedback system. For example, a key quantity emerging in the analysis of Exp3-DOM can be upper bounded in terms of the independence numbers of the graphs. This bound (Lemma 16 in the appendix) is based on a combinatorial construction which may be of independent interest.

For the uninformed setting, our work was recently improved by [18], whose main contribution is an algorithm attaining $\widetilde{O}\bigl(\sqrt{\alpha(G)\,T}\bigr)$ expected regret in the uninformed and directed setting, using a novel implicit exploration idea. Up to log factors, this matches the performance of our Exp3-DOM and ELP.P algorithms without requiring prior knowledge of the feedback system. On the other hand, their bound holds only in expectation rather than with high probability.

Paper Organization:

In the next section, we formally define our learning protocols, introduce our main notation, and recall the combinatorial properties of graphs that we require. In Section 3, we tackle the uninformed setting by introducing Exp3-SET, with upper and lower bounds on regret based on both the size of the maximum acyclic subgraph (general directed case) and the independence number (symmetric case). In Section 4, we handle the informed setting through two algorithms: Exp3-DOM (Section 4.1), for which we prove regret bounds in expectation, and ELP.P (Section 4.2), whose bounds hold in the more demanding high-probability regime. We conclude the main text with Section 5, where we discuss open questions and possible directions for future research. All technical proofs are provided in the appendices, organized according to the sections of the main text in which the corresponding claims occur.

2 Learning protocol, notation, and preliminaries

As stated in the introduction, we consider adversarial decision problems with a finite action set $V = \{1,\dots,K\}$. At each time $t = 1, 2, \dots$, a player (the "learning algorithm") picks some action $I_t \in V$ and incurs a bounded loss $\ell_{I_t,t} \in [0,1]$. Unlike the adversarial bandit problem [4, 9], where only the played action reveals its loss $\ell_{I_t,t}$, here we assume all the losses in a subset $S_{I_t,t} \subseteq V$ of actions are revealed after $I_t$ is played. More formally, the player observes the pairs $(i, \ell_{i,t})$ for each $i \in S_{I_t,t}$. We also assume $i \in S_{i,t}$ for any $i$ and $t$, that is, any action reveals its own loss when played. Note that the bandit setting ($S_{i,t} = \{i\}$) and the expert setting ($S_{i,t} = V$) are both special cases of this framework. We call $S_{i,t}$ the feedback set of action $i$ at time $t$, and write $i \to j$ when at time $t$ playing action $i$ also reveals the loss of action $j$. (We sometimes leave the time index implicit when it plays no role in the surrounding context.) With this notation, $S_{i,t} = \{j \in V : i \to j\}$. The family of feedback sets $\{S_{i,t}\}_{i \in V}$ is collectively called the feedback system at time $t$.
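As an illustration, here is a minimal Python sketch of the interaction protocol just described (uninformed variant: the time-$t$ feedback system is revealed only after the prediction). The environment interface, names, and the uniformly random baseline player are illustrative placeholders, not part of the paper.

import random

def run_protocol(K, T, environment, player):
    """Feedback-graph protocol: the player picks an action, then observes the losses
    of every action in the played action's feedback set."""
    total_loss = 0.0
    for t in range(T):
        action = player.act()                       # uninformed: no access to the time-t feedback system yet
        losses, feedback_sets = environment(t)      # losses[i] in [0,1]; feedback_sets[i] always contains i
        observed = {j: losses[j] for j in feedback_sets[action]}
        player.update(action, observed, feedback_sets)  # feedback system revealed after the prediction
        total_loss += losses[action]
    return total_loss

class UniformPlayer:
    """Baseline player that ignores all feedback; Exp3-SET or Exp3-DOM would replace this."""
    def __init__(self, K):
        self.K = K
    def act(self):
        return random.randrange(self.K)
    def update(self, action, observed, feedback_sets):
        pass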

The adversaries we consider are nonoblivious. Namely, each loss $\ell_{i,t}$ and feedback set $S_{i,t}$ at time $t$ can be arbitrary functions of the player's past actions $I_1,\dots,I_{t-1}$ (note, though, that the regret is measured with respect to a fixed action assuming the adversary would have chosen the same losses, so our results do not extend to truly adaptive adversaries in the sense of [13]). The performance of a player is measured through the expected regret

\[
\max_{k \in V}\; \mathbb{E}\bigl[L_T - L_{k,T}\bigr]\,,
\]

where $L_T = \ell_{I_1,1} + \cdots + \ell_{I_T,T}$ and $L_{k,T} = \ell_{k,1} + \cdots + \ell_{k,T}$ are the cumulative losses of the player and of action $k$, respectively. (Although we defined the problem in terms of losses, our analysis can be applied to the case when actions return rewards $g_{i,t} \in [0,1]$ via the transformation $\ell_{i,t} = 1 - g_{i,t}$.) The expectation is taken with respect to the player's internal randomization (since losses are allowed to depend on the player's past random actions, $L_{k,T}$ may also be random). In Section 3 we also consider a variant in which the feedback system is randomly generated according to a specific stochastic model. For simplicity, we focus on a finite-horizon setting, where the number $T$ of rounds is known in advance. This can be easily relaxed using a standard doubling trick.

We also consider the harder setting where the goal is to bound the actual regret

\[
L_T \;-\; \min_{k \in V} L_{k,T}
\]

with high probability $1-\delta$ with respect to the player's internal randomization, with a regret bound depending logarithmically on $1/\delta$. Clearly, a high-probability bound on the actual regret implies a similar bound on the expected regret.

Whereas some of our algorithms need to know the feedback system at the beginning of each step $t$, others need it only at the end of each step. We thus consider two online learning settings: the informed setting, where the full feedback system $\{S_{i,t}\}_{i\in V}$ selected by the adversary is made available to the learner before making its choice $I_t$; and the uninformed setting, where no information whatsoever regarding the time-$t$ feedback system is given to the learner prior to prediction, the feedback system being revealed only following the prediction, together with the associated loss information.

We find it convenient at this point to adopt a graph-theoretic interpretation of feedback systems. At each step $t$, the feedback system $\{S_{i,t}\}_{i\in V}$ defines a directed graph $G_t = (V, E_t)$, the feedback graph, where $V$ is the set of actions and $E_t$ is the set of arcs (i.e., ordered pairs of nodes). For $i \neq j$, the arc $(i,j)$ belongs to $E_t$ if and only if $j \in S_{i,t}$ (the self-loops created by $i \in S_{i,t}$ are intentionally ignored). Hence, we can equivalently define $G_t$ in terms of the feedback sets. Observe that the outdegree of any $i \in V$ equals $|S_{i,t} \setminus \{i\}|$. Similarly, the indegree of $i$ is the number of actions $j \neq i$ such that $i \in S_{j,t}$ (i.e., such that $j \to i$). A notable special case of the above is when the feedback system is symmetric: $j \in S_{i,t}$ if and only if $i \in S_{j,t}$ for all $i$, $j$, and $t$. In words, playing $i$ at time $t$ reveals the loss of $j$ if and only if playing $j$ at time $t$ reveals the loss of $i$. A symmetric feedback system defines an undirected graph or, more precisely, a directed graph having, for every pair of nodes, either no arcs or a length-two directed cycle. Thus, from the point of view of the symmetry of the feedback system, we distinguish between the directed case ($G_t$ is a general directed graph) and the symmetric case ($G_t$ is an undirected graph for all $t$).

The analysis of our algorithms depends on certain properties of the sequence of graphs $G_t$. Two graph-theoretic notions playing an important role here are those of independent sets and dominating sets. Given an undirected graph $G = (V, E)$, an independent set of $G$ is any subset $S \subseteq V$ such that no two $i, j \in S$ are connected by an edge in $E$, i.e., $(i,j) \notin E$. An independent set is maximal if no proper superset thereof is itself an independent set. The size of any largest (and thus maximal) independent set is the independence number of $G$, denoted by $\alpha(G)$. If $G$ is directed, we can still associate with it an independence number: we simply view $G$ as undirected by ignoring arc orientation. If $G = (V, E)$ is a directed graph, then a subset $D \subseteq V$ is a dominating set for $G$ if for all $j \notin D$ there exists some $i \in D$ such that $(i,j) \in E$. In our bandit setting, a time-$t$ dominating set is a subset of actions with the property that the loss of any remaining action in round $t$ can be observed by playing some action in the dominating set. A dominating set is minimal if no proper subset thereof is itself a dominating set. The domination number of a directed graph $G$, denoted by $\gamma(G)$, is the size of any smallest (and therefore minimal) dominating set of $G$; see Figure 1 for examples.

Figure 1: An example of some graph-theoretic concepts. Top Left: A feedback system (self-loops omitted). The light blue action reveals its own loss, as well as the losses of the other four actions it points to. Top Right: The light blue nodes form a minimal dominating set for the same graph; the rightmost action is included in any dominating set, since no other action dominates it. Bottom Left: A symmetric feedback system where the light blue nodes form a maximal independent set; this is the same graph as before, but with edge orientation removed. Bottom Right: The light blue nodes form a maximum acyclic subgraph of the depicted graph.

Computing a minimum dominating set for an arbitrary directed graph is equivalent to solving a minimum set cover problem on the associated feedback system $\{S_{i,t}\}_{i\in V}$. Although minimum set cover is NP-hard, the well-known Greedy Set Cover algorithm [12], which repeatedly selects the set containing the largest number of still-uncovered elements, computes a dominating set $D$ such that $|D| \le \gamma(G)\,(1 + \ln K)$.
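The following Python sketch illustrates this greedy reduction: each action $i$ "covers" the actions in its feedback set $S_i$, and we repeatedly pick the action covering the most still-uncovered actions. Function and variable names are ours, not the paper's.

def greedy_dominating_set(feedback_sets):
    """feedback_sets[i] is the set of actions whose loss is revealed by playing i
    (always containing i itself). Returns a dominating set of the induced graph,
    of size at most (1 + ln K) times the domination number."""
    uncovered = set(feedback_sets.keys())
    dominating = []
    while uncovered:
        # pick the action covering the largest number of still-uncovered actions
        best = max(feedback_sets, key=lambda i: len(feedback_sets[i] & uncovered))
        dominating.append(best)
        uncovered -= feedback_sets[best]
    return dominating

# Example: action 0 observes everyone, actions 1..3 observe only themselves.
example = {0: {0, 1, 2, 3}, 1: {1}, 2: {2}, 3: {3}}
print(greedy_dominating_set(example))  # -> [0]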

We can also lift the notion of independence number of an undirected graph to directed graphs through the notion of maximum acyclic subgraphs. Given a directed graph $G = (V, E)$, an acyclic subgraph of $G$ is any graph $G' = (V', E')$ such that $V' \subseteq V$ and $E' = E \cap (V' \times V')$, with no (directed) cycles. We denote by $\mathrm{mas}(G)$ the maximum size $|V'|$ of such a subgraph. Note that when $G$ is undirected (more precisely, as above, when $G$ is a directed graph having for every pair of nodes either no arcs or length-two cycles), then $\mathrm{mas}(G) = \alpha(G)$; otherwise $\mathrm{mas}(G) \ge \alpha(G)$. In particular, when $G$ is itself a directed acyclic graph, then $\mathrm{mas}(G) = |V|$. See Figure 1 (bottom right) for a simple example. Finally, we let $\mathbb{1}\{A\}$ denote the indicator function of an event $A$.
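For intuition, here is a small brute-force Python sketch (practical only for tiny graphs, since both quantities are NP-hard in general) computing the independence number and the maximum-acyclic-subgraph size; it can be used to check, for instance, that the two coincide on symmetric graphs while mas is at least alpha otherwise. The representation and names are ours.

from itertools import combinations

def alpha(nodes, arcs):
    """Independence number: largest subset with no arc (in either direction) inside it."""
    undirected = {frozenset(a) for a in arcs}
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if all(frozenset((i, j)) not in undirected
                   for i, j in combinations(subset, 2)):
                return size
    return 0

def mas(nodes, arcs):
    """Size of the largest vertex subset whose induced directed subgraph is acyclic."""
    def acyclic(subset):
        remaining = set(subset)
        edges = {(i, j) for (i, j) in arcs if i in remaining and j in remaining}
        while remaining:
            # Kahn's algorithm: repeatedly peel off nodes with no incoming induced arc
            sources = [v for v in remaining if all(j != v for (_, j) in edges)]
            if not sources:
                return False
            remaining -= set(sources)
            edges = {(i, j) for (i, j) in edges if i in remaining and j in remaining}
        return True
    for size in range(len(nodes), 0, -1):
        if any(acyclic(s) for s in combinations(nodes, size)):
            return size
    return 0

# A directed 3-cycle: alpha = 1, but mas = 2 (any two nodes induce an acyclic subgraph).
nodes, arcs = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
print(alpha(nodes, arcs), mas(nodes, arcs))  # -> 1 2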

3 The uninformed setting

In this section we investigate the setting in which the learner must select an action without any knowledge of the current feedback system. We introduce a simple general algorithm, Exp3-SET, that works in both the directed and symmetric cases. In the symmetric case, we show that the regret bound achieved by the algorithm is optimal to within logarithmic factors.

When the feedback graph is a fixed clique or a fixed edgeless graph, Exp3-SET reduces to the Hedge algorithm or, respectively, to the Exp3 algorithm. Correspondingly, the regret bound for Exp3-SET yields the regret bound of Hedge and that of Exp3 as special cases.

Similar to Exp3, Exp3-SET uses importance sampling loss estimates that divide each observed loss by the probability $q_{i,t}$ of observing it. Here $q_{i,t}$ is the probability of observing the loss of action $i$ at time $t$, i.e., it is simply the sum of all $p_{j,t}$ (the probability of selecting action $j$ at time $t$) over the actions $j$ such that $j \to i$ (recall that this sum always includes $p_{i,t}$).

In the expert setting, we have $q_{i,t} = 1$ for all $i$ and $t$, and we recover the Hedge algorithm. In the bandit setting, $q_{i,t} = p_{i,t}$ for all $i$ and $t$, and we recover the Exp3 algorithm (more precisely, we recover the variant Exp3Light of Exp3 that does not have an explicit exploration term; see [11] and also [22, Theorem 2.7]).
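The following Python sketch of Exp3-SET follows the description above: exponential weights over the actions, with importance-sampled loss estimates in which each observed loss is divided by its observation probability. It is a minimal reconstruction based on this description (the paper's pseudocode is not reproduced on this page), so details such as tie handling and the tuning of the learning rate eta are ours.

import math, random

def exp3_set(K, T, eta, environment):
    """environment(t, action) returns (losses, feedback_sets): losses[i] is the loss of
    action i at round t, feedback_sets[j] is the set of actions revealed by playing j."""
    weights = [1.0] * K
    total_loss = 0.0
    for t in range(T):
        norm = sum(weights)
        p = [w / norm for w in weights]
        action = random.choices(range(K), weights=p)[0]
        losses, feedback_sets = environment(t, action)   # feedback system revealed after acting
        total_loss += losses[action]
        for i in feedback_sets[action]:                  # losses observed this round
            # q_i: probability that the loss of i is observed (sum of p_j over j with i in S_j)
            q_i = sum(p[j] for j in range(K) if i in feedback_sets[j])
            est = losses[i] / q_i                        # importance-sampled loss estimate
            weights[i] *= math.exp(-eta * est)
    return total_loss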

In what follows, we show that the regret of Exp3-SET can be bounded in terms of the key quantity

\[
Q_t \;=\; \sum_{i \in V} \frac{p_{i,t}}{q_{i,t}}\,, \qquad\text{where}\qquad q_{i,t} \;=\; \sum_{j \,:\, j \to i} p_{j,t}\,. \tag{1}
\]

Each term $p_{i,t}/q_{i,t}$ can be viewed as the probability of drawing action $i$ from $p_t$ conditioned on the event that the loss of $i$ was observed. A key aspect of our analysis is the ability to deterministically and non-vacuously (an obvious upper bound on $Q_t$ is $K$, since $p_{i,t} \le q_{i,t}$) upper bound $Q_t$ in terms of certain quantities defined on the feedback graph $G_t$. We do so in two ways, either irrespective of how small each $p_{i,t}$ may be (this section) or depending on suitable lower bounds on the probabilities $p_{i,t}$ (Section 4). In fact, forcing lower bounds on the probabilities is equivalent to adding exploration terms to the algorithm, which can be done only when $G_t$ is known before each prediction (i.e., in the informed setting).

The following result, whose proof is in Appendix A.2, is the building block for all subsequent results in the uninformed setting.

Lemma 1

The regret of Exp3-SET satisfies

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;\le\; \frac{\ln K}{\eta} \;+\; \frac{\eta}{2}\,\sum_{t=1}^{T} \mathbb{E}\bigl[Q_t\bigr]\,. \tag{2}
\]

In the expert setting, $q_{i,t} = 1$ for all $i$ and $t$ implies $Q_t = 1$ deterministically for all $t$. Hence, the right-hand side of (2) becomes $\frac{\ln K}{\eta} + \frac{\eta}{2}\,T$, corresponding to the Hedge bound with a slightly larger constant in the second term; see, e.g., [9, Page 72]. In the bandit setting, $q_{i,t} = p_{i,t}$ for all $i$ and $t$ implies $Q_t = K$ deterministically for all $t$. Hence, the right-hand side of (2) takes the form $\frac{\ln K}{\eta} + \frac{\eta}{2}\,KT$, equivalent to the Exp3 bound; see, e.g., [5, Equation 3.4].

We now move on to the case of general feedback systems, for which we can prove the following result (proof is in Appendix A.3).

Theorem 2

The regret of Exp3-SET satisfies

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;\le\; \frac{\ln K}{\eta} \;+\; \frac{\eta}{2}\,\sum_{t=1}^{T} \mathbb{E}\bigl[\mathrm{mas}(G_t)\bigr]\,.
\]

If $\mathrm{mas}(G_t) \le m$ for $t = 1,\dots,T$, then setting $\eta = \sqrt{2\ln K / (mT)}$ gives

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;\le\; \sqrt{2\,m\,T \ln K}\,.
\]

As we pointed out in Section 2, $\alpha(G_t) \le \mathrm{mas}(G_t)$, with equality holding when $G_t$ is an undirected graph. Hence, in the special case when the feedback system is symmetric, we obtain the following result.

Corollary 3

In the symmetric case, the regret of Exp3-SET satisfies

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;\le\; \frac{\ln K}{\eta} \;+\; \frac{\eta}{2}\,\sum_{t=1}^{T} \mathbb{E}\bigl[\alpha(G_t)\bigr]\,.
\]

If $\alpha(G_t) \le \alpha$ for $t = 1,\dots,T$, then setting $\eta = \sqrt{2\ln K / (\alpha T)}$ gives

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;\le\; \sqrt{2\,\alpha\,T \ln K}\,.
\]

Note that the tuning of $\eta$ in both Theorem 2 and Corollary 3 requires the algorithm to know upper bounds on $\mathrm{mas}(G_t)$ and $\alpha(G_t)$, respectively, which may be computationally non-trivial to obtain; we return to and expand on this issue in Section 4.2.

In light of Corollary 3, one may wonder whether Lemma 1 is powerful enough to allow a control of regret in terms of the independence numbers $\alpha(G_t)$ even in the directed case. Unfortunately, the next result shows that —in the directed case— $Q_t$ cannot be controlled unless specific properties of the distribution $p_t$ are assumed. More precisely, we show that even for simple directed graphs, there exist distributions on the vertices such that $Q_t$ is linear in the number of nodes while the independence number is $1$. (In this specific example, the maximum acyclic subgraph has size $K$, which confirms the looseness of Theorem 2.)

Fact 4

Let $G = (V, E)$ be a total order on $V = \{1, \dots, K\}$, i.e., such that for all $i < j$, arc $(j,i) \in E$. Let $p$ be a distribution on $V$ such that $p_i = 2^{-i}$ for $i < K$ and $p_K = 2^{-K+1}$. Then

\[
\sum_{i=1}^{K} \frac{p_i}{p_i + \sum_{j \,:\, (j,i) \in E} p_j}
\;=\; \sum_{i=1}^{K} \frac{p_i}{\sum_{j \ge i} p_j}
\;=\; \frac{K+1}{2}\,.
\]

Next, we discuss lower bounds on the achievable regret for arbitrary algorithms. The following theorem provides a lower bound on the regret in terms of the independence number $\alpha(G)$, for a constant feedback graph $G_t = G$ (which may be directed or undirected).

Theorem 5

Suppose $G_t = G$ for all $t$, with independence number $\alpha(G)$. There exist two constants $c_1, c_2 > 0$ such that whenever $T$ is sufficiently large with respect to $\alpha(G)$ (by a factor of $c_1$), then for any algorithm there exists an adversarial strategy for which the expected regret of the algorithm is at least $c_2\sqrt{\alpha(G)\,T}$.

The intuition of the proof (provided in Appendix A.4) is the following: if the graph has $\alpha$ non-adjacent vertices, then an adversary can make this problem as hard as a standard bandit problem played on $\alpha$ actions. Since for bandits on $\alpha$ actions there is an $\Omega\bigl(\sqrt{\alpha T}\bigr)$ lower bound on the expected regret, a variant of that proof technique leads to an $\Omega\bigl(\sqrt{\alpha(G)\,T}\bigr)$ lower bound in our case.

One may wonder whether a sharper lower bound exists which applies to the general directed adversarial setting and involves the larger quantity $\mathrm{mas}(G)$. Unfortunately, this measure does not seem to be related to the optimal regret: using Lemma 11 in Appendix A.5 (see the proof of Theorem 6 below) one can exhibit a sequence of graphs, each having a large maximum acyclic subgraph, on which the regret of Exp3-SET is still small.

Random feedback systems.

We close this section with a study of Lemma 1 in a setting where the feedback system is stochastically generated via the Erdős–Rényi model. This is a standard model for random directed graphs on $K$ nodes, where we are given a density parameter $r \in (0,1]$ and, for any pair $i \neq j$, arc $(i,j)$ is included with independent probability $r$ (self-loops, i.e., arcs $(i,i)$, are included by default here). We have the following result.

Theorem 6

For $t = 1, \dots, T$, let $G_t$ be an independent draw from the Erdős–Rényi model with fixed parameter $r \in (0,1]$. Then the regret of Exp3-SET satisfies

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;\le\; \frac{\ln K}{\eta} \;+\; \frac{\eta}{2}\,\frac{1-(1-r)^K}{r}\,T\,.
\]

In the above, expectations are computed with respect to both the algorithm's randomization and the random generation of $G_t$ occurring at each round. In particular, setting $\eta = \sqrt{\frac{2\,r \ln K}{(1-(1-r)^K)\,T}}$ gives

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;\le\; \sqrt{\frac{2\,\bigl(1-(1-r)^K\bigr)}{r}\,T \ln K}\,.
\]

Note that as $r$ ranges over $(0,1]$ we interpolate between the multi-armed bandit regret bound (observe that $\frac{1-(1-r)^K}{r} \to K$ as $r \to 0$) and the expert regret bound ($r = 1$, for which $\frac{1-(1-r)^K}{r} = 1$).
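As a quick sanity check of the quantity appearing in this bound, the following Python snippet estimates $\mathbb{E}[Q_t]$ by simulation for Erdős–Rényi feedback graphs and a uniform action distribution, and compares it with the closed-form value $(1-(1-r)^K)/r$ used above; the experimental setup is ours.

import random

def simulated_Q(K, r, trials=20000):
    """Monte-Carlo estimate of E[Q] = E[sum_i p_i / q_i] for uniform p over K actions,
    when each arc (j, i), j != i, is present independently with probability r
    and every node observes itself."""
    p = [1.0 / K] * K
    total = 0.0
    for _ in range(trials):
        Q = 0.0
        for i in range(K):
            # q_i = p_i plus the probabilities of i's in-neighbors
            q_i = p[i] + sum(p[j] for j in range(K) if j != i and random.random() < r)
            Q += p[i] / q_i
        total += Q
    return total / trials

K, r = 10, 0.3
print(simulated_Q(K, r))           # empirical estimate of E[Q]
print((1 - (1 - r) ** K) / r)      # closed-form value, roughly 3.24 for K=10, r=0.3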

Finally, note that standard results from the theory of Erdős–Rényi graphs —at least in the symmetric case (see, e.g., [16])— show that when the density parameter $r$ is constant, the independence number of the resulting graph has an inverse dependence on $r$. This fact, combined with the lower bound above, gives a lower bound of the form $\sqrt{T/r}$, matching (up to logarithmic factors) the upper bound of Theorem 6.

4 The informed setting

The lack of a lower bound matching the upper bound provided by Theorem 2 is a good indication that something more sophisticated has to be done in order to upper bound the key quantity $Q_t$ defined in (1). This leads us to consider more refined ways of allocating the probabilities $p_{i,t}$ to nodes. We do so by taking advantage of the informed setting, in which the learner can access the feedback system $\{S_{i,t}\}_{i\in V}$ before selecting the action $I_t$. The algorithm Exp3-DOM, introduced in this section, exploits the knowledge of $G_t$ in order to achieve an optimal (up to logarithmic factors) regret bound.

Recall the problem uncovered by Fact 4: when the graph $G_t$ induced by the feedback system is directed, $Q_t$ cannot be upper bounded, in a non-vacuous way, independently of the choice of the probabilities $p_{i,t}$. The new algorithm Exp3-DOM controls these probabilities by adding an exploration term to the distribution $p_t$. This exploration term is supported on a dominating set of the current graph $G_t$, and computing such a dominating set before the selection of the action at time $t$ can only be done in the informed setting. Intuitively, exploration on a dominating set allows us to control $Q_t$ by increasing the probability that each action is observed. If the dominating set is also minimal, then the variance caused by exploration can be bounded in terms of the independence number (and additional logarithmic factors), just like in the undirected case.

Yet another reason why we may need to know the feedback system beforehand arises when proving high-probability results on the regret. In this case, operating with an exploration term for the probabilities seems unavoidable. In Section 4.2 we present another algorithm, called ELP.P, which delivers regret bounds that hold with high probability over its internal randomization.


4.1 Bounds in expectation: the Exp3-DOM algorithm

The Exp3-DOM algorithm for the informed setting runs $\lfloor \log_2 K \rfloor + 1$ variants of Exp3 (with explicit exploration), indexed by $b = 0, 1, \dots, \lfloor \log_2 K \rfloor$. At time $t$ the algorithm is given the current feedback system $\{S_{i,t}\}_{i\in V}$, and computes a dominating set $D_t$ of the directed graph $G_t$ induced by this feedback system. Based on the size $|D_t|$ of $D_t$, the algorithm uses instance $b_t = \lfloor \log_2 |D_t| \rfloor$ to draw the action $I_t$. We use a superscript $b$ to denote the quantities relevant to the variant of Exp3 indexed by $b$. Similarly to the analysis of Exp3-SET, the key quantities are

\[
q^{(b)}_{i,t} \;=\; \sum_{j \,:\, j \to i} p^{(b)}_{j,t}\,, \qquad
Q^{(b)}_t \;=\; \sum_{i \in V} \frac{p^{(b)}_{i,t}}{q^{(b)}_{i,t}}\,, \qquad b = 0, 1, \dots, \lfloor \log_2 K \rfloor\,.
\]

Let $T_b = \bigl\{t = 1,\dots,T \,:\, |D_t| \in [2^b, 2^{b+1})\bigr\}$. Clearly, the sets $T_b$ form a partition of the time steps $\{1,\dots,T\}$, so that $\sum_b |T_b| = T$. Since the adversary adaptively chooses the dominating sets $D_t$ (through the adaptive choice of the feedback system at time $t$), the sets $T_b$ are random variables. This causes a problem in tuning the learning-rate parameters $\eta_b$. For this reason, we do not prove a regret bound directly for Exp3-DOM, where each instance uses a fixed $\eta_b$, but for a slight variant of it (described in the proof of Lemma 7 —see Appendix B.1), where each $\eta_b$ is set through a doubling trick.
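Below is a deliberately simplified Python sketch in the spirit of Exp3-DOM: exponential weights plus an explicit exploration term spread uniformly over a greedily computed dominating set of the current feedback graph. It is our own single-instance reconstruction (one fixed learning rate eta and exploration rate gamma), omitting the multiple instances indexed by b and the doubling trick, so it illustrates the exploration idea rather than reproducing the paper's algorithm.

import math, random

def greedy_dominating_set(feedback_sets):
    uncovered = set(feedback_sets)
    dom = []
    while uncovered:
        best = max(feedback_sets, key=lambda i: len(feedback_sets[i] & uncovered))
        dom.append(best)
        uncovered -= feedback_sets[best]
    return dom

def exp3_dom_like(K, T, eta, gamma, environment):
    """environment(t) returns (losses, feedback_sets); the feedback system is known
    before acting (informed setting) and losses[i] lies in [0,1]."""
    weights = [1.0] * K
    for t in range(T):
        losses, feedback_sets = environment(t)     # feedback system revealed in advance
        dom = greedy_dominating_set(feedback_sets)
        norm = sum(weights)
        # mix exponential weights with uniform exploration on the dominating set
        p = [(1 - gamma) * weights[i] / norm + (gamma / len(dom) if i in dom else 0.0)
             for i in range(K)]
        action = random.choices(range(K), weights=p)[0]
        for i in feedback_sets[action]:
            q_i = sum(p[j] for j in range(K) if i in feedback_sets[j])
            weights[i] *= math.exp(-eta * losses[i] / q_i)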

Lemma 7

In the directed case, the regret of Exp3-DOM satisfies

(3)

Moreover, if we use a doubling trick to choose $\eta_b$ for each instance $b$, then

(4)

Importantly, the next result (proof in Appendix B.2) shows how bound (4) of Lemma 7 can be expressed in terms of the sequence $\alpha(G_1), \dots, \alpha(G_T)$ of independence numbers of the feedback graphs whenever the Greedy Set Cover algorithm [12] (see Section 2) is used to compute the dominating set $D_t$ of the feedback system at time $t$.

Theorem 8

If Step 2 of Exp3-DOM uses the Greedy Set Cover algorithm to compute the dominating sets $D_t$, then the regret of Exp3-DOM using the doubling trick satisfies

\[
\max_{k \in V}\, \mathbb{E}\bigl[L_T - L_{k,T}\bigr] \;=\; \widetilde{O}\!\left(\sqrt{\ln K \,\sum_{t=1}^{T} \alpha(G_t)}\right),
\]

where the $\widetilde{O}$ notation hides factors logarithmic in $K$ and $T$.

Combining the upper bound of Theorem 8 with the lower bound of Theorem 5, we see that the attainable expected regret in the informed setting is characterized by the independence numbers of the graphs. Moreover, a quick comparison between Corollary 3 and Theorem 8 reveals that a symmetric feedback system overcomes the advantage of working in an informed setting: The bound we obtained for the uninformed symmetric setting (Corollary 3) is sharper by logarithmic factors than the one we derived for the informed — but more general, i.e., directed — setting (Theorem 8).

4.2 High probability bounds: the ELP.P algorithm

We now turn to an algorithm working in the informed setting for which we can also prove high-probability regret bounds. (We have been unable to prove high-probability bounds for Exp3-DOM or variants of it.) We call this algorithm ELP.P (which stands for "Exponentially-weighted algorithm with Linear Programming", with high Probability). As in Exp3-DOM, the exploration component is not uniform over the actions, but is chosen carefully to reflect the graph structure at each round. In fact, the optimal choice of the exploration for ELP.P requires us to solve a simple linear program, hence the name of the algorithm. (We note that this algorithm improves over the basic ELP algorithm initially presented in [20], in that its regret is bounded in high probability and not just in expectation, and it applies in the directed case as well as the symmetric case.) Note that unlike the previous algorithms, this algorithm uses the "rewards" formulation of the problem, i.e., instead of using the losses $\ell_{i,t}$ directly, it uses the rewards $g_{i,t} = 1 - \ell_{i,t}$, and boosts the weights of actions for which $g_{i,t}$ is estimated to be large, as opposed to decreasing the weights of actions for which $\ell_{i,t}$ is estimated to be large. This is done merely for technical convenience, and affects neither the complexity of the algorithm nor the regret guarantee.
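For concreteness, one natural way to formalize the exploration linear program mentioned above (this is our paraphrase of the construction used by ELP-style algorithms, not a verbatim statement from the paper) is to pick the exploration distribution $s_t$ over actions that maximizes the smallest probability of observing any action:

\[
s_t \;\in\; \arg\max_{s \in \Delta_K} \;\min_{i \in V} \;\sum_{j \,:\, j \to i} s_j\,,
\]

where $\Delta_K$ is the probability simplex over the $K$ actions; the played distribution then mixes the exponential-weights distribution with $s_t$, so that every action is observed with probability bounded away from zero.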


Theorem 9

Let algorithm ELP.P run with a sufficiently small learning rate $\eta$. Then, with probability at least $1 - \delta$, we have

where the notation hides only numerical constants and factors logarithmic in and . In particular, if for constants we have , , and we pick such that

then we get the bound

This theorem essentially tells us that the regret of the ELP.P algorithm, up to second-order factors, is quantified by $\sqrt{\sum_{t=1}^{T} \mathrm{mas}(G_t)}$ (up to logarithmic factors). Recall that, in the special case when $G_t$ is symmetric, we have $\mathrm{mas}(G_t) = \alpha(G_t)$.

One computational issue to bear in mind is that this theorem (as well as Theorem 2 and Corollary 3) holds under an optimal choice of the learning rate $\eta$. In turn, this value depends on upper bounds on $\mathrm{mas}(G_t)$ (or on $\alpha(G_t)$, in the symmetric case). Unfortunately, in the worst case, computing the maximum acyclic subgraph or the independence number of a given graph is NP-hard, so implementing such algorithms is not always computationally tractable. ([20] proposed a generic mechanism to circumvent this, but the justification has a flaw which is not clear how to fix.) However, it is easy to see that the algorithm is robust to an approximate computation of this value: misspecifying the average independence number by a multiplicative factor of $c$ entails an additional $\sqrt{c}$ factor in the bound. Thus, one might use standard heuristics resulting in a reasonable approximation of the independence number. Although the independence number is also NP-hard to approximate in the worst case, it is unlikely that intricate graphs with hard-to-approximate independence numbers appear in relevant applications. Moreover, by setting the approximation to be either $K$ or $1$, we trivially obtain an approximation factor of at most either $K/\alpha$ or $\alpha$, where $\alpha$ denotes the true average independence number. The former leads to a regret bound similar to the standard bandit setting, while the latter leads to a regret bound of order $\alpha\sqrt{T}$ (up to log factors), which is better than the $\sqrt{KT}$ regret of the bandit setting if the average independence number is less than $\sqrt{K}$. In contrast, this computational issue does not show up in Exp3-DOM, whose tuning relies only on efficiently computable quantities.

5 Conclusions and Open Questions

In this paper we investigated online prediction problems in partial information regimes that interpolate between the classical bandit and expert settings. We provided algorithms, as well as upper and lower bounds on the attainable regret, with a non-trivial dependence on the information feedback structure. In particular, we have shown a number of results characterizing prediction performance in terms of: the structure of the feedback system, the amount of information available before prediction, and the nature (adversarial or fully random) of the process generating the feedback system.

There are many open questions that warrant further study, some of which are briefly mentioned below:

  1. It would be interesting to study adaptations of our results to the case when the feedback system may depend on the loss $\ell_{I_t,t}$ of the player's action $I_t$. Note that this would prevent a direct construction of an unbiased estimator for unobserved losses, which many worst-case bandit algorithms (including ours —see the appendix) hinge upon.

  2. The upper bound contained in Theorem 2, expressed in terms of $\mathrm{mas}(G_t)$, is almost certainly suboptimal, even in the uninformed setting, and it would be nice to see if more adequate graph complexity measures can be used instead.

  3. Our lower bound in Theorem 5 refers to a constant graph sequence. We would like to provide a more complete characterization applying to sequences of adversarially generated graphs in terms of sequences of their corresponding independence numbers (or variants thereof), in both the uninformed and the informed settings. Moreover, the adversary strategy achieving our lower bound is computationally hard to implement in the worst case (the adversary needs to identify the largest independent set in a given graph). What is the achievable regret if the adversary is assumed to be computationally bounded?

  4. The information feedback models we used are natural and simple. They assume that the action taken at a given time period only affects rewards and observations for that period. In some settings, the reward observation may be delayed. In such settings, the action taken at a given stage may affect what is observed in subsequent stages. We leave the issue of modelling and analyzing such settings to future work.

  5. Finally, we would like to see what is the achievable performance in the special case of stochastic rewards, which are assumed to be drawn i.i.d. from some unknown distributions. This was recently considered in [7], with results depending on the graph clique structure. However, the tightness of these results remains to be ascertained.

Acknowledgments

NA was supported in part by a USA-Israeli BSF grant, by an ISF grant, by the Israeli I-Core program and by the Oswald Veblen Fund. NCB acknowledges partial support by MIUR (project ARS TechnoMedia, PRIN 2010-2011, grant no. 2010N5K7EB_003). SM was supported in part by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL). YM was supported in part by a grant from the Israel Science Foundation, a grant from the United States-Israel Binational Science Foundation (BSF), a grant by Israel Ministry of Science and Technology and the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11). OS was supported in part by a grant from the Israel Science Foundation (No. 425/13) and a Marie-Curie Career Integration Grant.

References

  • [1] N. Alon, N. Cesa-Bianchi, C. Gentile, and Y. Mansour. From bandits to experts: A tale of domination and independence. In NIPS, 2013.
  • [2] N. Alon and J. H. Spencer. The probabilistic method. John Wiley & Sons, 2004.
  • [3] Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, 2009.
  • [4] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
  • [5] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems.

    Foundations and Trends in Machine Learning

    , 5(1):1–122, 2012.
  • [6] Y. Caro. New results on the independence number. In Tech. Report, Tel-Aviv University, 1979.
  • [7] Stéphane Caron, Branislav Kveton, Marc Lelarge, and Smriti Bhagat. Leveraging side observations in stochastic bandits. In UAI, 2012.
  • [8] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. J. ACM, 44(3):427–485, 1997.
  • [9] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
  • [10] Nicolò Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. J. Comput. Syst. Sci., 78(5):1404–1422, 2012.
  • [11] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. In Proceedings of the 18th Annual Conference on Learning Theory, pages 217–232, 2005.
  • [12] V. Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233–235, 1979.
  • [13] Ofer Dekel, Ambuj Tewari, and Raman Arora. Online bandit learning against an adaptive adversary: from regret to policy regret. In ICML, 2012.
  • [14] D.A. Freedman. On tail probabilities for martingales. Annals of Probability, 3:100–118, 1975.
  • [15] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Euro-COLT, pages 23–37. Springer-Verlag, 1995. Also, JCSS 55(1): 119-139 (1997).
  • [16] A. M. Frieze. On the independence number of random graphs. Discrete Mathematics, 81:171–175, 1990.
  • [17] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291–307, 2005.
  • [18] T. Kocák, G. Neu, M. Valko, and R. Munos. Efficient learning by implicit exploration in bandit problems with side observations. Manuscript, 2014.
  • [19] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108:212–261, 1994.
  • [20] S. Mannor and O. Shamir. From bandits to experts: On the value of side-observations. In 25th Annual Conference on Neural Information Processing Systems (NIPS 2011), 2011.
  • [21] Alan Said, Ernesto W De Luca, and Sahin Albayrak. How social relationships affect user similarities. In Proceedings of the International Conference on Intelligent User Interfaces Workshop on Social Recommender Systems, Hong Kong, 2010.
  • [22] Gilles Stoltz. Information Incomplète et Regret Interne en Prédiction de Suites Individuelles. PhD thesis, Université Paris-XI Orsay, 2005.
  • [23] V. G. Vovk. Aggregating strategies. In COLT, pages 371–386, 1990.
  • [24] V. K. Wei. A lower bound on the stability number of a simple graph. Bell Laboratories Technical Memorandum No. 81-11217-9, 1981.

Appendix A Technical lemmas and proofs from Section 3

This section contains the proofs of all technical results occurring in Section 3, along with ancillary graph-theoretic lemmas. Throughout this appendix, $\mathbb{E}_t[\cdot]$ is a shorthand for the expectation conditioned on the history up to round $t$. Also, for ease of exposition, we implicitly first condition on the history, i.e., on $I_1, \dots, I_{t-1}$, and later take an expectation with respect to that history. This implies that, given this conditioning, we can treat random variables such as $p_{i,t}$ as constants, and we can later take an expectation over the history so as to remove the conditioning.

A.1 Proof of Fact 4

Using standard properties of geometric sums, one can immediately see that

hence the claimed result.

A.2 Proof of Lemma 1

Following the proof of Exp3 [4], we have

Taking logs, using $e^{-x} \le 1 - x + x^2/2$ for all $x \ge 0$, and summing over $t = 1, \dots, T$ yields

Moreover, for any fixed comparison action , we also have

Putting together and rearranging gives

(5)

Note that, for all ,

Moreover,

Hence, taking expectations on both sides of (5), and recalling the definition (1) of $Q_t$, we can write

(6)

Finally, taking expectations over history to remove conditioning gives

as claimed.

A.3 Proof of Theorem 2

We first need the following lemma.

Lemma 10

Let $G = (V, E)$ be a directed graph with vertex set $V = \{1, \dots, K\}$ and arc set $E$. Then, for any distribution $p$ over $V$ we have

\[
\sum_{i \in V} \frac{p_i}{p_i + \sum_{j \,:\, (j,i) \in E} p_j} \;\le\; \mathrm{mas}(G)\,.
\]

Proof. We show that there is a subset $S$ of vertices such that the graph induced by $S$ is acyclic and

\[
\sum_{i \in V} \frac{p_i}{p_i + \sum_{j \,:\, (j,i) \in E} p_j} \;\le\; |S|\,.
\]

Let $N^-(i)$ be the in-neighborhood of node $i$, i.e., the set of nodes $j$ such that $(j,i) \in E$.

We prove the lemma by adding elements to an initially empty set . Let

and let be the vertex which minimizes over . We now delete from the graph, along with all its incoming neighbors (set ), and all edges which are incident (both departing and incoming) to these nodes, and then iterating on the remaining graph. Let be the in-neighborhoods of the graph after the first step. The contribution of all the deleted vertices to is

where the inequality follows from the minimality of .

Let , and . Then, from the first step we have

We apply the very same argument to with node (minimizing over ), to with node , …, to with node , up until , i.e., until no nodes are left in the reduced graph. This gives , where . Moreover, since in each step we remove all remaining arcs incoming to , the graph induced by set cannot contain cycles.

The claim of Theorem 2 follows from a direct combination of Lemma 1 with Lemma 10.
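As an aside, the inequality of Lemma 10 is easy to spot-check numerically on small random directed graphs; the brute-force Python script below (ours, exponential-time, only for tiny $K$) compares the left-hand side with the maximum-acyclic-subgraph size.

import random
from itertools import combinations

def lhs(K, arcs, p):
    """Sum over nodes of p_i / (p_i + sum of p_j over in-neighbors j of i)."""
    return sum(p[i] / (p[i] + sum(p[j] for j in range(K) if (j, i) in arcs))
               for i in range(K))

def mas(K, arcs):
    def acyclic(sub):
        remaining = set(sub)
        edges = {(i, j) for (i, j) in arcs if i in remaining and j in remaining}
        while remaining:
            sources = [v for v in remaining if all(j != v for (_, j) in edges)]
            if not sources:
                return False
            remaining -= set(sources)
            edges = {(i, j) for (i, j) in edges if i in remaining and j in remaining}
        return True
    return max(size for size in range(1, K + 1)
               if any(acyclic(s) for s in combinations(range(K), size)))

random.seed(0)
K = 6
arcs = {(i, j) for i in range(K) for j in range(K) if i != j and random.random() < 0.4}
p = [random.random() for _ in range(K)]
s = sum(p)
p = [x / s for x in p]
print(lhs(K, arcs, p), "<=", mas(K, arcs))   # the inequality of Lemma 10 should hold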

A.4 Proof of Theorem 5

The proof uses a variant of the standard multi-armed bandit lower bound [9]. The intuition is that when we have $\alpha$ non-adjacent nodes, the problem reduces to an instance of the standard multi-armed bandit problem (where no information beyond the loss of the chosen action is observed) on $\alpha$ actions.

By Yao's minimax principle, in order to establish the lower bound, it is enough to demonstrate some probabilistic adversary strategy on which the expected regret of any deterministic algorithm is bounded from below by $c\sqrt{\alpha T}$ for some constant $c > 0$.

Specifically, suppose without loss of generality that we number the nodes in some largest independent set of $G$ by $1, \dots, \alpha$, and all the other nodes in the graph by $\alpha+1, \dots, K$. Let $\epsilon > 0$ be a parameter to be determined later, and consider the following joint distribution over stochastic loss sequences:

  • Let $Z$ be uniformly distributed on $\{1, \dots, \alpha\}$;

  • Conditioned on $Z$, each loss $\ell_{i,t}$ is an independent Bernoulli random variable with parameter $\frac{1}{2} - \epsilon$ if $i = Z$, an independent Bernoulli random variable with parameter $\frac{1}{2}$ if $i \le \alpha$ and $i \neq Z$, and equals $1$ with probability $1$ otherwise.

For each $i \le \alpha$, let $N_i$ be the number of times node $i$ was chosen by the algorithm in $T$ rounds. Also, let $N_0$ denote the number of times some node whose index is larger than $\alpha$ is chosen in $T$ rounds. Finally, let $\mathbb{E}_i$ denote expectation conditioned on $Z = i$, and $\mathbb{P}_i$ denote the probability over loss sequences conditioned on $Z = i$. We have