# Distributed Non-Stochastic Experts

We consider the online distributed non-stochastic experts problem, where the distributed system consists of one coordinator node connected to k sites, and the sites are required to communicate with each other via the coordinator. At each time-step t, one of the k site nodes has to pick an expert from the set {1, ..., n}, and the same site then receives information about the payoffs of all experts for that round. The goal of the distributed system is to minimize regret at time horizon T, while simultaneously keeping communication to a minimum. The two extreme solutions to this problem are: (i) Full communication: this essentially simulates the non-distributed setting to obtain the optimal O(√(log(n)T)) regret bound at the cost of T communication. (ii) No communication: each site runs an independent copy of an optimal non-distributed algorithm; the regret is O(√(log(n)kT)) and the communication is 0. This paper shows the difficulty of simultaneously achieving regret asymptotically better than √(kT) and communication better than T. We give a novel algorithm that for an oblivious adversary achieves a non-trivial trade-off: regret O(√(k^{5(1+ε)/6} T)) and communication O(T/k^ε), for any value of ε ∈ (0, 1/5). We also consider a variant of the model, where the coordinator picks the expert. In this model, we show that the label-efficient forecaster of Cesa-Bianchi et al. (2005) already gives a strategy that is near-optimal in the regret vs. communication trade-off.


## 1 Introduction

In this paper, we consider the well-studied non-stochastic expert problem in a distributed setting. In the standard (non-distributed) setting, there are a total of n experts available for the decision-maker to consult, and at each round t, she must choose to follow the advice of one of the experts, say a_t, from the set [n] = {1, ..., n}. At the end of the round, she observes a payoff vector p_t ∈ [0, 1]^n, where p_t[i] denotes the payoff that would have been received by following the advice of expert i. The payoff received by the decision-maker is p_t[a_t]. In the non-stochastic setting, an adversary decides the payoff vectors at any time step. At the end of the T rounds, the regret of the decision-maker is the difference between the payoff that she would have received using the single best expert at all times in hindsight and the payoff that she actually received, i.e. R = max_{a∈[n]} ∑_{t=1}^{T} p_t[a] − ∑_{t=1}^{T} p_t[a_t]. The goal here is to minimize her regret; this general problem in the non-stochastic setting captures several applications of interest, such as experiment design, online ad-selection, portfolio optimization, etc. (See [1, 2, 3, 4, 5] and references therein.)

Tight bounds on regret for the non-stochastic expert problem are obtained by so-called follow the regularized leader approaches: at time t, the decision-maker chooses a distribution, x_t, over the experts. Here x_t maximizes the quantity ⟨P_{t−1}, x⟩ + R(x), where P_{t−1} = ∑_{τ<t} p_τ is the cumulative payoff vector and R is a regularizer. Common regularizers are the entropy function, which results in Hedge [1] or the exponentially weighted forecaster (see Chap. 2 in [2]), or, as we consider in this paper, R(x) = ⟨r, x⟩, where r is a random vector, which gives the follow the perturbed leader (FPL) algorithm [6].
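As a concrete illustration, here is a minimal FPL sketch in Python. This is our own illustrative code, not the paper's Fig. 1(b); the uniform perturbation range and the helper names are our choices.

```python
import random

def fpl_choose(cum_payoff, eta, rng):
    """Follow the perturbed leader: add fresh noise of scale eta to each
    expert's cumulative payoff and follow the current (perturbed) leader."""
    perturbed = [p + rng.uniform(0, eta) for p in cum_payoff]
    return max(range(len(perturbed)), key=lambda i: perturbed[i])

def run_fpl(payoffs, eta, seed=0):
    """Run FPL over a sequence of payoff vectors; return the total payoff."""
    rng = random.Random(seed)
    n = len(payoffs[0])
    cum = [0.0] * n
    total = 0.0
    for p in payoffs:
        a = fpl_choose(cum, eta, rng)   # pick an expert before seeing p
        total += p[a]
        cum = [c + x for c, x in zip(cum, p)]
    return total
```

For instance, if expert 1 earns payoff 1 on every round, FPL locks onto it after the first round, so the total payoff over 100 rounds is at least 99.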

We consider the setting when the decision maker is a distributed system, where several different nodes may select experts and/or observe payoffs at different time-steps. Such settings are common, e.g. internet search companies, such as Google or Bing, may use several nodes to answer search queries and the performance is revealed by user clicks. From the point of view of making better predictions, it is useful to pool all available data. However, this may involve significant communication which may be quite costly. Thus, there is an obvious trade-off between cost of communication and cost of inaccuracy (because of not pooling together all data), which leads to the question:

What is the explicit trade-off between the total amount of communication needed and the regret of the expert problem under worst case input?

## 2 Models and Summary of Results

We consider a distributed computation model consisting of one central coordinator node connected to k site nodes. The site nodes must communicate with each other through the coordinator node. At each time step, the distributed system receives a query[^1], which indicates that it must choose an expert to follow. At the end of the round, the distributed system observes the payoff vector. We consider two different models described in detail below: the site prediction model, where one of the sites receives the query at any given time-step, and the coordinator prediction model, where the query is always received at the coordinator node. In both models, the payoff vector, p_t, is always observed at one of the site nodes. Thus, some communication is required to share information about the payoff vectors among nodes. As we shall see, these two models yield different algorithms and performance bounds.

[^1]: We do not use the word query in the sense of explicitly giving some information or context, but merely as an indication of the occurrence of an event that forces some site or the coordinator to choose an expert. In particular, if any context is provided in the query, the algorithms considered in this paper ignore it; thus we are in the non-contextual expert setting.

Goal: The algorithm implemented on the distributed system may use randomness, both to decide which expert to pick and to decide when to communicate with other nodes. We focus on simultaneously minimizing the expected regret and the expected communication used by the (distributed) algorithm. Recall that the expected regret is:

E[R] = E[ max_{a∈[n]} ∑_{t=1}^{T} p_t[a] − ∑_{t=1}^{T} p_t[a_t] ]   (1)

where the expectation is over the random choices made by the algorithm. The expected communication is simply the expected number (over the random choices) of messages sent in the system.
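To make definition (1) concrete, here is a toy regret computation in Python (illustrative code with a made-up three-round payoff sequence):

```python
def regret(payoffs, choices):
    """Regret per Eq. (1): the best single expert's total payoff in
    hindsight minus the payoff the decision-maker actually collected."""
    n = len(payoffs[0])
    best = max(sum(p[a] for p in payoffs) for a in range(n))
    earned = sum(p[a] for p, a in zip(payoffs, choices))
    return best - earned

# Toy example with two experts (indices 0 and 1) over three rounds:
payoffs = [(1, 0), (1, 0), (0, 1)]   # expert 0 earns 2 in total, expert 1 earns 1
choices = [1, 0, 0]                  # the decision-maker earns 0 + 1 + 0 = 1
r = regret(payoffs, choices)         # best-in-hindsight 2 minus earned 1 = 1
```

Note that the regret can be negative when the decision-maker's switching sequence beats every fixed expert.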

As we show in this paper, this is a challenging problem, and to keep the analysis simple we focus on bounds in terms of the number of sites k and the time horizon T, which are often the most important scaling parameters. In particular, our algorithms are variants of follow the perturbed leader (FPL), and hence our bounds are not optimal in terms of the number of experts n. We believe that the dependence on the number of experts in our algorithms (upper bounds) can be strengthened using a different regularizer. Also, all our lower bounds are shown in terms of k and T, for n = 2. For larger n, using techniques similar to Theorem 3.6 in [2] should give the appropriate dependence on n.

Adversaries: In the non-stochastic setting, we assume that an adversary decides the payoff vectors, p_t, at each time-step, and also the site, s_t, that receives the payoff vector (and the query, in the site-prediction model). An oblivious adversary cannot see any of the actions of the distributed system, i.e. selection of experts, communication patterns, or any random bits used. However, the oblivious adversary may know the description of the algorithm. An adaptive adversary is stronger: in addition to knowing the description of the algorithm, it can record all past actions of the algorithm, and use them arbitrarily to decide future payoff vectors and site allocations.

Communication: We do not explicitly account for message sizes. However, since we are interested in scaling with k and T, we do require that the message size not depend on the number of sites k or the number of time-steps T, but only on the number of experts n. In other words, we assume that n is substantially smaller than k and T. All the messages used in our algorithms contain at most n real numbers. As is standard in the distributed systems literature, we assume that the communication delay is 0, i.e. the updates sent by any node are received by the recipients before any future query arrives. All our results still hold under the weaker assumption that the number of queries received by the distributed system while a broadcast completes is negligible compared to the time horizon.[^2]

[^2]: This is because in regularized leader like approaches, if the cumulative payoff vector changes by a small amount, the distribution over experts does not change much, because of the regularization effect.

We now describe the two models in greater detail, state our main results and discuss related work:

1. Site Prediction Model: At each time step t, one of the sites, say s_t, receives a query and has to pick an expert, a_t, from the set [n]. The payoff vector p_t, where p_t[i] is the payoff of the i-th expert, is revealed only to the site s_t, and the decision-maker (distributed system) receives payoff p_t[a_t], corresponding to the expert actually chosen. The site prediction model is commonly studied in distributed machine learning settings (see [7, 8, 9]). The payoff vectors, p_t, and also the choice of the sites that receive the queries, s_t, are decided by an adversary. There are two very simple algorithms in this model:
(i) Full communication: The coordinator always maintains the current cumulative payoff vector, P_{t−1}. At time step t, site s_t receives the current cumulative payoff vector from the coordinator, chooses an expert using FPL, receives the payoff vector p_t, and sends it to the coordinator, which updates its cumulative payoff vector. Note that the total communication is O(T), and the system simulates (non-distributed) FPL to achieve the (optimal) regret guarantee O(√(log(n)T)).
(ii) No communication: Each site maintains a cumulative payoff vector corresponding to the queries it has received, thus implementing k independent copies of FPL. Suppose that site j receives a total of T_j queries (∑_j T_j = T); the regret is bounded by ∑_j O(√(log(n)T_j)) = O(√(log(n)kT)), and the total communication is 0. This upper bound is actually tight, as shown in Lemma 3 (Appendix C.2.1), in the event that there is no communication.

Simultaneously achieving regret asymptotically lower than √(kT) using communication asymptotically lower than T turns out to be a significantly challenging question. Our main positive result is the first distributed expert algorithm in the oblivious adversarial (non-stochastic) setting that does so using sub-linear communication. Finding such an algorithm in the case of an adaptive adversary is an interesting open problem.

###### Theorem 1.

When k = o(T), there exists an algorithm for the distributed experts problem that, against an oblivious adversary, achieves regret O(√(k^{5(1+ε)/6} T)) and uses communication O(T/k^ε), giving non-trivial guarantees in the range ε ∈ (0, 1/5).

2. Coordinator Prediction Model: At every time step, the query is received by the coordinator node, which chooses an expert a_t. However, at the end of the round, one of the site nodes, say s_t, observes the payoff vector p_t. The payoff vectors and the choice of sites are decided by an adversary. This model is also a natural one and is explored in the distributed systems and streaming literature (see [10, 11, 12] and references therein).

The full communication protocol is equally applicable here, achieving the optimal regret bound at the cost of substantial (essentially T) communication. But here we do not have any straightforward algorithm that achieves non-trivial regret without using any communication. This model is closely related to the label-efficient prediction problem (see Chapters 6.1-6.3 in [2]), where the decision-maker has a limited budget and has to spend part of its budget to observe any payoff information. The optimal strategy is to request payoff information randomly with probability C/T at each time-step, if C is the communication budget. We refer to this algorithm as LEF (label-efficient forecaster) [13].
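A sketch of this protocol in Python (our illustrative reconstruction: the learning rate, the importance weighting of sampled payoffs, and the function names are our choices, not the tuned parameters of [13]):

```python
import math, random

def lef_run(payoffs, budget, eta=0.5, seed=0):
    """Label-efficient forecaster sketch: each round the coordinator picks an
    expert with an exponentially weighted forecaster; the observing site
    forwards the payoff vector with probability eps = budget / T.  Forwarded
    payoffs are importance-weighted by 1/eps so their expectation matches the
    true cumulative payoffs."""
    rng = random.Random(seed)
    T, n = len(payoffs), len(payoffs[0])
    eps = min(1.0, budget / T)
    est = [0.0] * n          # importance-weighted cumulative payoff estimates
    total, messages = 0.0, 0
    for p in payoffs:
        w = [math.exp(eta * e / math.sqrt(T)) for e in est]
        a = rng.choices(range(n), weights=w)[0]
        total += p[a]
        if rng.random() < eps:           # site forwards this payoff vector
            messages += 1
            est = [e + x / eps for e, x in zip(est, p)]
    return total, messages
```

With a budget of C the expected number of messages is about C, independently of how the adversary orders the payoffs.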

###### Theorem 2.

[13] (Informal) The algorithm LEF with communication budget C achieves regret O(T√(log(n)/C)) against both an adaptive and an oblivious adversary.

One of the crucial differences between this model and the label-efficient setting is that when communication does occur, the site can send cumulative payoff vectors comprising all previous updates to the coordinator, rather than just the latest one. The other difference is that, unlike in the label-efficient case, the sites have knowledge of their local regrets and can use it to decide when to communicate. However, our lower bounds for natural types of algorithms show that these advantages probably do not help to get better guarantees.

Lower Bound Results: In the case of an adaptive adversary, we have an unconditional (for any type of algorithm) lower bound in both the models:

###### Theorem 3.

Let n = 2 be the number of experts. Then any (distributed) algorithm that achieves expected regret o(√(kT)) against an adaptive adversary must use communication Ω(T/k).

The proof appears in Appendix A. Notice that in the coordinator prediction model this lower bound is matched by the upper bound of LEF: with communication budget C = T/k, Theorem 2 gives regret O(√(kT)).

In the case of an oblivious adversary, our results are weaker, but we can show that certain natural types of algorithms are not directly applicable in this setting. The so-called regularized leader algorithms maintain a cumulative payoff vector, P_t, and use only this and a regularizer to select an expert at time t + 1. We consider two variants in the distributed setting:
(i) Distributed Counter Algorithms: Here the forecaster only uses an (approximate) version, P̃_t, of the cumulative payoff vector P_t. We make no assumptions on how the forecaster uses P̃_t. Such a P̃_t can be maintained using sub-linear communication by applying techniques from the distributed systems literature [11].
(ii) Delayed Regularized Leader: Here the regularized leaders do not try to explicitly maintain an approximate version of the cumulative payoff vector. Instead, they may use an arbitrary communication protocol, but make predictions using a cumulative payoff vector (built from any past payoff vectors that they could have received) and some regularizer.

We show in Section 3.2 that the distributed counter approach does not yield any non-trivial guarantee in the site-prediction model, even against an oblivious adversary. It is possible to show a similar lower bound in the coordinator prediction model, but it is omitted since it follows easily from the idea in the site-prediction model combined with an explicit communication lower bound given in [11].

Section 4 shows that the delayed regularized leader approach does not yield non-trivial guarantees even against an oblivious adversary in the coordinator prediction model, suggesting that the LEF algorithm is near optimal.

Related Work: Recently there has been significant interest in distributed online learning questions (see for example [7, 8, 9]). However, these works have focused mainly on stochastic optimization problems. Thus, the techniques used, such as reducing variance through mini-batching, are not applicable to our setting. Questions such as network structure [8] and network delays [9] are interesting in our setting as well; however, at present our work focuses on establishing some non-trivial regret guarantees in the distributed online non-stochastic experts setting. The study of communication as a resource in distributed learning is also considered in [14, 15, 16]; however, this body of work seems applicable only to offline learning.

The other related work is that of distributed functional monitoring [10] and, in particular, distributed counting [11, 12] and sketching [17]. Some of these techniques have been successfully applied in offline machine learning problems [18]. However, we are the first to analyze the performance-communication trade-off of an online learning algorithm in the standard distributed functional monitoring framework [10]. An application of a distributed counter to online Bayesian regression was proposed by Liu et al. [12]. Our lower bounds, discussed below, show that approximate distributed counter techniques do not directly yield non-trivial algorithms.

## 3 Site-prediction model

### 3.1 Upper Bounds

We describe our algorithm that simultaneously achieves non-trivial bounds on expected regret and expected communication. We begin by making two assumptions that simplify the exposition. First, we assume that there are only n = 2 experts. The generalization from 2 experts to n is easy, as discussed in Remark 1 at the end of this section. Second, we assume that there exists a global query counter, available to all sites and the coordinator, which keeps track of the total number of queries received across the sites. We discuss this assumption in Remark 2 at the end of the section. As is often the case in online algorithms, we assume that the time horizon T is known. Otherwise, the standard doubling trick may be employed. The notation used in this section is defined in Table 1.

Algorithm Description: Our algorithm is described in Figure 1(a). We make use of the FPL algorithm, described in Figure 1(b), which takes as a parameter the amount of added noise. Our algorithm treats the T time steps as b blocks, each of length ℓ. At a high level, with probability q on any given block the algorithm is in the step phase, running a copy of FPL (with noise parameter η′) across all time steps of the block, synchronizing after each time step. Otherwise it is in the block phase, running a copy of FPL (with noise parameter η) across blocks, with the same expert being followed for the entire block and synchronization occurring only after the block. This effectively makes P_i, the cumulative payoff over block i, the payoff vector for block i. The step phase covers on average qT total time steps. We begin by stating a (slightly stronger) guarantee for FPL.
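The control flow can be sketched as follows. This is our schematic reconstruction, not the paper's exact pseudocode from Fig. 1: in particular, restarting the step-phase FPL copy within each block and all parameter values are our assumptions, and communication is counted in synchronization events.

```python
import random

def block_fpl(payoffs, ell, q, eta, eta_prime, seed=0):
    """Sketch of the block-based algorithm: the rounds are cut into blocks of
    length ell.  With probability q a block is a *step* block, where a
    per-step FPL copy (noise eta_prime) is run and nodes sync every step;
    otherwise it is a *block* block, where a block-level FPL copy (noise eta)
    commits to one expert for the whole block and nodes sync once, feeding
    the block's cumulative payoff P_i back as a single payoff vector."""
    rng = random.Random(seed)
    n = len(payoffs[0])
    block_cum = [0.0] * n        # block-level cumulative payoffs for block-FPL
    total, syncs = 0.0, 0
    for start in range(0, len(payoffs), ell):
        block = payoffs[start:start + ell]
        P_i = [sum(p[j] for p in block) for j in range(n)]
        if rng.random() < q:     # step phase: per-step FPL, sync each step
            step_cum = [0.0] * n
            for p in block:
                noisy = [c + rng.uniform(0, eta_prime) for c in step_cum]
                a = max(range(n), key=lambda j: noisy[j])
                total += p[a]
                step_cum = [c + x for c, x in zip(step_cum, p)]
                syncs += 1
        else:                    # block phase: one expert for the whole block
            noisy = [c + rng.uniform(0, eta) for c in block_cum]
            a = max(range(n), key=lambda j: noisy[j])
            total += P_i[a]
            syncs += 1
        block_cum = [c + x for c, x in zip(block_cum, P_i)]
    return total, syncs
```

The communication saving comes from the block phase, which synchronizes once per ℓ rounds instead of every round.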

###### Lemma 1.

Consider the case of n = 2 experts. Let p_1, ..., p_T be a sequence of payoff vectors such that |p_t[1] − p_t[2]| ≤ M for all t. Then FPL(η) has the following guarantee on expected regret: E[R] ≤ (M/η) ∑_{t=1}^{T} |p_t[1] − p_t[2]| + η.

The proof is a simple modification of the standard analysis [6] and is given in Appendix B for completeness. The rest of this section is devoted to the proof of Lemma 2.

###### Lemma 2.

Consider the case n = 2. If k = o(T), our algorithm (Fig. 1), when run with the parameters ℓ, q, η and η′ as defined in Fig. 1, has expected regret O(√(k^{5(1+ε)/6} T)) and expected communication O(T/k^ε). In particular, for ε ∈ (0, 1/5), the algorithm simultaneously achieves regret that is asymptotically lower than √(kT) and communication that is asymptotically lower than T.[^3]

[^3]: Note that here the asymptotics are in terms of both parameters k and T. Getting communication asymptotically lower than T in terms of T alone, for a regret bound better than √(kT), seems to be a fairly difficult and interesting problem.

Since we are in the case of an oblivious adversary, we may assume that the payoff vectors are fixed ahead of time. Without loss of generality, let expert 1 (out of the 2) be the one that has the greater payoff in hindsight. Recall that FR¹_i(η′) denotes the random variable that is the regret of playing FPL(η′) in a step phase on block i with respect to the first expert. In particular, this will be negative if expert 2 is the best expert on block i, even though globally expert 1 is better. In fact, this is exactly what our algorithm exploits: it gains on regret in the communication-expensive step phase, while saving on communication in the block phase.

The regret can be written as

R = ∑_{i=1}^{b} ( Y_i · FR¹_i(η′) + (1 − Y_i)(P_i[1] − P_i[a_i]) )

Note that the random variables Y_i (the indicators of the step phase) are independent of the random variables FR¹_i(η′) and the random variables P_i[1] − P_i[a_i]. As E[Y_i] = q, we can bound the expression for expected regret as follows:

E[R] ≤ q ∑_{i=1}^{b} E[FR¹_i(η′)] + (1 − q) ∑_{i=1}^{b} E[P_i[1] − P_i[a_i]]   (2)

We first analyze the second term of the above equation. This is just the regret corresponding to running FPL(η) at the block level, with b time steps. Using the fact that |P_i[1] − P_i[2]| ≤ ℓ, Lemma 1 allows us to conclude that:

∑_{i=1}^{b} E[P_i[1] − P_i[a_i]] ≤ (ℓ/η) ∑_{i=1}^{b} |P_i[1] − P_i[2]| + η   (3)

Next, we analyse the first term of inequality (2). We chose η′ = √ℓ (see Fig. 1), and the analysis of FPL guarantees that E[FR_i(η′)] ≤ 2√ℓ, where FR_i(η′) denotes the random variable that is the actual regret of FPL(η′) on block i, not the regret with respect to expert 1 (which is FR¹_i(η′)). Now either P_i[1] ≥ P_i[2] (i.e. expert 1 was the better one on block i), in which case E[FR¹_i(η′)] = E[FR_i(η′)] ≤ 2√ℓ; or P_i[2] > P_i[1] (i.e. expert 2 was the better one on block i), in which case E[FR¹_i(η′)] ≤ 2√ℓ + (P_i[1] − P_i[2]). Note that P_i[1] − P_i[2] in this expression is negative. Putting everything together, we can write E[FR¹_i(η′)] ≤ 2√ℓ − δ_i (P_i[2] − P_i[1]), where δ_i = 1 if P_i[2] > P_i[1] and δ_i = 0 otherwise. Thus, we get the main equation for regret.

E[R] ≤ 2qb√ℓ − q ∑_{i=1}^{b} (P_i[2] − P_i[1])^+ + (ℓ/η) ∑_{i=1}^{b} |P_i[1] − P_i[2]| + η,   (4)

where "term 1" denotes −q ∑_{i=1}^{b} (P_i[2] − P_i[1])^+ and "term 2" denotes (ℓ/η) ∑_{i=1}^{b} |P_i[1] − P_i[2]|.

Note that the first (i.e. 2qb√ℓ) and last (i.e. η) terms of inequality (4) are O(√(k^{5(1+ε)/6} T)) for the setting of the parameters as in Lemma 2. The strategy is to show that when "term 2" becomes large, then "term 1" is also large in magnitude, but negative, compensating the effect of "term 2". We consider a few cases:
Case 1: The best expert is identified quickly and not changed thereafter. Let i₀ denote the maximum index, i, such that P_i[2] > P_i[1]. Note that after the i₀-th block is processed, the algorithm in the block phase will never follow expert 2.

Suppose that i₀ is small. We note that the correct bound for "term 2" is now actually (ℓ/η) ∑_{i=1}^{i₀} |P_i[1] − P_i[2]| ≤ ℓ² i₀ / η, since the block phase incurs no regret on the blocks i > i₀.
Case 2: The best expert may not be identified quickly; furthermore, |P_i[1] − P_i[2]| is large often. In this case, although "term 2" may be large, this is compensated by the negative regret in "term 1" in expression (4). This is because if |P_i[1] − P_i[2]| is large often, but the best expert is not identified quickly, there must be enough blocks on which P_i[2] − P_i[1] is positive and large.

To make this precise, notice that "term 1" is −q ∑_i (P_i[2] − P_i[1])^+, while "term 2" is (ℓ/η) ∑_i |P_i[1] − P_i[2]|. In this case, most of the mass of "term 2" comes from blocks on which P_i[2] > P_i[1], i.e. blocks that also contribute to "term 1". On those blocks, the negative contribution of "term 1" is at least q(P_i[2] − P_i[1]), which is at least the maximum possible positive contribution of (ℓ/η)|P_i[1] − P_i[2]| to "term 2": for the parameter setting of Lemma 2 the two coefficients coincide (q = ℓ/η). Hence the total contribution of "term 1" and "term 2" together over these blocks is at most 0.
Case 3: |P_i[1] − P_i[2]| is "small" most of the time. In this case the parameter η is actually well-tuned (which was not the case in Case 2) and gives us a small overall regret, so "term 2" can be bounded directly using Lemma 1.
The above three cases exhaust all possibilities; hence, no matter what the nature of the payoff sequence, the expected regret of our algorithm is bounded by O(√(k^{5(1+ε)/6} T)), as required. The expected total communication is easily seen to be O(T/k^ε): the blocks on which the step phase is used contribute O(kℓ) communication each, and the blocks on which the block phase is used contribute O(k) communication each.

###### Remark 1.

Our algorithm can be generalized to n experts by recursively dividing the set of experts in two and applying our algorithm to two meta-experts, as shown in Section C.1 in the Appendix. However, the bound obtained in Section C.1 is not optimal in terms of the number of experts, n. This observation and Lemma 2 imply Theorem 1.

###### Remark 2.

The assumption that there is a global counter is necessary because our algorithm divides the input into blocks of size ℓ. However, it is not an impediment, because it is sufficient that the block sizes be close to ℓ. Assuming that the coordinator always signals the beginning and end of a block (by a broadcast, which adds only O(k) messages to any block), we can use a distributed counter that guarantees a very tight approximation to the number of queries received in each block with few additional messages communicated (see [11]).
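To illustrate the interface such a counter provides, here is a naive deterministic approximate counter (our own sketch; the randomized counter of Huang et al. [11] achieves much better communication, and the class and threshold here are illustrative choices):

```python
class ApproxCounter:
    """Naive deterministic approximate counter: each site buffers local
    increments and reports to the coordinator once they reach `threshold`;
    the coordinator then broadcasts the new global value.  Every node's
    shared view is within k * (threshold - 1) of the true count."""
    def __init__(self, k, threshold):
        self.k = k
        self.threshold = threshold
        self.local = [0] * k      # unreported increments at each site
        self.view = 0             # value known to coordinator and all sites
        self.messages = 0

    def increment(self, site):
        self.local[site] += 1
        if self.local[site] >= self.threshold:
            self.view += self.local[site]   # site reports to coordinator ...
            self.local[site] = 0
            self.messages += 1 + self.k     # ... coordinator broadcasts
        return self.view

    def error(self):
        """True count minus the shared view (total buffered increments)."""
        return sum(self.local)
```

The communication is one report plus one broadcast per `threshold` increments at a site, trading additive accuracy for fewer messages.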

### 3.2 Lower Bounds

In this section we give a lower bound on distributed counter algorithms in the site prediction model. Distributed counters allow tight approximation guarantees with sub-linear communication [11]. We observe that the noise used by FPL is quite large, and so it is tempting to maintain a suitably accurate approximate cumulative payoff and run FPL on it. We consider the class of algorithms such that:
(i) Whenever a site receives a query, it has an (approximate) cumulative payoff of each expert to additive accuracy τ. Furthermore, any communication is used only to maintain such a counter.
(ii) Any site uses only the (approximate) cumulative payoffs and any local information it may have to choose an expert when queried.
However, our negative result shows that even with a highly accurate counter, the non-stochasticity of the payoff sequence may cause any such algorithm to have large regret. Furthermore, we show that the communication used by any distributed algorithm that maintains (approximate) counters to additive error τ on all sites[^4] is substantial.

[^4]: The approximation guarantee is required only when a site receives a query and has to make a prediction.

###### Theorem 4.

At any time step t, suppose each site has an (approximate) cumulative payoff count, P̃_t[j], for every expert j, such that |P̃_t[j] − P_t[j]| ≤ τ. Then we have the following:
1. If τ is sufficiently large, any algorithm that uses the approximate counts and any local information at the site making the decision cannot achieve expected regret asymptotically better than √(kT).
2. Any protocol on the distributed system that guarantees that, at each time step, each site has a τ-approximate cumulative payoff with high probability uses Ω(T/τ) communication.

## 4 Coordinator-prediction model

In the coordinator prediction model, as mentioned earlier, it is possible to use the label-efficient forecaster, LEF (Chap. 6 of [2]; [13]). Let C be an upper bound on the total amount of communication we are allowed to use. The label-efficient forecaster translates into the following simple protocol: whenever a site receives a payoff vector, it forwards that particular payoff to the coordinator with probability C/T. The coordinator always executes the exponentially weighted forecaster over the sampled subset of payoffs to make new decisions. Here, the expected regret is O(T√(log(n)/C)). In other words, if our regret needs to be O(√T), the communication needs to be linear in T.

We observe that in principle there is a possibility of better algorithms in this setting, for mainly two reasons: (i) when the sites send payoff vectors to the coordinator, they can send cumulative payoffs rather than the latest ones, thus giving more information, and (ii) the sites may decide when to communicate as a function of the payoff vectors, instead of just randomly. However, we present a lower bound showing that, for a natural family of algorithms, achieving regret O(√T) requires communication at least Ω(T^{1−δ}) for every constant δ > 0, even when k = 2. The type of algorithms we consider may have an arbitrary communication protocol, but must satisfy the following: (i) whenever a site communicates with the coordinator, the site reports its local cumulative payoff vector; (ii) when the coordinator makes a decision, it executes FPL(η) (follow the perturbed leader with noise η) using the latest cumulative payoff vector available to it. The proof of Theorem 5 appears in Appendix D, and the results can be generalized to other regularizers.

###### Theorem 5.

Consider the distributed non-stochastic expert problem in the coordinator prediction model. Any algorithm of the kind described above that achieves regret O(√T) must use communication Ω(T^{1−δ}) against an oblivious adversary, for every constant δ > 0.

## 5 Simulations

In this section, we describe some simulation results comparing the efficacy of our algorithm with some other techniques. We compare against the simple algorithms, full communication and no communication, and two other algorithms, which we refer to as mini-batch and the distributed-counter algorithm. In the mini-batch algorithm, the coordinator requests, randomly with some probability at any time step, all cumulative payoff vectors from all sites. It then broadcasts the sum (across all of the sites) back to the sites, so that all sites have the latest cumulative payoff vector. Whenever such a communication does occur, the cost is O(k). We refer to this as mini-batch because it is similar in spirit to the mini-batch algorithms used in stochastic optimization problems. In the distributed-counter algorithm, we use the distributed counter technique of Huang et al. [11] to maintain the (approximate) cumulative payoff for each expert. Whenever a counter update occurs, the coordinator must broadcast to all nodes to make sure they have the most current update.
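The mini-batch baseline can be sketched as follows (our reconstruction; the sync probability, the noise scale, and the round-robin query assignment are illustrative assumptions):

```python
import random

def minibatch_run(payoffs, sites, p_sync, eta=5.0, seed=0):
    """Mini-batch baseline sketch: payoffs arrive at sites round-robin; with
    probability p_sync per round the coordinator pulls every site's buffered
    payoffs and broadcasts the new global cumulative vector, at a cost of
    O(k) messages per sync.  Each site predicts with perturbed-leader noise
    applied to its best available view of the cumulative payoffs."""
    rng = random.Random(seed)
    k, n = sites, len(payoffs[0])
    local = [[0.0] * n for _ in range(k)]   # unsynced payoffs at each site
    shared = [0.0] * n                      # cumulative vector known to all
    total, messages = 0.0, 0
    for t, p in enumerate(payoffs):
        s = t % k                           # site receiving this round's query
        view = [shared[j] + local[s][j] for j in range(n)]
        noisy = [v + rng.uniform(0, eta) for v in view]
        a = max(range(n), key=lambda j: noisy[j])
        total += p[a]
        for j in range(n):
            local[s][j] += p[j]
        if rng.random() < p_sync:           # coordinator-initiated sync
            for j in range(n):
                shared[j] += sum(loc[j] for loc in local)
            local = [[0.0] * n for _ in range(k)]
            messages += 2 * k               # pull from k sites + broadcast
    return total, messages
```

Each sync costs 2k messages here, so the expected communication is about 2k · p_sync · T.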

We consider two types of synthetic sequences. The first is a zig-zag sequence, with ℓ being the length of one increase/decrease: for the first ℓ time steps the payoff vector is always (1, 0) (expert 1 being better), for the next ℓ time steps it is (0, 1) (expert 2 being better), for the next ℓ time-steps it is (1, 0) again, and so on. The zig-zag sequence is also the sequence used in the proof of the lower bound in Theorem 5. The second is a two-state Markov chain (MC) with states s₁ and s₂. While in state s₁ the payoff vector is (1, 0), and while in state s₂ it is (0, 1).
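The two sequence generators can be written compactly (our illustrative code; the stay probability of the Markov chain is an assumed value, not the one used in the paper's experiments):

```python
import random

def zigzag(T, ell):
    """Zig-zag sequence: expert 1 is better for ell steps, then expert 2
    for the next ell steps, alternating."""
    return [(1, 0) if (t // ell) % 2 == 0 else (0, 1) for t in range(T)]

def markov(T, stay=0.99, seed=0):
    """Two-state Markov chain: payoff (1, 0) in state s1 and (0, 1) in
    state s2, remaining in the current state with probability `stay`."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(T):
        out.append((1, 0) if state == 0 else (0, 1))
        if rng.random() > stay:
            state = 1 - state
    return out
```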

In our simulations we fix the number of experts, the number of predictions, and the number of sites. Fig. 2(a) shows the performance of the above algorithms on the MC sequences; the results are averaged across runs, over both the randomness of the MC and that of the algorithms. Fig. 2(b) shows the worst-case cumulative communication vs. worst-case cumulative regret trade-off for three algorithms: our algorithm, mini-batch, and the distributed-counter algorithm, over all the described sequences. While in general it is hard to compare algorithms on non-stochastic inputs, our results confirm that for non-stochastic sequences inspired by the lower bounds in the paper, our algorithm outperforms the other related techniques.

## References

• [1] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, 1995.
• [2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
• [3] T. Cover. Universal portfolios. Mathematical Finance, 1:1–19, 1991.
• [4] E. Hazan and S. Kale. On stochastic and worst-case models for investing. In NIPS, 2009.
• [5] E. Hazan. The convex optimization approach to regret minimization. Optimization for Machine Learning, 2012.
• [6] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291–307, 2005.
• [7] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction. In ICML, 2011.
• [8] J. Duchi, A. Agarwal, and M. Wainright. Distributed dual averaging in networks. In NIPS, 2010.
• [9] A. Agarwal and J. Duchi. Distributed delayed stochastic optimization. In NIPS, 2011.
• [10] G. Cormode, S. Muthukrishnan, and K. Yi. Algorithms for distributed functional monitoring. ACM Transactions on Algorithms, 7, 2011.
• [11] Z. Huang, K. Yi, and Q. Zhang. Randomized algorithms for tracking distributed count, frequencies and ranks. In PODS, 2012.
• [12] Z. Liu, B. Radunović, and M. Vojnović. Continuous distributed counting for non-monotone streams. In PODS, 2012.
• [13] N. Cesa-Bianchi, G. Lugosi, and G. Stoltz. Minimizing regret with label efficient prediction. In ISIT, 2005.
• [14] M-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In COLT (to appear), 2012.
• [15] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Protocols for learning classifiers on distributed data. In AISTATS, 2012.
• [16] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Efficient protocols for distributed classification and optimization. arXiv:1204.3523, 2012.
• [17] G. Cormode, M. Garofalakis, P. Haas, and C. Jermaine. Synopses for Massive Data - Samples, Histograms, Wavelets, Sketches. Foundations and Trends in Databases, 2012.
• [18] K. Clarkson, E. Hazan, and D. Woodruff. Sublinear optimization for machine learning. In FOCS, 2010.

## Appendix B Proof of Theorem 3

This section contains a proof of Theorem 3. The proof makes use of Khinchine’s inequality (see Appendix A.1.14 in [2]).

###### Khinchine’s Inequality.

Let σ₁, …, σₙ be Rademacher random variables, i.e. Pr(σᵢ = +1) = Pr(σᵢ = −1) = 1/2. Then for any real numbers a₁, …, aₙ,

$$\mathbb{E}\left[\left|\sum_{i=1}^n a_i\sigma_i\right|\right] \;\ge\; \frac{1}{\sqrt{2}}\sqrt{\sum_{i=1}^n a_i^2} \;=\; \frac{1}{\sqrt{2}}\sqrt{\mathbb{E}\left[\Big(\sum_{i=1}^n a_i\sigma_i\Big)^2\right]}$$
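As a quick numerical illustration (not part of the proof), the following sketch estimates the left-hand side by Monte Carlo for a sample coefficient vector; the function name and parameters are our own:

```python
import math
import random

def khinchine_check(a, trials=20000, seed=0):
    """Monte Carlo estimate of E|sum_i a_i * sigma_i| for Rademacher sigma_i."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(ai if rng.random() < 0.5 else -ai for ai in a)
        total += abs(s)
    lhs = total / trials
    rhs = math.sqrt(sum(ai * ai for ai in a) / 2.0)  # (1/sqrt(2)) * ||a||_2
    return lhs, rhs

lhs, rhs = khinchine_check([1.0, 2.0, 3.0, 4.0])
assert lhs >= rhs  # Khinchine's lower bound holds empirically
```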
###### Proof of Theorem 3.

The adaptive adversary divides the total of T time steps into T/k blocks, each consisting of k time-steps. During each block, each of the k sites receives exactly one query. At the first time step of each block, the adversary tosses an unbiased coin. Let h denote the payoff vector corresponding to heads, where h[1] = 1 and h[2] = 0; similarly let τ (corresponding to tails) be such that τ[1] = 0 and τ[2] = 1. Within each block, the adaptive adversary does the following: at time t, if there was no communication on the part of the decision maker (distributed system) between the beginning of the block and time t−1, then the payoff vector is h if the coin toss at the beginning of the block was heads, and τ otherwise. On the other hand, if there was any communication, then the adaptive adversary tosses a fresh random coin and sets the payoff vector accordingly.

Consider the expected payoff of the algorithm. At time t, if there was communication earlier in the block, then the adversary has chosen the payoff vector uniformly at random between h and τ, and hence the expected reward at time step t is exactly 1/2. On the other hand, if there was no communication, then the site making the decision has no information about the coin toss of the adversary for this block, and hence the expected reward is still 1/2. Thus, the total expected reward of the algorithm (by linearity of expectation) is T/2.

Note that,

$$\mathbb{E}\Big[\max_{i=1,2}\sum_{t=1}^T p_t[i]\Big] = \frac{1}{2}\Big(\mathbb{E}\Big[\sum_{t=1}^T \big(p_t[1]+p_t[2]\big)\Big] + \mathbb{E}\Big[\Big|\sum_{t=1}^T \big(p_t[1]-p_t[2]\big)\Big|\Big]\Big) = \frac{T}{2} + \frac{1}{2}\,\mathbb{E}\Big[\Big|\sum_{t=1}^T \big(p_t[1]-p_t[2]\big)\Big|\Big] \qquad (5)$$

Let I ⊆ [T/k] be the indices of the blocks in which there was some communication. Consider blocks in I and those outside of I. If the block (i−1)k+1, …, ik is such that i ∉ I, then |∑_{t=(i−1)k+1}^{ik} (p_t[1]−p_t[2])| = k. Note that all such block sums (as random variables) are independent of all other block sums. For a block (i−1)k+1, …, ik such that i ∈ I, let c(i) be the first index such that communication occurs at time (i−1)k+c(i). Then |∑_{t=(i−1)k+1}^{(i−1)k+c(i)} (p_t[1]−p_t[2])| = c(i); also note that p_t for t = (i−1)k+c(i)+1, …, ik are all based on independent coin tosses. Then note that,

$$\sum_{t=1}^T \big(p_t[1]-p_t[2]\big) = \sum_{i\notin I} k\,\sigma_{i,1} + \sum_{i\in I}\Big(c(i)\,\sigma_{i,1} + \sum_{j=c(i)+1}^{k}\sigma_{i,j}\Big), \qquad (6)$$

where σ_{i,j} are the Rademacher variables corresponding to the coin tosses of the adversary at time step (i−1)k+j. Also note that,

$$\mathbb{E}\Big[\Big(\sum_{t=1}^T \big(p_t[1]-p_t[2]\big)\Big)^2\Big] \ge \Big(\frac{T}{k}-|I|\Big)k^2.$$

Then, Khinchine’s inequality and (5) give us that

$$\mathbb{E}\Big[\max_{i=1,2}\sum_{t=1}^T p_t[i]\Big] \ge \frac{T}{2} + \frac{1}{2\sqrt{2}}\sqrt{\mathbb{E}\Big[\Big(\sum_{t=1}^T \big(p_t[1]-p_t[2]\big)\Big)^2\Big]} \ge \frac{T}{2} + \frac{1}{2\sqrt{2}}\sqrt{\Big(\frac{T}{k}-|I|\Big)k^2}.$$

Now, unless |I| = Ω(T/k), it must be the case that (T/k − |I|)k² = Ω(kT), leading to total expected regret Ω(√(kT)). Hence, any algorithm that achieves regret o(√(kT)) must have communication Ω(T/k). ∎
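The adversary’s payoff process is easy to simulate. The following sketch is our own illustrative code (with `comm_times` standing in for the protocol’s communication pattern); it checks the structural fact used above: with no communication, every block sum of p_t[1]−p_t[2] has magnitude exactly k.

```python
import random

def adversary_payoffs(T, k, comm_times, seed=0):
    """Payoff sequence for the Theorem-3-style adaptive adversary (sketch).

    comm_times: set of 1-indexed time steps at which communication occurred.
    Returns a list of payoff vectors (p[1], p[2]) in {(1, 0), (0, 1)}.
    """
    rng = random.Random(seed)
    payoffs = []
    for t in range(1, T + 1):
        start = ((t - 1) // k) * k + 1          # first step of t's block
        communicated = any(s in comm_times for s in range(start, t))
        if t == start or communicated:
            coin = rng.random() < 0.5           # fresh coin toss
        # otherwise reuse `coin` from earlier in the same block
        payoffs.append((1, 0) if coin else (0, 1))
    return payoffs

# With no communication, each block's sum of p_t[1]-p_t[2] is exactly +/- k.
T, k = 40, 8
p = adversary_payoffs(T, k, comm_times=set())
block_sums = [sum(a - b for a, b in p[i:i + k]) for i in range(0, T, k)]
assert all(abs(s) == k for s in block_sums)
```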

###### Proof of Lemma 1.

We first note that, using the given notation, the regret guarantee of FPL (see Fig. 1(b)) is

$$\mathbb{E}[R] \;\le\; B\eta\sum_{t=1}^T |p_t|_1 + \frac{B}{\eta}.$$

The above bound appears in the analysis of Kalai and Vempala [6]. Note that although the per-step payoffs may be large (B = k in our setting), we can use the following trick. We first observe that since FPL only depends on the difference between the cumulative payoffs of the two experts, we may replace the payoff vectors p_t by q_t, where (i) q_t = (p_t[1]−p_t[2], 0) if p_t[1] ≥ p_t[2], and (ii) q_t = (0, p_t[2]−p_t[1]) otherwise.

Next, we observe that the regret of FPL with payoff sequence p_1, …, p_T and with q_1, …, q_T is identically distributed, since the random choices only depend on the difference between the cumulative payoffs at any time. Lastly, we note that |q_t|_1 = |p_t[1]−p_t[2]|, which completes the proof. ∎
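The payoff-replacement trick rests on the observation that FPL’s decisions depend only on the cumulative payoff difference. A minimal sketch verifying this invariance (illustrative only; the perturbation distribution and parameter names are our choices, not the paper’s):

```python
import random

def fpl_choices(payoffs, eta=0.1, seed=42):
    """Follow-the-Perturbed-Leader over two experts (sketch).

    At each step, pick the expert whose cumulative payoff plus a fresh
    perturbation is largest; a fixed seed makes two runs share randomness.
    """
    rng = random.Random(seed)
    cum = [0.0, 0.0]
    choices = []
    for p in payoffs:
        noise = [rng.expovariate(eta), rng.expovariate(eta)]
        choices.append(0 if cum[0] + noise[0] >= cum[1] + noise[1] else 1)
        cum[0] += p[0]
        cum[1] += p[1]
    return choices

# Original payoffs vs. the "difference-only" rewrite from the proof of Lemma 1:
orig = [(3, 1), (0, 2), (5, 5), (1, 4)]
diff = [(2, 0), (0, 2), (0, 0), (0, 3)]   # keeps p[1]-p[2], zeroes the rest
assert fpl_choices(orig) == fpl_choices(diff)
```

Since both sequences induce identical cumulative differences and the two runs share the same perturbations, the decisions coincide step by step, which is exactly the invariance the proof uses.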

## Appendix C Site Prediction: Missing Proofs

### C.1 Generalizing DFPL to n experts

In this section, we generalize our algorithm for two experts to handle n experts. Lemma 2 showed that algorithm DFPL, in the setting of two experts, guarantees that the expected regret is at most c₀√(ℓ^{5/6}T), where c₀ is a universal constant.

Our generalization follows a recursive approach. Suppose that some algorithm A can achieve a given expected regret with n experts; we show that we can construct an algorithm A′ with 2n experts whose expected regret is larger by at most an additive c₀√(ℓ^{5/6}T), as follows: we run two independent copies of A (say A1 and A2) such that A1 only deals with the first n experts and A2 with the remaining experts n+1, …, 2n. Our algorithm A′ then treats A1 and A2 as two experts and runs the DFPL algorithm (Section 3.1) over these two experts. The analysis of the regret is straightforward:

Let the regret of A1 be R1 and the regret of A2 be R2. We have

$$\mathbb{E}[\mathrm{Payoff}(A_1)] \ge \max_{i\in[n]}\sum_{t\le T} p_t[i] - \mathbb{E}[R_1] \quad\text{and}\quad \mathbb{E}[\mathrm{Payoff}(A_2)] \ge \max_{i\in\{n+1,\ldots,2n\}}\sum_{t\le T} p_t[i] - \mathbb{E}[R_2].$$

By assumption, both E[R1] and E[R2] are bounded by the regret guarantee of A on n experts.

Next, we can see that

$$\mathbb{E}\big[\mathrm{Payoff}(A')\mid \mathrm{Payoff}(A_1),\mathrm{Payoff}(A_2)\big] \ge \max\{\mathrm{Payoff}(A_1),\mathrm{Payoff}(A_2)\} - c_0\sqrt{\ell^{5/6}T}.$$

We can use the above expression to conclude (taking expectations) that

$$\mathbb{E}[\mathrm{Payoff}(A')] \ge \mathbb{E}[\mathrm{Payoff}(A_1)] - c_0\sqrt{\ell^{5/6}T}, \qquad \mathbb{E}[\mathrm{Payoff}(A')] \ge \mathbb{E}[\mathrm{Payoff}(A_2)] - c_0\sqrt{\ell^{5/6}T}.$$

But using the above two inequalities, together with the bounds on E[R1] and E[R2], we can conclude that

$$\mathbb{E}[\mathrm{Payoff}(A')] \ge \max_{i\in[2n]}\sum_{t\le T} p_t[i] - c_0(\log(n)+1)\sqrt{\ell^{5/6}T}.$$

This immediately implies that (starting from the base case of two experts, where DFPL works) this recursive approach results in an algorithm for n experts that achieves regret O(log(n)√(ℓ^{5/6}T)). In order to analyze the communication, we observe that in order to implement the algorithm correctly, when a copy of DFPL (at some depth in the recursion) decides to communicate at each time step of a block, that communication must be accounted for on the whole block. There are at most n copies of DFPL running (the depth of the recursion is log(n)). However, the corresponding term in the communication bound is lower than the term arising from blocks where communication occurs only at the beginning and end of a block. Thus, the expected communication (in terms of the number of messages) is asymptotically the same as in the case of two experts. If we count communication complexity as the cost of sending one real number, instead of one message, then the total communication cost is correspondingly larger.
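The recursive construction can be sketched as follows. For illustration only, we substitute a simple multiplicative-weights rule for the paper’s DFPL combiner and ignore all communication aspects; the function and parameter names are our own:

```python
import math
import random

def best_expert_recursive(payoffs, experts, rng, eta=0.5):
    """Recursive reduction from n experts to a two-"expert" combiner (sketch).

    Splits `experts` in half, solves each half recursively, then runs a simple
    multiplicative-weights rule (a stand-in for the paper's DFPL combiner)
    over the two halves. Returns the per-step expert choices.
    """
    if len(experts) == 1:
        return [experts[0]] * len(payoffs)
    mid = len(experts) // 2
    left = best_expert_recursive(payoffs, experts[:mid], rng, eta)
    right = best_expert_recursive(payoffs, experts[mid:], rng, eta)
    w = [1.0, 1.0]
    choices = []
    for t, p in enumerate(payoffs):
        pick = left[t] if rng.random() < w[0] / (w[0] + w[1]) else right[t]
        choices.append(pick)
        w[0] *= math.exp(eta * p[left[t]])    # reward the half that did well
        w[1] *= math.exp(eta * p[right[t]])
    return choices

rng = random.Random(1)
T = 200
payoffs = [(1, 0, 0, 0)] * T                  # expert 0 is always best
choices = best_expert_recursive(payoffs, [0, 1, 2, 3], rng)
reward = sum(payoffs[t][c] for t, c in enumerate(choices))
assert reward > 0.8 * T                       # quickly tracks the best expert
```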

### C.2 Lower Bounds

#### C.2.1 No Communication Protocol

In the site-prediction setting, we show that any algorithm that uses no communication must incur regret Ω(√(kT)) on some sequence. The proof is quite simple, but does not follow directly from the lower bound for the non-distributed case: although the sites each run a copy of some low-regret algorithm, the best expert might be different across the sites. We only consider the case of two experts, since we are more interested in the dependence on k and T.

###### Lemma 3.

If a no-communication protocol is used in the site-prediction model, the expected regret achieved by any algorithm is at least Ω(√(kT)).

###### Proof.

The oblivious adversary does the following: divide the T time steps into T/k blocks of size k. For each block, toss a fair coin and set the payoff vector to (1, 0) for heads or (0, 1) for tails throughout the block, and assign each query in a block to one site (say, in a cyclic fashion). Note that the expected reward of any algorithm that does not use any communication is T/2, because no site at any time can perform better than random guessing. But the standard analysis shows that for a sequence constructed as above, E[max_{i=1,2} ∑_{t=1}^T p_t[i]] ≥ T/2 + Ω(√(kT)). ∎
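A small simulation (illustrative only; parameters are our own) confirms that the gap between the best expert’s payoff and T/2 grows like √(kT) under this block adversary:

```python
import random
import statistics

def block_adversary_gap(T, k, runs=2000, seed=0):
    """Estimate E[max_i sum_t p_t[i]] - T/2 for the block adversary of Lemma 3.

    Each of the T/k blocks is (1,0) or (0,1) for all its k steps, by a fair coin,
    so sum_t (p_t[1]-p_t[2]) is k times a random walk with T/k steps, and
    max_i sum_t p_t[i] = T/2 + |walk|/2.
    """
    rng = random.Random(seed)
    m = T // k
    gaps = []
    for _ in range(runs):
        walk = sum(k if rng.random() < 0.5 else -k for _ in range(m))
        gaps.append(abs(walk) / 2.0)
    return statistics.mean(gaps)

# Quadrupling k (with T fixed) should roughly double the gap: sqrt(kT) scaling.
g1 = block_adversary_gap(T=1600, k=4)
g2 = block_adversary_gap(T=1600, k=16)
assert g2 > 1.5 * g1
```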

#### C.2.2 Lower Bound using Distributed Counter

This section contains the proof of Theorem 4.

###### Proof of Theorem 4.

Part 1: The oblivious adversary decides to use only some of the k sites. The adversary divides the input sequence into blocks and, for each block, tosses an unbiased coin and sets the payoff vector according to whether the coin toss resulted in heads or tails. The estimate available at the end of the most recent completed block is a valid (approximate) value of the cumulative payoff of the corresponding action. However, since the payoff vectors across the blocks are completely uncorrelated and each site makes a decision only once in each block, the expected reward at any time step is 1/2, and the overall expected reward is T/2.

Note that it is easy to show, using standard techniques, that the expected payoff of the best action exceeds T/2 by the stated amount. Thus the expected regret is at least of the order claimed in the theorem.

Part 2: Now consider the input sequence that is all 1s, divided into blocks. For each block, the oblivious adversary chooses a random permutation of the sites and allocates the queries to the sites in that order. Note that when a site receives a query, it is required to hold an approximate value of the current count. Suppose there was no communication since this site last received a query; then its estimate has not changed since that time. Now, depending on where in the permutation the site falls, it may be required to output a value in any of several disjoint intervals, each of which is equally probable. Thus, with constant probability, in the absence of any communication, the site fails to have a correct approximate estimate.

If, on the other hand, every site communicates at least once every time it receives a query, then the total communication is at least proportional to the number of queries. ∎

## Appendix D Proof of Theorem 5

###### Proof of Theorem 5.

To prove Theorem 5, we construct a set of reward sequences and show that any FPL-like algorithm (as described in Section 4) will have large regret on at least one of these sequences unless the communication is essentially linear in T.

Before we start the actual analysis, we need to introduce some more notation. First, recall that there is an upper bound on the amount of communication allowed in the protocol. We shall focus on reward sequences where at any time-step exactly one of the experts receives payoff 1 and the other expert receives payoff 0, i.e. p_t ∈ {(1,0),(0,1)} for any t. Thus, we note that the payoff vectors p_t and the auxiliary functions defined from them all encode equivalent information regarding payoffs as a function of time.

Suppose we are given an algorithm that achieves optimal regret under the given communication bound. Its random coin tosses may be thought of as a string fixed ahead of time. Fix a specific input sequence p_1, …, p_T, and consider the time-steps at which communication occurs. We note that each such time-step may depend on the prefix of the random string that the algorithm observes up to that time-step, and may also depend on the payoff vectors seen so far.

Next, we describe the set of reward sequences used to “fool” the algorithm, built around a parameter that will be fixed later. These payoff sequences are constructed as follows:

• The base sequence: the first expert’s payoffs consist of alternating runs of equal length, a run of consecutive 1’s followed by a run of consecutive 0’s, and so on (the second expert receives the complementary payoffs); i.e., the payoff vector within a run depends only on the parity of that run’s index. Furthermore, we assume that T is an integer multiple of the run length, so that one fixed expert is eventually the better expert.

• Deviating sequences with even index: in such a payoff sequence, the payoff vectors for an initial segment of time-steps are identical to those in the base sequence. For the rest of the time-steps the payoff vector is always (1, 0), i.e. the first expert always receives a unit payoff. Thus, for sequences of this form with an even index, expert 1 will be the better expert.

• Deviating sequences with odd index: in this payoff sequence, the payoff vectors for the first