1 Introduction
Distributed machine learning has received an increasing amount of attention in this “big data” era
[16]. The most common use case of distributed learning is when the data cannot fit into a single machine, or when one wants to speed up training by utilizing the parallel computation of multiple machines [21, 25, 26]. In these cases, one can usually distribute the data freely across entities, and an evenly distributed partition is a natural choice.

In this paper, we consider a different setting where the data is inherently distributed across different locations or entities. Examples of this scenario include scientific data gathered by different teams, or customer information of a multinational corporation obtained in different countries. The goal is to design an efficient learning algorithm with a low generalization error over the union of the data. Note that the distributions of the data from different sources may be very different. Therefore, to deal with the worst-case situation, we assume the data can be adversarially partitioned. This scenario has been studied for different tasks, such as supervised learning
[1, 4, 8], unsupervised learning [2, 3], and optimization [6, 15].

Traditional machine learning algorithms often only care about sample complexity and computational complexity. However, since the bottleneck in the distributed setting is often the communication between machines [1], the theoretical analysis in this paper will focus on communication complexity. A baseline approach in this setting would be to uniformly sample examples from each entity and perform centralized learning at the center. By standard VC theory, a sample of size $\tilde{O}(d/\epsilon^2)$ is sufficient, where $d$ is the VC dimension of the concept class and $\epsilon$ the target error. The communication complexity of this approach is thus $\tilde{O}(d/\epsilon^2)$ examples.
More advanced algorithms with better communication complexity have been proposed in recent work [1, 8]. For example, [1] proposes a generic distributed boosting algorithm whose communication has only logarithmic dependence on $1/\epsilon$ for any concept class. Unfortunately, their method only works in the standard realizable PAC-learning setting, where the data is noiseless and can be perfectly classified by a function in the hypothesis set. This is because many boosting algorithms are vulnerable to noise
[9, 22]. The realizable case is often unrealistic in real-world problems. Therefore, we consider the more general agnostic learning setting [20], where there is no assumption on the target function. Since it is impossible to achieve an arbitrarily small error rate $\epsilon$, the goal in this setting is to find a hypothesis with error rate close to $\mathrm{OPT}$, the minimum error rate achievable within the hypothesis set $\mathcal{H}$. The error bound is often of the form $c \cdot \mathrm{OPT} + \epsilon$. Balcan et al. [1] propose an algorithm based on the robust generalized halving algorithm; however, it works only for a finite hypothesis set and is computationally inefficient.

We propose a new distributed boosting algorithm that works in the agnostic learning setting. While our algorithm can handle this much more difficult and more realistic scenario, it enjoys the same communication complexity as in [1], which is logarithmic in $1/\epsilon$ and exponentially better than the natural baselines. The algorithm is computationally efficient and works for any concept class with a finite VC dimension. The key insight, inspired by [1], is that a constant (independent of $\epsilon$) number of examples suffices to learn a weak hypothesis; thus, if the boosting algorithm needs only $O(\log(1/\epsilon))$ iterations, we obtain the desired result.
A key challenge in this approach is that most agnostic boosting algorithms either have poor error bound guarantees or require too many iterations. The first agnostic boosting algorithm was proposed in [5]. Although its number of iterations is asymptotically optimal, its bound on the final error rate is much weaker than the form $O(\mathrm{OPT}) + \epsilon$. Some subsequent works [19, 13] significantly improve the bound on the error rate. However, their algorithms all require a number of iterations polynomial in $1/\epsilon$, which can in turn result in communication polynomial in $1/\epsilon$ in the distributed setting. Fortunately, we identify a very special boosting algorithm [18] that runs in $O(\log(1/\epsilon)/\gamma^2)$ iterations. This algorithm was analyzed in the realizable case in the original paper, but has later been noted to work in the agnostic setting as well [10]. (In the prior version of this paper, which appeared in AISTATS 2016, we claimed that we were the first to show its guarantees in the agnostic setting. We thank the author of [10] for the correction that, although not explicitly proved or stated as a theorem, the feasibility of the algorithm in the agnostic setting was already discussed in [10].) We show how to adapt it to the distributed setting and obtain a communication-efficient distributed learning algorithm with a good agnostic error bound. Our main contributions are summarized as follows.

We identify a centralized agnostic boosting algorithm and show that it can be elegantly adapted to the distributed setting. This results in the first algorithm that is both computationally and communication efficient for learning a general concept class in the distributed agnostic learning setting.

Our proposed algorithm, being a boosting-based approach, is flexible in that it can be used with various weak learners. Furthermore, the weak learner only needs to work in the traditional centralized setting rather than in the more challenging distributed setting. This makes it much easier to design new algorithms for different concept classes in the distributed setting.

We confirm our theoretical results by empirically comparing our algorithm to the existing distributed boosting algorithm [1]. Ours does much better on the synthetic dataset and achieves promising results on real-world datasets as well.
2 Problem Setup
We first introduce agnostic learning as a special case of the general statistical learning problem. Then, we discuss the extension of the problem to the distributed setting, where the data is adversarially partitioned.
2.1 Statistical learning problem
In statistical learning, we have access to a sampling oracle according to some probability distribution $D$ over $\mathcal{X} \times \{-1, +1\}$. The goal of a learning algorithm is to output a hypothesis $h$ with a low error rate with respect to $D$, defined as $\mathrm{err}_D(h) = \Pr_{(x,y)\sim D}[h(x) \neq y]$. Often, we compare the error rate to the minimum achievable value within a hypothesis set $\mathcal{H}$, denoted by $\mathrm{OPT} = \min_{h \in \mathcal{H}} \mathrm{err}_D(h)$. More precisely, a common error bound is of the following form:

$$\mathrm{err}_D(h) \le c \cdot \mathrm{OPT} + \epsilon \qquad (1)$$
for some constant $c \ge 1$ and an arbitrary error parameter $\epsilon > 0$.
Many efficient learning algorithms have been proposed for the realizable case, where the target function is in $\mathcal{H}$ and thus $\mathrm{OPT} = 0$. In this paper, we consider the more general case where we make no assumption on the value of $\mathrm{OPT}$. This is often called the agnostic learning setting [20]. Ideally, we want the constant $c$ in the bound to be as close to one as possible. However, for some hypothesis sets $\mathcal{H}$, achieving such a bound with $c = 1$ is known to be NP-hard [11].
2.2 Extension to the distributed setting
In this work, we consider the agnostic learning problem in the distributed learning framework proposed by [1]. In this framework, we have $k$ entities. Each entity $i$ has access to a sampling oracle according to a distribution $D_i$ over $\mathcal{X} \times \{-1, +1\}$. There is also a center which can communicate with the $k$ entities and acts as a coordinator. The goal is to learn a good hypothesis with respect to the overall distribution $D$ (the mixture of the $D_i$) without too much communication among the entities. It is convenient to measure communication in words; for example, a $d$-dimensional vector counts as $d$ words.

Main goal. The problem we want to solve in this paper is to design an algorithm that achieves error bound (1) for a general concept class $\mathcal{H}$, with communication complexity depending only logarithmically on $1/\epsilon$.
3 Distributed agnostic boosting
In this work, we show a distributed boosting algorithm for any concept class with a finite VC dimension $d$. In the realizable PAC setting, the boosting algorithm is assumed to have access to a weak learner that, under any distribution, finds a hypothesis with error rate at most $\frac{1}{2} - \gamma$. This assumption is unrealistic in the agnostic setting, since even the best hypothesis in the hypothesis set can perform poorly. Instead, following the setting of [5], the boosting algorithm is assumed to have access to a weak agnostic learner, defined as follows.
Definition 1.
A $(\beta, \gamma)$-weak agnostic learner, given any probability distribution $D$, will return a hypothesis $h$ with error rate

$$\mathrm{err}_D(h) \le \beta \cdot \mathrm{OPT}(D) + \frac{1}{2} - \gamma.$$
A detailed discussion of the existence of such weak learners can be found in [5]. Since an error rate of $1/2$ can be trivially achieved by random guessing, in order for the weak learner to convey meaningful information, we assume $\gamma > 0$. Some prior works use different definitions; for example, [17] uses a definition of weak learner that is stronger than ours, in the sense that a weak learner in that paper implies a weak learner in our sense, so our results still hold under their definition. Below we show an efficient agnostic boosting algorithm in the centralized setting.
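As a concrete example, the decision stumps used as weak learners in our experiments (Section 4) can be trained on a weighted sample by exhaustive search. The following sketch is our illustration, not code from the paper:

```python
import numpy as np

def best_stump(X, y, w):
    """Exhaustive weighted decision stump.

    X : (m, d) feature matrix; y : labels in {-1, +1};
    w : nonnegative example weights summing to 1.
    Returns (feature, threshold, sign, weighted_error) of the stump
    predicting sign * (+1 if x[feature] <= threshold else -1).
    """
    m, d = X.shape
    best = (0, X[0, 0], 1, 1.0)
    for j in range(d):
        for t in np.unique(X[:, j]):
            pred = np.where(X[:, j] <= t, 1.0, -1.0)
            err = w[pred != y].sum()
            # flipping the stump's sign turns error err into 1 - err
            for sign, e in ((1, err), (-1, 1.0 - err)):
                if e < best[3]:
                    best = (j, t, sign, e)
    return best
```

A boosting round would call `best_stump` on the current distribution over the sample; over $\pm 1$ features each column has only two candidate thresholds, so the search is cheap.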
3.1 Agnostic boosting: centralized version
The main reason why many boosting algorithms (including AdaBoost [12] and weight-based boosting [23, 24]) fail in the agnostic setting is that they tend to update the example weights aggressively and may end up putting too much weight on noisy examples.
To overcome this, we consider a smoothed boosting algorithm [18], shown in Algorithm 1. This algorithm uses at most $O(\log(1/\epsilon)/\gamma^2)$ iterations and enjoys a nice "smoothness" property, which has been shown to be helpful in the agnostic setting [13]. The algorithm was originally analyzed in the realizable case but has later been noted to work in the agnostic setting as well [10]. Below, for completeness, we present analyses of the algorithm in both the realizable and agnostic settings.
The boosting algorithm adjusts the example weights using the standard multiplicative weight update rule. The main difference is that it performs an additional Bregman projection of the current example weight distribution onto a convex set after each boosting iteration. Bregman projection is a general technique that finds the point in the feasible set with the smallest "distance" to the original point in terms of a Bregman divergence. Here we use a particular Bregman divergence, the relative entropy $\mathrm{RE}(p \,\|\, q) = \sum_i p_i \ln \frac{p_i}{q_i}$ between two distributions $p$ and $q$. To ensure that the boosting algorithm always generates a "smooth" distribution, we set the feasible set to be the set of all smooth distributions, defined as follows.
Definition 2.
A distribution $P$ on a sample $S$ of size $m$ is called $\kappa$-smooth if $P(i) \le \frac{\kappa}{m}$ for all $i \in S$.
It is easy to verify that the set $\mathcal{P}$ of all $(1/\epsilon)$-smooth distributions is convex. The complete boosting algorithm is shown in Algorithm 1 and its theoretical guarantee in Theorem 1. The proof, included in the appendix, is similar to the one in [18], except that they use real-valued weak learners, whereas here we only consider binary hypotheses for simplicity.

Call the weak learner with distribution $D_t$ and obtain a hypothesis $h_t$

Update the example weights

$$D_{t+1}(i) = \frac{D_t(i)\,(1-\gamma)^{\ell_t(i)}}{Z_t},$$

where $\ell_t(i) = \mathbf{1}[h_t(x_i) = y_i]$ and $Z_t$ is the normalization factor.

Project $D_{t+1}$ onto the feasible set $\mathcal{P}$ of $(1/\epsilon)$-smooth distributions with respect to relative entropy
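The multiplicative update of step 2 can be sketched as follows (the projection of step 3 is discussed in Section 3.2; the function name is ours):

```python
import numpy as np

def boosting_round(D, correct, gamma):
    """One round of the smoothed boosting update (step 2).

    D       : current distribution over the m examples
    correct : boolean array, True where the weak hypothesis h_t is right
    gamma   : the weak learner's advantage

    Correctly classified examples are down-weighted by (1 - gamma) and
    the result is renormalized; the Bregman projection onto smooth
    distributions (step 3) is applied afterwards.
    """
    w = D * np.where(correct, 1.0 - gamma, 1.0)
    return w / w.sum()
```

Note that, unlike AdaBoost's unbounded up-weighting of mistakes, this rule only shrinks weights by a bounded factor, which is what the smoothness analysis relies on.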
Theorem 1.
Given a sample $S$ of size $m$ and access to a $\gamma$-weak learner, Algorithm 1 makes at most $O(\log(1/\epsilon)/\gamma^2)$ calls to the weak learner with $(1/\epsilon)$-smooth distributions and achieves error rate $\epsilon$ on $S$.
Note that Theorem 1 does not explicitly assume the realizable case. In other words, if we have a $\gamma$-weak learner in the agnostic setting, we can achieve the same guarantee. However, in the agnostic setting, we only have access to a weak agnostic learner, which is a much weaker and more realistic assumption. The next theorem shows the error bound we obtain under this usual assumption.
Theorem 2.
Given a sample $S$ and access to a $(\beta, \gamma)$-weak agnostic learner, Algorithm 1 uses at most $O(\log(1/\epsilon)/\gamma^2)$ iterations and achieves an error rate of $\frac{2\beta}{\gamma}\,\mathrm{OPT} + \epsilon$ on $S$, where $\mathrm{OPT}$ is the optimal error rate on $S$ achievable using the hypothesis class $\mathcal{H}$.
Proof.
The idea is to show that as long as the boosting algorithm always generates $(1/\epsilon)$-smooth distributions, the weak agnostic learner is actually a $\gamma'$-weak learner for some $\gamma' > 0$, i.e., it achieves error rate at most $\frac{1}{2} - \gamma'$ under any $(1/\epsilon)$-smooth distribution. In each iteration $t$, the weak agnostic learner, given the sample with distribution $D_t$, returns a hypothesis $h_t$ such that

$$\mathrm{err}_{D_t}(h_t) \le \beta \cdot \mathrm{OPT}(D_t) + \frac{1}{2} - \gamma \le \beta \cdot \frac{\mathrm{OPT}}{\epsilon} + \frac{1}{2} - \gamma.$$

The second inequality utilizes the smoothness of $D_t$. The reason is that if $h^*$ is the optimal hypothesis on $S$, we have

$$\mathrm{OPT}(D_t) \le \mathrm{err}_{D_t}(h^*) = \sum_{i:\, h^*(x_i) \neq y_i} D_t(i) \le \mathrm{OPT} \cdot m \cdot \frac{1}{\epsilon m} = \frac{\mathrm{OPT}}{\epsilon}.$$

Let $\gamma' = \gamma - \beta \cdot \mathrm{OPT}/\epsilon$, or equivalently $\mathrm{OPT} = \epsilon(\gamma - \gamma')/\beta$. Then, if $\mathrm{OPT} \le \frac{\gamma \epsilon}{2\beta}$, we have $\gamma' \ge \gamma/2$. Therefore, we can apply Theorem 1, and Algorithm 1 achieves error rate $\epsilon$ on $S$ using $O(\log(1/\epsilon)/\gamma^2)$ iterations. Alternatively, for general $\mathrm{OPT}$, it achieves error rate $\frac{2\beta}{\gamma}\,\mathrm{OPT} + \epsilon$ using $O(\log(1/\epsilon)/\gamma^2)$ iterations. ∎
Next, we show how to adapt this algorithm to the distributed setting.
3.2 Agnostic boosting: distributed version
The technique for adapting a boosting algorithm to the distributed setting is inspired by [1]. They claim that any weight-based boosting algorithm can be turned into a distributed boosting algorithm with communication complexity that depends linearly on the number of iterations of the original boosting algorithm. However, their result is not directly applicable to our boosting algorithm because of the additional projection step. We describe our distributed boosting algorithm by showing how to simulate the three steps in each iteration of Algorithm 1 in the distributed setting with a small amount of communication. Then, since there are at most $O(\log(1/\epsilon)/\gamma^2)$ iterations, the desired result follows.
In step 1, in order to obtain a weak hypothesis with advantage $\gamma/2$ (we use $\gamma/2$ instead of $\gamma$ for convenience; this only affects the constant terms), the center calls the weak agnostic learner on a dataset sampled from $D_t$. The sampling procedure is as follows. Each entity first sends its sum of weights to the center. Then, the center samples $O(d/\gamma^2)$ examples in total across the entities, proportionally to their sums of weights. By standard VC theory, the error rate of any hypothesis on the sample is within $O(\gamma)$ of its true error rate with respect to the underlying distribution, with high probability. It is thus sufficient to find a hypothesis with error within $O(\gamma)$ of that of the best hypothesis on the sample, which can be done thanks to the assumed weak learner.
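The center's side of this sampling step can be sketched as follows (our illustration; each entity communicates only a single number, its weight sum):

```python
import numpy as np

def samples_per_entity(weight_sums, n_total, seed=0):
    """Decide how many of the n_total examples to request from each
    entity, multinomially with probability proportional to the
    entity's reported sum of example weights.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(weight_sums, dtype=float)
    return rng.multinomial(n_total, p / p.sum())
```

Each entity $i$ then locally draws its assigned number of examples according to its internal weights and sends them to the center, so the examples themselves are the dominant communication cost of this step.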
Step 2 is relatively straightforward. The center broadcasts $h_t$, and each entity updates its own internal weights independently. Each entity then sends the sum of its internal weights to the center for the calculation of the normalization factor $Z_t$. The communication in this step consists of sending $h_t$ and $O(k)$ numbers. What is left is to show that the projection in step 3 can be done in a communication-efficient way. As shown in [14], the projection under relative entropy onto $\mathcal{P}$, the set of all $(1/\epsilon)$-smooth distributions, can be done by the following simple algorithm.
For a fixed index $j$, we first clip the $j$ largest coordinates of the distribution to the cap $\frac{1}{\epsilon m}$, and then rescale the remaining coordinates so that the distribution sums to one. We find the least index $j$ such that the resulting distribution is in $\mathcal{P}$, i.e., all coordinates are at most $\frac{1}{\epsilon m}$. A naive algorithm that first sorts the coordinates takes $O(m \log m)$ time, but it is communication-inefficient.
Fortunately, [14] also proposes a more advanced algorithm based on recursively finding the median. The idea is to use the median as the threshold, which corresponds to a potential index $j$, namely the number of coordinates larger than the median. We then use a binary search to find the least such index $j$. The distributed version of the algorithm is shown in Algorithm 2.
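Ignoring communication, the clip-and-rescale projection itself can be sketched as follows, using a linear scan over the candidate index instead of the binary search of Algorithm 2 (naming is ours; `cap` plays the role of $\frac{1}{\epsilon m}$):

```python
import numpy as np

def project_smooth(p, cap):
    """Relative-entropy projection of distribution p onto
    {q : q_i <= cap, sum(q) = 1}: cap the j largest coordinates and
    rescale the rest, for the least feasible j (Herbster & Warmuth).
    """
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)                 # largest coordinates first
    for j in range(len(p) + 1):
        capped, rest = order[:j], order[j:]
        remaining = 1.0 - j * cap          # probability mass left for the rest
        if remaining < -1e-12:
            break
        if rest.size == 0:
            if abs(remaining) > 1e-12:
                break
            return np.full_like(p, cap)
        scale = remaining / p[rest].sum()
        if p[rest].max() * scale <= cap + 1e-12:
            q = np.empty_like(p)
            q[capped] = cap
            q[rest] = p[rest] * scale
            return q
    raise ValueError("infeasible: cap * len(p) < 1")
```

If the input is already smooth, the scan stops at $j = 0$ and returns it unchanged, matching the fact that the projection of a feasible point is itself.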
Theorem 3.
Algorithm 2 projects an $m$-dimensional distribution onto the set of all $(1/\epsilon)$-smooth distributions with $O((k + \log m)\log m)$ words of total communication.
Proof.
Since Algorithm 2 is a direct adaptation of the centralized projection algorithm in [14], we omit the proof of its correctness. Because we use a binary search over possible thresholds, the algorithm runs for at most $O(\log m)$ iterations. Therefore, it suffices to show that the communication complexity of finding the median is at most $O(k + \log m)$ words. This can be done by the iterative procedure shown in Algorithm 3. Each entity first sends its own median to the center. The center identifies the maximum and minimum local medians, denoted $w_{\max}$ and $w_{\min}$, respectively. The global median must lie between $w_{\min}$ and $w_{\max}$, and removing the same number of elements greater than or equal to $w_{\max}$ and less than $w_{\min}$ does not change the median. Therefore, the center can notify the two corresponding entities and let them remove the same number of elements. At least one entity reduces its size by half, so the procedure stops after $O(\log m)$ iterations. Note that, except for the first round, we only need to communicate the updated medians of two entities in each round, so the overall communication complexity is $O(k + \log m)$ words.
In practice, it is often easier and more efficient to use a quickselect-based distributed algorithm to find the median. The idea is to randomly select and broadcast a weight in each iteration, which in expectation removes half of the remaining median candidates. This approach achieves the same communication complexity in expectation. ∎
The complete distributed agnostic boosting algorithm is shown in Algorithm 4. We summarize our theoretical results in the next theorem.
Theorem 4.
Given access to a $(\beta, \gamma)$-weak agnostic learner, Algorithm 4 achieves error rate $\frac{2\beta}{\gamma}\,\mathrm{OPT} + \epsilon$ using at most $O(\log(1/\epsilon)/\gamma^2)$ rounds, each involving $O(d/\gamma^2)$ examples and an additional $O((k + \log m)\log m)$ words of communication.
Proof.
The boosting algorithm starts by drawing a sample $S$ of total size $m$ across the entities, without communicating the examples. If $S$ were a centralized dataset, then by Theorem 2 we know that Algorithm 1 achieves error rate $\frac{2\beta}{\gamma}\,\mathrm{OPT}_S + \epsilon$ on $S$ using $O(\log(1/\epsilon)/\gamma^2)$ iterations. We have shown that Algorithm 4 is a correct simulation of Algorithm 1 in the distributed setting, and thus we achieve the same error bound on $S$. The number of communication rounds is the same as the number of iterations of the boosting algorithm. In each round, the communication comprises $O(d/\gamma^2)$ examples for finding the weak hypothesis, the broadcast of the hypothesis and a few numbers, and $O((k + \log m)\log m)$ words for the distributed Bregman projection.
So far we only have an error bound on $S$. To obtain the generalization bound, note that with $m = \tilde{O}(d/\epsilon^2)$ and by the standard VC-dimension argument, with high probability $\mathrm{OPT}_S \le \mathrm{OPT} + \epsilon$, and the generalization error of our final hypothesis deviates from its empirical error by at most $\epsilon$, which completes the proof with the desired generalization error bound. ∎
4 Experiments
In this section, we compare the empirical performance of the proposed distributed boosting algorithm with two other algorithms on synthetic and real-world datasets. The first is distributed AdaBoost [1], which is similar to our algorithm but without the projection step. The second is the distributed logistic regression algorithm available in the MPI implementation of the Liblinear package [27]; we choose it as a comparison to a non-boosting approach. Note that Liblinear is a highly optimized package while our implementation is not, so the comparison in terms of speed is not entirely fair. However, we show that our approach, grounded in a rigorous framework, is comparable to this leading method in practice.

4.1 Experiment setup
All three algorithms are implemented in C using MPI, and all the experiments are run on Amazon EC2 with 16 m3.large machines. The data is uniformly partitioned across the 16 machines. All results are averaged over 10 independent trials. Logistic regression is a deterministic algorithm, so we do not report the standard deviation of its error rate; we still run it 10 times to obtain the average running time. Since each algorithm has a different set of parameters, for fairness we do not tune them. For the two boosting algorithms, we use decision stumps as our weak learners and use the same parameter values in all experiments. For logistic regression, we use the default parameters.

4.2 Synthetic dataset
We use the synthetic dataset from [22]. This dataset has the interesting theoretical property that, although it is linearly separable, after randomly flipping a tiny fraction of the labels, all convex potential boosting algorithms, including AdaBoost, fail to learn well. A random example is generated as follows. The label $y$ is chosen uniformly at random from $\{-1, +1\}$. The feature vector $x \in \{-1, +1\}^{21}$ is sampled from a mixture distribution: 1) with probability $1/4$, set all coordinates equal to $y$; 2) with probability $1/4$, set the first 11 coordinates to $y$ and the last 10 to $-y$; 3) with probability $1/2$, randomly set 5 coordinates from the first 11 and 6 coordinates from the last 10 to $y$, and set the remaining coordinates to $-y$.

We generate 1,600,000 examples in total for training on 16 machines and test on a separate set of size 100,000. The results are shown in Table 1. One can see that our approach (Dist.SmoothBoost) is more resistant to noise than Dist.AdaBoost and significantly outperforms it for noise levels up to 1%. In the high-noise setting (10%), Liblinear performs poorly, while our approach achieves the best error rate.
Table 1: Test error rates (%) on the synthetic dataset.

Noise | Dist.AdaBoost | Dist.SmoothBoost | Liblinear-LR
0.1%  | 11.64 ± 3.82  | 4.28 ± 0.66      | 0.00
1%    | 25.97 ± 1.56  | 13.38 ± 4.66     | 0.00
10%   | 28.04 ± 0.94  | 27.07 ± 1.60     | 37.67
Table 2: Real-world dataset information and test error rates (%).

Dataset | examples  | features | Dist.AdaBoost | Dist.SmoothBoost | Liblinear-LR
Adult   | 48,842    | 123      | 15.71 ± 0.16  | 15.07 ± 2.32     | 15.36
Ijcnn1  | 141,691   | 22       | 5.90 ± 0.10   | 4.33 ± 0.18      | 7.57
CodRNA  | 488,565   | 8        | 6.12 ± 0.09   | 6.51 ± 0.11      | 11.79
Covtype | 581,012   | 54       | 24.98 ± 0.22  | 24.68 ± 0.30     | 24.52
Yahoo   | 3,251,378 | 10       | 37.08 ± 0.15  | 36.86 ± 0.27     | 39.15
4.3 Realworld datasets
We run experiments on 5 real-world datasets with sizes ranging from about 50 thousand to over 3 million examples: Adult, Ijcnn1, CodRNA, and Covtype from the LibSVM data repository (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets); Yahoo from the Yahoo! WebScope dataset [7]. The Yahoo dataset is used for predicting whether a user will click a news article on the Yahoo! front page. It contains user click logs and is extremely imbalanced. We trim down this dataset so that the numbers of positive and negative examples are the same. The detailed information of the datasets is summarized in Table 2. Each dataset is randomly split into 4/5 for the training set and 1/5 for the testing set.
The average error rate and the total running time are summarized in Table 2 and Table 3, respectively. The bold entries indicate the best error rate. Our approach outperforms the other two on 3 datasets and performs competitively on the other 2. In terms of running time, Liblinear is the fastest on all datasets. However, the communication of our algorithm depends only on the dimension $d$, so even for the largest dataset (Yahoo), it can still finish within 4 seconds. Therefore, our algorithm is suitable for many real-world situations where the number of examples is much larger than the dimension of the data. Furthermore, our algorithm can be used with more advanced weak learners, such as distributed logistic regression, to further reduce the running time.
Table 3: Total running time (seconds).

Dataset | Dist.AdaBoost | Dist.SmoothBoost | Liblinear-LR
Adult   | 5.02          | 15.54            | 0.06
Ijcnn1  | 0.76          | 9.19             | 0.10
CodRNA  | 1.08          | 10.11            | 0.12
Covtype | 3.71          | 6.48             | 0.31
Yahoo   | 3.37          | 3.79             | 1.37

5 Conclusions
We propose the first distributed boosting algorithm that enjoys strong performance guarantees, being simultaneously noise tolerant, communication efficient, and computationally efficient; furthermore, it is quite flexible in that it can be used with a variety of weak learners. This improves over the prior works [1, 8], which were either communication efficient only in noise-free scenarios or computationally prohibitive. While enjoying nice theoretical guarantees, our algorithm also shows promising empirical results on large synthetic and real-world datasets.
Finally, we raise some related open questions. In this work we assumed a star topology, i.e., the center can communicate with all players directly. An interesting open question is to extend our results to general communication topologies. Another concrete open question is reducing the constant in our error bound while maintaining good communication complexity. Finally, our approach uses centralized weak learners for learning general concept classes, so the computation is mostly done at the center. Are there efficient distributed weak learners for some specific concept classes? That could provide a more computation-balanced distributed learning procedure that enjoys strong communication complexity as well.
Acknowledgments
This work was supported in part by NSF grants CCF-1101283, CCF-1451177, CCF-1422910, TWC-1526254, IIS-1217559, IIS-1563816, ONR grant N00014-09-1-0751, and AFOSR grant FA9550-09-1-0538. We also thank Amazon's AWS in Education grant program for providing the Amazon Web Services. We thank Vitaly Feldman for useful discussions and valuable comments.
References
[1] Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. In Proceedings of COLT, 2012.

[2] Maria-Florina Balcan, Steven Ehrlich, and Yingyu Liang. Distributed k-means and k-median clustering on general communication topologies. In Proceedings of NIPS, 2013.

[3] Maria-Florina Balcan, Yingyu Liang, Vandana Kanchanapally, and David Woodruff. Improved distributed principal component analysis. In Proceedings of NIPS, pages 3113–3121, 2014.

[4] Aurélien Bellet, Yingyu Liang, Alireza Bagheri Garakani, Maria-Florina Balcan, and Fei Sha. Distributed Frank-Wolfe algorithm: A unified framework for communication-efficient sparse learning. In Proceedings of SDM, 2015.
[5] Shai Ben-David, Philip M. Long, and Yishay Mansour. Agnostic boosting. In Computational Learning Theory, pages 507–516. Springer, 2001.
 [6] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1–122, 2011.
[7] Wei Chu, Seung-Taek Park, Todd Beaupre, Nitin Motgi, Amit Phadke, Seinjuti Chakraborty, and Joe Zachariah. A case study of behavior-driven conjoint analysis on Yahoo! Front Page Today module. In Proceedings of KDD, pages 1097–1104. ACM, 2009.
 [8] Hal Daumé, Jeff M. Phillips, Avishek Saha, and Suresh Venkatasubramanian. Efficient protocols for distributed classification and optimization. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory, ALT’12, pages 154–168, 2012.

[9] Thomas G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Mach. Learn., 40(2):139–157, 2000.

[10] Vitaly Feldman. Distribution-specific agnostic boosting. In ICS, pages 241–250, 2010.
 [11] Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra, and Yi Wu. Agnostic learning of monomials by halfspaces is hard. In Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’09, pages 385–394, 2009.
[12] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[13] Dmitry Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. J. Mach. Learn. Res., 4:101–117, 2003.
 [14] Mark Herbster and Manfred K. Warmuth. Tracking the best linear predictor. J. Mach. Learn. Res., 1:281–309, 2001.
[15] Martin Jaggi, Virginia Smith, Martin Takáč, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann, and Michael I. Jordan. Communication-efficient distributed dual coordinate ascent. In Proceedings of NIPS, pages 3068–3076, 2014.
[16] M. I. Jordan and T. M. Mitchell. Machine learning: Trends, perspectives, and prospects. Science, 349(6245):255–260, 2015.
 [17] Adam Tauman Kalai, Yishay Mansour, and Elad Verbin. On agnostic boosting and parity learning. In Proceedings of STOC, pages 629–638. ACM, 2008.
[18] Satyen Kale. Boosting and hard-core set constructions: a simplified approach. Electronic Colloquium on Computational Complexity (ECCC), 14(131), 2007.
[19] Varun Kanade and Adam Kalai. Potential-based agnostic boosting. In Proceedings of NIPS, pages 880–888, 2009.
[20] Michael J. Kearns, Robert E. Schapire, and Linda M. Sellie. Toward efficient agnostic learning. Machine Learning, 17(2-3):115–141, 1994.
 [21] Mu Li, Dave Andersen, Alex Smola, and Kai Yu. Parameter server for distributed machine learning. In Proceedings of NIPS, 2014.
[22] Phil Long and Rocco A. Servedio. Random classification noise defeats all convex potential boosters. In Proceedings of ICML, pages 608–615. ACM, 2008.
 [23] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT press, 2012.
[24] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning. Cambridge University Press, 2014.

[25] Yuchen Zhang, John Duchi, Michael Jordan, and Martin Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In Proceedings of NIPS, 2013.

[26] Yuchen Zhang, John C. Duchi, and Martin Wainwright. Communication-efficient algorithms for statistical optimization. In Proceedings of NIPS, 2012.
[27] Yong Zhuang, Wei-Sheng Chin, Yu-Chin Juan, and Chih-Jen Lin. Distributed Newton methods for regularized logistic regression. In Proceedings of PAKDD, pages 690–703, 2015.
Appendix A Proof of Theorem 1
Theorem 1.
Given a sample $S$ of size $m$ and access to a $\gamma$-weak learner, Algorithm 1 makes at most $O(\log(1/\epsilon)/\gamma^2)$ calls to the weak learner with $(1/\epsilon)$-smooth distributions and achieves error rate $\epsilon$ on $S$.
Proof.
The analysis is based on the well-studied problem of online learning with expert advice. In each round $t$, the learner makes a decision based on the advice of $m$ experts. More precisely, the learner chooses a distribution $P_t$ from a convex feasible set $\mathcal{P}$ and follows the advice of the $i$-th expert with probability $P_t(i)$. Then, the losses of the experts' suggested actions are revealed as a vector $\ell_t \in [0, 1]^m$. The expected loss of the learner incurred by using $P_t$ is thus $P_t \cdot \ell_t$. The goal is to achieve a total expected loss not much larger than $\min_{P \in \mathcal{P}} \sum_t P \cdot \ell_t$, the cost of always using the best fixed distribution in $\mathcal{P}$. Steps 2 and 3 of Algorithm 1, also known as the multiplicative weights update algorithm, have the following regret bound [14].
Lemma 1.
For any loss vectors $\ell_1, \dots, \ell_T \in [0, 1]^m$ and any positive integer $T$, the multiplicative weights update algorithm generates distributions $P_1, \dots, P_T \in \mathcal{P}$, where each $P_t$ is computed only based on $\ell_1, \dots, \ell_{t-1}$, such that for any $P \in \mathcal{P}$,

$$\sum_{t=1}^{T} P_t \cdot \ell_t \le (1 + \gamma) \sum_{t=1}^{T} P \cdot \ell_t + \frac{\mathrm{RE}(P \,\|\, P_1)}{\gamma},$$

where, for two distributions $p$ and $q$, the relative entropy is $\mathrm{RE}(p \,\|\, q) = \sum_i p_i \ln \frac{p_i}{q_i}$.
To use the above result in boosting, we think of the examples in the sample $S$ as the set of experts. The learner's task is thus to choose a distribution over the sample in each round. The loss is defined as $\ell_t(i) = \mathbf{1}[h_t(x_i) = y_i]$, where $h_t$ is the hypothesis returned by the weak learner, so that correctly classified examples incur loss one and are down-weighted. To ensure that the boosting algorithm always generates a "smooth" distribution, we set the feasible set $\mathcal{P}$ to be the set of all $(1/\epsilon)$-smooth distributions. Below we show how this can be applied to boosting, as suggested by [18].
By the assumption of the weak learner, we have $\mathrm{err}_{D_t}(h_t) \le \frac{1}{2} - \gamma$, i.e., $P_t \cdot \ell_t \ge \frac{1}{2} + \gamma$ in every round $t$.

After $T$ rounds, we set the final hypothesis $H$ to be the majority vote of $h_1, \dots, h_T$. Let $E$ be the set of examples in $S$ on which $H$ predicts incorrectly. Suppose, for contradiction, that $|E| \ge \epsilon m$. Let $P$ be the uniform distribution on $E$ and zero elsewhere. It is easy to see that $P \in \mathcal{P}$, since $P(i) \le \frac{1}{|E|} \le \frac{1}{\epsilon m}$. For each example $i \in E$, we have $\sum_{t=1}^{T} \ell_t(i) \le \frac{T}{2}$, since the majority vote misclassifies example $i$ and hence at most half of the hypotheses classify it correctly. Therefore, $\sum_{t=1}^{T} P \cdot \ell_t \le \frac{T}{2}$. Furthermore, $\sum_{t=1}^{T} P_t \cdot \ell_t \ge \left(\frac{1}{2} + \gamma\right) T$, and since $P$ is uniform on $E$ while $P_1$ is uniform on $S$, we have $\mathrm{RE}(P \,\|\, P_1) = \ln \frac{m}{|E|} \le \ln \frac{1}{\epsilon}$.

By plugging these facts into the inequality in Lemma 1, we get

$$\left(\frac{1}{2} + \gamma\right) T \le (1 + \gamma)\, \frac{T}{2} + \frac{\ln(1/\epsilon)}{\gamma},$$

which implies $T \le \frac{2 \ln(1/\epsilon)}{\gamma^2}$, a contradiction for $T > \frac{2 \ln(1/\epsilon)}{\gamma^2}$. ∎