Efficient Protocols for Distributed Classification and Optimization

04/16/2012 · by Hal Daumé III, et al.

In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daumé III et al., 2012) proposes a general model that bounds the communication required for learning classifiers while allowing for training error on linearly separable data adversarially distributed across nodes. In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses O(d² log(1/ε)) words of communication to classify distributed data in arbitrary dimension d, ε-optimally. This readily extends to classification over k nodes with O(kd² log(1/ε)) words of communication. Our proposed protocol is simple to implement and is considerably more efficient than the baselines we compare against, as demonstrated by our empirical results. In addition, we illustrate general algorithm design paradigms for doing efficient learning over distributed data. We show how to solve fixed-dimensional and high-dimensional linear programming efficiently in a distributed setting where constraints may be distributed across nodes. Since many learning problems can be viewed as convex optimization problems whose constraints are generated by individual points, this models many typical distributed learning scenarios. Our techniques make use of a novel connection from multipass streaming, as well as adapting the multiplicative-weight-update framework more generally to a distributed setting. As a consequence, our methods extend to the wide range of problems solvable using these techniques.


1 Introduction

In recent years, distributed learning (learning from data spread across multiple locations) has witnessed a lot of research interest (Bekkerman et al., 2011). One of the major challenges in distributed learning is to minimize communication overhead between different parties, each possessing a disjoint subset of the data. Recent work (Daumé III et al., 2012) has proposed a distributed learning model that seeks to minimize communication by carefully choosing the most informative data points at each node. The authors present a number of general sampling-based results as well as a specific two-way protocol that provides a logarithmic bound on communication for the family of linear classifiers in ℝ². Most of their results pertain to two players, but they propose basic extensions for multi-player scenarios. A distinguishing feature of this model is that it is adversarial: apart from linear separability, no distributional or other assumptions are made on the data or on how it is distributed across nodes.

In this paper, we develop this model in two substantial ways. First, we extend the results on linear classification to arbitrary dimensions, in the process presenting a more general algorithm that does not rely on explicit geometric constructions. This approach exploits the multiplicative weight update (MWU) framework (specifically, its use in boosting) and retains desirable theoretical guarantees (communication between nodes that is independent of the data size) while being simple to implement. Moreover, it easily extends to k players with only an extra factor of k in communication over the two-player result, and it improves on the earlier results in two dimensions. A second contribution of this work is to demonstrate how general convex optimization problems (for example, linear programming, SDPs and the like) can be solved efficiently in this distributed framework using ideas from both multipass streaming and the well-known multiplicative weight update method. Since many (batch) learning tasks can be reduced to convex optimization problems, this second contribution opens the door to deploying many other learning tasks in the distributed setting with minimal communication.

Outline.

Our main two-party result is proved in Section 4, based on background in Section 2. Using a new sampling protocol for k players (Section 3), we extend the two-party result to k players in Section 5 and present an empirical study in Section 6. In Section 7 we present our results for distributed optimization.

Related Work.

Existing work in distributed learning mainly focuses either on inferring an accurate global classifier from multiple distributed sub-classifiers learned individually (at respective nodes) or on improving the efficiency of the overall learning protocol. The first line of work consists of techniques like parameter mixing (McDonald et al., 2010; Mann et al., 2009) or averaging (Collins, 2002) and classifier voting (Bauer & Kohavi, 1999). These approaches do admit convergence results but lack any bounds on the communication. Voting, moreover, has been shown (Daumé III et al., 2012) to yield suboptimal results on adversarially partitioned datasets. The goal of the second line of work is to make distributed algorithms scale to very large datasets; many of these works (Chu et al., 2007; Teo et al., 2010) depend on MapReduce to extract performance improvements. Dekel et al. (2010) averaged over mini-batches of accumulated gradients to improve regret bounds for distributed online settings. Zinkevich et al. (2010) proposed an improved MapReduce-based parallel stochastic gradient descent, and more recently Servedio & Long (2011) improved the time complexity of γ-margin parallel algorithms. Finally, Duchi et al. (2010) and Agarwal & Duchi (2011) consider optimization in distributed settings, but their convergence analysis applies only to specific cases of subgradient and stochastic gradient descent algorithms.

Surprisingly, communication in learning has not been studied as a resource to be used sparingly. As (Daumé III et al., 2012) and this work demonstrate, intelligent interaction between nodes, communicating relevant aspects of the data rather than just its classification, can greatly reduce the necessary communication over existing approaches. On large distributed systems, communication has become a major bottleneck for many real-world problems; it accounts for a large percentage of total energy costs, and it is the main reason that MapReduce algorithms are designed to minimize rounds (of communication). This strongly motivates the need to study this aspect of an algorithm directly, as presented and modeled in this paper.

Recently but independently, research by (Balcan et al., 2012) considers models very similar to those of (Daumé III et al., 2012). They also consider data adversarially distributed among parties and attempt to learn on it while minimizing the total communication between the parties. Like (Daumé III et al., 2012), the work of (Balcan et al., 2012) presents both agnostic and non-agnostic results for generic settings, and shows improvements over sampling bounds in several specific settings, including the d-dimensional linear classifier problem we consider here (also drawing inspiration from boosting). In addition, their work provides total communication bounds for decision lists and for proper and non-proper learning of parity functions. They also extend the model so as to preserve differential and distributional privacy while conserving total communication, as a resource, during the learning process.

In contrast, this work identifies optimization as a key primitive underlying many learning tasks, and focuses on solving the underlying optimization problems as a way to provide general communication-friendly distributed learning methods. We introduce techniques that rely on multiplicative weight updates and multipass streaming algorithms. Our main contributions are translating these techniques into this distributed setting and using them to solve LPs (and SDPs), in addition to solving for d-dimensional linear separators.

2 Background

In this section, we revisit the model proposed in (Daumé III et al., 2012) and mention related results.

Model.

We assume that there are k parties P_1, …, P_k. Each party P_i possesses a dataset D_i that no other party has access to, and each D_i may have both positive and negative examples. The goal is to classify the full dataset D = ∪_i D_i correctly. We assume that there exists a perfect classifier h* from a family of classifiers H with bounded VC-dimension ν. We are willing to allow ε-classification error on D, so that up to ε|D| points in total may be misclassified.

Each word of data (e.g., a single point or vector in ℝ^d counts as d words) passed between any pair of parties is counted towards the total communication; this measure in words allows us to examine the cost of extending to d dimensions, and allows us to consider communication in forms other than example points, but does not hinder us with the precision issues required when counting bits. For instance, a protocol that broadcasts a message of w words (say w/d points in ℝ^d) from one node to the other k − 1 players costs w(k − 1) communication. The goal is to design a protocol with as little communication as possible. We assume an adversarial model of data distribution; in this setting we prepare for the worst, and allow an adversary to determine which player gets which subset of D.
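As a minimal sketch of this word-count accounting (the function names, and the exact broadcast cost of w(k − 1) words, are our assumptions for illustration, not fixed by the source):

```python
def words_for_points(n_points, d):
    # Under the model's accounting, a single point or vector in R^d
    # counts as d words; n_points of them cost n_points * d words.
    return n_points * d

def broadcast_cost(w, k):
    # Broadcasting a message of w words from one node to each of the
    # other k - 1 players costs w * (k - 1) words in this sketch.
    return w * (k - 1)
```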

Sampling bounds.

Given any dataset D and a family of classifiers H with bounded VC-dimension ν, a random sample of size

s_ε = O((ν/ε) log(1/ε))    (1)

from D is such that a classifier from H consistent with the sample has at most ε-classification error on D with constant probability (Anthony & Bartlett, 2009), as long as there exists a perfect classifier. Throughout this paper we will assume that a perfect classifier exists. This constant probability of success can be amplified to any 1 − δ with an extra log(1/δ) factor of samples.
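A small sketch of the sample-size bound in (1); the leading constant `c` is a placeholder, since the bound only specifies the asymptotics:

```python
import math

def sample_size(nu, eps, delta=None, c=1.0):
    # s_eps = O((nu/eps) * log(1/eps)) samples suffice for eps-error
    # with constant probability in the separable case; c stands in for
    # the unspecified constant.  Passing delta adds the extra
    # log(1/delta) factor that amplifies the success probability.
    s = c * (nu / eps) * math.log(1.0 / eps)
    if delta is not None:
        s *= math.log(1.0 / delta)
    return math.ceil(s)
```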

Randomly partitioned distributions.

Assume that for each i ∈ [k], party P_i has a dataset D_i drawn from the same distribution; that is, all datasets are identically distributed. This case is much simpler than what the remainder of this paper will consider. Using (1), each D_i can be viewed as a sample from the full set D, and with no communication each party P_i can faithfully estimate a classifier with ε-error, provided |D_i| is at least the bound s_ε from (1).

Henceforth we will focus on adversarially distributed data.

One-way protocols.

Consider a restricted setting where protocols are only able to send data from parties P_i (for i < k) to P_k; a restricted form of one-way communication. We can again use (1): all parties send a sample of size s_ε to P_k, and then P_k constructs a global classifier with ε-classification error on D; this requires O(k d s_ε) words of communication for points in ℝ^d.

For specific classifiers, Daumé III et al. (2012) do better. For thresholds and intervals one can learn a zero-error distributed classifier using a constant amount of one-way communication. The same can be achieved for axis-aligned rectangles with O(d) words of communication. However, those authors show that hyperplanes in ℝ^d, for d ≥ 2, require at least Ω(min{|D|, 1/ε}) bits of one-way communication to learn an ε-error distributed classifier.

Two-way protocols.

Hereafter, we consider two-way protocols where any two players can communicate back and forth. It has been shown (Daumé III et al., 2012) that, in ℝ², a protocol can learn linear classifiers with at most ε-classification error using at most O(log(1/ε)) words of communication. This protocol is deterministic and relies on a complicated pruning argument, whereby in each round either an acceptable classifier is found, or a constant fraction more of some party's data is ensured to be classified correctly.

3 Improved Random Sampling for k-players

Our first contribution is an improved k-player sampling-based protocol using two-way communication and the sampling result in (1). We designate party P_1 as a coordinator; it gathers the size |D_i| of each player's dataset, simulates drawing s_ε samples uniformly from the full dataset D, and then reports back to each player the number of samples to be drawn by it, in O(k) communication. Then each other party P_i selects s_ε|D_i|/|D| random points (in expectation) and sends them to the coordinator. The union of these sets satisfies the conditions of the result from (1) over D and yields the following result.
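The coordinator's simulation step can be sketched as follows (a hedged illustration; the function name and interface are ours, not the paper's):

```python
import random

def proportional_sample_counts(sizes, s, seed=0):
    # The coordinator simulates drawing s points uniformly from the
    # union of all players' data: for each of the s samples, a player
    # is chosen with probability proportional to its dataset size
    # |D_i|.  Returns how many samples each player should send.
    rng = random.Random(seed)
    total = sum(sizes)
    counts = [0] * len(sizes)
    for _ in range(s):
        r = rng.uniform(0, total)
        acc = 0.0
        for i, n in enumerate(sizes):
            acc += n
            if r <= acc:
                counts[i] += 1
                break
    return counts
```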

Consider any family of hypotheses H that has VC-dimension ν for points in ℝ^d. Then there exists a two-way k-player protocol using O(k + d s_ε) total words of communication that achieves ε-classification error, with constant probability.

Again using two-way communication, this type of result can be made even more general. Consider the case where each P_i's dataset arrives in a continuous stream; this is what is known as a distributed data stream (Cormode et al., 2008). Then, applying results of (Cormode et al., 2010), we can continually maintain a sufficient random sample of size s_ε at the coordinator, communicating O((k + d s_ε) log |D|) words.

Consider any family of hypotheses H that has VC-dimension ν for points in ℝ^d. Let each of k parties have a stream of data points D_i, with |D| = Σ_i |D_i|. Then there exists a two-way k-player protocol using O((k + d s_ε) log |D|) total words of communication that maintains ε-classification error, with constant probability.

4 A Two-Party Protocol

In this section, we consider only two parties, and for notational clarity we refer to them as A and B. A's dataset is labeled D_A and B's dataset is labeled D_B. Let D = D_A ∪ D_B. Our protocol, summarized in Algorithm 1, is called WeightedSampling. In each round, A sends a classifier to B, and B responds back with a set of points, which it constructs by sampling from a weighting on its points. At the end of T rounds (for T = O(log(1/ε))), we will show that a majority vote over the resulting set of T classifiers will misclassify at most ε|D_B| points from D_B while being perfect on D_A, and hence misclassify at most ε|D| points of D, yielding an ε-optimal classifier as desired.

There are two ways B can construct its point set: a random sample and a deterministic sample. For simplicity, we focus our presentation on the randomized version since it is more practical, although it has slightly worse bounds in the two-party case. We then also mention and analyze the deterministic version.

It remains to describe how B's points are weighted and updated, which dictates how B constructs the sample sent to A. Initially, all points are given weight w_i = 1. The re-weighting strategy (described in Algorithm 2) is an instance of the multiplicative weight update framework: with each new proposed classifier from A, party B increases the weight of every misclassified point by a (1 + ρ) factor, and does not change the weight of correctly classified points. We will show that a constant ρ suffices. Intuitively, this ensures that consistently misclassified points eventually get weighted high enough that they are very likely to be chosen as examples to be communicated in future rounds. The deterministic variant simply replaces Line 7 of Algorithm 2 with the weighted variant (Matousek, 1991) of the deterministic construction of (Chazelle, 2000); see details below.

Note that this is roughly similar in spirit to the heuristic protocol of (Daumé III et al., 2012) that exchanged support points, called IterativeSupports, which we will experimentally compare against. But the protocol proposed here is less rigid, and as we will demonstrate next, this allows for a much less nuanced analysis.

  Input: D_A, D_B; parameters: ε
  Output: h_final (classifier with ε-error on D = D_A ∪ D_B)
  Init: R := ∅; T := O(log(1/ε));
  for t = 1 to T do
     ——— A’s move ———
     h_t := learn(D_A ∪ R);
     send h_t to B;
     ——— B’s move ———
     R_t := Mwu (D_B, h_t, ρ, s); send R_t to A;
     R := R ∪ R_t;
  end for
  h_final := Majority(h_1, …, h_T);
Algorithm 1 WeightedSampling
1:  Input: D_B, h_t; parameters: ρ, s
2:  Output: R_t (a set of points)
3:  for all (p_i, y_i) ∈ D_B do
4:     if (h_t(p_i) ≠ y_i) then w_i := (1 + ρ)·w_i;
5:     if (h_t(p_i) = y_i) then w_i := w_i;
6:  end for
7:  randomly sample R_t ⊂ D_B of size s (according to the weights w);
Algorithm 2 Mwu (D_B, h_t, ρ, s)
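The two listings above can be sketched compactly in Python (a hedged illustration under our own interface choices; `learn`, the tuple representation of labeled points, and all names are ours, not the paper's):

```python
import random

def mwu_sample(points, weights, classifier, rho, sample_size, rng):
    # Algorithm 2 (Mwu): multiply the weight of every point the current
    # classifier misclassifies by (1 + rho), leave the rest unchanged,
    # then draw a weighted random sample (with replacement).
    for i, (x, y) in enumerate(points):
        if classifier(x) != y:
            weights[i] *= (1.0 + rho)
    return rng.choices(points, weights=weights, k=sample_size)

def weighted_sampling(data_a, data_b, learn, rounds, sample_size,
                      rho=1.0, seed=0):
    # Sketch of Algorithm 1 (WeightedSampling) for two parties A and B.
    # `learn` trains a classifier on a labeled point set; the returned
    # classifier is a majority vote over the per-round classifiers.
    rng = random.Random(seed)
    weights = [1.0] * len(data_b)
    received, classifiers = [], []
    for _ in range(rounds):
        h = learn(data_a + received)                      # A's move
        classifiers.append(h)
        received += mwu_sample(data_b, weights, h, rho,   # B's move
                               sample_size, rng)
    def majority(x):
        votes = sum(1 if h(x) == 1 else -1 for h in classifiers)
        return 1 if votes >= 0 else -1
    return majority
```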

4.1 Analysis

Our analysis is based on the multiplicative weight update framework (and closely resembles boosting). First, we state a key structural lemma; thereafter, we use this lemma to prove our main result.

As mentioned above (see (1)), after collecting a random sample S of size s_ε drawn over the entire dataset D, a linear classifier learned on S is sufficient to provide ε-classification error on all of D with constant probability. There exist deterministic constructions for such samples, still of size O((ν/ε) log(1/ε)) (Chazelle, 2000); they provide at most ε-classification error with certainty, but in general run in time exponential in ν. Note that the VC-dimension of linear classifiers in ℝ^d is d + 1, and these results still hold when the points are weighted, the sample is drawn (respectively, constructed (Matousek, 1991)) with respect to the weights, and error is measured with respect to this weighting distribution. Thus B could send A a set of s_ε points and we would be done; but this is too expensive. We restate this result with a constant error parameter c, so that at most a c fraction of the weight of points is misclassified (later we show that a constant c suffices within our framework). Specifically, setting ε = c and rephrasing the above results yields the following lemma.

Let B have a weighted set of points D_B with weight function w. For any constant c > 0, party B can send A a set S_c of size O(1) (where the constant depends on c and d) such that any linear classifier that correctly classifies all points in S_c will misclassify points in D_B with a total weight at most c·w(D_B). The set S_c can be constructed deterministically, or a weighted random sample from D_B of that size succeeds with constant probability.

We first state the bound using the deterministic construction of the set S_c, and then extend it to the more practical (from a runtime perspective) random-sampling result, which has a slightly worse communication bound.

The deterministic version of the two-party two-way protocol WeightedSampling for linear separators in ℝ^d misclassifies at most ε|D| points after T = O(log(1/ε)) rounds using O(d² log(1/ε)) words of communication.

Proof.

At the start of each round t, let φ_t be the potential function given by the sum of weights of all points in that round. Initially, φ_1 = |D_B|, since by definition each point starts with weight w_i = 1.

Then in each round, A constructs a classifier that correctly classifies the set of points sent by B, and which therefore, by Lemma 4.1, misclassifies points in D_B accounting for at most a c fraction of the total weight. All misclassified points are upweighted by a (1 + ρ) factor. Hence, for round t + 1 we have φ_{t+1} ≤ (1 − c)φ_t + (1 + ρ)cφ_t = (1 + ρc)φ_t.

Let us consider the set M of points in D_B that have been misclassified by a majority of the T classifiers (after the protocol ends). Every point in M has been misclassified at least T/2 times and at most T times, so the minimum weight of a point in M is (1 + ρ)^{T/2} and the maximum weight is (1 + ρ)^T.

Let m = |M|. The potential function value after T rounds is φ_{T+1} ≤ |D_B|(1 + ρc)^T. Our claim is that m ≤ ε|D_B|. Each of the m points has a weight of at least (1 + ρ)^{T/2}; hence we have

m(1 + ρ)^{T/2} ≤ φ_{T+1} ≤ |D_B|(1 + ρc)^T.

Relating these two inequalities we obtain

m ≤ |D_B|·((1 + ρc)/(1 + ρ)^{1/2})^T.

Choosing constants ρ and c so that β = (1 + ρc)/(1 + ρ)^{1/2} < 1 (for instance ρ = 1 and c = 1/4), and setting T = log(1/ε)/log(1/β) = O(log(1/ε)), we get m ≤ ε|D_B| ≤ ε|D|, as desired, since the majority vote is perfect on D_A. Each round uses O(d) points, each requiring d words of communication, yielding a total communication of O(d² log(1/ε)). ∎

In order to use random sampling (as suggested in Algorithm 2), we need to address the probability of failure of our protocol. More specifically, for a per-round failure probability δ, the set in Lemma 4.1 has size O(d log(1/δ)) (for constant c), and a linear classifier that has no error on this set misclassifies points in D_B with weight at most a c fraction of the total, with probability at least 1 − δ.

However, we would like this probability of failure to be a constant over the entire course of the protocol. To guarantee this, we need the c-misclassification property to hold in each of the T = O(log(1/ε)) rounds. Setting δ = Θ(1/log(1/ε)) and applying the union bound implies that the probability of failure at any point in the protocol is at most a constant. This increases the communication cost of each round to O(d² log log(1/ε)) words, with a constant probability of failure overall. Hence, using random sampling as described in WeightedSampling requires a total of O(d² log(1/ε) log log(1/ε)) words of communication. We formalize this below.
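For intuition, the round count implied by the multiplicative-weight analysis can be computed numerically. The contraction form (1 + ρc)/√(1 + ρ) follows the derivation above; the specific values of ρ and c here are illustrative constants satisfying the contraction condition, not values taken from the source:

```python
import math

def rounds_needed(eps, rho=1.0, c=0.25):
    # With misclassified weight at most a c fraction per round and a
    # (1 + rho) upweighting, the majority-misclassified fraction is at
    # most ((1 + rho*c) / sqrt(1 + rho))**T after T rounds, so
    # T = O(log(1/eps)) rounds drive it below eps.
    base = (1 + rho * c) / math.sqrt(1 + rho)
    assert base < 1, "constants must give a contracting base"
    return math.ceil(math.log(eps) / math.log(base))
```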

The randomized two-party two-way protocol WeightedSampling for linear separators in ℝ^d misclassifies at most ε|D| points, with constant probability, after T = O(log(1/ε)) rounds using O(d² log(1/ε) log log(1/ε)) words of communication.

5 k-Party Protocol

In Section 3 we described a simple protocol (Theorem 3) to learn a classifier with ε-error jointly among k parties using O(k + d s_ε) words of total communication. We now combine this with the two-party protocol from Section 4 to obtain a k-player protocol for learning a joint classifier with error ε.

We fix an arbitrary node (say P_1) as the coordinator for the k-player protocol of Theorem 3. Then P_1 runs a version of the two-player protocol (from Section 4) from A's perspective, where players P_2, …, P_k jointly serve as the second player B. To do so, we follow the distributed sampling approach outlined in Theorem 3. Specifically, we fix a constant parameter c as before. Each other node reports the total weight of its data to P_1, who then reports back to each node what fraction of the total weight it owns. Then each player sends the coordinator a random sample of expected size proportional to its weight fraction. Recall that in this case we require the sample size to grow by a log log(1/ε) factor to account for the probability of failure over all T rounds. The union of these sets at P_1 satisfies the sampling condition in Lemma 4.1 for ∪_{i≥2} D_i. P_1 computes a classifier on the union of its own data, this joint sample, and all previous joint samples, and sends the resulting classifier back to all the nodes. Sending this classifier to each party requires O(kd) words of communication. The process repeats for T = O(log(1/ε)) rounds.
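The per-round accounting just described can be sketched as follows (the function name is ours, and the d + 1 words per labeled point and per classifier is an assumption borrowed from the accounting used in Section 6):

```python
def kparty_round_cost(k, d, s):
    # Per-round word cost for the k-party protocol sketch: the
    # non-coordinator players send s sample points in total, at d + 1
    # words per labeled point, and the coordinator sends one linear
    # classifier (d + 1 words) back to each of the other k - 1 players.
    return s * (d + 1) + (k - 1) * (d + 1)
```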

The randomized k-party protocol for ε-error linear separators in ℝ^d terminates in T = O(log(1/ε)) rounds using O((d² log log(1/ε) + kd) log(1/ε)) words of communication, and has a constant probability of failure.

Proof.

The correctness and the bound of T = O(log(1/ε)) rounds follow from Theorem 4.1, since, aside from the total-weight-gathering step, from party P_1's perspective the protocol appears to run with a single party B representing parties P_2, …, P_k. The communication for P_1 to collect the samples from all parties is O(d² log log(1/ε)) words per round, and it takes O(kd) words to return the classifier to all other players. Hence the total communication over T = O(log(1/ε)) rounds is as claimed. ∎

However, this randomized sampling algorithm required a sample enlarged by a log log(1/ε) factor; we can achieve a different communication trade-off using the deterministic construction. We can no longer use the result from Theorem 3, since it has a probability of failure. In this case, in each round each party communicates a deterministically constructed set of size O(d), and the coordinator computes a classifier that correctly classifies the points from all of these sets, and hence misclassifies points of total weight at most a c fraction in each D_i. The error is at most c on each dataset D_i, so the error on the union of all the sets is at most c. Again using T = O(log(1/ε)) rounds, we can achieve the following result.

The deterministic k-party protocol for ε-error linear separators in ℝ^d terminates in O(log(1/ε)) rounds using O(kd² log(1/ε)) words of communication.

6 Experiments

In this section, we present empirical results for WeightedSampling, for finding linear classifiers in ℝ^d in two-party and k-party scenarios. We empirically compare the following approaches.

  • Naive: a naive approach that sends all data from nodes to a coordinator node and then learns at the coordinator. For any dataset, this accuracy is the best possible.

  • Voting: a simple voting strategy that trains classifiers at each individual node and sends over the classifiers to a coordinator node. For any datapoint, the coordinator node predicts the label by taking a vote over all classifiers.

  • Rand: each of the nodes sends a random sample of size s_ε (see (1)) to a coordinator node, and then a classifier is learned at the coordinator node using all of its own data and the samples received.

  • RandEmp: a cheaper version of Rand that uses a much smaller random sample from each party in each round; the sample size was chosen empirically (see below) to make this baseline technique as favorable as possible.

  • MaxMarg: IterativeSupports, which selects informative points heuristically (Daumé III et al., 2012). A node is chosen as the coordinator and the coordinator exchanges maximum-margin support points with each of the nodes. This continues until the training accuracy reaches within ε of the optimal (i.e., 95% in our case, since we assume linearly separable data) or the communication cost equals the total size of the data at non-coordinator nodes (i.e., the cost for Naive).

  • Mwu: WeightedSampling, which randomly samples points based on the distribution of the weights and runs for T = O(log(1/ε)) rounds (cf. Section 4).

  • MwuEmp: a cheaper version of Mwu with an early stopping condition. The protocol is stopped early if the training accuracy has reached within ε of the optimal, i.e., 95%.

We do not compare results with Median (Daumé III et al., 2012) as it does not work on datasets beyond two dimensions. For all these methods, an SVM with a linear kernel (from the libSVM library (Chang & Lin, 2011)) was used as the underlying classifier. We report training accuracy and communication cost. The training accuracy is computed over the combined dataset with an ε value of 0.05 (where applicable). The communication costs (in words) of all methods are reported as ratios with reference to MwuEmp as the base method. All numbers reported are averaged over several runs of the experiments; standard deviations are reported where appropriate. For Mwu and MwuEmp, we use ε = 0.05.

Communication Cost Computation.

In the following, we describe the communication cost computation for each method. Each example point sent from one node to another incurs a communication cost of d + 1 words, since it requires d words to describe its position in ℝ^d and 1 word to describe its sign. Similarly, each linear classifier requires d + 1 words of communication to send: d words to describe its direction, and 1 word to describe its offset.
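For the simplest two methods, this accounting can be written down directly (function names are ours, for illustration):

```python
def naive_cost(sizes, d):
    # Naive: every non-coordinator node (all entries after sizes[0])
    # ships each of its labeled points, at d + 1 words per point.
    return (d + 1) * sum(sizes[1:])

def voting_cost(k, d):
    # Voting: each of the k - 1 non-coordinator nodes sends one linear
    # classifier, at d + 1 words each.
    return (k - 1) * (d + 1)
```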

  • Naive: assuming node P_1 to be the coordinator, the total cost is the number of words sent over by each other node to the coordinator and is equal to (d + 1)·Σ_{i≥2}|D_i|.

  • Voting: each node sends over its classifier to the coordinator node, which incurs a total cost of (k − 1)(d + 1).

  • Rand: the cost is equal to (d + 1)·s_ε, where the constant hidden in s_ε (see (1)) must be fixed; we set it to a small constant.

  • RandEmp: despite having the same theoretical cost as Rand, in practice the random-sampling-based approach performs well with far fewer samples. Starting with a small sample size, we first perform a doubling search to find the range within which RandEmp achieves ε-optimal accuracy, and then do a binary search within this range to pick the smallest such sample size. Our goal is to pick one value that performs well across all of our datasets, and a single small constant sample size works well for all the datasets we tested. Thus, in our case, RandEmp incurs a total cost of (d + 1) times this constant sample size, in words.

  • MaxMarg: let S_i denote the support set of node P_i. Assuming node P_1 to be the coordinator, the total cost in each round is (d + 1) times the number of support points exchanged (the number of points sent by the coordinator to all nodes plus the number sent back by the nodes to the coordinator). The cost accumulates over rounds until the target accuracy is reached or until the cost equals the total size of the data at non-coordinator nodes (i.e., the cost for Naive).

  • Mwu: for our algorithm, the cost incurred in each round is (d + 1)s + (k − 1)(d + 1) words, where s is the per-round sample size. The first term comes from the players other than the coordinator sending a total of s points to the coordinator; the second term accounts for the coordinator replying with a classifier to each of those other players. However, we observe that exchanging a small constant number of samples each round, instead of the size suggested by the analysis, works quite well in practice for all of our datasets. For our analysis we had set s to be on the order of d, but in our experiments we use a much smaller constant sample size per round. The search process used to find this smaller sample size is the same as described for RandEmp. The number of rounds for Mwu is T = O(log(1/ε)).

  • MwuEmp: similar to Mwu; the sample size chosen is the same, and the cost is the per-round word cost times the number of rounds until the early-stopping criterion is met.
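The empirical sample-size search used for RandEmp and Mwu above can be sketched as follows (a hedged illustration; it assumes accuracy is monotone in the sample size, an idealization of the empirical procedure):

```python
def smallest_sufficient(accuracy_at, target, start=8):
    # Doubling search to bracket a per-round sample size whose accuracy
    # reaches the target, then binary search inside the bracket for the
    # smallest such size.  accuracy_at(s) returns the training accuracy
    # achieved with sample size s.
    hi = start
    while accuracy_at(hi) < target:
        hi *= 2
    lo = hi // 2
    while lo < hi:
        mid = (lo + hi) // 2
        if accuracy_at(mid) >= target:
            hi = mid
        else:
            lo = mid + 1
    return hi
```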

Note that given our cost computation, for some datasets the costs of Rand, RandEmp and Mwu can exceed the cost of Naive (see, for example, Cancer). For those datasets, the size of the data is small compared to the number of dimensions. As a result, the communication costs (in number of points) of Rand, RandEmp and Mwu are large compared to the total size of the data at the non-coordinator nodes (i.e., the cost of Naive).

Datasets.

We report results for two-party and four-party protocols on both synthetic and real-world datasets.

Six datasets, three each for the two-party and four-party cases, have been generated synthetically from mixtures of Gaussians. Each Gaussian has been carefully seeded to generate different data partitions. For Synthetic1, Synthetic2, Synthetic4 and Synthetic5, each node contains 5000 data points, whereas for Synthetic3 and Synthetic6 each node contains 8500 data points; all of these data points lie in 50 dimensions. Additionally, we investigate the performance of our protocols on real-world UCI datasets (Frank & Asuncion, 2010). Our goal is to select datasets that are linearly separable or almost linearly separable. We choose Cancer and Mushroom from the LibSVM data repository (Chang & Lin, 2011).

The proposed protocol works for perfectly separable datasets. However, this assumption is idealistic; in practice, real-world datasets are seldom perfectly separable, either because of the presence of noise or due to limitations of linear classifiers (for example, when the data has a non-linear decision boundary). So most of our datasets have some amount of noise in them. This also shows that although our protocols were designed for noiseless data, they work well on noisy datasets too. However, when applied to noisy data, we do not guarantee the communication bounds that were claimed for noiseless datasets.

For the datasets that are not perfectly separable, the accuracy of Naive (with some tolerance), which learns an SVM on the entire data, can be considered the best accuracy achievable for that particular dataset. Table 1 presents a summary of the datasets, the best possible accuracy that can be achieved, and the accuracy required to yield an ε-optimal classifier with ε = 0.05.

Finally, in Tables 2-4, we highlight (in bold) the protocol that achieves the required accuracy at the lowest communication cost, and is thus the best among the methods compared. By best we mean that the method has the cheapest communication cost as well as an accuracy that is more than (1 − ε) times the optimal, i.e., 95% of optimal for ε = 0.05. As will frequently be seen for Voting, its communication cost is the cheapest but its accuracy is far from the desired ε-error, and in such circumstances we do not deem Voting the best method.

Dataset      total #    # of points per player   dimensions  type       perfectly   best      ε-optimal
             of points  2-player   4-player                             separable?  accuracy  accuracy
Synthetic1   10000      5000       -             50          synthetic  no          99.23     95.00
Synthetic2   10000      5000       -             50          synthetic  no          97.91     95.00
Synthetic3   17000      8500       -             50          synthetic  no          97.39     95.00
Synthetic4   20000      -          5000          50          synthetic  no          99.26     95.00
Synthetic5   20000      -          5000          50          synthetic  no          97.97     95.00
Synthetic6   34000      -          8500          50          synthetic  no          97.47     95.00
Cancer       683        342        171           10          real       no          97.07     95.00
Mushroom     8124       4062       2031          112         real       yes         100       95.00
Table 1: Summary of datasets used (ε = 0.05).

6.1 Synthetic Results

Table 2 compares the performance of the aforementioned protocols for two parties. As can be seen, Voting performs best for Synthetic1 and RandEmp performs best for Synthetic2. For Synthetic3, MwuEmp requires the least amount of communication to learn an ε-optimal distributed classifier. Note that, for Synthetic2 and Synthetic3, both Voting and MaxMarg fail to produce an ε-optimal (ε = 0.05) classifier. MaxMarg exhibits this behavior despite incurring a communication cost as high as Naive's. Note that the cost of MaxMarg being the same as Naive's does not imply that MaxMarg sends over all points. Rather, the accumulated cost of the support points becomes the same as the cost of Naive, at which point we stop the algorithm. Usually, by this point, the accuracy of MaxMarg has saturated and does not improve with the exchange of more support points.

         Synthetic1          Synthetic2          Synthetic3
         Acc          Cost   Acc          Cost   Acc          Cost
Naive    99.23 (0.0)  49.02  97.91 (0.0)   6.18  97.39 (0.0)  19.08
Voting   95.00 (0.0)   0.01  60.64 (0.0)   0.01  74.55 (0.0)   0.01
Rand     99.02 (0.0)  29.41  97.72 (0.0)   3.71  97.16 (0.0)   6.74
RandEmp  96.64 (0.1)   4.41  95.13 (0.1)   0.56  96.03 (0.1)   1.01
MaxMarg  96.39 (0.0)   4.26  93.76 (0.0)   6.18  73.62 (0.0)  19.08
Mwu      98.66 (0.1)  49.51  97.59 (0.1)   6.24  97.11 (0.1)  11.34
MwuEmp   95.00 (0.0)   1.00  95.17 (0.1)   1.00  95.25 (0.2)   1.00
Table 2: Mean accuracy (Acc) and communication cost (Cost) required by two-party protocols for synthetic datasets.
         Synthetic4          Synthetic5          Synthetic6
         Acc          Cost   Acc          Cost   Acc          Cost
Naive    99.26 (0.0) 100.00  97.97 (0.0)  12.72  97.47 (0.0)  54.84
Voting   95.00 (0.0)   0.01  65.83 (0.0)   0.01  75.52 (0.0)   0.01
Rand     99.18 (0.0)  60.00  97.83 (0.0)   7.63  97.39 (0.0)  19.35
RandEmp  97.33 (0.1)   9.00  96.61 (0.1)   1.15  96.67 (0.1)   2.90
MaxMarg  95.95 (0.0)   0.82  93.94 (0.0)  15.15  75.05 (0.0)  80.19
Mwu      98.03 (0.2)  34.78  97.30 (0.1)   4.45  96.87 (0.1)  11.24
MwuEmp   95.11 (0.3)   1.00  95.11 (0.2)   1.00  95.45 (0.2)   1.00
Table 3: Mean accuracy (Acc) and communication cost (Cost) required by four-party protocols for synthetic datasets.

As shown in Table 3, most of the two-party results carry over to the multiparty case. Voting is best for Synthetic4, whereas MwuEmp is best for Synthetic5 and Synthetic6. As before, Voting and MaxMarg do not yield ε-optimal distributed classifiers for Synthetic5 and Synthetic6.

Figure 1 (for two-party using Synthetic1) shows the communication costs (in log scale) as the number of data points per node and the dimension of the data vary. We do not report the numbers for MaxMarg since MaxMarg takes a long time to finish; however, for Synthetic1 the numbers for MaxMarg are similar to those of RandEmp, so their curves in the figure coincide. Note that in Figure 1(b), the cost of Naive increases as the number of dimensions increases. This is because the cost, when expressed in words, is multiplied by a factor of d.

(a) Communication cost vs Size
(b) Communication cost vs Dimension
Figure 1: Communication cost vs size and dimensionality for Synthetic1 with the two-party protocol.

6.2 Real-World Data

Table 4 presents results for two-party and four-party protocols using real-world datasets. Except for the two-party case on Mushroom, Voting performs best in every case. Note, however, that for Mushroom under the two-party protocol, Voting does not yield an ε-optimal distributed classifier.

         Cancer              Mushroom
         Acc          Cost   Acc           Cost
2-party
Naive    97.07 (0.0)   3.34  100.00 (0.0)   20.01
Voting   97.36 (0.0)   0.01   88.38 (0.0)    0.00
Rand     97.16 (0.1)   4.52  100.00 (1.1)   36.97
RandEmp  96.90 (0.2)   0.88  100.00 (0.0)    4.97
MaxMarg  96.78 (0.0)   0.22  100.00 (0.0)    1.11
Mwu      97.36 (0.2)  49.51  100.00 (0.0)   24.88
MwuEmp   96.87 (0.4)   1.00   99.73 (0.5)    1.00
4-party
Naive    97.07 (0.0)   1.00  100.00 (0.0)   28.61
Voting   97.36 (0.0)   0.03   95.67 (0.0)    0.01
Rand     97.19 (0.1)  12.81  100.00 (0.6)  105.70
RandEmp  96.99 (0.1)   2.50   99.99 (0.0)   14.20
MaxMarg  96.78 (0.0)   0.56  100.00 (0.0)    2.34
Mwu      97.00 (0.2)  48.46  100.00 (0.1)   24.65
MwuEmp   96.97 (0.3)   1.00   98.86 (0.4)    1.00
Table 4: Mean accuracy (Acc) and communication cost (Cost) required by all protocols for real-world datasets.

The results for communication cost (in log scale) versus data size and versus dimensionality are shown in Figure 2 for the two-party protocol on the Mushroom dataset. MwuEmp (the black line) is comparable to MaxMarg and cheaper than all other baselines except Voting.

(a) Communication cost vs Size
(b) Communication cost vs Dimension
Figure 2: Communication cost vs size and dimensionality for Mushroom with the two-party protocol.

Remarks.

The goal of our experiments is to show that our protocols perform well, particularly for difficult or adversarially partitioned datasets. For easy datasets, any baseline technique can perform well. Indeed, Voting performs best on Synthetic1 and Synthetic4, and RandEmp performs better than the others on Synthetic2. For the remaining three synthetic datasets, MwuEmp outperforms the other baselines. On real-world data, Voting usually performs well. However, as shown earlier, for some datasets Voting and MaxMarg fail to yield an ε-optimal classifier. In particular, for Mushroom under the two-party protocol, the accuracy achieved by Voting is far from ε-optimal. These results show that there exist scenarios where Voting and MaxMarg perform particularly poorly, and so learning by majority voting or by exchanging support points between nodes is not a good strategy in distributed settings, even more so when the data is partitioned adversarially.

7 Distributed Optimization

Many learning problems can be formulated as convex (or even linear or semidefinite) optimizations (Bennett & Parrado-Hernández, 2006). In these problems, the data (points) act as constraints to the resulting optimization; for example, in a standard SVM formulation, there is one constraint for each point in the training set.

Since in our distributed setting points are divided among the different players, a natural distributed optimization problem can be stated as follows. Each player i has a set of constraints C_i, and the goal is to solve the optimization problem subject to the union of constraints ∪_i C_i. As before, our goal is to solve the above with minimum communication.

A general solution for communication-efficient distributed convex optimization will allow us to reduce communication overhead for a number of distributed learning problems. In this section, we illustrate two algorithm design paradigms that achieve this for distributed convex optimization.

7.1 Optimization via Multi-Pass Streaming

A streaming algorithm (Muthukrishnan, 2005) takes as input a sequence of items x_1, ..., x_n. The algorithm is allowed working space that is sublinear in n, and may look at each item only once as it streams past. A multipass streaming algorithm is one in which the algorithm may make more than one pass over the data, but is still limited to sublinear working space and a single look at each item in each pass.
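As a concrete illustration of the model, here is a minimal two-pass, constant-space sketch (our own toy example, not one from the paper): the first pass computes the mean of the stream, and the second pass, carrying only that mean as state, counts the items above it. Each pass inspects every item exactly once and keeps only a constant number of words.

```python
def two_pass_count_above_mean(stream_factory):
    """stream_factory() returns a fresh iterator over the same items,
    modeling a re-readable stream (one look per item per pass)."""
    # Pass 1: O(1) state (a running sum and a count).
    total, n = 0.0, 0
    for x in stream_factory():
        total += x
        n += 1
    mean = total / n

    # Pass 2: O(1) state (the carried-over mean and a counter).
    above = 0
    for x in stream_factory():
        if x > mean:
            above += 1
    return mean, above

data = [1.0, 2.0, 3.0, 10.0]
print(two_pass_count_above_mean(lambda: iter(data)))  # -> (4.0, 1)
```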

The following lemma shows how any (multipass) streaming algorithm can be used to build a multiparty distributed protocol.

Suppose that we can solve a given problem P using a streaming algorithm that has s words of working storage and makes r passes over the data. Then there is a k-player distributed algorithm for P that uses O(k · r · s) words of communication.

Before proving the above lemma, we note that streaming algorithms often have s polylogarithmic in n and r constant, indicating that the total communication is O(k · polylog(n)) words, which is sublinear in the input size.

Proof.

For ease of exposition, let us first consider the case k = 2. Consider a streaming algorithm A satisfying the conditions above. The simulation works by letting the first player P1 simulate A on the first half of the data, and letting the second player P2 simulate A on the second half. Specifically, P1 simulates the behavior of A on its input. When this simulation of A exhausts the input at P1, P1 sends over the contents of the working store of A to P2. P2 restarts A on its input using this working store as A's current state. When P2 has finished simulating A on its input, it sends the contents of the working storage back to P1. This completes one pass of A, using O(s) words of communication. The process continues for r passes.

If there are k players instead of two, then we fix an arbitrary ordering of the players. The first player simulates A on its input, and at completion passes the contents of the working store to the next player, and so on. Each pass now requires O(k · s) words of communication, and the result follows. ∎
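The simulation in the proof can be sketched directly. In this toy harness (function names and the two-pass example are our own, not the paper's), a streaming algorithm is modeled as one fold function per pass over a working store of s words; per pass, the store is handed from player to player in a fixed order, so the total communication is k · r · s words.

```python
def simulate_distributed(partitions, init_state, pass_fns):
    """partitions: list of k lists, one per player, in the fixed ordering.
    init_state: initial working store, a tuple of s words.
    pass_fns: one fold function per pass; each maps (state, item) -> state.
    Returns (final_state, words_communicated)."""
    state = init_state
    s = len(init_state)                  # working-store size in words
    words = 0
    for fold in pass_fns:                # one round per pass
        for player_data in partitions:   # hand the store along the ordering
            for item in player_data:
                state = fold(state, item)
            words += s                   # send the store to the next player
    return state, words

# Two-pass example: pass 1 accumulates (sum, count, 0); pass 2 counts the
# items above the mean computed in pass 1, carried in the store.
pass1 = lambda st, x: (st[0] + x, st[1] + 1, st[2])
pass2 = lambda st, x: (st[0], st[1], st[2] + (x > st[0] / st[1]))

parts = [[1.0, 2.0], [3.0], [10.0]]      # k = 3 players
state, words = simulate_distributed(parts, (0.0, 0, 0), [pass1, pass2])
print(state[2], words)  # -> 1 18  (k=3 players * r=2 passes * s=3 words)
```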

We can apply this lemma to obtain a distributed algorithm for fixed-dimensional linear programming. (Fixed-dimensional linear programming is the case of linear programming where the dimension is not part of the input. Effectively, this means that exponential dependence on the dimension is permitted; the dependence on the number of constraints remains polynomial as usual.) This relies on an existing streaming result (Chan & Chen, 2007):

[(Chan & Chen, 2007)] Given n halfspaces in R^d (for d constant), we can compute the lowest point in their intersection by an O(log n)-pass Las Vegas algorithm that uses O(n^δ) space and runs in near-linear time with high probability, for any constant δ > 0.

There is a k-player algorithm for solving distributed fixed-dimensional linear programming that uses O(k · n^δ · log n) words of communication, for any constant δ > 0.

While the above streaming algorithm can be applied as a black box in Corollary 7.1, looking deeper into the streaming algorithm reveals room for improvement. As in the case of classification, suppose that we are permitted to violate an ε-fraction of the constraints. The above streaming algorithm achieves its bounds by eliminating a fixed fraction of constraints in each pass, and thus requires p passes, where p = O(log n). If we are allowed to violate an ε-fraction of constraints, we need only run the algorithm until at most an ε-fraction of the constraints remains, i.e., for p passes where p is now O(log(1/ε)). This allows us to replace n in all terms by 1/ε, resulting in an algorithm with communication independent of n.

There is a k-player algorithm for solving distributed fixed-dimensional linear programming that violates at most an ε-fraction of the constraints, and that uses O(k · (1/ε)^δ · log(1/ε)) words of communication, for any constant δ > 0.

7.2 Optimization via Multiplicative Weight Updates

The above result gives an approach for solving fixed-dimensional linear programming (exactly or with at most an ε-fraction of violated constraints) in a distributed setting. There is no known streaming algorithm for arbitrary-dimensional linear programming, so the stream-algorithm-based design strategy cannot be used. However, we will now show that the multiplicative weight update method can be applied in a distributed manner, and this allows us to solve general linear programming problems, as well as SDPs and other convex optimizations.

We first consider the problem of solving a general LP of the form max c·x subject to A x ≥ b, x ∈ P, where P is a set of "soft" constraints (for example, nonnegativity constraints x ≥ 0) and A x ≥ b are the "hard" constraints. Let OPT be the optimal value of the LP, obtained at x = x*. Then the multiplicative weight update method can be used to obtain a solution x̄ such that c·x̄ ≥ OPT and all (hard) constraints are satisfied approximately, i.e., A_i·x̄ ≥ b_i − ε for all i, where A_i is one row of the constraint matrix. We call such a solution a soft-ε-approximation (to distinguish it from a traditional ε-approximation, in which all constraints would be satisfied exactly and the objective would be approximately achieved).

The standard protocol works as follows (Arora et al., 2005a). We assume that the optimal value OPT has been guessed (it can be determined by binary search), and define the set of "soft" constraints to be P' = P ∩ {x : c·x ≥ OPT}. Typically, it is easy to check for feasibility in P'. We define a width parameter ρ = max_{i, x ∈ P'} |A_i·x − b_i|. Initialize the weights w_i^(1) = 1 for every hard constraint i. Then we run T iterations (with a step size η depending on ε and ρ) of the following:

  1. Set p_i^(t) = w_i^(t) / Σ_j w_j^(t).

  2. Find x^(t) ∈ P' satisfying Σ_i p_i^(t) (A_i·x^(t) − b_i) ≥ 0.

  3. Set w_i^(t+1) = w_i^(t) · (1 − η (A_i·x^(t) − b_i)/ρ).

At the end, we return the average x̄ = (1/T) Σ_t x^(t) as our soft-ε-approximation for the LP.
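The loop above can be sketched on a toy instance. This is an assumed instantiation, not the paper's protocol: we take the soft set P' to be the probability simplex, so the oracle in step 2 just picks the simplex vertex (a single coordinate) maximizing the weighted combination of constraints; the function name and the example constraints are ours.

```python
def mwu_lp_feasibility(A, b, eps=0.05, T=2000):
    """Find x in the probability simplex with A[i].x >= b[i] - eps for
    every hard constraint i, assuming a feasible point exists."""
    n_cons, d = len(A), len(A[0])
    # Width: largest possible |A[i].x - b[i]| over simplex vertices.
    rho = max(abs(A[i][j] - b[i]) for i in range(n_cons) for j in range(d))
    eta = eps / (2.0 * rho)
    w = [1.0] * n_cons                 # one weight per hard constraint
    x_sum = [0.0] * d
    for _ in range(T):
        total = sum(w)
        p = [wi / total for wi in w]   # step 1: distribution over constraints
        # Step 2: oracle over the simplex -- the best vertex maximizes the
        # weighted combination sum_i p[i] * A[i].x, so pick one coordinate.
        scores = [sum(p[i] * A[i][j] for i in range(n_cons)) for j in range(d)]
        j_star = scores.index(max(scores))
        x_sum[j_star] += 1.0
        # Step 3: violated constraints (A[i].x < b[i]) gain weight.
        for i in range(n_cons):
            margin = A[i][j_star] - b[i]    # A[i].x at the chosen vertex
            w[i] *= 1.0 - eta * margin / rho
    return [xs / T for xs in x_sum]    # averaged iterate

# Two constraints over the 2-simplex: x1 >= 0.4 and x2 >= 0.4.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.4, 0.4]
x = mwu_lp_feasibility(A, b)
print(all(sum(A[i][j] * x[j] for j in range(2)) >= b[i] - 0.05
          for i in range(2)))  # -> True
```

The weighting drives the oracle to alternate between the two vertices, so the averaged iterate lands near (0.5, 0.5), satisfying both constraints up to ε.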

We now describe a two-party distributed protocol for linear programming adapted from this scheme. The protocol is asymmetric: player P1 finds feasible values of x, and player P2 maintains the weights w over its own constraints. Specifically, player P1 constructs a feasible set consisting of the original feasible set P' and all of its own constraints. As above, P2 initializes its weight vector w to all ones, and then sends over the single aggregated constraint Σ_i p_i (A_i·x − b_i) ≥ 0 to P1. Player P1 then finds a feasible x using this constraint as well as its own feasible set (solving a linear program) and then sends the resulting x back to P2, who updates its weight vector w.

Each round of communication requires O(d) words of information, and there are T rounds of communication, for O(T · d) words in total. Notice that this is exponentially better than merely sending over all constraints.

There is a 2-player distributed protocol that uses O(T · d) words of communication, where T is the number of multiplicative-weight-update iterations above, to compute a soft-ε-approximation for a linear program.

A similar result applies to semidefinite programming (based on an existing primal MWU-based SDP algorithm (Arora et al., 2005b)), as well as to other optimizations for which the MWU method applies, such as rank minimization (Meka et al., 2008).

8 Conclusion

In this work, we have proposed a simple and efficient protocol that learns an ε-optimal distributed classifier for hyperplanes in arbitrary dimensions. The protocol also gracefully extends to k players. Our proposed technique WeightedSampling relates to the MWU-based meta framework, and we exploit this connection to extend WeightedSampling to distributed convex optimization problems. This makes our protocol applicable to a wide variety of distributed learning problems that can be formulated as optimization tasks over multiple distributed nodes.

References

  • Agarwal & Duchi (2011) Agarwal, Alekh and Duchi, John. Distributed delayed stochastic optimization. In NIPS. 2011.
  • Anthony & Bartlett (2009) Anthony, Martin and Bartlett, Peter L. Neural Network Learning: Theoretical Foundations. Cambridge, 2009.
  • Arora et al. (2005a) Arora, Sanjeev, Hazan, Elad, and Kale, Satyen. The multiplicative weights update method: a meta algorithm and applications. Technical report, 2005a.
  • Arora et al. (2005b) Arora, Sanjeev, Hazan, Elad, and Kale, Satyen. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In FOCS, 2005b.
  • Balcan et al. (2012) Balcan, Maria Florina, Blum, Avrim, Fine, Shai, and Mansour, Yishay. Distributed learning, communication complexity and privacy. Personal communication, February 2012.
  • Bauer & Kohavi (1999) Bauer, Eric and Kohavi, Ron. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36(1-2), 1999.
  • Bekkerman et al. (2011) Bekkerman, Ron, Bilenko, Mikhail, and Langford, John. Scaling up machine learning: Parallel and distributed approaches, 2011.
  • Bennett & Parrado-Hernández (2006) Bennett, Kristin P. and Parrado-Hernández, Emilio. The interplay of optimization and machine learning research. J. Mach. Learn. Res., 7:1265–1281, December 2006.
  • Chan & Chen (2007) Chan, Timothy M. and Chen, Eric Y. Multi-pass geometric algorithms. Disc. & Comp. Geom., 37(1):79–102, 2007.
  • Chang & Lin (2011) Chang, Chih Chung and Lin, Chih Jen. LIBSVM: A library for support vector machines. ACM TIST, 2(3), 2011.
  • Chazelle (2000) Chazelle, Bernard. The Discrepancy Method. Cambridge, 2000.
  • Chu et al. (2007) Chu, Cheng Tao, Kim, Sang Kyun, Lin, Yi An, Yu, YuanYuan, Bradski, Gary, Ng, Andrew Y., and Olukotun, Kunle. Map-reduce for machine learning on multicore. In NIPS. 2007.
  • Collins (2002) Collins, Michael. Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. In EMNLP, 2002.
  • Cormode et al. (2008) Cormode, Graham, Muthukrishnan, S., and Yi, Ke. Algorithms for distributed functional monitoring. In SODA, 2008.
  • Cormode et al. (2010) Cormode, Graham, Muthukrishnan, S., Yi, Ke, and Zhang, Qin. Optimal sampling from distributed streams. In PODS, 2010.
  • Daumé III et al. (2012) Daumé III, Hal, Phillips, Jeff, Saha, Avishek, and Venkatasubramanian, Suresh. Protocols for learning classifiers on distributed data. In AISTATS (To appear), 2012.
  • Dekel et al. (2010) Dekel, Ofer, Gilad-Bachrach, Ran, Shamir, Ohad, and Xiao, Lin. Optimal distributed online prediction using mini-batches. CoRR, abs/1012.1367, 2010.
  • Duchi et al. (2010) Duchi, John, Agarwal, Alekh, and Wainwright, Martin. Distributed dual averaging in networks. In NIPS. 2010.
  • Frank & Asuncion (2010) Frank, A. and Asuncion, A. UCI machine learning repository, 2010. URL http://archive.ics.uci.edu/ml.
  • Mann et al. (2009) Mann, Gideon, McDonald, Ryan, Mohri, Mehryar, Silberman, Nathan, and Walker, Dan. Efficient large-scale distributed training of conditional maximum entropy models. In NIPS, 2009.
  • Matousek (1991) Matousek, Jiri. Approximations and optimal geometric divide-and-conquer. In STOC, 1991.
  • McDonald et al. (2010) McDonald, Ryan, Hall, Keith, and Mann, Gideon. Distributed training strategies for the structured perceptron. In NAACL HLT, 2010.
  • Meka et al. (2008) Meka, Raghu, Jain, Prateek, Caramanis, Constantine, and Dhillon, Inderjit S. Rank minimization via online learning. In ICML, 2008.
  • Muthukrishnan (2005) Muthukrishnan, S. Data streams: algorithms and applications. Foundations and trends in theoretical computer science. Now Publishers, 2005.
  • Servedio & Long (2011) Servedio, Rocco A. and Long, Phil. Algorithms and hardness results for parallel large margin learning. In NIPS, 2011.
  • Teo et al. (2010) Teo, Choon Hui, Vishwanthan, S.V.N., Smola, Alex J., and Le, Quoc V. Bundle methods for regularized risk minimization. J. Mach. Learn. Res., 11:311–365, March 2010.
  • Zinkevich et al. (2010) Zinkevich, Martin, Weimer, Markus, Smola, Alex, and Li, Lihong. Parallelized stochastic gradient descent. In NIPS, 2010.