A Parallelizable Acceleration Framework for Packing Linear Programs

11/17/2017 ∙ by Palma London, et al. ∙ The Chinese University of Hong Kong ∙ California Institute of Technology

This paper presents an acceleration framework for packing linear programming problems where the amount of data available is limited, i.e., where the number of constraints m is small compared to the variable dimension n. The framework can be used as a black box to speed up linear programming solvers dramatically, by two orders of magnitude in our experiments. We present worst-case guarantees on the quality of the solution and the speedup provided by the algorithm, showing that the framework provides an approximately optimal solution while running the original solver on a much smaller problem. The framework can be used to accelerate exact solvers, approximate solvers, and parallel/distributed solvers. Further, it can be used for both linear programs and integer linear programs.


1 Introduction

This paper proposes a black-box framework that can be used to accelerate both exact and approximate linear programming (LP) solvers for packing problems while maintaining high-quality solutions.

LP solvers are at the core of many learning and inference problems, and often the linear programs of interest fall into the category of packing problems. Packing problems are linear programs of the following form:

maximize c⊤x (1a)
subject to Ax ≤ b (1b)
0 ≤ x ≤ 1 (1c)

where A is an m × n matrix and A, b, and c are all nonnegative.
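As a concrete illustration, a small instance of form (1) can be set up and solved with an off-the-shelf LP solver; the instance data below is randomly generated for illustration only and is not part of the paper's experiments.

```python
import numpy as np
from scipy.optimize import linprog

# A tiny packing LP of form (1): maximize c^T x subject to
# A x <= b and 0 <= x <= 1, with A, b, c all nonnegative.
rng = np.random.default_rng(0)
m, n = 3, 8                          # few constraints, more variables (m << n)
A = rng.uniform(0.0, 1.0, (m, n))
b = rng.uniform(1.0, 2.0, m)
c = rng.uniform(0.0, 1.0, n)

# linprog minimizes, so we negate c to maximize.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n, method="highs")
print("optimal value:", -res.fun)
```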

Packing problems arise in a wide variety of settings, including max cut [Trevisan1998], zero-sum matrix games [Nesterov2005], scheduling and graph embedding [Plotkin, Shmoys, and Tardos1995], flow controls [Bartal, Byers, and Raz2004], auction mechanisms [Zurel and Nisan2001], wireless sensor networks [Byers and Nasser2000], and many other areas. In machine learning specifically, they show up in an array of problems, e.g., in applications of graphical models [Ravikumar, Agarwal, and Wainwright2010], associative Markov networks [Taskar, Chatalbashev, and Koller2004], and in relaxations of maximum a posteriori (MAP) estimation problems [Sanghavi, Malioutov, and Willsky2008], among others.

In all these settings, practical applications require LP solvers to work at extreme scales and, despite decades of work, commercial solvers such as CPLEX and Gurobi do not scale as desired in many cases. Thus, despite a large literature, the development of fast, parallelizable solvers for packing LPs is still an active direction.

Our focus in this paper is on a specific class of packing LPs for which data is either very costly, or hard to obtain. In these situations m ≪ n; i.e., the number of data points available, m, is much smaller than the number of variables, n. Such instances are common in areas such as genetics, astronomy, and chemistry. There has been considerable research focusing on this class of problems in recent years, in the context of LPs [Donoho and Tanner2005, Bienstock and Iyengar2006] and also more generally in convex optimization and compressed sensing [Candes, Romberg, and Tao2006, Donoho2006], low rank matrix recovery [Recht, Fazel, and Parrilo2010, Candes and Plan2011], and graphical models [Yuan and Lin2007a, Mohan et al.2014].

Contributions of this paper.

We present a black-box acceleration framework for LP solvers. When given a packing LP and an algorithm A, the framework works by sampling an ε-fraction of the variables and using A to solve LP restricted to these variables. Then, the dual solution to this sampled LP is used to define a thresholding rule for the primal variables of the original LP; the variables are set to either 0 or 1 according to this rule. The framework has the following key properties:

  1. It can be used to accelerate exact or approximate LP-solvers (subject to some mild assumptions which we discuss below).

  2. Since the original algorithm is run only on a (much smaller) LP with an ε-fraction of the variables, the framework provides a dramatic speedup.

  3. The threshold rule can be used to set the values of the variables in parallel. Therefore, if A is a parallel algorithm, the framework gives a faster parallel algorithm with negligible overhead.

  4. Since the threshold rule sets the variables to integral values, the framework can be applied without modification to solve integer programs that have the same structure as LP, but with integer constraints replacing (1c).

There are two fundamental tradeoffs in the framework. The first is captured by the sample size, ε. Setting ε to be small yields a dramatic speedup of the algorithm A; however, if ε is set too small the quality of the solution suffers. A second tradeoff involves feasibility. In order to ensure that the output of the framework is feasible w.h.p. (and not just that each constraint is satisfied in expectation), the constraints of the sample LP are scaled down by a factor denoted by δ. Feasibility is guaranteed if δ is large enough; however, if it is too large, the quality of the solution (as measured by the approximation ratio) suffers.

Our main technical result is a worst-case characterization of the impact of ε and δ on the speedup provided by the framework and the quality of the solution. Assuming that algorithm A gives an approximation to the optimal solution of the dual satisfying the approximate complementary slackness conditions with parameters (α, β), we prove that the acceleration framework guarantees an approximation to the optimal solution of LP whose ratio degrades from αβ only by factors depending on δ and ε, under some assumptions about the input and ε. We formally state the result as Theorem 3.1, and note here that the required δ grows only slowly as the sample size εn shrinks, which highlights that the framework maintains a high-quality approximation even when the sample size is small (and thus the speedup provided by the framework is large).

The technical requirements of Theorem 3.1 impose some restrictions on both the family of LPs that can be provably solved using our framework and the algorithms that can be accelerated. In particular, Theorem 3.1 requires the entries of b to be large and the algorithm A to satisfy approximate complementary slackness conditions (see Section 2). While the condition on the input is restrictive, the condition on the algorithms is not – it is satisfied by most common LP solvers, e.g., exact solvers and many primal-dual approximation algorithms. Further, our experimental results demonstrate that these technical requirements are conservative – the framework produces solutions of comparable quality to the original LP-solver in settings that are far from satisfying the theoretical requirements. In addition, the accelerator works in practice for algorithms that do not satisfy approximate complementary slackness conditions, e.g., for gradient algorithms as in [Sridhar et al.2013]. In particular, our experimental results show that the accelerator obtains solutions that are close in quality to those obtained by the algorithms being accelerated on the complete problem, and that the solutions are obtained considerably faster (by up to two orders of magnitude). The results reported in this paper demonstrate this by accelerating the state-of-the-art commercial solver Gurobi on a wide array of randomly generated packing LPs, obtaining solutions with small relative error and speedups of over two orders of magnitude. Other experiments with other solvers are qualitatively similar and are not included.

When applied to parallel algorithms, there are added opportunities for the framework to reduce error while increasing the speedup, through speculative execution: the framework runs multiple clones of the algorithm speculatively. Each clone executes the original algorithm on a separate sample and then applies the thresholding rule, in parallel and asynchronously. This improves both the solution quality and the speed. It improves the quality of the solution because the best solution across the multiple samples can be chosen. It improves the speed because it mitigates the impact of stragglers, tasks that take much longer than expected due to contention or other issues. In our experiments, incorporating “cloning” into the acceleration framework triples the speedup obtained, while also reducing the error.

Summary of related literature.

The approach underlying our framework is motivated by recent work that uses ideas from online algorithms to make offline algorithms more scalable, e.g., [Mansour et al.2012, London et al.2017]. A specific inspiration for this work is [Agrawal, Wang, and Ye2014], which introduces an online algorithm that uses a two-step procedure: it solves an LP based on an initial fraction of the stages and then uses the solution as the basis of a rounding scheme in the later stages. The algorithm only works when the arrival order is random, which is analogous to sampling in the offline setting. However, [Agrawal, Wang, and Ye2014] relies on exactly solving the LP given by the first stages; considering approximate solutions of the sampled problem (as we do) adds complexity to the algorithm and analysis. Additionally, we can leverage the offline setting to fine-tune δ in order to optimize our solution while ensuring feasibility.

The sampling phase of our framework is reminiscent of the method of sketching, in which the data matrix is multiplied by a random matrix in order to compress the problem and thus reduce computation time by working on a smaller formulation, e.g., see [Woodruff2014]. However, sketching is designed for overdetermined linear regression problems, where m ≫ n; thus compression is desirable. In our case, we are concerned with underdetermined problems, where m ≪ n; thus compression is not appropriate. Rather, the goal of sampling the variables is to be able to approximately determine the thresholds in the second step of the framework. This difference means the approaches are distinct.

The sampling phase of the framework is also reminiscent of the experiment design problem, in which the goal is to solve the least squares problem using only a subset of available data while minimizing the error covariance of the estimated parameters, see e.g., [Boyd and Vandenberghe2004]. Recent work [Riquelme, Johari, and Zhang2017] applies these ideas to online algorithms, when collecting data for regression modeling. Like sketching, experiment design is applied in the overdetermined setting, whereas we consider the underdetermined scenario. Additionally, instead of sampling constraints, we sample variables.

The second stage of our algorithm is a thresholding step and is related to the rich literature of LP rounding, see [Bertsimas and Vohra1998] for a survey. Typically, rounding is used to arrive at a solution to an ILP; however we use thresholding to “extend” the solution of a sampled LP to the full LP. The scheme we use is a deterministic threshold based on the complementary slackness condition. It is inspired by [Agrawal, Wang, and Ye2014], but adapted to hold for approximate solvers rather than exact solvers. In this sense, the most related recent work is [Sridhar et al.2013], which proposes a scheme for rounding an approximate LP solution. However, [Sridhar et al.2013] uses all of the problem data during the approximation step, whereas we show that it is enough to use a (small) sample of the data.

A key feature of our framework is that it can be parallelized easily when used to accelerate a distributed or parallel algorithm. There is a rich literature on distributed and parallel LP solvers, e.g., [Yarmish and Slyke2009, Notarstefano and Bullo2011, Burger et al.2012, Richert and Cortés2015]. More specifically, there is significant interest in distributed strategies for approximately solving covering and packing linear problems, such as the problems we consider here, e.g., [Luby and Nisan1993, Young2001, Bartal, Byers, and Raz2004, Awerbuch and Khandekar2008, Allen-Zhu and Orecchia2015].

2 A Black-Box Acceleration Framework

In this section we formally introduce our acceleration framework. At a high level, the framework accelerates an LP solver by running the solver in a black-box manner on a small sample of variables and then using a deterministic thresholding scheme to set the variables in the original LP. The framework can be used to accelerate any LP solver that satisfies the approximate complementary slackness conditions. The solution of an approximation algorithm for a family of linear programs satisfies the approximate complementary slackness if the following holds. Let x be a feasible solution to the primal and y be a feasible solution to the dual.

  • Primal Approximate Complementary Slackness: For α ≥ 1 and j = 1, …, n, if x_j > 0 then c_j ≤ Σ_i a_ij y_i ≤ α c_j.

  • Dual Approximate Complementary Slackness: For β ≥ 1 and i = 1, …, m, if y_i > 0 then b_i/β ≤ Σ_j a_ij x_j ≤ b_i.

We call an algorithm whose solution is guaranteed to satisfy the above conditions an (α, β)-approximation algorithm for the family. This terminology is non-standard, but is instructive when describing our results. It stems from a foundational result which states that an algorithm that satisfies the above conditions is an αβ-approximation algorithm for any LP in the family [Buchbinder and Naor2009].

The framework we present can be used to accelerate any (α, β)-approximation algorithm. While this is a stronger condition than simply requiring that A is an αβ-approximation algorithm, many common dual ascent algorithms satisfy this condition, e.g., [Agrawal, Klein, and Ravi1995, Balakrishnan, Magnanti, and Wong1989, Bar-Yehuda and Even1981, Erlenkotter1978, Goemans and Williamson1995]. For example, the Steiner tree and vertex cover approximation algorithms of [Agrawal, Klein, and Ravi1995] and [Bar-Yehuda and Even1981], respectively, are both algorithms of this type.
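For intuition, the two conditions can be checked numerically for an exact solver, which satisfies them with α = β = 1. The sketch below uses SciPy's HiGHS backend and randomly generated data; both are stand-ins for illustration, not part of the paper's setup.

```python
import numpy as np
from scipy.optimize import linprog

# Check approximate complementary slackness for an exact solver,
# i.e., an (alpha, beta)-approximation with alpha = beta = 1.
rng = np.random.default_rng(1)
m, n = 3, 10
A = rng.uniform(0.0, 1.0, (m, n))
b = rng.uniform(1.0, 2.0, m)
c = rng.uniform(0.0, 1.0, n)

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n, method="highs")
x = res.x
y = -res.ineqlin.marginals      # dual prices of A x <= b
s = -res.upper.marginals        # dual prices of x <= 1

alpha = beta = 1.0
tol = 1e-7
for j in range(n):
    # primal condition: x_j > 0 implies c_j <= (A^T y + s)_j <= alpha * c_j
    if x[j] > tol:
        dual_lhs = A[:, j] @ y + s[j]
        assert c[j] - tol <= dual_lhs <= alpha * c[j] + tol
for i in range(m):
    # dual condition: y_i > 0 implies b_i / beta <= (A x)_i <= b_i
    if y[i] > tol:
        assert b[i] / beta - tol <= A[i] @ x <= b[i] + tol
print("approximate complementary slackness holds with alpha = beta = 1")
```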

Given an (α, β)-approximation algorithm A, the acceleration framework works in two steps. The first step is to sample a subset S of the variables, with |S| = εn, and use A to solve the following sample LP, which we call LP (2). For clarity, we relabel the variables so that the sampled variables are labeled 1, …, εn.

maximize Σ_{j=1}^{εn} c_j x_j (2a)
subject to Σ_{j=1}^{εn} a_ij x_j ≤ (1 − δ)εb_i for all i (2b)
0 ≤ x_j ≤ 1 for all j (2c)

Here, δ is a parameter set to ensure feasibility during the thresholding step, and ε is a parameter that determines the fraction of the primal variables that are sampled; the guarantees additionally depend on the parameters (α, β) of the approximate complementary slackness conditions satisfied by A. Our analytic results give insight for setting δ and ε but, for now, both should be thought of as close to zero. Similarly, while the results hold for any (α, β), they are most interesting when αβ is close to 1 (i.e., for nearly exact solvers). There are many such algorithms, given the recent interest in designing approximation algorithms for LPs, e.g., [Sridhar et al.2013, Allen-Zhu and Orecchia2015].

The second step in our acceleration framework uses the dual prices from the sample LP in order to set a threshold for a deterministic thresholding procedure, which is used to build the solution of LP. Specifically, let y and s denote the dual variables corresponding to the constraints (2b) and (2c) in the sample LP, respectively. We define the allocation (thresholding) rule as follows:

Input: Packing LP (1), LP solver A, ε, δ
Output: x ∈ {0, 1}^n
  1. Select εn primal variables uniformly at random. Label this set S.

  2. Use A to find an (approximate) dual solution y to the sample LP (2).

  3. Set x_j = 1 if c_j > Σ_i a_ij y_i, and x_j = 0 otherwise, for all j = 1, …, n.

  4. Return x.

Algorithm 1 Core acceleration algorithm

We summarize the core algorithm of the acceleration framework described above in Algorithm 1. When implementing the acceleration framework it is desirable to search for the minimal δ that allows for feasibility. This additional step is included in the full pseudocode of the acceleration framework given in Algorithm 2.
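A minimal sketch of Algorithm 1 follows, with an exact solver (SciPy's linprog, so α = β = 1) standing in for the black-box algorithm A. The instance data, sizes, and parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import linprog

def accelerate(c, A, b, epsilon, delta, rng):
    """Sketch of Algorithm 1 with an exact solver in the role of A."""
    m, n = A.shape
    k = max(1, int(epsilon * n))
    S = rng.choice(n, size=k, replace=False)          # step 1: sample variables
    res = linprog(-c[S], A_ub=A[:, S],                # step 2: solve the sample LP
                  b_ub=(1 - delta) * epsilon * b,     # scaled-down constraints (2b)
                  bounds=[(0, 1)] * k, method="highs")
    y = -res.ineqlin.marginals                        # dual prices of (2b)
    # step 3: threshold every variable of the *original* LP against the
    # dual prices; with an exact solver the rule is c_j > y^T a_j.
    return (c > y @ A).astype(float)

rng = np.random.default_rng(2)
m, n = 5, 400
A = rng.uniform(0, 1, (m, n)) * (rng.random((m, n)) < 0.5)
b = np.full(m, n / 10.0)
c = rng.uniform(0, 1, n)
x = accelerate(c, A, b, epsilon=0.2, delta=0.1, rng=rng)
print("feasible:", bool(np.all(A @ x <= b)), "value:", float(c @ x))
```

Note that a single run with a fixed δ is not guaranteed to be feasible; Algorithm 2 below addresses this by searching over δ.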

Input: Packing LP (1), LP solver A, ε, initial δ
Output: x ∈ {0, 1}^n
Set δ to its initial value. while δ < 1 do
       x = Algorithm 1(LP, A, ε, δ). if x is a feasible solution to LP then
            Return x.
      else
            Increase δ.
      
Algorithm 2 Pseudocode for the full framework.

It is useful to make a few remarks about the generality of this framework. First, since the allocation rule functions as a thresholding rule, the final solution output by the accelerator is integral. Thus, it can be viewed as an ILP solver based on relaxing the ILP to an LP, solving the LP, and rounding the result. The difference is that it does not solve the full LP, but only a (much smaller) sample LP; so it provides a significant speedup over traditional approaches. Second, the framework is easily parallelizable. The thresholding step can be done independently and asynchronously for each variable and, further, the framework can easily integrate speculative execution. Specifically, the framework can start multiple clones speculatively, i.e., take multiple samples of variables, run the algorithm on each sample, and then round each sample in parallel. This provides two benefits. First, it improves the quality of the solution because the output of the “clone” with the best solution can be chosen. Second, it improves the running time since it curbs the impact of stragglers, tasks that take much longer than expected due to contention or other issues. Stragglers are a significant source of slowdown in clusters; e.g., nearly one-fifth of all tasks can be categorized as stragglers in Facebook’s Hadoop cluster [Ananthanarayanan et al.2013]. There has been considerable work designing systems that reduce the impact of stragglers, and these primarily rely on speculative execution, i.e., running multiple clones of tasks and choosing the first to complete [Ananthanarayanan et al.2010, Ananthanarayanan et al.2014, Ren et al.2015]. Running multiple clones in our acceleration framework has the same benefit. To achieve both the improvement in solution quality and running time, the framework runs several clones in parallel and chooses the best solution among the first few to complete. We illustrate the benefit of this approach in our experimental results in Section 3.
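The cloning idea can be sketched generically: launch several clones, wait for the first few to finish, and keep the best. Here solve_clone is a stand-in for running Algorithm 1 on a fresh sample; the simulated runtimes and objective values are illustrative only.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def solve_clone(seed):
    """Stand-in for Algorithm 1 run on a fresh sample of variables."""
    rng = random.Random(seed)
    time.sleep(rng.uniform(0.0, 0.05))     # simulated (possibly straggling) runtime
    return rng.uniform(0.9, 1.0)           # simulated objective value

n_clones, n_keep = 8, 4
with ThreadPoolExecutor(max_workers=n_clones) as pool:
    futures = [pool.submit(solve_clone, s) for s in range(n_clones)]
    # Speculative execution: wait only for the first n_keep clones to
    # finish, mitigating stragglers, then keep the best value among them.
    done = [f.result() for f in list(as_completed(futures))[:n_keep]]
best = max(done)
print("best of first", n_keep, "clones:", round(best, 3))
```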

3 Results

In this section we present our main technical result, a worst-case characterization of the quality of the solution provided by our acceleration framework. We then illustrate the speedup provided by the framework through experiments using Gurobi, a state-of-the-art commercial solver.

3.1 A Worst-case Performance Bound

The following theorem bounds the quality of the solution provided by the acceleration framework. Let LP be a packing LP with n variables and m constraints, as in (1). For simplicity, take εn to be integral.

Theorem 3.1.

Let A be an (α, β)-approximation algorithm for packing LPs, with runtime f(n, m). For any sufficiently small ε and suitable δ, Algorithm 1 runs in time f(εn, m) + O(n) and obtains, with high probability, a feasible solution to LP whose approximation ratio degrades from αβ only by a factor that vanishes as δ and ε tend to zero.

Proof.

The approximation ratio follows from Lemmas 4.2 and 4.7 in Section 4, with a rescaling of ε by 1/3 in order to simplify the theorem statement. The runtime follows from the fact that A is executed on an LP with εn variables and at most m constraints and that, after running A, the thresholding step is used to set the value of all n variables. ∎

The key trade-off in the acceleration framework is between the size of the sample LP, determined by ε, and the resulting quality of the solution, determined by the feasibility parameter, δ. The accelerator provides a large speedup if ε can be made small without causing δ to become too large. Theorem 3.1 quantifies this trade-off: the required δ grows only slowly as the sample size εn shrinks. Thus, ε can be kept small without impacting the loss in solution quality too much. The bound on δ in the theorem also defines the class of problems for which the accelerator is guaranteed to perform well—problems where the entries of b are large and ε is not too small. Nevertheless, our experimental results successfully apply the framework well outside of these parameters—the theoretical analysis provides a very conservative view on the applicability of the framework.

Theorem 3.1 considers the acceleration of (α, β)-approximation algorithms. As we have already noted, many common approximation algorithms fall into this class. Further, any exact solver satisfies this condition with α = β = 1. For exact solvers, therefore, the approximation loss in Theorem 3.1 comes only from δ and ε (since αβ = 1).

In addition to exact and approximate LP solvers, our framework can also be used to convert LP solvers into ILP solvers, since the solutions it provides are always integral; and it can be parallelized easily, since the thresholding step can be done in parallel. We emphasize these points below.

Corollary 3.2.

Let A be an (α, β)-approximation algorithm for packing LPs, with runtime f(n, m), and let ε and δ be as in Theorem 3.1.

  • Let ILP be an integer program of the same form as LP (1) but with integrality constraints on the variables. Running Algorithm 1 on LP (1) obtains a feasible approximation to the optimal solution of ILP with the same guarantee and runtime as in Theorem 3.1.

  • If A is a parallel algorithm, then executing Algorithm 1 on k processors in parallel obtains a feasible approximation to the optimal solution of LP or ILP with the same guarantee, with runtime f_k(εn, m) + O(n/k), where f_k denotes A’s runtime for the sample program on k processors.

3.2 Accelerating Gurobi

We illustrate the speedup provided by our acceleration framework by using it to accelerate Gurobi, a state-of-the-art commercial solver. Due to limited space, we do not present results applying the accelerator to other, more specialized, LP solvers; however, the improvements shown here provide a conservative estimate of the improvements for parallel implementations, since the thresholding phase of the framework enjoys a linear speedup when parallelized. Similarly, the speedup obtained when accelerating an exact solver (such as Gurobi) provides a conservative estimate of the improvements when accelerating approximate solvers or when solving ILPs.

Note that our experiments consider situations where the assumptions of Theorem 3.1 about ε, δ, and b do not hold. Thus, they highlight that the assumptions of the theorem are conservative and the accelerator can perform well outside of the settings prescribed by Theorem 3.1. This is also true with respect to the assumptions on the algorithm being accelerated. While our proof requires the algorithm to be an (α, β)-approximation, the accelerator works well for other types of algorithms too. For example, we have applied it to gradient algorithms such as [Sridhar et al.2013] with results that parallel those presented for Gurobi below.

Experimental Setup.

To illustrate the performance of our accelerator, we run Algorithm 2 on randomly generated LPs with m ≪ n. Each element of A, denoted a_ij, is first generated uniformly at random from [0, 1] and then set to zero with probability p. Hence, p controls the sparsity of matrix A, and we vary p in the experiments. The elements of the vector b are drawn i.i.d. uniformly at random, and each element of the vector c is fixed to a constant. (Note that the results are qualitatively the same for other choices of c.) By default, the parameters of the accelerator are set to small values of ε and δ, though these are varied in some experiments. Each point in the presented figures is the average over 100 executions under different realizations of the random data.
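The generator just described can be sketched as follows; the specific dimensions, the scaling of b, and the constant value of c are stand-in assumptions where the text does not pin them down.

```python
import numpy as np

def generate_instance(m, n, p_zero, rng):
    """Random packing LP in the style of the experimental setup above."""
    A = rng.uniform(0.0, 1.0, (m, n))
    A[rng.random((m, n)) < p_zero] = 0.0    # p_zero controls the sparsity of A
    b = rng.uniform(0.0, 1.0, m) * n / 10   # i.i.d. uniform; scaling is a stand-in
    c = np.ones(n)                          # elements of c fixed to a constant
    return A, b, c

rng = np.random.default_rng(3)
A, b, c = generate_instance(m=50, n=5000, p_zero=0.9, rng=rng)
print("sparsity of A:", round(float((A == 0).mean()), 2))
```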

To assess the quality of the solution, we measure the relative error and speedup of the accelerated algorithm as compared to the original algorithm. The relative error is defined as |OPT − obj|/OPT, where obj is the objective value produced by our algorithm and OPT is the optimal objective value. The speedup is defined as the run time of the original LP solver divided by that of our algorithm.
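These two metrics can be written directly; the numbers in the example call are made up for illustration.

```python
def relative_error(obj, opt):
    """|OPT - obj| / OPT, the relative gap to the optimal objective value."""
    return abs(opt - obj) / opt

def speedup(t_original, t_accelerated):
    """Runtime of the original solver divided by the accelerated runtime."""
    return t_original / t_accelerated

print(relative_error(97.0, 100.0), speedup(200.0, 1.0))
```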

We implement the accelerator in Matlab and use it to accelerate Gurobi. The experiments are run on a server with an 8-core Intel E5-2623 v3 @ 3.0 GHz and 64 GB RAM. We intentionally perform the experiments with a small degree of parallelism in order to obtain a conservative estimate of the acceleration provided by our framework. As the degree of parallelism increases, the speedup of the accelerator increases and the quality of the solution remains unchanged (unless cloning is used, in which case it improves).

Experimental Results.

Our experimental results highlight that our acceleration framework provides speedups of two orders of magnitude, while maintaining high-quality solutions with small relative error.

The trade-off between relative error and speed. The fundamental trade-off in the design of the accelerator is between the sample size, ε, and the quality of the solution. The speedup of the framework comes from choosing ε small, but if it is chosen too small then the quality of the solution suffers. For the algorithm to provide improvements in practice, it is important for there to be a sweet spot where ε is small and the quality of the solution is still good, as indicated in the shaded region of Figure 1.

Figure 1: Illustration of the relative error and speedup across sample sizes, ε. Two levels of sparsity, p, are shown.

Scalability. In addition to speeding up LP solvers, our acceleration framework provides significantly improved scalability. Because the LP solver only needs to be run on a (small) sample LP, rather than the full LP, the accelerator provides an order-of-magnitude increase in the size of problems that can be solved. This is illustrated in Figure 2, which shows the runtime and relative error of the accelerator. In these experiments we fix m, ε, and δ as we scale n. As Figure 2(a) shows, one can choose ε more aggressively in large problems, since leaving ε fixed leads to improved accuracy for large-scale problems. Doing so would lead to larger speedups; thus, by keeping ε fixed we provide a conservative estimate of the improved scalability provided by the accelerator. The results in Figure 2(b) illustrate the improvements in scalability provided by the accelerator. Gurobi’s run time grows quickly until, finally, it runs into memory errors and cannot arrive at a solution. In contrast, the runtime of the accelerator grows slowly, and it can (approximately) solve problems of much larger size. To emphasize the improvement in scalability, we run an experiment on a laptop with an Intel Core i5 CPU and 8 GB RAM. For a sufficiently large problem, Gurobi fails due to memory limits; in contrast, the accelerator produces a solution in minutes with small relative error.

(a) Relative Error
(b) Runtime
Figure 2: Illustration of the relative error and runtime as the problem size, n, grows.

The benefits of cloning. Speculative execution is an important tool that parallel analytics frameworks use to combat the impact of stragglers. Our acceleration framework can implement speculative execution seamlessly by running multiple clones (samples) in parallel and choosing the ones that finish the quickest. We illustrate the benefits associated with cloning in Figure 3. This figure shows the percentage gain in relative error and speedup associated with using different numbers of clones. In these experiments, we fix ε and δ. We vary the number of clones run, and the accelerator outputs a solution after the fastest four clones have finished. Note that the first four clones do not impact the speedup as long as they can be run in parallel. However, for larger numbers of clones our experiments provide a conservative estimate of the value of cloning, since our server has only 8 cores. The improvements would be larger than shown in Figure 3 in a system with more parallelism. Despite this conservative comparison, the improvements illustrated in Figure 3 are dramatic. Cloning substantially reduces the relative error of the solution and triples the speedup. Note that these improvements are significant even though the solver we are accelerating is not a parallel solver.

(a) Relative Error
(b) Speedup
Figure 3: Illustration of the impact of cloning on solution quality as the number of clones grows.

3.3 Case Study

To illustrate the performance in a specific practical setting, we consider an example focused on optimal resource allocation in a network. We consider an LP that represents a multi-constraint knapsack problem associated with placing resources at intersections in a city transportation network. For example, we can place public restrooms, advertisements, or emergency supplies at intersections in order to maximize social welfare, but such that there never is a particularly high concentration of resources in any area.

Specifically, we consider a subset of the California road network dataset [Leskovec and Krevl2014], consisting of connected traffic intersections. We consider only a subset of the intersections because Gurobi is unable to handle the full dataset when run on a laptop with an Intel Core i5 CPU and 8 GB RAM. We choose a set of the intersections uniformly at random and define for each of them a local vicinity of neighboring intersections, allowing overlap between the vicinities. The goal is to place resources strategically at intersections, such that the allocation is not too dense within each local vicinity. Each intersection is associated with a binary variable which represents a yes or no decision to place resources there. Resources are constrained such that the number of resource units placed in each local vicinity does not exceed a fixed capacity.

Thus, the dataset is as follows. Each element a_ij of the data matrix A is a binary value representing whether or not the j-th intersection is part of the i-th local vicinity. There are m local vicinities and n intersections; hence A is an m × n matrix. Within each local vicinity, the capacity b_i bounds the number of resource units.

The placement of resources at particular locations has an associated utility, which quantifies how beneficial it is to place resources at various locations. For example, the benefit of placing public restrooms, advertisements, or emergency supplies at certain locations may be proportional to the population of the surrounding area. In this problem, we randomly draw the utilities from Unif(0, 1). The objective value is the sum of the utilities at the intersections where resources are placed.
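The case-study LP can be sketched on synthetic stand-in data; the real experiment uses the road-network dataset, so the sizes, vicinity construction, and capacity below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import linprog

# Multi-constraint knapsack in the style of the case study:
# A[i, j] = 1 iff intersection j lies in vicinity i, utilities are
# Unif(0, 1), and each vicinity holds at most `cap` resource units.
rng = np.random.default_rng(4)
n_vicinities, n_intersections, vicinity_size, cap = 40, 1000, 50, 5
A = np.zeros((n_vicinities, n_intersections))
for i in range(n_vicinities):
    members = rng.choice(n_intersections, size=vicinity_size, replace=False)
    A[i, members] = 1.0                       # overlap between vicinities allowed
u = rng.uniform(0.0, 1.0, n_intersections)    # utility of each placement
b = np.full(n_vicinities, float(cap))

# LP relaxation of the placement ILP: maximize total utility.
res = linprog(-u, A_ub=A, b_ub=b, bounds=[(0, 1)] * n_intersections,
              method="highs")
print("LP relaxation value:", round(-res.fun, 2))
```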

Figure 4 demonstrates the relative error and runtime of the accelerator compared to Gurobi as we vary the sample size ε. There is a substantial speedup even at small relative error, and the speedup grows further when a coarser approximation is acceptable.

(a) Relative Error
(b) Runtime
Figure 4: Illustration of the relative error and runtime across sample sizes, ε, for the real-data experiment on the California road network dataset.

4 Proofs

In this section we present the technical lemmas used to prove Theorem 3.1. The approach of the proof is inspired by the techniques in [Agrawal, Wang, and Ye2014]; however, the analysis in our case is more involved. This is because our result applies to approximate LP solvers, while the techniques in [Agrawal, Wang, and Ye2014] only apply to exact solvers. For example, this leads our framework to have several error parameters (ε, δ, and the approximation parameters of the solver), while [Agrawal, Wang, and Ye2014] has a single error parameter.

The proof has two main steps: (1) show that the solution provided by Algorithm 1 is feasible with high probability (Lemma 4.2); and (2) show that the value of the solution is sufficiently close to optimal with high probability (Lemma 4.7). In both cases, we use the following concentration bound, e.g., [van der Vaart and Wellner1996].

Theorem 4.1 (Hoeffding-Bernstein Inequality).

Let Y_1, …, Y_k be random samples without replacement from the real numbers y_1, …, y_N, and let ȳ = (1/N)Σ_{i=1}^N y_i. Then, for every t > 0, P(|Σ_{i=1}^k Y_i − kȳ| ≥ t) ≤ 2 exp(−t² / (2kσ̂² + tΔ)), where Δ = max_i y_i − min_i y_i and σ̂² = (1/N)Σ_{i=1}^N (y_i − ȳ)².

Step 1: The solution is feasible

Lemma 4.2.

Let A be an (α, β)-approximation algorithm for packing LPs. For any ε and δ satisfying the conditions of Theorem 3.1, the solution Algorithm 1 gives to LP is feasible with high probability, where the probability is over the choice of samples.

Proof.

Define a price-realization, , of a price vector as the set (note that ) and denote, a “row” of as . We say that is infeasible if . The approach of this proof is to bound the probability that, for a given sample, the sample LP is feasible while there is some for which is not feasible in the original LP.

To begin, note that naively there seem to be possible realizations of over all possible price vectors , as . However, a classical result of combinatorial geometry [Orlik and Terao1992] shows that there are only possible realizations, since each is characterized by a separation of points in an -dimensional plane by a hyperplane, where denotes the -th column of . The maximal number of such hyperplanes is .
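The counting argument can be illustrated numerically: each realization is the subset of columns falling on one side of a hyperplane, so the number of distinct realizations grows polynomially in the number of columns rather than exponentially. The sketch below uses made-up sizes (m = 2, n = 10) and assumes the realization is the set K(p) = {j : pᵀA_j < c_j}; it samples many random price vectors and checks that the number of distinct sets stays far below 2ⁿ, consistent with an O(nᵐ)-type bound.

```python
import random

random.seed(1)
m, n = 2, 10  # illustrative sizes: m constraints, n variables

# Random packing-LP data: nonnegative columns A_j and objective weights c_j.
A = [[random.random() for _ in range(n)] for _ in range(m)]
c = [random.random() for _ in range(n)]

def realization(p):
    """Columns a price vector p would admit: those with p^T A_j < c_j."""
    return frozenset(
        j for j in range(n)
        if sum(p[i] * A[i][j] for i in range(m)) < c[j]
    )

# Sample many nonnegative price vectors and collect distinct realizations.
seen = {realization([random.uniform(0, 3) for _ in range(m)])
        for _ in range(50000)}

print(f"distinct realizations: {len(seen)}  (n^m = {n ** m}, 2^n = {2 ** n})")
```

Random sampling can only undercount the realizations, so the observed count is a lower bound; even so, it stays well under the polynomial bound while 2ⁿ = 1024 subsets are combinatorially possible.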

Next, we define a sample as -good if . Let be the solution to the sampled LP for some sample . We say that (possibly ) is -good if . The following claim relates these two definitions.

Claim 4.3.

If a sample is -good then is -good.

Proof.

Denote the dual solution of the sampled LP by . The dual complementary slackness conditions imply that if , then for . Further, the allocation rule of Algorithm 1 sets if , which only occurs when . Therefore, if , then , which in turn implies that . Consequently,

which shows that is -good, completing the proof. ∎
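The allocation rule referred to above can be written as a one-line threshold test. The sketch below is an illustrative rendering only: the function name and the strict inequality c_j > pᵀA_j are assumptions about the rule, not a verbatim transcription of Algorithm 1.

```python
def allocate(c, A, p):
    """Threshold allocation from dual prices: x_j = 1 iff c_j > p^T A_j.

    Columns whose value strictly exceeds their price-weighted cost are
    taken; ties (c_j = p^T A_j) resolve to 0, matching the proof's
    observation that x_j > 0 can only occur when c_j - p^T A_j > 0.
    """
    n, m = len(c), len(p)
    return [1.0 if c[j] > sum(p[i] * A[i][j] for i in range(m)) else 0.0
            for j in range(n)]

# Tiny example: one constraint, three columns, uniform cost 1 per column.
x = allocate([0.9, 0.4, 0.6], [[1.0, 1.0, 1.0]], [0.5])
print(x)  # only the columns with value above the price 0.5 are taken
```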

Next, fix the LP and . For the purpose of the proof, choose uniformly at random, and sample elements without replacement from the variables taking the values . Call this sample , and let be the random variable denoting the sum of the sampled values. Note that , where the expectation is over the choice of , and that the events and are equivalent. The probability that a sample is -good and is infeasible is

(3)
(4)
(5)

where (3) is due to Claim 4.3, (4) uses Theorem 4.1, and (5) uses the fact that .

To complete the proof, we now take a union bound over all possible realizations of , which we bounded earlier by , and values of . ∎

Step 2: The solution is close to optimal

To prove that the solution is close to optimal we make two mild, technical assumptions.

Assumption 4.4.

For any dual prices , there are at most columns such that .

Assumption 4.5.

Algorithm maintains primal and dual solutions and , respectively, with only if .

Assumption 4.4 does not always hold; however, it can be enforced by randomly perturbing each by a small amount (see, e.g., [Devanur and Hayes2009, Agrawal, Wang, and Ye2014]). Assumption 4.5 holds for any “reasonable” -approximation dual ascent algorithm, and any algorithm that does not satisfy it can easily be modified to do so. These assumptions are used only to prove the following claim, which is used in the proof of the lemma that follows.
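The perturbation trick for Assumption 4.4 is straightforward to implement: jittering each objective coefficient by a tiny continuous amount puts the data in general position, so ties of the form pᵀA_j = c_j become degenerate events. The sketch below is one illustrative way to do this; the helper name and the 1e-9 scale are arbitrary choices, not from the paper.

```python
import random

random.seed(2)

def perturb(c, scale=1e-9):
    """Break ties c_j = p^T A_j by jittering each c_j independently.

    After a generic continuous perturbation the points (A_j, c_j) are in
    general position, so any single hyperplane p^T A_j = c_j contains
    only a constant number of columns, as Assumption 4.4 requires; the
    jitter changes the optimal value only negligibly.
    """
    return [cj + random.uniform(0, scale) * max(cj, 1.0) for cj in c]

c = [0.5, 0.5, 0.5, 0.5]   # heavily tied coefficients
cp = perturb(c)
print(cp)                   # all distinct, each within ~1e-9 of the original
```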

Claim 4.6.

Let and be solutions to the sampled LP (2). Then and differ in at most values of .

Proof.

For all , if then by primal complementary slackness, and by the definition of the allocation rule (recall that if then , by Assumption 4.5). Therefore, any difference between them must occur for such that . For all such that , it must hold that by complementary slackness, but then also by the allocation rule. Assumption 4.4 then completes the proof. ∎

Lemma 4.7.

Let be a -approximation algorithm for packing LPs, . For any , , the solution Algorithm 1 gives to LP (1) is a -approximation to the optimal solution with probability at least , where the probability is over the choice of samples.

Proof.

Denote the primal and dual solutions to the sampled LP in (2) of Algorithm 1 by , . For purposes of the proof, we construct the following related LP.

(6)
subject to

where

Note that has been set to guarantee that the LP is always feasible, and that and satisfy the (exact) complementary slackness conditions, where if , and if . In particular, note that preserves the exact complementary slackness condition, as is set to zero when . Therefore and are optimal solutions to LP (6).

A consequence of the approximate dual complementary slackness condition for the solution is that the -th primal constraint of LP (2) is almost tight when :

This allows us to bound as follows.

where the first inequality follows from Claim 4.6 and the second follows from the fact that . Thus:

In the final step, we take close to one, i.e., we assume . The constant in the lemma can be adjusted if applications with larger are desired.

Applying the union bound gives that, with probability at least , it holds that . It follows that, if is an optimal solution to , then is a feasible solution to LP (6). Thus, the optimal value of LP (6) is at least . ∎
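To make the overall pipeline concrete, the sketch below instantiates the framework for the simplest case m = 1, a fractional knapsack, where an exact black-box solver and its dual price are available in closed form via a greedy scan. The sampling ratio, the (1 − ε) budget shrink, the strict-threshold extension rule, and all names are illustrative choices under assumed notation, not the paper's exact Algorithm 1.

```python
import random

random.seed(3)

def solve_knapsack(c, a, b):
    """Exact solver for max c.x s.t. a.x <= b, 0 <= x <= 1 (the m = 1 case).

    Greedy by value density c_j / a_j; returns (x, p), where the dual
    price p is the density of the item at which the budget runs out.
    """
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    x, cap, p = [0.0] * len(c), b, 0.0
    for j in order:
        if a[j] <= cap:
            x[j], cap = 1.0, cap - a[j]
        else:
            x[j] = cap / a[j]            # fractional boundary item
            p = c[j] / a[j]              # marginal (dual) price of capacity
            break
    return x, p

# Synthetic instance: n columns, one packing constraint.
n = 10000
c = [random.uniform(0.0, 1.0) for _ in range(n)]
a = [random.uniform(0.5, 1.5) for _ in range(n)]
b = 0.1 * sum(a)                         # budget admits roughly 10% of columns

# Step 1: solve a sampled LP on s columns with a shrunken, rescaled budget.
s, eps = 1000, 0.1
S = random.sample(range(n), s)
_, p = solve_knapsack([c[j] for j in S], [a[j] for j in S],
                      (1 - eps) * (s / n) * b)

# Step 2: extend to all n columns by thresholding against the price p.
x = [1.0 if c[j] > p * a[j] else 0.0 for j in range(n)]

x_opt, _ = solve_knapsack(c, a, b)       # full solve, for comparison only
opt = sum(ci * xi for ci, xi in zip(c, x_opt))
val = sum(ci * xi for ci, xi in zip(c, x))
used = sum(ai * xi for ai, xi in zip(a, x))
print(f"value ratio: {val / opt:.3f}, capacity used: {used / b:.3f}")
```

The (1 − ε) shrink of the sampled budget leaves slack that absorbs the sampling error, so the extended solution stays feasible while the expensive solver only ever sees s ≪ n columns.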

References

  • [Agrawal, Klein, and Ravi1995] Agrawal, A.; Klein, P.; and Ravi, R. 1995. When trees collide: An approximation algorithm for the generalized Steiner problem on networks. SIAM J. on Comp. 24(3):440–456.
  • [Agrawal, Wang, and Ye2014] Agrawal, S.; Wang, Z.; and Ye, Y. 2014. A dynamic near-optimal algorithm for online linear programming. Oper. Res. 62(4):876–890.
  • [Allen-Zhu and Orecchia2015] Allen-Zhu, Z., and Orecchia, L. 2015. Using optimization to break the epsilon barrier: A faster and simpler width-independent algorithm for solving positive linear programs in parallel. In Proc. of SODA, 1439–1456.
  • [Ananthanarayanan et al.2010] Ananthanarayanan, G.; Kandula, S.; Greenberg, A. G.; Stoica, I.; Lu, Y.; Saha, B.; and Harris, E. 2010. Reining in the outliers in map-reduce clusters using Mantri. In Proc. of OSDI.
  • [Ananthanarayanan et al.2013] Ananthanarayanan, G.; Ghodsi, A.; Shenker, S.; and Stoica, I. 2013. Effective straggler mitigation: Attack of the clones. In Proc. of NSDI, 185–198.
  • [Ananthanarayanan et al.2014] Ananthanarayanan, G.; Hung, M. C.-C.; Ren, X.; Stoica, I.; Wierman, A.; and Yu, M. 2014. Grass: Trimming stragglers in approximation analytics. In Proc. of NSDI, 289–302.
  • [Awerbuch and Khandekar2008] Awerbuch, B., and Khandekar, R. 2008. Stateless distributed gradient descent for positive linear programs. In Proc. of STOC, STOC ’08, 691.
  • [Balakrishnan, Magnanti, and Wong1989] Balakrishnan, A.; Magnanti, T. L.; and Wong, R. T. 1989. A dual-ascent procedure for large-scale uncapacitated network design. Oper. Res. 37(5):716–740.
  • [Bar-Yehuda and Even1981] Bar-Yehuda, R., and Even, S. 1981. A linear-time approximation algorithm for the weighted vertex cover problem. J. of Algs. 2(2):198–203.
  • [Bartal, Byers, and Raz2004] Bartal, Y.; Byers, J. W.; and Raz, D. 2004. Fast distributed approximation algorithms for positive linear programming with applications to flow control. SIAM J. on Comp. 33(6):1261–1279.
  • [Bertsimas and Vohra1998] Bertsimas, D., and Vohra, R. 1998. Rounding algorithms for covering problems. Math. Prog. 80(1):63–89.
  • [Bienstock and Iyengar2006] Bienstock, D., and Iyengar, G. 2006. Approximating fractional packings and coverings in O(1/ε) iterations. SIAM J. Comp. 35(4):825–854.
  • [Boyd and Vandenberghe2004] Boyd, S., and Vandenberghe, L. 2004. Convex Optimization. Cambridge University Press.
  • [Buchbinder and Naor2009] Buchbinder, N., and Naor, J. 2009. The design of competitive online algorithms via a primal-dual approach. Found. and Trends in Theoretical Computer Science 3(2-3):93–263.
  • [Burger et al.2012] Burger, M.; Notarstefano, G.; Bullo, F.; and Allgower, F. 2012. A distributed simplex algorithm for degenerate linear programs and multi-agent assignment. Automatica 48(9):2298–2304.
  • [Byers and Nasser2000] Byers, J., and Nasser, G. 2000. Utility-based decision-making in wireless sensor networks. In Mobile and Ad Hoc Networking and Comp., 143–144.
  • [Candes and Plan2011] Candes, E., and Plan, Y. 2011. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Trans. on Info. Theory 57(4):2342–2359.
  • [Candes, Romberg, and Tao2006] Candes, E.; Romberg, J.; and Tao, T. 2006. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2):489–509.
  • [Devanur and Hayes2009] Devanur, N. R., and Hayes, T. P. 2009. The adwords problem: online keyword matching with budgeted bidders under random permutations. In Proc. of EC, 71–78.
  • [Donoho and Tanner2005] Donoho, D. L., and Tanner, J. 2005. Sparse nonnegative solution of underdetermined linear equations by linear programming. In Proc. of the National Academy of Sciences of the USA, 9446–9451.
  • [Donoho2006] Donoho, D. L. 2006. Compressed sensing. IEEE Trans. Inform. Theory 52:1289–1306.
  • [Erlenkotter1978] Erlenkotter, D. 1978. A dual-based procedure for uncapacitated facility location. Oper. Res. 26(6):992–1009.
  • [Goemans and Williamson1995] Goemans, M. X., and Williamson, D. P. 1995. A general approximation technique for constrained forest problems. SIAM J. on Comp. 24(2):296–317.
  • [Leskovec and Krevl2014] Leskovec, J., and Krevl, A. 2014. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data.
  • [London et al.2017] London, P.; Chen, N.; Vardi, S.; and Wierman, A. 2017. Distributed optimization via local computation algorithms. http://users.cms.caltech.edu/plondon/loco.pdf.
  • [Luby and Nisan1993] Luby, M., and Nisan, N. 1993. A parallel approximation algorithm for positive linear programming. In Proc. of STOC, 448–457.
  • [Mansour et al.2012] Mansour, Y.; Rubinstein, A.; Vardi, S.; and Xie, N. 2012. Converting online algorithms to local computation algorithms. In Proc. of ICALP, 653–664.
  • [Mohan et al.2014] Mohan, K.; London, P.; Fazel, M.; Witten, D.; and Lee, S.-I. 2014. Node-based learning of multiple Gaussian graphical models. JMLR 15:445–488.
  • [Nesterov2005] Nesterov, Y. 2005. Smooth minimization of non-smooth functions. Math. Prog. 103(1):127–152.
  • [Notarstefano and Bullo2011] Notarstefano, G., and Bullo, F. 2011. Distributed abstract optimization via constraints consensus: Theory and applications. IEEE Trans. Autom. Control 56(10):2247–2261.
  • [Orlik and Terao1992] Orlik, P., and Terao, H. 1992. Arrangements of Hyperplanes. Grundlehren der mathematischen Wissenschaften. Springer-Verlag Berlin Heidelberg.
  • [Plotkin, Shmoys, and Tardos1995] Plotkin, S. A.; Shmoys, D. B.; and Tardos, E. 1995. Fast approximation algorithms for fractional packing and covering problems. Math. of Oper. Res. 20(2):257–301.
  • [Ravikumar, Agarwal, and Wainwright2010] Ravikumar, P.; Agarwal, A.; and Wainwright, M. J. 2010. Message passing for graph-structured linear programs: Proximal methods and rounding schemes. JMLR 11:1043–1080.
  • [Recht, Fazel, and Parrilo2010] Recht, B.; Fazel, M.; and Parrilo, P. A. 2010. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review 52(3):471–501.
  • [Ren et al.2015] Ren, X.; Ananthanarayanan, G.; Wierman, A.; and Yu, M. 2015. Hopper: Decentralized speculation-aware cluster scheduling at scale. In Proc. of SIGCOMM.
  • [Richert and Cortés2015] Richert, D., and Cortés, J. 2015. Robust distributed linear programming. Trans. Autom. Control 60(10):2567–2582.
  • [Riquelme, Johari, and Zhang2017] Riquelme, C.; Johari, R.; and Zhang, B. 2017. Online active linear regression via thresholding. In Proc. of AAAI.
  • [Sanghavi, Malioutov, and Willsky2008] Sanghavi, S.; Malioutov, D.; and Willsky, A. S. 2008. Linear programming analysis of loopy belief propagation for weighted matching. In Proc. of NIPS, 1273–1280.
  • [Sridhar et al.2013] Sridhar, S.; Wright, S. J.; Ré, C.; Liu, J.; Bittorf, V.; and Zhang, C. 2013. An approximate, efficient LP solver for LP rounding. In Proc. of NIPS, 2895–2903.
  • [Taskar, Chatalbashev, and Koller2004] Taskar, B.; Chatalbashev, V.; and Koller, D. 2004. Learning associative Markov networks. In Proc. of ICML.
  • [Trevisan1998] Trevisan, L. 1998. Parallel approximation algorithms by positive linear programming. Algorithmica 21(1):72–88.
  • [van der Vaart and Wellner1996] van der Vaart, A., and Wellner, J. 1996. Weak Convergence and Empirical Processes With Applications to Statistics. Springer Series in Statistics. Springer-Verlag New York.
  • [Woodruff2014] Woodruff, D. P. 2014. Sketching as a tool for numerical linear algebra. Found. and Trends in Theoretical Computer Science 10(1-2):1–157.
  • [Yarmish and Slyke2009] Yarmish, G., and Slyke, R. 2009. A distributed, scalable simplex method. J. of Supercomputing 49(3):373–381.
  • [Young2001] Young, N. E. 2001. Sequential and parallel algorithms for mixed packing and covering. In Proc. of FOCS, 538–546.
  • [Yuan and Lin2007a] Yuan, M., and Lin, Y. 2007a. Model selection and estimation in the Gaussian graphical model. Biometrika 94(1):19–35.
  • [Zurel and Nisan2001] Zurel, E., and Nisan, N. 2001. An efficient approximate allocation algorithm for combinatorial auctions. In Proc. of EC.