Linear programming is a fundamental problem in many areas, such as operations research, networking, machine learning, business analysis, and finance (von Neumann and Morgenstern, 1947; Dantzig, 1963; Luenberger and Ye, 1984; Boyd and Vandenberghe, 2004). In this paper, we consider the maximum support version of the linear feasibility problem
with its dual problem
where $A$ is an integer (or rational) matrix. Maximum support means that the set of positive coordinates of the solution returned for (1) should be inclusion-wise maximal. In fact, for the solution returned by our algorithm, a coordinate equals 0 if and only if that coordinate equals 0 in every feasible solution of (1). Thus, our algorithm can be used directly to test the feasibility of the general linear system with the same time complexity: given the maximum support solution of the associated homogeneous system, the original problem has a solution exactly when the returned support certifies one; otherwise it is infeasible.
There are many (polynomial-time) algorithms for solving linear programming problems, e.g., (Karmarkar, 1984), (Wright, 1997) and (Renegar, 1988). Recently, Chubanov (2015) proposed a polynomial-time projection and rescaling algorithm for solving problem (1). Due to its simplicity and efficiency, this class of algorithms has become a potentially practical family of polynomial-time methods; see, e.g., (Dadush et al., 2016), (Roos, 2018) and (Pena and Soheili, 2018).
Chubanov’s algorithm (Chubanov, 2015) and its variants typically consist of two procedures: the key part is the basic procedure (BP) and the other part is the main algorithm (MA). The BP returns one of the following three results:
(i) a feasible solution of (1);
(ii) a feasible solution of the dual problem (2);
(iii) a cut for the feasible region of (1).
Note that exactly one of (1) and (2) is feasible according to Farkas’ lemma. Thus (1) is infeasible if BP returns (ii). If BP returns (iii), the other procedure MA rescales the matrix $A$ using this cut and calls BP again on the rescaled matrix. According to (Khachian, 1979), which gives a positive lower bound on the entries of a solution of a linear system, after a certain number of rescalings one can conclude that there is no feasible solution for (1). So the number of rescaling operations, i.e., the number of MA calls, can be bounded. Consequently, the algorithm terminates in finite time no matter whether problem (1) is feasible or infeasible.
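As a concrete illustration of the Farkas alternative behind this loop, the following sketch checks a tiny hypothetical instance (the matrix `A` and the helper functions are our own illustrative assumptions, not from the original text): the primal system has a strictly positive solution, so no dual certificate can exist.

```python
# Illustrative check of the Farkas alternative for the homogeneous pair:
# primal (1): A x = 0, x >= 0, x != 0  vs.  dual (2): A^T y >= 0, A^T y != 0.
# The 1x2 matrix below is a hypothetical toy instance.

A = [[1.0, -1.0]]

def matvec(M, v):
    """Compute M v for a list-of-lists matrix M."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def mat_t_vec(M, y):
    """Compute M^T y."""
    return [sum(M[i][j] * y[i] for i in range(len(M))) for j in range(len(M[0]))]

# The primal system is feasible: x = (1, 1) satisfies A x = 0, x > 0.
x = [1.0, 1.0]
assert matvec(A, x) == [0.0] and all(xi > 0 for xi in x)

# Hence the dual must be infeasible: A^T y = (y, -y) can never be
# componentwise nonnegative and nonzero, whatever the sign of y.
for y in (-2.0, -0.5, 0.5, 3.0):
    s = mat_t_vec(A, [y])
    assert not (all(si >= 0 for si in s) and any(si > 0 for si in s))
```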
To be more precise, we quantify the time complexity. The total time complexity of these Chubanov-type algorithms is typically the product of two terms: the number of MA calls (rescaling operations) and the time required by the basic procedure BP. According to the classic lower bound (Khachian, 1979), the number of MA calls can simply be bounded in terms of the bit size of $A$ for these Chubanov-type algorithms. However, the cost of the basic procedure is the most important and tricky part. Theoretical results and practical performances vary for different BP procedures. The typical BP procedures include the perceptron method, von Neumann’s method, and their variants (see, e.g., (Dantzig, 1992; Dunagan and Vempala, 2008; Dadush et al., 2016; Pena and Soheili, 2017)). We review more details of these BPs in Section 2.1. Usually, this cost takes one of two typical values for these BP procedures. In this work, we improve it by a certain factor when (1) or (2) is well-conditioned (measured by (6)); in the worst case, the BP cost of our algorithm matches the previous bounds.
Our Motivation: In practice, these Chubanov-type projection and rescaling algorithms usually run much faster on primal infeasible instances (i.e., (1) is infeasible) than on primal feasible instances (i.e., dual infeasible ones), no matter which basic procedure (von Neumann, perceptron, or their variants) is used (see also Table 2 in Section 6). In this paper, we explain this phenomenon theoretically and provide a new algorithm to address this issue.
Our Contribution: Concretely, we make the following technical contributions:
Then, we explicitly develop the dual algorithm (see Section 4) to improve the performance when (1) is feasible. Our dual algorithm is the first algorithm that rescales the row space of $A$ in MA (see Table 1). As a result, we provide a similar Lemma 8, which shows that the time complexity of our dual algorithm can be improved in certain situations if (2) is infeasible (i.e., (1) is feasible).
Naturally, we obtain a new fast polynomial primal-dual projection algorithm by integrating our primal algorithm (which runs faster on primal infeasible instances) and our dual algorithm (which runs faster on primal feasible instances). See Section 5.
Remark: Our algorithms are based on the Dadush-Végh-Zambelli algorithm (Dadush et al., 2016) and the improvements of Roos’s algorithm (Roos, 2018) (see Section 2.3 and Table 1). Besides, we introduce a new step-size term for practical considerations (see Lines 13 and 14 of Algorithms 1 and 3). For the maximum support problem (1), the BP cost differs among Chubanov’s algorithm, Roos’s algorithm, and the Dadush-Végh-Zambelli algorithm. In the worst case, our algorithms match the latter, but the cost can be improved by a certain factor in well-conditioned situations. Recall that the number of MA calls is bounded for these Chubanov-type algorithms, as discussed before. Thus the time complexity of our algorithms (in the worst case) matches the result of the Dadush-Végh-Zambelli algorithm (Dadush et al., 2016) (see our Theorems 1–3). However, we point out that the total time complexities of Chubanov’s algorithm and Roos’s algorithm are faster than ours; they obtain this speedup by an amortized analysis, which we currently do not use. We leave this speedup as future work.
Organization: In Section 2, we introduce some useful notations and review some related algorithms. The details and results for our primal algorithm and dual algorithm are provided in Section 3 and Section 4, respectively. Then, in Section 5, we propose the efficient primal-dual algorithm. Finally, we conduct the numerical experiments in Section 6 and include a brief conclusion in Section 7.
In this section, we first review some classic basic procedures, and then introduce notation and review related algorithms at the end of the section.
2.1 Classic Basic Procedures
Recall that BP returns one of the following three results: (i) a feasible solution of (1); (ii) a feasible solution of the dual problem (2); (iii) a cut for the feasible region of (1). Here we focus on the first two outputs; the last one is controlled by an upper bound lemma (similar to Lemma 1).
When solving (2), we know that there is at least one index $k$ whose corresponding constraint is violated, where $a_k$ is the $k$-th column of $A$ (otherwise the current iterate is already a feasible solution for (2)). On the other hand, to solve (1), we want to drive the norm of the iterate to 0 (in which case a feasible solution for (1) can be recovered). We review some classic update methods as follows:
von Neumann’s algorithm: In each iteration, find an index $k$ whose constraint is violated, and then update the iterates as
where the convex combination weights are chosen such that the norm of the updated iterate is smallest (Dantzig, 1992).
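The von Neumann step described above can be sketched as follows; the closed-form minimizer of the convex combination is standard, but the function and variable names are our own assumptions:

```python
# Sketch of one von Neumann update (hypothetical names): given the current
# iterate y (a convex combination of unit columns) and a column a_k with
# a_k . y <= 0, move to y' = (1 - lam) * y + lam * a_k with lam minimizing
# ||y'||, clipped to [0, 1] to stay a convex combination.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def von_neumann_step(y, a_k):
    d = [yi - ai for yi, ai in zip(y, a_k)]      # direction y - a_k
    lam = dot(y, d) / dot(d, d)                  # unconstrained minimizer
    lam = max(0.0, min(1.0, lam))                # keep a convex combination
    return [(1 - lam) * yi + lam * ai for yi, ai in zip(y, a_k)]

y = [0.6, 0.8]            # unit-norm iterate
a_k = [0.0, -1.0]         # hypothetical unit column with a_k . y <= 0
assert dot(a_k, y) <= 0
y_new = von_neumann_step(y, a_k)
assert norm(y_new) <= norm(y)   # the norm never increases
```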
Dunagan-Vempala: Fix the index and choose the step size to minimize the norm of the updated iterate (Dunagan and Vempala, 2008).
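A minimal sketch of the Dunagan-Vempala style step, assuming the chosen column is normalized to unit length (the names are ours):

```python
# Sketch of the Dunagan-Vempala style update (hypothetical names): fix a
# unit column a_k and subtract the component of y along it, i.e. choose the
# step mu minimizing ||y - mu * a_k||, which is mu = a_k . y.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dv_step(y, a_k):
    mu = dot(y, a_k)                      # optimal step for a unit a_k
    return [yi - mu * ai for yi, ai in zip(y, a_k)]

y = [3.0, 4.0]
a_k = [0.0, 1.0]                          # hypothetical unit column
y_new = dv_step(y, a_k)
assert abs(dot(y_new, a_k)) < 1e-12       # y' is orthogonal to a_k
assert dot(y_new, y_new) <= dot(y, y)     # the norm never increases
```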
Before reviewing the related algorithms (in Section 2.3 below), we define some useful notation. We use two matrices to denote the projections onto the null space and the row space of the matrix $A$, respectively:
where the superscript denotes the Moore-Penrose pseudoinverse. In particular, the pseudoinverse admits a simpler closed form if $A$ has full row rank.
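For intuition, in the rank-one special case where $A$ has a single row $a$, these projections reduce to $Q = aa^{\top}/\|a\|^2$ onto the row space and $P = I - Q$ onto the null space. A small self-contained check (with a hypothetical row $a$):

```python
# Rank-one special case of the two projections (hypothetical row a):
# Q = a a^T / ||a||^2 projects onto the row space, P = I - Q onto the
# null space; P + Q = I and a . (P v) = 0 for every v.

a = [3.0, 4.0]
s = sum(ai * ai for ai in a)                       # ||a||^2 = 25
Q = [[a[i] * a[j] / s for j in range(2)] for i in range(2)]
P = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)] for i in range(2)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

v = [1.0, 2.0]
pv, qv = apply(P, v), apply(Q, v)
assert all(abs(pv[i] + qv[i] - v[i]) < 1e-12 for i in range(2))   # P + Q = I
assert abs(sum(a[i] * pv[i] for i in range(2))) < 1e-12           # a . (P v) = 0
```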
We further define the following notations:
To analyze case (iii) of BP, we note that (1) is feasible if and only if the system
is feasible since (1) is a homogeneous system. From now on, we will consider problem (5) instead of (1). Similarly, we use a normalized version (7) to replace (2). Now, we recall a useful lemma which gives an upper bound for the coordinates of any feasible solution. This upper bound will indicate a cut for case (iii).
Lemma 1 ((Roos, 2018))
2.3 Related Algorithms
Chubanov’s algorithm: Instead of updating in the original space, Chubanov (2015) updates in the projected space given by a null-space projection of $A$. In each BP iteration, it updates the iterates in the same way as von Neumann’s update (just replacing the columns by their projections). Intuitively, BP either finds a feasible solution of (5) or finds a cut (i.e., an index $j$ such that $x_j \le 1/2$ for any feasible solution $x$ of (5)). Then the main algorithm MA rescales the null space of $A$ by dividing the $j$-th column of $A$ by 2. According to (Khachian, 1979), there is a lower bound for the feasible solutions of (5), so the number of rescaling operations can be bounded. Finally, the algorithm terminates in polynomial time, where either BP returns a feasible solution or MA claims infeasibility according to the lower bound.
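The MA rescaling step can be sketched as a plain column division; the matrix below is a hypothetical example, and the surrounding bookkeeping (counting rescalings against the Khachian-type lower bound) is omitted:

```python
# Sketch of the MA rescaling step (hypothetical data): once BP certifies
# that x_j <= 1/2 for every feasible x of (5), divide the j-th column of A
# by 2.  The bookkeeping that limits how many times any column may be
# halved before declaring infeasibility is omitted here.

def rescale_column(A, j):
    for row in A:
        row[j] /= 2.0

A = [[2.0, -4.0], [6.0, 8.0]]     # hypothetical matrix
rescale_column(A, 1)              # halve column j = 1
assert A == [[2.0, -2.0], [6.0, 4.0]]
```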
Roos’s algorithm (Roos, 2018) improves on this in two ways. First, a new cut condition was proposed, which is provably better than the one used by Chubanov.
Second, the BP can use multiple indices to update the iterates in each iteration, e.g., a set of indices whose constraints are violated. Recall that von Neumann’s update uses only one such index.
Dadush-Végh-Zambelli: Compared with Chubanov’s algorithm, Dadush et al. (2016) used the Dunagan-Vempala update instead of von Neumann’s update in the BP, along with Roos’s new cut condition. Besides, the updates are performed in the orthogonal space given by a row-space projection matrix of $A$. The rescaling space in MA is the same, i.e., the null space of $A$.
Comparison: For clarity, we provide a comparison of our algorithms with the other algorithms in Table 1. Note that our primal-dual algorithm is the integration of our primal algorithm and our dual algorithm.
| Algorithms | Update method | Update space | Rescaling space | #indices |
| --- | --- | --- | --- | --- |
| Chubanov’s algorithm | von Neumann | Null space | Null space | One |
| Roos’ algorithm | von Neumann | Null space | Null space | Multiple |
| Dadush-Végh-Zambelli | Dunagan-Vempala | Row space | Null space | One |
| Our primal algorithm | Dunagan-Vempala | Row space | Null space | Multiple |
| Our dual algorithm | Dunagan-Vempala | Null space | Row space | Multiple |
3 Our Primal Algorithm
In this section, we introduce our primal algorithm which consists of the basic procedure BP and the main algorithm MA. The details of BP and MA are provided in Section 3.1 and Section 3.2 respectively.
3.1 Basic Procedure (BP)
Our BP is similar to that of (Dadush et al., 2016) (or (Dunagan and Vempala, 2008)); see Table 1. The details are described in Algorithm 1. The main differences are that we use multiple indices in the update (see Line 9 of Algorithm 1) and introduce the step-size term for practical considerations (see Lines 13 and 14 of Algorithm 1).
In the BP (Algorithm 1), the norm of the iterate is decreasing, while each coordinate of the candidate solution is increasing. Thus, after a certain number of iterations, we obtain a feasible solution. Otherwise, it is always possible to find a cut (Line 16), along with some rescaling operations on the matrix $A$, that brings the feasible solutions of (5) closer to the all-ones vector. The cut is guaranteed by the following lemma.
The time complexity of Algorithm 1 . Concretely, it uses at most iterations and each iteration costs at most time.
Note that Lemma 3 holds regardless of whether (5) is feasible or infeasible. However, as discussed before, the algorithm usually performs much better on infeasible instances than on feasible ones. To explain this phenomenon, we give the following lemma. The proof is deferred to Appendix B.3.
3.2 Main Algorithm (MA)
The details of our MA are described in Algorithm 2. In particular, we rescale the null space of $A$ in Line 8.
Now, we state the complexity of our primal algorithm in the following theorem. The proof is deferred to Appendix B.4.
The time complexity of the primal algorithm is .
4 Our Dual Algorithm
The Chubanov-type algorithms all focus on the primal problem (1) (or its normalized version (5)); i.e., their MA always rescales the null space of $A$ (see Table 1). We emphasize that these algorithms usually perform much better on infeasible instances than on feasible ones (see our Lemma 4, which gives an explanation). We address this unbalanced behavior by providing a dual algorithm. Our dual algorithm explicitly considers the dual problem (2) and rescales the row space of $A$, unlike the previous algorithms. We already know that the primal algorithm runs faster on primal infeasible instances, so we expect the dual algorithm to run faster on dual infeasible (i.e., primal feasible) instances. As expected, our dual algorithm does exactly that. Therefore, in Section 5, we integrate our primal and dual algorithms to obtain a well-balanced primal-dual algorithm whose performance is also remarkably better than that of the previous algorithms.
Similar to our primal algorithm, the dual algorithm also consists of the basic procedure BP and the main algorithm MA. The details of BP and MA are provided in Section 4.1 and Section 4.2 respectively. Similar to (5), we consider the normalized version of (2) due to the homogeneity:
4.1 Basic Procedure for the Dual Problem
The basic procedure for the dual problem is described in Algorithm 3.
In this basic procedure, either a feasible solution for the primal problem is found, or a dual feasible solution is found, or a cut of the bounded row space is found (see Line 17). Accordingly, we provide an upper bound in Lemma 5, which shows that a cut of the bounded row space can be derived, in contrast to the null-space case (see Lemma 1).
Let be any feasible solution of (7) and for some . Then every non-zero coordinate of gives rise to an upper bound for , according to
The proof of this lemma is deferred to Appendix B.5. Based on this lemma, we obtain the guaranteed cut in Lemma 6, which is similar to Lemma 2. The proof is almost the same as that of Lemma 2, replacing Lemma 1 with our Lemma 5.
The time complexity of Algorithm 3 . Concretely, it uses at most iterations and each iteration costs at most time.
Note that Lemma 3 also holds regardless of whether (7) is feasible or infeasible. We now point out that our dual algorithm can perform much better on dual infeasible (primal feasible) instances in the well-conditioned case, as expected and discussed before. Similar to Lemma 4, we have the following lemma for the dual algorithm.
Besides, when the dual problem (7) is feasible, we can also utilize the geometry of the problem to bound the iteration complexity instead of using Lemma 7. Consider the following kind of condition number of the set:
As each rescaling in the basic procedure at least doubles this value, and its largest possible value over all matrices is 1, only a bounded number of basic procedure calls can occur before a feasible solution is obtained. This bounds the iteration complexity of the whole algorithm.
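The doubling argument can be written out explicitly. Writing $\rho(A_t)$ for this condition measure after $t$ rescalings ($\rho$ is our shorthand for the condition number defined above):

```latex
\rho(A_{t+1}) \;\ge\; 2\,\rho(A_t)
\quad\text{and}\quad
\rho(A_t) \;\le\; 1 \;\;\text{for all } t
\;\;\Longrightarrow\;\;
2^{T}\rho(A_0) \;\le\; \rho(A_T) \;\le\; 1
\;\;\Longrightarrow\;\;
T \;\le\; \log_2 \frac{1}{\rho(A_0)}.
```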
4.2 Main Algorithm for the Dual Problem
The main algorithm for the dual problem is described in Algorithm 4. In particular, we rescale the row space of $A$ in Line 6.
Now, we have the following theorem for our dual algorithm. The proof is provided in Appendix B.6.
The time complexity of the dual algorithm is .
5 Our Primal-Dual Algorithm
In this section, we propose a new polynomial primal-dual projection algorithm that combines the advantages of our primal and dual algorithms. As before, the algorithm consists of two procedures (MA and BP). Intuitively, the BP solves problems (1) and (2) simultaneously. Recall that the primal algorithm runs faster on primal infeasible instances and the dual algorithm runs faster on primal feasible instances (see Table 2). The MA rescales the matrix (row space or null space) according to the output of BP. The MA and BP are formally described in Algorithms 5 and 6, respectively; the details are deferred to Appendix A. Thus, we have the following theorem.
The time complexity of our primal-dual algorithm is .
6 Numerical Experiments
In this section, we compare the performance of our algorithms with Roos’ algorithm (Roos, 2018) and Gurobi (one of the fastest solvers available today). We conduct the experiments on randomly generated matrices. Concretely, we generate integer matrices with each entry drawn uniformly at random from a fixed interval. The step-size parameter is a new practical term introduced in this work. The average running times of these algorithms are listed in Table 2.
| Algorithms | Feasible instances | Infeasible instances |
| --- | --- | --- |
| Gurobi (a fast optimization solver) | 3.08 | 1.58 |
| Roos’s algorithm (Roos, 2018) | 10.75 | 0.83 |
| Our primal algorithm | 9.93 | 0.48 |
| Our dual algorithm | 0.35 | 4.57 |
| Our primal-dual algorithm | 0.60 | 0.58 |
Table 2 validates that our new primal-dual algorithm is well balanced between feasible and infeasible instances, due to the integration of our primal and dual algorithms. Moreover, it shows that our algorithm can be a practical option for linear programming, since it runs remarkably faster than the fast optimization solver Gurobi.
7 Conclusion
In this paper, we theoretically explain why the Chubanov-type projection algorithms usually run much faster on primal infeasible instances. Furthermore, to address this unbalanced behavior, we provide a new fast polynomial primal-dual projection algorithm by integrating our primal algorithm (which runs faster on primal infeasible instances) and our dual algorithm (which runs faster on primal feasible instances). As a starting point, we believe more improvements (e.g., the amortized analysis speedup) can be made to the Chubanov-type projection algorithms, both theoretically and practically.
Acknowledgments
We would like to thank Jian Li (Tsinghua University), Yuanxi Dai (Tsinghua University) and Rong Ge (Duke University) for useful discussions.
- Boyd and Vandenberghe  Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
- Chubanov  Sergei Chubanov. A polynomial projection algorithm for linear feasibility problems. Mathematical Programming, 153(2):687–713, 2015.
- Dadush et al.  Daniel Dadush, László A Végh, and Giacomo Zambelli. Rescaled coordinate descent methods for linear programming. In International Conference on Integer Programming and Combinatorial Optimization, pages 26–37. Springer, 2016.
- Dantzig  George Bernard Dantzig. Linear programming and extensions. Princeton University Press, 1963.
- Dantzig  George Bernard Dantzig. An $\epsilon$-precise feasible solution to a linear program with a convexity constraint in $1/\epsilon^2$ iterations independent of problem size. Technical Report SOL 92-5, Stanford University, 1992.
- Dunagan and Vempala  John Dunagan and Santosh Vempala. A simple polynomial-time rescaling algorithm for solving linear programs. Mathematical Programming, 114(1):101–114, 2008.
- Goffin  Jean-Louis Goffin. The relaxation method for solving systems of linear inequalities. Mathematics of Operations Research, 5(3):388–414, 1980.
- Karmarkar  Narendra Karmarkar. A new polynomial-time algorithm for linear programming. In Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, pages 302–311. ACM, 1984.
- Khachian  Leonid G Khachian. A polynomial algorithm in linear programming. Doklady Akademii Nauk SSSR, 244(5):1093–1096, 1979.
- Luenberger and Ye  David G Luenberger and Yinyu Ye. Linear and nonlinear programming. Springer, 1984.
- Novikoff  Albert B Novikoff. On convergence proofs for perceptrons. In Proceedings of the Symposium on Mathematical Theory of Automata, pages 615–622, 1962.
- Pena and Soheili  Javier Pena and Negar Soheili. Solving conic systems via projection and rescaling. Mathematical Programming, 166(1-2):87–111, 2017.
- Pena and Soheili  Javier Pena and Negar Soheili. Computational performance of a projection and rescaling algorithm. arXiv preprint arXiv:1803.07107, 2018.
- Renegar  James Renegar. A polynomial-time algorithm, based on newton’s method, for linear programming. Mathematical Programming, 40(1-3):59–93, 1988.
- Roos  Kees Roos. On Chubanov’s method for solving a homogeneous inequality system. In Numerical Analysis and Optimization, pages 319–338. Springer, 2015.
- Roos  Kees Roos. An improved version of Chubanov’s method for solving a homogeneous feasibility problem. Optimization Methods and Software, 33(1):26–44, 2018.
- Rosenblatt  Frank Rosenblatt. The perceptron: a perceiving and recognizing automaton. Technical Report 85-460-1, Cornell Aeronautical Laboratory, 1957.
- von Neumann and Morgenstern  John von Neumann and Oskar Morgenstern. Theory of games and economic behavior. Princeton University Press, 1947.
- Wright  Stephen J Wright. Primal-dual interior-point methods, volume 54. SIAM, 1997.
Appendix A Details of Algorithm
Appendix B Missing Proofs
b.1 Proof of Lemma 2
As the basic procedure has not terminated, cannot be primal feasible. Initially . For each iteration, the operation
will only increase some components of the vector. Thus each component is at least its initial value during the whole procedure, implying that there exists an index with the stated property; otherwise the iterate would be primal feasible. For this specific index, we have
where the last inequality follows from
b.2 Proof of Lemma 3
When , the decrease of in each iteration has a lower bound:
Initially, . After iterations, . So it takes at most iterations to obtain a vector with , in which case a primal feasible solution can be obtained. This means that each basic procedure takes at most iterations before stopping.
Now it remains to bound the time complexity of each iteration. In each basic procedure iteration, we find all indices satisfying the update condition and compute the corresponding summation. In the worst case, the number of such indices can be large, so many arithmetic operations are needed to calculate this summation naively. However, there is another way to do this. Recall that the rows involved have bounded rank: the number of basis vectors of the rows is exactly the rank, while the other rows can be represented by a weighted summation of these basis vectors. This means that we only have to do the naive summation for the basis rows, while the other rows can be obtained by a weighted summation of these results, which costs fewer arithmetic operations. Note that the
basis vectors can be computed by the Singular Value Decomposition (SVD). Then the weights of the other rows can be obtained by computing the inverse of a small matrix and multiplying this inverse by the corresponding row vectors. These steps cost a bounded number of operations and only need to be done once at the beginning of the basic procedure.
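The row-basis trick in this proof can be sketched on a toy rank-one example (all data below is hypothetical): store a set of basis rows plus the weights expressing every row in that basis, and replace a sum of many rows by a cheap weighted accumulation.

```python
# Row-basis trick from the proof above (toy, hypothetical data): if the
# matrix P has rank r, store r independent rows B and weights W with
# P[i] = W[i] * B (rank-1 case here), so that summing the rows indexed by K
# costs O(|K| * r + n * r) instead of O(|K| * n).

P = [[1.0, 2.0], [2.0, 4.0]]      # rank-1 matrix: every row is W[i] * B
B = [1.0, 2.0]                    # one basis row
W = [1.0, 2.0]                    # weights: P[i] = W[i] * B

K = [0, 1]                        # indices to sum over
w_sum = sum(W[i] for i in K)      # cheap scalar accumulation of weights
fast = [w_sum * b for b in B]     # one weighted copy of the basis row

naive = [sum(P[i][j] for i in K) for j in range(2)]
assert fast == naive              # both give the same row sum
```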
b.3 Proof of Lemma 4
Suppose (5) is infeasible. Denoting by the center the vector that achieves the defining value, we can track the closeness between the iterate and this center in each iteration:
On the other hand, as the norm before the basic procedure stops, we should have
This implies that the number of the iterations in the basic procedure is when the primal problem (5) is infeasible. This proof is finished by combining the result of Lemma 3, i.e., each iteration costs time.
b.4 Proof of Theorem 1
According to a classic result of (Khachian, 1979), there exists a positive number such that the positive coordinates of the basic feasible solutions of problem (5) are bounded below by it. When a coordinate has been rescaled more than the corresponding number of times, the value of this coordinate must be 0 in all solutions. As a result, the corresponding columns of $A$ can be omitted.
b.5 Proof of Lemma 5
Since , we have . Thus we consider the following two cases. For , we have
On the other hand, for , we have