# A Fast Polynomial-time Primal-Dual Projection Algorithm for Linear Programming

Traditionally, there are several polynomial algorithms for linear programming including the ellipsoid method, the interior point method and other variants. Recently, Chubanov [Chubanov, 2015] proposed a projection and rescaling algorithm, which has become a potentially practical class of polynomial algorithms for linear feasibility problems and also for the general linear programming. However, the Chubanov-type algorithms usually perform much better on the infeasible instances than on the feasible instances in practice. To explain this phenomenon, we derive a new theoretical complexity bound for the infeasible instances based on the condition number, which shows that algorithms can indeed run much faster on infeasible instances in certain situations. In order to speed up the feasible instances, we propose a Polynomial-time Primal-Dual Projection algorithm (called PPDP) by explicitly developing the dual algorithm. The numerical results validate that our PPDP algorithm achieves a quite balanced performance between feasible and infeasible instances, and its performance is remarkably better than previous algorithms.

## 1 Introduction

Linear programming is a fundamental problem in many areas, such as operations research, networking, machine learning, business analytics and finance (von Neumann and Morgenstern, 1947; Dantzig, 1963; Luenberger and Ye, 1984; Boyd and Vandenberghe, 2004). In this paper, we consider the maximum support version of the linear feasibility problem

 find x ∈ R^n, subject to Ax = 0, x ≥ 0, x ≠ 0, (1)

with its dual problem

 find u ∈ R^m, subject to A^T u > 0, (2)

where A is an integer (or rational) m×n matrix. The maximum support means that the set of positive coordinates of the returned solution of (1) should be inclusion-wise maximal. In fact, for the solution x returned by our algorithm, a coordinate x_j = 0 if and only if x_j = 0 for every feasible solution of (1). Thus, our algorithm can be directly used to test the feasibility of a general linear system with the same time complexity: given the maximum support solution of the homogenized system, if the homogenizing coordinate is positive then the original problem has a solution; otherwise it is infeasible.

There are many (polynomial-time) algorithms for solving linear programming problems, e.g., (Karmarkar, 1984), (Wright, 1997) and (Renegar, 1988). Recently, Chubanov (2015) proposed a polynomial-time projection and rescaling algorithm for solving problem (1). Due to its simplicity and efficiency, this class of algorithms has become a potentially practical family of polynomial algorithms. See e.g., (Dadush et al., 2016), (Roos, 2018) and (Pena and Soheili, 2018).

Chubanov’s algorithm (Chubanov, 2015) and its variants typically consist of two procedures: the key part, the basic procedure (BP), and the other part, the main algorithm (MA). The BP returns one of the following three results:

1. a feasible solution of (1);

2. a feasible solution of the dual problem (2);

3. a cut for the feasible region of (1).

Note that exactly one of (1) and (2) is feasible according to Farkas’ lemma. Thus (1) is infeasible if BP returns (ii). If BP returns (iii), the other procedure MA rescales the matrix A by using this cut and calls BP again on the rescaled matrix. According to (Khachian, 1979), which gives a positive lower bound on the entries of a solution of a linear system, after a certain number of rescalings one can conclude that there is no feasible solution for (1). So the number of rescaling operations, i.e., the number of MA calls, can be bounded. Consequently, the algorithm terminates in finite time no matter whether problem (1) is feasible or infeasible.

To be more precise, we quantify the time complexity. The total time complexity of these Chubanov-type algorithms is typically the product of the number of MA calls (rescaling operations) and the time required by the basic procedure BP. According to the classic lower bound (Khachian, 1979), the number of MA calls can simply be bounded in terms of the bit size of A for these Chubanov-type algorithms. However, the BP cost is the most important and tricky part: theoretical results and practical performances vary for different BP procedures. The typical BP procedures include the perceptron method, von Neumann’s method, and their variants (see e.g., (Dantzig, 1992; Dunagan and Vempala, 2008; Dadush et al., 2016; Pena and Soheili, 2017)). We review more details of these BPs in Section 2.1. In this work, we improve the BP cost by a nontrivial factor if (1) or (2) is well-conditioned (measured by (6)); in the worst case, our BP cost matches the previous bounds.

Our Motivation: In practice, these Chubanov-type projection and rescaling algorithms usually run much faster on the primal infeasible instances (i.e., (1) is infeasible) than on the primal feasible instances (i.e., dual infeasible) no matter what basic procedure (von Neumann, perceptron or their variants) we use (also see Table 2 in Section 6). In this paper, we try to explain this phenomenon theoretically. Moreover, we try to provide a new algorithm to address this issue.

Our Contribution: Concretely, we make the following technical contributions:

1. First, for the theoretical explanation, we provide Lemma 4, which shows that the time complexity can be strictly smaller than the worst-case bound of Lemma 3 in certain situations if (1) is infeasible. This explains why these Chubanov-type algorithms usually run much faster when (1) is infeasible.

2. Then, we explicitly develop the dual algorithm (see Section 4) to improve the performance when (1) is feasible. Our dual algorithm is the first algorithm that rescales the row space of A in MA (see Table 1). As a result, we provide the analogous Lemma 8, which shows that the time complexity of our dual algorithm can be strictly smaller than its worst-case bound in certain situations if (2) is infeasible (i.e., (1) is feasible).

Naturally, we obtain a new fast polynomial primal-dual projection algorithm (called PPDP) by integrating our primal algorithm (which runs faster on the primal infeasible instances) and our dual algorithm (which runs faster on the primal feasible instances). See Section 5.

3. Finally, the numerical results validate that our primal-dual algorithm is quite balanced between feasible and infeasible instances, and it runs significantly faster than other algorithms (see Table 2 in Section 6).

Remark: Our algorithms are based on the Dadush-Végh-Zambelli algorithm (Dadush et al., 2016) and the improvements of Roos’s algorithm (Roos, 2018) (see Section 2.3 and Table 1). Besides, we introduce a new step-size term for practical consideration (see Lines 13 and 14 of Algorithms 1 and 3). In the worst case, the BP cost of our algorithms matches that of the Dadush-Végh-Zambelli algorithm, but it can be improved by a nontrivial factor in certain situations. Thus the time complexity of our algorithms (in the worst case) matches the result of the Dadush-Végh-Zambelli algorithm (Dadush et al., 2016) (see our Theorems 1-3). However, we point out that the total time complexity of Chubanov’s algorithm and Roos’s algorithm is smaller than ours: they obtain their speedup via an amortized analysis, which we currently do not use. We leave this speedup as future work.

Organization: In Section 2, we introduce some useful notations and review some related algorithms. The details and results for our primal algorithm and dual algorithm are provided in Section 3 and Section 4, respectively. Then, in Section 5, we propose the efficient primal-dual algorithm. Finally, we conduct the numerical experiments in Section 6 and include a brief conclusion in Section 7.

## 2 Preliminaries

In this section, we first review some classic basic procedures and then introduce some notations to review some related algorithms at the end of this section.

### 2.1 Classic Basic Procedures

Recall that BP returns one of the following three results: (i) a feasible solution of (1); (ii) a feasible solution of the dual problem (2); (iii) a cut for the feasible region of (1). Here we focus on the first two outputs; the last one is controlled by an upper bound lemma (similar to Lemma 1).

Letting y = Ax, when solving (2), we know that there is at least one index k such that ⟨a_k, y⟩ ≤ 0, where a_k is the k-th column of A (otherwise u = y is already a feasible solution for (2)). On the other hand, to solve (1), we want to minimize ∥y∥. The goal is to drive ∥y∥ to 0 (in which case x is a feasible solution for (1)). We review some classic update methods as follows:

von Neumann’s algorithm: In each iteration, find an index such that , and then update and as

 y′ = αy + βa_k,  x′ = αx + βe_k  (note that y = Ax and y′ = Ax′), (3)

where α, β ≥ 0 with α + β = 1 are chosen such that ∥y′∥ is smallest (Dantzig, 1992).

Perceptron: Choose α = β = 1 in (3) at every iteration. See e.g. (Rosenblatt, 1957; Novikoff, 1962).

Dunagan-Vempala: Fix α = 1 and choose β to minimize ∥y′∥ (Dunagan and Vempala, 2008).
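The three update rules above can be sketched in a few lines. This is an illustrative reconstruction: the function names and the closed-form choices of α and β are our own, derived from the cited descriptions, not the authors' code.

```python
import numpy as np

def von_neumann_update(y, x, a_k, k):
    # Choose alpha, beta >= 0 with alpha + beta = 1 minimizing ||alpha*y + beta*a_k||.
    # Minimizing over the segment [y, a_k]: beta* = <y, y - a_k>/||y - a_k||^2, clipped to [0, 1].
    d = y - a_k
    beta = float(np.clip(np.dot(y, d) / np.dot(d, d), 0.0, 1.0))
    alpha = 1.0 - beta
    e_k = np.zeros_like(x); e_k[k] = 1.0
    return alpha * y + beta * a_k, alpha * x + beta * e_k

def perceptron_update(y, x, a_k, k):
    # alpha = beta = 1: simply add the violating column.
    e_k = np.zeros_like(x); e_k[k] = 1.0
    return y + a_k, x + e_k

def dunagan_vempala_update(y, a_k):
    # Fix alpha = 1 and choose beta minimizing ||y + beta*a_k||:
    # beta* = -<y, a_k>/||a_k||^2, which never increases ||y|| and
    # makes the new y orthogonal to a_k.
    beta = -np.dot(y, a_k) / np.dot(a_k, a_k)
    return y + beta * a_k
```

Both the von Neumann and Dunagan-Vempala choices are norm-non-increasing, which is the property the convergence analyses rely on.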

### 2.2 Notations

Before reviewing the related algorithms (in Section 2.3), we define some useful notations. We use P_A and Q_A to denote the projection matrices onto the null space N_A and the row space R_A of the matrix A, respectively:

 P_A ≜ I − A^T(AA^T)†A,  Q_A ≜ A^T(AA^T)†A,

where † denotes the Moore-Penrose pseudoinverse. In particular, (AA^T)† = (AA^T)^{-1} if A has full row rank.
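As a sanity check, P_A and Q_A can be formed directly from the definition with NumPy's pseudoinverse (a small illustrative sketch, not part of the paper's algorithms):

```python
import numpy as np

def projections(A):
    """P_A and Q_A: orthogonal projectors onto the null space and row space of A."""
    pinv = np.linalg.pinv(A @ A.T)       # Moore-Penrose pseudoinverse of A A^T
    Q = A.T @ pinv @ A                   # Q_A projects onto the row space R_A
    P = np.eye(A.shape[1]) - Q           # P_A projects onto the null space N_A
    return P, Q
```

The two projectors are complementary: P_A + Q_A = I, P_A Q_A = 0, and A P_A = 0 when A has full row rank.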

We further define the following notations:

 v = Q_A y ∈ R_A,  z = P_A y ∈ N_A,  y = v + z ∈ R^n. (4)

Usually, z is used to denote the (candidate) feasible solution of (1) and v indicates the feasibility of (2).

To analyze case (iii) of BP, we note that (1) is feasible if and only if the system

 Ax = 0,  x ∈ [0,1]^n,  x ≠ 0 (5)

is feasible since (1) is a homogeneous system. From now on, we will consider problem (5) instead of (1). Similarly, we use a normalized version (7) to replace (2). Now, we recall a useful lemma which gives an upper bound for the coordinates of any feasible solution. This upper bound will indicate a cut for case (iii).

###### Lemma 1 ((Roos, 2018))

Let x be any feasible solution of (5), and let v, z be defined as in (4). Then every non-zero coordinate v_j of v gives rise to an upper bound for x_j, according to

 x_j ≤ bound_j(y) ≜ −(1^T v^−)/v_j if v_j > 0, and x_j ≤ bound_j(y) ≜ (1^T v^+)/(−v_j) if v_j < 0,

where v^+ ≜ max(v, 0) and v^− ≜ min(v, 0) componentwise, and 1 denotes the all-ones vector.

This means that we can rescale the j-th column of A (by a suitable factor) to make the feasible solutions of (5) closer to the all-ones vector 1. Furthermore, we need the definition of the condition number of a matrix Q (Goffin, 1980):

 ρ(Q) ≜ max_{x: ∥x∥₂=1} min_i ⟨x, q_i/∥q_i∥₂⟩, (6)

where q_i is the i-th column of Q.
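Since ρ(Q) involves a maximization over the unit sphere, it has no simple closed form in general. The following sampling heuristic (our own illustration, not from the paper) produces a lower bound on ρ(Q):

```python
import numpy as np

def rho_lower_bound(Q, n_samples=2000, seed=0):
    """Sampling-based lower bound on rho(Q) = max_{||x||=1} min_i <x, q_i/||q_i||>."""
    qn = Q / np.linalg.norm(Q, axis=0)             # normalize the columns q_i
    m = Q.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, m))
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # random unit vectors
    X = np.vstack([X, np.ones((1, m)) / np.sqrt(m)])  # also try the symmetric center
    # For each candidate x, take the worst (smallest) inner product over columns.
    return float((X @ qn).min(axis=1).max())
```

For Q = I the true value is 1/√m, attained at the symmetric center x = 1/√m · 1, so the heuristic recovers it exactly there.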

### 2.3 Related Algorithms

Now, we are able to review some related algorithms for solving (5) based on the BP procedures introduced in Section 2.1.

Chubanov’s algorithm: Instead of updating y in the original space, Chubanov (2015) updates z in the projected null space N_A, using the projection matrix P_A of A. In each BP iteration, it updates z and x in the same way as von Neumann’s update (just replacing a_k by the corresponding column of P_A). Intuitively, BP either finds a feasible solution of (5) or finds a cut (i.e., an index j such that x_j ≤ 1/2 for every feasible solution x of (5)). Then the main algorithm MA rescales the null space of A by dividing the j-th column of A by 2. According to (Khachian, 1979), there is a positive lower bound on the coordinates of the basic feasible solutions of (5). Thus the number of rescaling operations can be bounded. Finally, the algorithm terminates in polynomial time, where either BP returns a feasible solution or MA claims infeasibility according to the lower bound.
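The rescaling step itself is simple: halving a column of A doubles the corresponding coordinate of every feasible solution of (5), which is how a cut on that coordinate is exploited. A minimal sketch on an instance of our own choosing:

```python
import numpy as np

def rescale_column(A, j):
    """MA rescaling step: divide the j-th column of A by 2."""
    A2 = A.copy()
    A2[:, j] /= 2.0
    return A2

# If Ax = 0, then the rescaled system is satisfied by x with x_j doubled:
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
x = np.array([1.0, 1.0, 1.0])           # A @ x = 0
A2 = rescale_column(A, 0)
x2 = x.copy(); x2[0] *= 2.0             # A2 @ x2 = 0
```

So a solution coordinate known to be at most 1/2 is stretched back toward 1, moving the feasible set of (5) closer to the all-ones vector.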

Roos’s algorithm: Roos (2015, 2018) provided two improvements of Chubanov’s algorithm:

1. A new cut condition was proposed, which is proved better than the one used by Chubanov.

2. The BP can use multiple indices to update y (and x) in each iteration, e.g., a set K of indices k satisfying ⟨a_k, y⟩ ≤ 0. Recall that von Neumann’s update only uses one index k satisfying ⟨a_k, y⟩ ≤ 0.

Dadush-Végh-Zambelli: Compared with Chubanov’s algorithm, Dadush et al. (2016) used the Dunagan-Vempala update instead of von Neumann’s update as the BP, along with Roos’s new cut condition. Besides, the updates are performed in the orthogonal (row) space via the projection matrix Q_A of A. But the rescaling space in MA is the same, i.e., the null space of A.

Comparison: To demonstrate it clearly, we provide a comparison of our algorithms with other algorithms in Table 1. Note that our primal-dual algorithm is the integration of our primal algorithm and dual algorithm.

## 3 Our Primal Algorithm

In this section, we introduce our primal algorithm which consists of the basic procedure BP and the main algorithm MA. The details of BP and MA are provided in Section 3.1 and Section 3.2 respectively.

### 3.1 Basic Procedure (BP)

Our BP is similar to (Dadush et al., 2016) (or (Dunagan and Vempala, 2008)) (see Table 1). The details are described in Algorithm 1. The main differences are that we use multiple indices to update (see Line 9 of Algorithm 1) and introduce the step-size for practical consideration (see Lines 13 and 14 of Algorithm 1).

In the BP (Algorithm 1), the norm of the iterate v is decreasing, while each coordinate of y is increasing. Thus, after a certain number of iterations, we obtain a feasible solution. Otherwise, it is always possible to find a cut (Line 16), along with some rescaling operations for the matrix A, to make the feasible solutions of (5) closer to the all-ones vector. The cut is guaranteed by the following lemma.

###### Lemma 2

Let Q_A be the projection matrix at a given iteration of BP (Algorithm 1). Suppose that the condition in Line 16 holds; then the set K is nonempty and every solution x of problem (5) satisfies x_j ≤ 1/2 for all j ∈ K.

This lemma is proved with Lemma 1 and we defer the proof to Appendix B.1.

For the time complexity of Algorithm 1, we give the following lemma (the proof is in Appendix B.2).

###### Lemma 3

The time complexity of Algorithm 1 is polynomial: it uses at most polynomially many iterations, and each iteration costs at most polynomial time.

Note that Lemma 3 holds regardless of whether (5) is feasible or infeasible. However, as discussed before, the algorithm usually performs much better on infeasible instances than on feasible ones. To explain this phenomenon, we give the following lemma. The proof is deferred to Appendix B.3.

###### Lemma 4

If (5) is infeasible, the time complexity of Algorithm 1 improves by a factor depending on the condition number ρ defined in (6). In particular, when the problem is well-conditioned (e.g., Q_A is an identity matrix), the time complexity of Algorithm 1 is strictly smaller than the bound of Lemma 3 if problem (5) is infeasible.

### 3.2 Main Algorithm (MA)

The details of our MA are described in Algorithm 2. In particular, we rescale the null space of A in Line 8.

Now, we state the complexity of our primal algorithm in the following theorem. The proof is deferred to Appendix B.4.

###### Theorem 1

The time complexity of the primal algorithm is polynomial; in the worst case it matches that of the Dadush-Végh-Zambelli algorithm (Dadush et al., 2016).

## 4 Our Dual Algorithm

The Chubanov-type algorithms all focus on the primal problem (1) (or the normalized version (5)), i.e., their MA always rescales the null space of A (see Table 1). We emphasize that these algorithms usually perform much better on the infeasible instances than on the feasible ones (see our Lemma 4, which gives an explanation). Now, we want to address this unbalanced behavior by providing a dual algorithm. Our dual algorithm explicitly considers the dual problem (2) and rescales the row space of A, unlike the previous algorithms. We already know that the primal algorithm runs faster on the primal infeasible instances. Thus we expect the dual algorithm to run faster on the dual infeasible instances (i.e., primal feasible instances). As expected, our dual algorithm does work. Therefore, in Section 5, we integrate our primal algorithm and dual algorithm to obtain a quite balanced primal-dual algorithm whose performance is also remarkably better than the previous algorithms.

Similar to our primal algorithm, the dual algorithm also consists of the basic procedure BP and the main algorithm MA. The details of BP and MA are provided in Section 4.1 and Section 4.2 respectively. Similar to (5), we consider the normalized version of (2) due to the homogeneity:

 find u ∈ R^m, subject to x = A^T u > 0, x ∈ (0,1]^n. (7)

### 4.1 Basic Procedure for the Dual Problem

The basic procedure for the dual problem is described in Algorithm 3.

In this basic procedure, either a feasible solution for the primal problem is found, or a dual feasible solution is found, or a cut of the bounded row space is found (see Line 17). Now, we provide an upper bound in Lemma 5, which shows that a cut of the bounded row space can be derived from z, instead of from v as in the null-space case (see Lemma 1).

###### Lemma 5

Let x be any feasible solution of (7) and let z = P_A y for some y. Then every non-zero coordinate z_j of z gives rise to an upper bound for x_j, according to

 x_j ≤ bound′_j(y) ≜ (1^T z^+)/(−z_j) if z_j < 0, and x_j ≤ bound′_j(y) ≜ −(1^T z^−)/z_j if z_j > 0.

The proof of this lemma is deferred to Appendix B.5. According to this lemma, we can obtain the following guaranteed cut in Lemma 6, which is similar to Lemma 2. The proof is almost the same as that of Lemma 2, just replacing Lemma 1 with our Lemma 5.

###### Lemma 6

Let P_A be the projection matrix at a given iteration of BP (Algorithm 3). Suppose that the condition in Line 17 holds; then the set K is nonempty and every solution x of problem (7) satisfies x_j ≤ 1/2 for all j ∈ K.

As with Algorithm 1, for the time complexity of the dual Algorithm 3, we have the following lemma.

###### Lemma 7

The time complexity of Algorithm 3 is polynomial: it uses at most polynomially many iterations, and each iteration costs at most polynomial time.

Note that Lemma 7 also holds regardless of whether (7) is feasible or infeasible. Now, we point out that our dual algorithm can perform much better on the dual infeasible instances (primal feasible instances) under well-conditioning, as expected and discussed before. Similar to Lemma 4, we have the following lemma for the dual algorithm.

###### Lemma 8

If (7) is infeasible, the time complexity of Algorithm 3 improves by a factor depending on the condition number ρ defined in (6). In particular, when the problem is well-conditioned (e.g., P_A is an identity matrix), the time complexity of Algorithm 3 is strictly smaller than the bound of Lemma 7 if problem (7) is infeasible.

Note that it is easy to see that (5) is feasible (i.e., (7) is infeasible) if P_A is an identity matrix.

Besides, when the dual problem (7) is feasible, we can also utilize the geometry of the problem to bound the iteration complexity instead of Lemma 7. Consider the following kind of condition number of the set Im(A) ∩ [0,1]^n:

 δ_∞(Im(A) ∩ [0,1]^n) ≜ max_x {∏_i x_i : x ∈ Im(A) ∩ [0,1]^n}.

As each rescaling in the basic procedure at least doubles the value of δ_∞, and the largest possible value of δ_∞ is 1, it takes at most log₂(1/δ_∞) basic procedure calls before getting a feasible solution. This means that the iteration complexity of the whole algorithm is bounded in terms of log(1/δ_∞).
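The counting argument is elementary: a quantity in (0, 1] that at least doubles per call reaches 1 after at most ⌈log₂(1/δ)⌉ doublings. A quick illustrative sketch:

```python
import math

def max_rescalings(delta):
    """Count doublings needed to push delta (in (0, 1]) up to 1."""
    count = 0
    while delta < 1.0:
        delta *= 2.0   # each rescaling at least doubles delta_inf
        count += 1
    return count
```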

### 4.2 Main Algorithm for the Dual Problem

The main algorithm for the dual problem is described in Algorithm 4. In particular, we rescale the row space of A in Line 6.

Now, we have the following theorem for our dual algorithm. The proof is provided in Appendix B.6.

###### Theorem 2

The time complexity of the dual algorithm is polynomial, matching the bound of the primal algorithm in Theorem 1.

## 5 Our Primal-Dual PPDP Algorithm

In this section, we propose a new polynomial primal-dual projection algorithm (called PPDP) to take advantage of both our primal algorithm and our dual algorithm. Similarly, the algorithm also consists of two procedures (MA and BP). Intuitively, the BP solves problems (1) and (2) simultaneously. Recall that the primal algorithm runs faster on the infeasible instances and the dual algorithm runs faster on the feasible instances (see Table 2). The MA rescales the matrix (row space or null space) according to the output of BP. The MA and BP are formally described in Algorithms 5 and 6, respectively; the details are deferred to Appendix A. Thus, we have the following theorem.

###### Theorem 3

The time complexity of our primal-dual algorithm is bounded by the better of the bounds in Theorems 1 and 2.

Note that the final output of our algorithm is a feasible solution for either (1) or (2). Obviously, the algorithm stops whenever it finds a solution of (1) or (2); thus the time complexity of our algorithm follows easily from Theorems 1 and 2.

## 6 Experiments

In this section, we compare the performance of our algorithms with Roos’s algorithm (Roos, 2018) and Gurobi (one of the fastest solvers available). We conduct the experiments on randomly generated matrices. Concretely, we generate integer matrices, with each entry uniformly randomly generated in a fixed interval. The step-size parameter is a new practical term introduced in this work. The average running times of these algorithms are listed in Table 2.
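For reproducibility, the instance generation and a reference feasibility check can be sketched as follows. The entry range [−10, 10], the seed, and the LP margin formulation are our own assumptions for illustration (the excerpt does not specify them); the dual check solves max t subject to A^T u ≥ t·1 with box constraints, using SciPy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

def random_instance(m, n, lo=-10, hi=10, seed=0):
    # Random integer matrix; the interval is an assumption for illustration.
    rng = np.random.default_rng(seed)
    return rng.integers(lo, hi + 1, size=(m, n)).astype(float)

def dual_feasible(A):
    """Check whether A^T u > 0 has a solution by maximizing the margin t."""
    m, n = A.shape
    # Variables: [u (m entries), t]. Maximize t s.t. A^T u >= t*1, |u_i| <= 1, 0 <= t <= 1.
    c = np.zeros(m + 1); c[-1] = -1.0                  # linprog minimizes, so use -t
    A_ub = np.hstack([-A.T, np.ones((n, 1))])          # -A^T u + t*1 <= 0
    b_ub = np.zeros(n)
    bounds = [(-1, 1)] * m + [(0, 1)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.status == 0 and res.x[-1] > 1e-9
```

By Farkas' lemma, a negative answer here means the primal system (1) is feasible, so such a reference check labels an instance as primal feasible or infeasible.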

Table 2 validates that our new primal-dual algorithm is quite balanced on the feasible and infeasible instances due to the integration of our primal and dual algorithm. Moreover, it shows that our algorithm can be a practical option for linear programming since it runs remarkably faster than the fast optimization solver Gurobi.

## 7 Conclusion

In this paper, we theoretically explain why the Chubanov-type projection algorithms usually run much faster on the primal infeasible instances. Furthermore, to address this unbalanced behavior, we provide a new fast polynomial primal-dual projection algorithm (called PPDP) by integrating our primal algorithm (which runs faster on the primal infeasible instances) and our dual algorithm (which runs faster on the primal feasible instances). As a start, we believe more improvements (e.g., the amortized analysis speedup) can be made for the Chubanov-type projection algorithms, both theoretically and practically.

## Acknowledgments

We would like to thank Jian Li (Tsinghua University), Yuanxi Dai (Tsinghua University) and Rong Ge (Duke University) for useful discussions.

## References

• Boyd and Vandenberghe [2004] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
• Chubanov [2015] Sergei Chubanov. A polynomial projection algorithm for linear feasibility problems. Mathematical Programming, 153(2):687–713, 2015.
• Dadush et al. [2016] Daniel Dadush, László A Végh, and Giacomo Zambelli. Rescaled coordinate descent methods for linear programming. In International Conference on Integer Programming and Combinatorial Optimization, pages 26–37. Springer, 2016.
• Dantzig [1963] George Bernard Dantzig. Linear programming and extensions. Princeton university press, 1963.
• Dantzig [1992] George Bernard Dantzig. An ε-precise feasible solution to a linear program with a convexity constraint in 1/ε² iterations independent of problem size. Technical Report SOL 92-5, Stanford University, 1992.
• Dunagan and Vempala [2008] John Dunagan and Santosh Vempala. A simple polynomial-time rescaling algorithm for solving linear programs. Mathematical Programming, 114(1):101–114, 2008.
• Goffin [1980] Jean-Louis Goffin. The relaxation method for solving systems of linear inequalities. Mathematics of Operations Research, 5(3):388–414, 1980.
• Karmarkar [1984] Narendra Karmarkar. A new polynomial-time algorithm for linear programming. In Proceedings of the sixteenth annual ACM symposium on Theory of computing, pages 302–311. ACM, 1984.
• Khachian [1979] Leonid G Khachian. A polynomial algorithm in linear programming. Doklady Akademii Nauk SSSR, 244(5):1093–1096, 1979.
• Luenberger and Ye [1984] David G Luenberger and Yinyu Ye. Linear and nonlinear programming. Springer, 1984.
• Novikoff [1962] Albert B Novikoff. On convergence proofs for perceptrons. In Proceedings of the Symposium on Mathematical Theory of Automata, pages 615–622, 1962.
• Pena and Soheili [2017] Javier Pena and Negar Soheili. Solving conic systems via projection and rescaling. Mathematical Programming, 166(1-2):87–111, 2017.
• Pena and Soheili [2018] Javier Pena and Negar Soheili. Computational performance of a projection and rescaling algorithm. arXiv preprint arXiv:1803.07107, 2018.
• Renegar [1988] James Renegar. A polynomial-time algorithm, based on newton’s method, for linear programming. Mathematical Programming, 40(1-3):59–93, 1988.
• Roos [2015] Kees Roos. On Chubanov’s method for solving a homogeneous inequality system. In Numerical Analysis and Optimization, pages 319–338. Springer, 2015.
• Roos [2018] Kees Roos. An improved version of Chubanov’s method for solving a homogeneous feasibility problem. Optimization Methods and Software, 33(1):26–44, 2018.
• Rosenblatt [1957] Frank Rosenblatt. The perceptron: a perceiving and recognizing automaton. Technical Report 85-460-1, Cornell Aeronautical Laboratory, 1957.
• von Neumann and Morgenstern [1947] John von Neumann and Oskar Morgenstern. Theory of games and economic behavior. Princeton university press, 1947.
• Wright [1997] Stephen J Wright. Primal-dual interior-point methods, volume 54. Siam, 1997.

## Appendix A Details of PPDP Algorithm

In this appendix, we describe the details of our PPDP algorithm. Concretely, the main procedure is an integration of Algorithms 2 and 4, formally described in Algorithm 5. The basic procedure is an integration of Algorithms 1 and 3, formally described in Algorithm 6.

## Appendix B Missing Proofs

In this appendix, we provide the proofs for Theorems 1-2 and Lemmas 2-5.

### b.1 Proof of Lemma 2

As the basic procedure has not terminated, z cannot be primal feasible. In each iteration, the operation

 y = y − c·(α/∥q_K∥₂)·∑_{k∈K} e_k

only changes (and bounds) the components of the vector y indexed by K. Thus each component of y remains bounded below during the whole procedure, implying that there exists an index j with v_j > 0; otherwise z would be primal feasible. For this specific j, we have

 bound_j(y) = −(∑_{i=1}^n v_i^−)/v_j = −(∑_{k∈K} ⟨e_k, v⟩)/v_j = −(∑_{k∈K} ⟨Q_A e_k, v⟩)/v_j = −⟨q_K, v⟩/v_j ≤ n∥q_K∥₂ · threshold ≤ 1/2,

where the last inequality follows from

 ∥q_K∥₂ ≤ ∥∑_{k∈K} e_k∥ ≤ √n.

### b.2 Proof of Lemma 3

When the basic procedure has not stopped, the decrease of ∥v∥² in each iteration has a lower bound:

 ∥v − c(α/∥q_K∥₂)q_K∥₂² = ∥v∥₂² − (2c−c²)α² ≤ ∥v∥₂² − (2c−c²)/(4n³).

Since ∥v∥₂² decreases by at least (2c−c²)/(4n³) per iteration and is bounded below by 0, the basic procedure takes at most polynomially many iterations before a primal feasible solution is obtained or the procedure stops.
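The per-iteration decrease rests on the identity ∥v − cα q̂∥² = ∥v∥² − (2c − c²)α² for a unit vector q̂ and α = ⟨v, q̂⟩; a quick numeric check (our own sketch, with the definition of α taken from the surrounding derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=6)
q = rng.normal(size=6)
q_hat = q / np.linalg.norm(q)          # unit vector in the direction of q_K
alpha = float(v @ q_hat)               # alpha = <v, q_hat>
c = 0.7                                # a step size c in (0, 2)
lhs = np.linalg.norm(v - c * alpha * q_hat) ** 2
rhs = np.linalg.norm(v) ** 2 - (2 * c - c ** 2) * alpha ** 2
```

Expanding the square gives ∥v∥² − 2cα⟨v, q̂⟩ + c²α² = ∥v∥² − (2c − c²)α², which is what the code verifies.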

Now it remains to bound the time complexity of each iteration. In each basic procedure iteration, we find all indices in K and compute the corresponding summation of columns. In the worst case, |K| can be as large as n, so a naive computation of this summation needs many arithmetic operations. However, there is a faster way. Only as many rows of the projection matrix as its rank form a basis, while the other rows can be represented as weighted summations of these basis rows. This means that we only have to do the naive summation for the basis rows, while the remaining rows can be obtained by weighted summations of these partial results, which costs fewer arithmetic operations. Note that the basis vectors can be computed by the Singular Value Decomposition (SVD). Then the weights of the other rows can be obtained by computing the inverse of a small matrix and multiplying this inverse by the corresponding row vectors. These steps only need to be done once at the beginning of the basic procedure.

### b.3 Proof of Lemma 4

When (5) is infeasible, we have ρ(Q_A) > 0. Denoting by w the unit vector (the center) that achieves the value ρ(Q_A) in (6), we can track the closeness between v_t and w in each iteration:

 ⟨v_{t+1}, w⟩ = ⟨v_t, w⟩ − c⟨v_t, q̂_K⟩⟨w, q̂_K⟩ ≥ ⟨v_t, w⟩ + cρ(Q_A)/(2n√n).

On the other hand, since the inner product ⟨v_t, w⟩ is bounded by the norm of v_t before the basic procedure stops, we should have

 ρ(Q_A) + ctρ(Q_A)/(2n√n) ≤ ⟨v_t, w⟩ ≤ ∥v_t∥∥w∥ ≤ √(1/n − (2c−c²)t/n³).

This implies that the number of iterations in the basic procedure is bounded in terms of ρ(Q_A) when the primal problem (5) is infeasible. The proof is finished by combining this with the result of Lemma 3 on the cost of each iteration.

### b.4 Proof of Theorem 1

According to a classic result of (Khachian, 1979), there exists a positive number τ (depending on the bit size of A) such that the positive coordinates of the basic feasible solutions of problem (5) are bounded below by τ. When a coordinate has been rescaled more than log₂(1/τ) times, the value of this coordinate in all solutions must be smaller than τ. As a result, the corresponding columns of A can be omitted.

According to Lemma 2, the cut set K is nonempty. It means each iteration of MA rescales at least one coordinate of the feasible solutions by a constant factor. Thus the number of rescaling operations, i.e., the number of iterations in MA, can be bounded. The proof is finished by combining this with Lemma 3.

### b.5 Proof of Lemma 5

Since z = P_A y ∈ N_A and x = A^T u ∈ R_A, we have ⟨z, x⟩ = ∑_i z_i x_i = 0. Thus we consider the following two cases, using 0 < x_i ≤ 1. For z_j < 0, we have

 −z_j x_j = ∑_{i≠j} z_i x_i ≤ ∑_{i: z_i>0} z_i x_i ≤ ∑_{i: z_i>0} z_i = 1^T z^+.

On the other hand, for z_j > 0, we have

 z_j x_j = −∑_{i≠j} z_i x_i ≤ ∑_{i: z_i<0} (−z_i) x_i ≤ ∑_{i: z_i<0} (−z_i) = −1^T z^−.
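The two cases rely only on ⟨z, x⟩ = 0 and 0 < x ≤ 1; a small numeric check of the resulting bound x_j ≤ 1^T z^+/(−z_j) for z_j < 0, on an illustrative instance of our own choosing:

```python
import numpy as np

# Small instance where (7) is feasible: A is 1x3 with positive entries.
a = np.array([1.0, 2.0, 3.0])
A = a.reshape(1, 3)
u = np.array([1.0 / 3.0])
x = A.T @ u                                   # x = (1/3, 2/3, 1) lies in (0, 1]^n

P = np.eye(3) - np.outer(a, a) / (a @ a)      # null-space projector P_A
y = np.array([0.5, -1.0, 0.3])                # an arbitrary y
z = P @ y                                     # z = P_A y, hence <z, x> = 0

# Lemma 5 bound for negative coordinates of z:
z_plus = np.clip(z, 0.0, None).sum()          # 1^T z^+
```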

### b.6 Proof of Theorem 2

First, we note that (2) is feasible if and only if the problem

 PAx=0,x