
A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

by Daniel Dadush, et al.

Following the breakthrough work of Tardos in the bit-complexity model, Vavasis and Ye gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max c^T x, Ax = b, x ≥ 0, A ∈ R^{m × n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^{3.5} log(χ̅_A + n)) iterations suffice to solve (LP) exactly, where χ̅_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya, noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ̅^*_A, defined as the minimum χ̅_{AD} value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ̅_{AD} ≤ n (χ̅^*_A)^3. This algorithm also allows us to approximate the value of χ̅_A up to a factor n (χ̅^*_A)^2. As our second main contribution, we develop a scaling-invariant LLS algorithm, together with a refined potential-function-based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^{2.5} log n log(χ̅^*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n / log n improvement on the iteration complexity bound of the original Vavasis–Ye algorithm.
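To make the condition measure concrete, the sketch below brute-forces χ̅_A for a tiny matrix using the classical basis characterization χ̅_A = max over bases B of ||A_B^{-1} A||_2 (a standard equivalent form of the definition; the abstract itself does not specify this formula). The example matrix, its badly scaled column, and the rescaling D are illustrative choices, not taken from the paper, and enumeration over all column subsets is only feasible for very small m and n.

```python
from itertools import combinations
import numpy as np

def chi_bar(A, tol=1e-9):
    """Brute-force chi-bar_A via the basis characterization
    chi-bar_A = max_B ||A_B^{-1} A||_2, where B ranges over all
    nonsingular m x m column submatrices (bases) of A."""
    m, n = A.shape
    best = 0.0
    for cols in combinations(range(n), m):
        A_B = A[:, cols]
        if abs(np.linalg.det(A_B)) < tol:
            continue  # columns do not form a basis
        # Operator 2-norm of A_B^{-1} A (largest singular value)
        best = max(best, np.linalg.norm(np.linalg.solve(A_B, A), 2))
    return best

# Illustrative badly scaled matrix: chi-bar_A grows with t,
# while a suitable column rescaling AD has small chi-bar.
t = 1e6
A = np.array([[1.0, 0.0, t],
              [0.0, 1.0, t]])
D = np.diag([1.0, 1.0, 1.0 / t])  # positive diagonal rescaling

print(chi_bar(A))      # on the order of t
print(chi_bar(A @ D))  # small constant, independent of t
```

This illustrates why χ̅^*_A (the best value over all diagonal rescalings) can be far smaller than χ̅_A, which is exactly the gap the paper's scaling-invariant algorithm exploits.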



