
Revisiting Tardos's Framework for Linear Programming: Faster Exact Solutions using Approximate Solvers

by Daniel Dadush, et al.

In breakthrough work, Tardos (Oper. Res. '86) gave a proximity-based framework for solving linear programming (LP) in time depending only on the constraint matrix in the bit complexity model. In Tardos's framework, one reduces solving the LP min ⟨c, x⟩, Ax = b, x ≥ 0, A ∈ ℤ^{m×n}, to solving O(nm) LPs in A having small integer coefficient objectives and right-hand sides, using any exact LP algorithm. This gives rise to an LP algorithm running in time poly(n, m log Δ_A), where Δ_A is the largest subdeterminant of A. A significant extension to the real model of computation was given by Vavasis and Ye (Math. Prog. '96), who gave a specialized interior point method that runs in time poly(n, m, log χ̅_A), depending on Stewart's χ̅_A, a well-studied condition number. In this work, we extend Tardos's original framework to obtain such a running-time dependence. In particular, we replace the exact LP solves with approximate ones, enabling us to directly leverage the tremendous recent algorithmic progress for approximate linear programming. More precisely, we show that the fundamental "accuracy" needed to exactly solve any LP in A is inverse polynomial in n and log χ̅_A. Plugging in the recent algorithm of van den Brand (SODA '20), our method computes an optimal primal and dual solution using O(mn^{ω+1} log(n) log(χ̅_A + n)) arithmetic operations, outperforming the specialized interior point method of Vavasis and Ye and its recent improvement by Dadush et al. (STOC '20). At a technical level, our framework combines approximate LP solutions to compute exact ones, making use of constructive proximity theorems – which bound the distance between solutions of "nearby" LPs – to keep the required accuracy low.
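To make the idea of "combining approximate solutions with proximity to recover exact ones" concrete, here is a minimal toy sketch, not the paper's actual algorithm: on a small LP with integer data, an approximate solution that is close enough to the optimal vertex reveals the optimal support, after which the exact vertex is recovered by solving a square linear system. The instance, the simulated solver error, and the support threshold are all illustrative assumptions.

```python
import numpy as np

# Toy instance: min <c,x> s.t. Ax = b, x >= 0, with integer data
# (the setting of Tardos's framework). Its optimal vertex is x* = (0, 2, 1).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])
c = np.array([1.0, 2.0, 3.0])

# Pretend x_approx came from a fast approximate LP solver; here we simply
# perturb x* to simulate solver error (an illustrative assumption).
x_approx = np.array([1e-7, 2.0 + 1e-7, 1.0 - 2e-7])

# Proximity step: coordinates bounded away from zero identify the
# support of the optimal vertex.
support = np.flatnonzero(x_approx > 1e-4)

# Exact recovery: solve the square system A_B x_B = b on that support.
x_exact = np.zeros_like(x_approx)
x_exact[support] = np.linalg.solve(A[:, support], b)

print(x_exact, c @ x_exact)
```

The point of the sketch is that the required solver accuracy only needs to beat the separation between zero and nonzero coordinates of the optimal vertex; the paper's contribution is showing this accuracy can be taken inverse polynomial in n and log χ̅_A.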

