The Dual Matrix Algorithm for Linear Programming

The Dual Matrix Algorithm, variations of which were proposed in [A.Yu.Levin 65] and [Yamnitsky, Levin 82], has little sensitivity to numerical errors and to the number of inequalities. It offers substantial flexibility and, thus, potential for further development.


1 Introduction

1.1 The Idea

Notations.

‖x‖ denotes the Euclidean norm of a vector x. For clarity, row vectors may be underlined (p̲) and column vectors overlined (x̄); det(M) is the determinant of a matrix M; I is the identity matrix, with I_i and I^i its i-th row and column.

We are to solve a system of inequalities Ax̄ > 0̄ for a rational vector x̄, given an integer matrix A. Our inequalities are A_i x̄ > 0, one per row A_i of A, the first n of them linearly independent. Let all entries of A be l bits long, so each row A_i takes polynomially many bits. By the Hahn–Banach Theorem, the system is inconsistent iff there is a non-negative row vector p̲ ≠ 0̲ with p̲A = 0̲.
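This alternative can be checked mechanically on small instances. The sketch below is our illustration (the matrices are made up and `numpy` is assumed): it exhibits a solution x̄ for a consistent system and a certificate p̲ for an inconsistent one.

```python
import numpy as np

# A consistent system A x > 0: the vector x = (1, 1) satisfies every row.
A_ok = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])
x = np.array([1.0, 1.0])
print((A_ok @ x > 0).all())   # True: x is a solution

# An inconsistent system: x1 > 0, x2 > x1, and x2 < 0 cannot all hold.
# The non-negative weights p with p A = 0 certify inconsistency.
A_bad = np.array([[ 1.0,  0.0],
                  [-1.0,  1.0],
                  [ 0.0, -1.0]])
p = np.array([1.0, 1.0, 1.0])  # p >= 0, p != 0
print(p @ A_bad)               # [0. 0.]: the rows cancel, so no x exists
```

Multiplying out p̲A by hand confirms the cancellation; no solver is needed for the check.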

The DMA searches for the solution in a factored form: a matrix with no negative entries, normalized by a diagonal factor; initially it is the identity. We must grow it into a certificate. This growth is hard to keep monotone, so we grow instead a lower bound: a fixed fraction of the volume of the simplex whose faces, vertices, and center the matrix determines.

It turns out that, by incrementing a single entry of the matrix, one can always increase the lower bound by a fixed factor, as long as the current candidate fails the requirement. This provides a polynomial bound on the number of steps. Each step takes polynomially many arithmetic operations on numbers of polynomial bit length [1], plus one call of a procedure which points out an inequality violated by a given solution candidate. This call is the only operation that may depend on the number of inequalities, which could even be an infinite family with an oracle providing the violated inequality.

[1] At some steps the simplex may overstretch, resulting in entries with more digits. But at such steps a large gain in the lower bound is achieved, so there would be few such steps. Indeed, the simplex's intersection with the original simplex would then have a much smaller volume and can easily be enclosed in a new small simplex whose faces are linear combinations of the faces of the old one.
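The single external call each step relies on can be pinned down as an interface. A minimal sketch (the name `violated` is ours, `numpy` is assumed; for a finite system the oracle is just a linear scan):

```python
import numpy as np

def violated(A, x):
    """Return the index of an inequality A_i x > 0 that the candidate x
    fails, or None if x satisfies the whole system.  For a finite system
    this is a linear scan; the DMA needs only this single entry point,
    so A could just as well be an infinite family behind a smarter oracle."""
    bad = np.flatnonzero(A @ x <= 0)
    return int(bad[0]) if bad.size else None

A = np.array([[ 1.0, 0.0],
              [ 0.0, 1.0],
              [-1.0, 2.0]])                  # the system A x > 0
print(violated(A, np.array([1.0, 0.1])))    # 2: row 2 gives -1.0 + 0.2 <= 0
print(violated(A, np.array([1.0, 1.0])))    # None: all rows positive
```

Only this function needs to know how many inequalities there are; the rest of a step manipulates the fixed-size matrix.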

1.2 Some Comparisons

The above bound has polynomially more steps than the Ellipsoid Method (EM). However, the EM is much more demanding with respect to the precision with which the numbers are to be kept. The simplex cannot possibly fail to include all solutions of the system, whatever matrix with no negative entries is taken. In contrast, the faithful transformation of ellipsoids in the EM is the only guarantee that they include all solutions.

Also, several Karmarkar-type algorithms have lower polynomial complexity bounds. Yet they work in the dual space, and their bounds are in terms of the number of inequalities, while the above DMA bound is not. For the DMA, the family of inequalities may even be infinite, e.g., forming a ball instead of a polytope. Then dual-space complexity bounds break down, while the DMA complexity is not affected (as long as a simple procedure finds a violated inequality for any candidate solution).
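For instance (our illustration, not the paper's), the solution set of the infinite family {u̲x̄ ≤ 1 : ‖u̲‖ = 1} is the unit ball, and a separation oracle for it is a one-liner, assuming `numpy`:

```python
import numpy as np

def ball_oracle(x):
    """Separation oracle for the infinite family {u.x <= 1 : ||u|| = 1},
    whose solution set is the unit ball.  Returns the unit row u of a
    violated inequality (the maximizer u = x/||x||), or None if feasible."""
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return None                 # x satisfies every member of the family
    return x / norm                 # u.x = ||x|| > 1: this inequality fails

x = np.array([3.0, 4.0])
u = ball_oracle(x)
print(u)        # [0.6 0.8]
print(u @ x)    # about 5.0 > 1, so the reported inequality is indeed violated
```

No enumeration of the (uncountable) family ever happens; the oracle is all the algorithm sees.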

To assure fast progress, numbers are kept with polynomially many digits. This bound cannot be improved, since some consistent systems have no solutions with shorter entries. Yet this, or any other, precision is not actually required by the DMA: any rounding (or, indeed, any other deviation from the procedure) can be made at any time, as long as the lower bound keeps growing, which is immediately observable. This leaves the DMA open to a great variety of further developments. In contrast, an inappropriate rounding in the EM can yield ellipsoids which, while still shrinking fast, lose any relation to the set of solutions and produce a false negative output.

1.3 A Historical Background

The bound is inversely proportional to the volume of the simplex, which shows a parallel with the EM. Interestingly, in history this parallel went in the opposite direction: simplexes enclosing the solutions were first used in [A.Yu.Levin 65], and the EM was developed in [Nemirovsky, Yudin 76] as their easier-to-analyze variation. The [A.Yu.Levin 65] algorithm started with a large simplex guaranteed to contain all solutions. Its center of gravity was checked and, if it failed some inequality, the corresponding hyperplane cut out a "half" of the simplex. The process repeated with the resulting polyhedron. Each cut decreases the volume by a constant factor, so after some number of steps the remaining body can be re-enclosed in a new, smaller simplex. Only a weak upper bound was proven in [A.Yu.Levin 65], which did not preclude the body growing too complex to manipulate in polynomial time.

[Nemirovsky, Yudin 76] replaced simplexes with ellipsoids, for which re-enclosing the cut body takes a single step. Both [A.Yu.Levin 65] and [Nemirovsky, Yudin 76] used real numbers and looked for approximate solutions with a given accuracy. [Khachian 79] modified the EM for rationals and exact solutions. [Yamnitsky, Levin 82] proved a polynomial-time bound for the original [A.Yu.Levin 65] simplex-splitting method. Here the algebraic version of the [Yamnitsky, Levin 82] algorithm and its implementation techniques are considered and analyzed.

2 The Algorithm

Recall the setting of Section 1.1: a matrix with no negative entries and a diagonal normalization, together with the lower bound on the simplex volume. A step increments a single entry of the matrix, i.e., adds a rank-one term ūv̲. The Sherman–Morrison formula then gives the inverse of the updated matrix as a rank-one correction of the old one:

(M + ūv̲)⁻¹ = M⁻¹ − (M⁻¹ū v̲M⁻¹) / (1 + v̲M⁻¹ū).
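As a sanity check, the Sherman–Morrison identity, and the matching determinant update that measures the per-step gain, can be verified numerically. The concrete M, ū, v̲ below are arbitrary choices of ours, with `numpy` assumed:

```python
import numpy as np

M = np.diag([2.0, 3.0, 4.0])          # any invertible matrix
u = np.array([[1.0], [0.0], [2.0]])   # column vector u
v = np.array([[0.5, 1.0, 0.0]])       # row vector v

Minv = np.linalg.inv(M)
denom = 1.0 + float(v @ Minv @ u)     # here 1 + 0.25 = 1.25

# Sherman–Morrison: (M + u v)^{-1} = M^{-1} - (M^{-1} u v M^{-1}) / (1 + v M^{-1} u)
sm_inv = Minv - (Minv @ u @ v @ Minv) / denom
print(np.allclose(sm_inv, np.linalg.inv(M + u @ v)))   # True

# Matching determinant update: det(M + u v) = det(M) * (1 + v M^{-1} u)
print(np.isclose(np.linalg.det(M + u @ v),
                 np.linalg.det(M) * denom))            # True (both equal 30)
```

The rank-one form is what keeps each step cheap: the inverse and determinant are updated without refactoring the whole matrix.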

Now the determinant of the updated matrix, and with it the lower bound, changes by the matching factor det(M + ūv̲) = det(M)(1 + v̲M⁻¹ū), so the gain per step can be read off directly. Choosing the incremented entry and the size of the increment appropriately makes this gain a fixed factor per step. The computation remains valid if the numbers are accurate to polynomially many digits; thus the matrix can be rounded to that many significant digits, too.

Over many steps, a row of the matrix may accumulate additional positive entries. The matrix can then be simplified to have fewer positive entries per row without changing the product it defines: in polynomially many arithmetic operations, linearly dependent rows can be replaced with positive combinations of the others.

References

• [A.Yu.Levin 65] A.Yu. Levin. 1965. On an Algorithm for the Minimization of Convex Functions. Soviet Mathematics Doklady 6.

• [Nemirovsky, Yudin 76] A.S. Nemirovsky, D.B. Yudin. 1976. Informational Complexity and Efficient Methods for the Solution of Convex Extremal Problems. Ekonomika i Matematicheskie Metody 12.

• [Khachian 79] L.G. Khachian. 1979. A Polynomial Algorithm in Linear Programming. Soviet Mathematics Doklady 20.

• [Yamnitsky, Levin 82] B. Yamnitsky, L.A. Levin. 1982. An Old Linear Programming Algorithm Runs in Polynomial Time. Proceedings of the 23rd IEEE Symposium on Foundations of Computer Science (FOCS).