The Dual Matrix Algorithm for Linear Programming

12/31/2020 · by Leonid A. Levin

The Dual Matrix Algorithm, variations of which were proposed in [A.Yu.Levin 65] and [Yamnitsky, Levin 82], has little sensitivity to numerical errors and to the number of inequalities. It offers substantial flexibility and, thus, potential for further development.


1 Introduction

1.1 The Idea

Notations.

Let $\|x\|$ denote the Euclidean norm of a vector $x$. For clarity, row vectors may be underlined: $\underline{x}$; columns overlined: $\bar{x}$. $\det(A)$ is the determinant of a matrix $A$. $I$ is the identity matrix; $I_i$ and $\bar{I}^i$ are its $i$-th row and column.

We are to solve a system of inequalities $A\bar{x} > \bar{0}$ for a rational vector $\bar{x}$, given an integer $n\times m$ matrix $A$; its rows $A_i$ give the individual inequalities $A_i\bar{x} > 0$. Let all entries of $A$ be at most $\ell$ bits long. By the Hahn-Banach Theorem, the system is inconsistent iff there is a non-negative row vector $\underline{u}\neq\underline{0}$ with $\underline{u}A = \underline{0}$.
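
To make this duality concrete, here is a minimal sketch (our own illustration, not from the paper) that searches for such a certificate $\underline{u}$ with SciPy's LP solver; the function name and the normalization $\sum_k u_k = 1$, which replaces $\underline{u}\neq\underline{0}$, are our choices:

    import numpy as np
    from scipy.optimize import linprog

    def inconsistency_certificate(A):
        """Search for u >= 0 with u @ A = 0, normalized by sum(u) = 1.
        Such a u exists iff the system A @ x > 0 has no solution x."""
        n, m = A.shape
        A_eq = np.vstack([A.T, np.ones((1, n))])   # u @ A = 0 and sum(u) = 1
        b_eq = np.concatenate([np.zeros(m), [1.0]])
        res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * n)
        return res.x if res.success else None

    # x_1 > 0 and -x_1 > 0 are jointly inconsistent; u = (1/2, 1/2) certifies it.
    print(inconsistency_certificate(np.array([[1.0, 0.0], [-1.0, 0.0]])))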

The DMA searches for a solution via a matrix $U$ with no negative entries: the rows of $UA$, being non-negative combinations of the rows of $A$, serve as the faces of a simplex $S$ enclosing all solutions. Initially, $S$ is a large simplex known to contain them all, and its volume must shrink until a solution (or an inconsistency certificate $\underline{u}$) emerges. This progress is hard to keep monotone, so we grow instead a lower bound $v$, inversely proportional to the volume of the simplex $S$ whose faces, vertices, and center are determined by $U$.
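
For background, the volume in question is a determinant up to a factorial factor; a minimal sketch (our own naming, not the paper's code):

    import numpy as np
    from math import factorial

    def simplex_volume(V):
        """Volume of the simplex with vertex rows V, shape (m+1, m):
        |det(v_1 - v_0, ..., v_m - v_0)| / m!."""
        return abs(np.linalg.det(V[1:] - V[0])) / factorial(V.shape[1])

    # The standard 2-simplex has area 1/2.
    print(simplex_volume(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])))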

It turns out that by incrementing a single entry of $U$ one can always increase $v$ by a fixed factor, as long as the current candidate $\bar{x}$ fails the requirement. This provides an algorithm with polynomially many steps. Each step takes polynomially many arithmetic operations on polynomial-length numbers (at some steps $U$ may overstretch, resulting in entries with many more digits; but at such steps a large gain in $v$ is achieved, so there would be few of them: indeed, $S$'s intersection with the original simplex would then have a much smaller volume and can easily be enclosed in a new small simplex whose faces are linear combinations of faces of $S$) and one call of a procedure which points out an inequality violated by the given solution candidate $\bar{x}$. This call is the only operation that may depend on the number of inequalities, which could even be an infinite family with an oracle providing the violated $A_i$.
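
As a numerical sanity check (our own sketch, not the paper's code, with square shapes and arbitrary example values assumed for simplicity): bumping $U[i][j]$ by $\epsilon$ changes $UA$ by the rank-one term $\epsilon\,\bar{I}^i A_j$, so $\det(UA)$ scales by the factor $1 + \epsilon\,A_j (UA)^{-1}\bar{I}^i$ by the matrix determinant lemma.

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0, 0.0],
                  [0.0, 2.0, 1.0, 0.0],
                  [0.0, 0.0, 2.0, 1.0],
                  [1.0, 0.0, 0.0, 2.0]])     # an arbitrary nonsingular example
    U = np.eye(4) + 0.1 * np.ones((4, 4))    # non-negative, nonsingular
    i, j, eps = 1, 2, 0.5

    M = U @ A
    factor = 1 + eps * (A[j] @ np.linalg.inv(M) @ np.eye(4)[:, i])
    predicted = np.linalg.det(M) * factor

    U[i, j] += eps                           # the single-entry increment
    print(predicted, np.linalg.det(U @ A))   # the two values agree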

1.2 Some Comparisons

The above bound has about $n$ times more steps than the Ellipsoid Method (EM). However, the EM is much more demanding with respect to the precision with which the numbers are to be kept. The simplex $S$ cannot possibly fail to include all solutions lying in the initial simplex, whatever $U$ with no negative entries is taken. In contrast, the faithful transformation of ellipsoids in the EM is the only guarantee that they include all solutions.

Also, several Karmarkar-type algorithms have lower polynomial complexity bounds. Yet, they work in the dual space and their bounds are in terms of the number $n$ of inequalities, while the above DMA bound is in terms of the dimension $m$. For DMA, the family of inequalities may even be infinite, e.g., carving out a ball instead of a polytope. Then dual-space complexity bounds break down, while the DMA complexity is not affected (as long as a simple procedure finds a violated inequality for any candidate $\bar{x}$).
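
To make the infinite-family point concrete, here is a hedged sketch (our construction, not the paper's) of such a procedure for a ball: the ball is the intersection of infinitely many halfspaces, yet a most-violated one is found in $O(m)$ arithmetic operations.

    import numpy as np

    def violated_ball_inequality(c, r, x):
        """The ball ||x - c|| <= r is the infinite family of inequalities
        a @ x <= a @ c + r over all unit vectors a. For x outside the
        ball, return the most violated member (a, b) with a @ x > b."""
        g = x - c
        d = np.linalg.norm(g)
        if d <= r:
            return None            # x satisfies every member of the family
        a = g / d
        return a, a @ c + r        # supporting halfspace separating x from the ball

    print(violated_ball_inequality(np.zeros(2), 1.0, np.array([3.0, 4.0])))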

To assure fast progress, numbers are kept with roughly $n\ell$ digits. This bound cannot be improved, since some consistent systems have no solutions with shorter entries. Yet, this or any other precision is not actually required by DMA. Any rounding (or, indeed, any other deviation from the procedure) can be made at any time, as long as $v$ keeps growing, which is immediately observable. This leaves DMA open to a great variety of further developments. In contrast, an inappropriate rounding in the EM can yield ellipsoids which, while still shrinking fast, lose any relation to the set of solutions and produce a false negative output.
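
One way to operationalize this "anything goes while $v$ grows" discipline is a one-line guard; a minimal sketch (our own abstraction, with the bound $v$ supplied as a callable):

    def guarded(U, v, tweak):
        """Apply an arbitrary rounding or heuristic `tweak` to U, keeping
        the result only if the monitored lower bound v did not regress."""
        U2 = tweak(U)
        return U2 if v(U2) >= v(U) else U

Any heuristic, e.g. rounding every entry of $U$ to a few significant digits, can be tried this way at no risk to correctness.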

1.3 A Historical Background

The bound $v$ is inversely proportional to the volume of $S$, which shows a parallel with the EM. Interestingly, in history this parallel went in the opposite direction: the simplexes enclosing the solutions were first used in [A.Yu.Levin 65], and the EM was developed in [Nemirovsky, Yudin 76] as their easier-to-analyze variation. The [A.Yu.Levin 65] algorithm started with a large simplex guaranteed to contain all solutions. Its center of gravity was checked, and, if it failed some inequality, the corresponding hyperplane cut out a "half" of the simplex. The process was repeated with the resulting polyhedron. Each cut decreases the volume by a constant factor, so after some number of steps the remaining body can be re-enclosed in a new, smaller simplex. Only a weak upper bound was proven in [A.Yu.Levin 65], which did not preclude the simplexes growing too complex to manipulate in polynomial time.

[Nemirovsky, Yudin 76] replaced simplexes with ellipsoids, which can be re-enclosed after every single cut. Both [A.Yu.Levin 65] and [Nemirovsky, Yudin 76] used real numbers and looked for approximate solutions with a given accuracy. [Khachian 79] modified the EM for rationals and exact solutions. [Yamnitsky, Levin 82] proved a polynomial-time bound for the original [A.Yu.Levin 65] simplex-splitting method. Here the algebraic version of the [Yamnitsky, Levin 82] algorithm and its implementation techniques are considered and analyzed.

2 The Algorithm

Recall that $U$ has no negative entries and that the lower bound $v$ grows with $\det(UA)$.

A step increments a single entry of $U$: for suitable indices $i,j$ and some $\epsilon>0$, replace $U$ with $U' = U + \epsilon\,\bar{I}^i I_j$, so that $U'A = M + \epsilon\,\bar{I}^i A_j$ for $M = UA$. The Sherman-Morrison formula gives the inverse of this rank-one update:

  $(M + \epsilon\,\bar{I}^i A_j)^{-1} = M^{-1} - \epsilon\,\frac{(M^{-1}\bar{I}^i)(A_j M^{-1})}{1 + \epsilon\,A_j M^{-1}\bar{I}^i}$.

Correspondingly, $\det(U'A) = \det(M)\,(1 + \epsilon\,A_j M^{-1}\bar{I}^i)$, so our gain in $v$ is the factor $1 + \epsilon\,A_j M^{-1}\bar{I}^i$. When the oracle points out an inequality $A_j\bar{x} > 0$ violated by the current candidate $\bar{x}$, the index $i$ and the increment $\epsilon$ can be chosen to keep $U'$ non-negative while making this factor exceed $1$ by a fixed margin. Moreover, the gain factor is insensitive to small relative errors, so it suffices to keep $U$, $M^{-1}$, and $\bar{x}$ accurate to a bounded number of significant digits; they can be rounded accordingly.
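
As a quick numerical check (ours, not the paper's), the following verifies the Sherman-Morrison update above against a direct inverse on an arbitrary small example:

    import numpy as np

    M = np.array([[2.0, 1.0, 0.0],
                  [0.0, 2.0, 1.0],
                  [1.0, 0.0, 2.0]])      # arbitrary nonsingular M
    a_j = np.array([1.0, -2.0, 3.0])     # stands in for the row A_j
    e_i = np.eye(3)[:, 0]                # stands in for the column I^i
    eps = 0.25

    Minv = np.linalg.inv(M)
    denom = 1 + eps * (a_j @ Minv @ e_i)
    sm = Minv - eps * np.outer(Minv @ e_i, a_j @ Minv) / denom
    direct = np.linalg.inv(M + eps * np.outer(e_i, a_j))
    print(np.allclose(sm, direct))       # True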

Over many steps a row of $U$ may accumulate extra positive entries. Then $U$ can be simplified to have fewer positive entries, without changing $UA$: in polynomially many arithmetic operations, weight placed on linearly dependent rows can be shifted into positive combinations of the others until some entries vanish, as sketched below.
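
A hedged sketch of that clean-up (our construction; SciPy's null_space and the tolerance are our choices): weight on linearly dependent rows is shifted along a null-space direction until some entry vanishes, keeping $\underline{u}\ge\underline{0}$ and $\underline{u}A$ fixed.

    import numpy as np
    from scipy.linalg import null_space

    def reduce_support(u, A, tol=1e-12):
        """Shrink the positive support of u >= 0 while keeping u @ A unchanged."""
        u = u.astype(float).copy()
        while True:
            supp = np.flatnonzero(u > tol)
            N = null_space(A[supp].T)           # directions c with c @ A[supp] = 0
            if N.shape[1] == 0:
                return u                        # active rows now linearly independent
            c = N[:, 0]
            if c.max() <= 0:
                c = -c                          # ensure a positive direction exists
            pos = c > tol
            t = np.min(u[supp][pos] / c[pos])   # largest step keeping u >= 0
            u[supp] = np.clip(u[supp] - t * c, 0.0, None)

    # Three dependent rows in R^2; the result puts weight on at most two of them.
    print(reduce_support(np.array([1.0, 1.0, 1.0]),
                         np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])))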

References

  • [A.Yu.Levin 65] A.Yu. Levin. 1965. On an Algorithm for the Minimization of Convex Functions.
    Soviet Mathematics, Doklady, 6:286-290.
  • [Nemirovsky, Yudin 76] D.B. Yudin, A.S. Nemirovsky. 1976.
    Informational Complexity and Effective Methods for Solving Convex Extremum Problems.
    Economica i Mat. Metody, 12(2):128-142; transl. MatEcon 13:3-25.
  • [Khachian 79] L.G. Khachian. 1979. A Polynomial Algorithm for Linear Programming.
    Soviet Mathematics, Doklady, 20(1):191-194.
  • [Yamnitsky, Levin 82] B. Yamnitsky, L.A. Levin. 1982.
    An old linear programming algorithm runs in polynomial time. FOCS-82.