A theory of NP-completeness and ill-conditioning for approximate real computations

03/09/2018 · by Gregorio Malajovich, et al.

We develop a complexity theory for approximate real computations. We first produce a theory for exact computations but with condition numbers. The input size depends on a condition number, which is not assumed known by the machine. The theory admits deterministic and nondeterministic polynomial time recognizable problems. We prove that P is not NP in this theory if and only if P is not NP in the BSS theory over the reals. Then we develop a theory with weak and strong approximate computations. This theory is intended to model actual numerical computations that are usually performed in floating point arithmetic. It admits classes P and NP and also an NP-complete problem. We relate the P vs NP question in this new theory to the classical P vs NP problem.


1. Introduction

BSS proposed a model of computation over the real and complex numbers and over any ring or field. The initial goal was to provide foundations and a theory of complexity for numerical analysis and scientific computing. They borrowed components from several existing theories, especially algebraic complexity and the complexity theory of theoretical computer science, including the P versus NP problem. The model was essentially a Turing machine in which the entries on the tape would be elements of the real numbers, complex numbers or the other rings or fields in question, and the arithmetic operations and comparisons exact. It was intended to capture essential components of computation as employed in numerical analysis and scientific computing in a simple model which would put the theory in contact with additional mainstream areas of mathematics and produce meaningful or even predictive results. The need to incorporate approximations, input and round-off error was mentioned at the time, but there has not been much progress in those directions in the intervening years, especially as concerns decision problems. In this paper we propose a remedy for this situation.

A good part of the problem concerns how to describe the input error and the round-off error. Here we choose floating point arithmetic. Our model has the following properties, which we refer to as the wish list:

  1. The theory admits classes P and NP with P ⊆ NP.

  2. The class P contains the class of classical (Turing) computations of computer science. Moreover, it contains decision sets that are related to computations considered easy in numerical practice, such as the complements of graphs of elementary functions. It also contains problems related to standard linear algebra computations and certain fractal sets.

  3. The class NP contains a complete problem.

  4. Machines supporting the definitions of P and NP never give wrong answers, regardless of the precision.

  5. Numerical stability issues do play a role.

  6. The condition number plays a major role in the theory.

We do not require P to be closed under complements, as this seems to preclude other important goals. The condition number used is as general as the one used by Cucker. This definition emerged from discussions between Cucker and the first author of this paper. This generality is useful, as a natural definition of condition, viz. the reciprocal of the distance to the locus of ill-posedness, varies according to the context.

Outline of the paper and main results

We proceed in two steps. First we define classes P and NP of real decision problems with a condition number. Those classes generalize the classes P_ℝ and NP_ℝ as in BSS to include condition numbers, but still with exact computations. This will allow us to investigate all the main features of the theory, except for numerical stability. The main result in Part 1 will be

Theorem A.

The following are equivalent:

  1. P ≠ NP in the theory with condition numbers.

  2. P_ℝ ≠ NP_ℝ in the BSS theory over the reals.

This theorem is restated below as Theorem 5.9. In Part 2 we will introduce a model of floating point computations, and corresponding classes P and NP of problems recognizable through weak and strong approximate computations. The main results in Part 2 are the existence of an NP-complete problem and the Theorem below, restated as Theorem 11.1.

Theorem B.

If P ≠ NP in the classical (Turing) theory, then P ≠ NP in the theory of approximate computations.

Related work

Alan Turing understood that the main obstruction to the efficiency of numerical computations would be the loss of accuracy due to iterated round-off errors. He realized that

When we come to make estimates of errors in matrix processes we shall find that the chief factor limiting the accuracy that can be obtained is ‘ill-conditioning’ of the matrices involved.

This motivated the introduction of the condition number as a ‘measure of ill-conditioning’.

Formal models of computability for real functions were developed later. Recursive Analysis is an extension of Turing’s model of computation to Turing machines with infinite tapes for input and output. Those machines can be used to compute maps between topological spaces with a countable basis. The input is a convergent nested sequence of balls from the basis of the input space, and the output is the same for the output space. This model has the property that only continuous functions can be computable. In particular, decidable sets must be both open and closed.

An attempt to propose a more realistic model of numerical computation was made by BSS. This model admits a real NP-completeness theory similar to the classical theory by Cook and Karp. Condition numbers and rounding-off are not incorporated into the BSS model. A strong objection against it was raised by Braverman-Cook:

A weakness of the BSS approach as a model of scientific computing is that uncomputability results do not correspond to computing practice in the case R = ℝ.

This objection was further elaborated by Braverman-Yampolsky*p.13:

Algebraic in nature, BSS decidability is not well-suited for the study of fractal objects, such as Julia sets. It turns out (see Chapter 2.3 of (Blum, Cucker, Shub, and Smale, 1998)) that sets with a fractional Hausdorff dimension, including ones with very simple description, such as Cantor set and the Koch snowflake (…), are BSS-undecidable. Moreover, due to the algebraic nature of the model, very simple sets that do not decompose into a countable union of semi-algebraic sets are not decidable. An example of such a set is the graph of the function (…)

They proposed instead a theory of sets recognizable in polynomial time, but without an NP-completeness theory. The class P that we propose borrows from the idea of measuring the cost of recognizing a set (or its complement), instead of the cost of deciding it.

A first attempt to endow the BSS model with condition numbers and approximate computations is due to Cucker-Smale. They studied specifically algorithms for deciding semi-algebraic sets under an absolute error model for numerical computations, but no reduction theory was developed.

Cucker*Remark 7 defined a model of numerical computation with condition numbers similar to the one in this paper. In his model a problem is a pair (X, κ) where X ⊆ ℝ^∞, and the notation ℝ^∞ stands for the disjoint union of all ℝⁿ, n ≥ 1. The condition number κ : ℝ^∞ → [1, ∞] is an arbitrary function. We will retain this definition. The size of an input x is its length plus log₂ κ(x). We will use a similar definition (length times 1 + log₂ κ(x)) that preserves the polynomial hierarchy. A machine in Cucker’s model is a BSS machine over ℝ modified so that all computations are approximate: for instance, the operation x ∘ y, ∘ ∈ {+, −, ×, /}, produces a real number z so that

|z − (x ∘ y)| ≤ u |x ∘ y|.

The number u is known to the machine. The cost of a computation is |log u| times the number of arithmetic operations and branches. Several classes are defined. Some of them allow for a machine to give a wrong answer if the precision u is not small enough. This fails one of our main wishes in this paper. An example of undesirable consequences is a constant time algorithm to decide the problem (C, κ), where C is Cantor’s middle-thirds set and κ is the natural condition number (see Example 3.6). Cucker also defined classes where machines are not supposed to give wrong answers. Quoting from Cucker*Sec.5.4,

(…) Similarities with the development in the preceding paragraph, however, appear to stop here, as we see as unlikely the existence of complete problems in either of these classes. This is so because the property characterizing their problems – the fact that any computation with a given precision ‘measures its error’ to ensure that the outputs it returns are correct – does not appear to be checkable in polynomial time. That is, we do not know how to check in polynomial time that, given a circuit outputting values in {YES, NO}, a point x, and a real u > 0, all u-evaluations of the circuit with input x return the same value (…)

Our definition of the classes P and NP will ensure that approximate computations can be certified. This will allow for the existence of NP-complete problems.

Acknowledgements

We would like to thank Felipe Cucker and Mark Braverman for conversations and insight. Two anonymous referees significantly helped us to clarify some results and to improve the presentation of this paper.

2. BSS machines

In this paper, a machine is always a BSS machine over a ring as in Blum et al. BCSS. More precisely, BSS machines over ℝ are machines over the field of real numbers with division, and BSS machines over ℤ₂ are machines over the finite field with two elements. The complexity theory over ℤ₂ is known to be equivalent to Turing complexity with respect to polynomial time. Following BCSS, ℝ^∞ is the disjoint union of all the ℝⁿ, and ℝ_∞ is the class of bi-infinite sequences (x_i)_{i∈ℤ} with x_i = 0 but for a finite number of i. The input and output spaces of the machine are ℝ^∞ but the state space is ℝ_∞. The machine is assumed to be in a particular canonical form. This means that each node performs at most one arithmetical operation. For later reference, we formally define:

Definition 2.1.

A machine M is a BSS machine over the reals in canonical form:

  1. The input node maps input x = (x₁, …, x_n) ∈ ℝⁿ into state

    (…, 0, n . x₁, x₂, …, x_n, 0, …) ∈ ℝ_∞,

    where the dot is placed at the right of the zero-th coordinate in ℝ_∞.

  2. The output node maps state x into output O(x) = (x₁, …, x_m), with m = x₀.

  3. All branching tests are of the form x₁ ≥ 0.

  4. The map associated to a computation node is one of the following: x₁ ← x_i + x_j, x₁ ← x_i − x_j, x₁ ← x_i × x_j, x₁ ← x_i / x_j, or x₁ ← c for a constant c, all other coordinates left unchanged.

  5. Constants are assumed to be real numbers.

  6. To each ‘fifth-node’ is associated a map σ_L or σ_R : ℝ_∞ → ℝ_∞, where σ_L(x)_i = x_{i+1} and σ_R(x)_i = x_{i−1}.

  7. There is only one output node.

  8. Each division x₁ ← x_i / x_j is preceded by the tests x_j > 0 and x_j < 0. In case both tests fail, the state does not change and the next node is the actual node itself, so the machine never terminates.

Every BSS machine can be replaced by a machine in canonical form, at the cost of a linear increase in the number of nodes and in the running time. A machine in canonical form can be described by the number of nodes N, and by a list of maps associated to each node. Let 𝒩 = {1, …, N}, β : 𝒩 × ℝ_∞ → 𝒩, and η : 𝒩 × ℝ_∞ → ℝ_∞. The letter ν denotes the current node number, and the table below gives the next-node map β and the next-state map η. Below, I and O denote the input and output maps.

Node type of ν — associated maps:

  Input node (ν = 1): β(ν, x) = 2, η(ν, x) = I(x).

  Computation node: β(ν, x) = ν + 1, η(ν, x) = g_ν(x) for the node’s map g_ν.

  Branch node: β(ν, x) = ν⁺ if x₁ ≥ 0 and β(ν, x) = ν⁻ if x₁ < 0; η(ν, x) = x.

  Output node (ν = N): outputs O(x).

  Fifth node: β(ν, x) = ν + 1, η(ν, x) = σ_L(x) or σ_R(x).

Definition 2.2.

An exact computation is a sequence (ν_t, x_t)_{t ≥ 0} in 𝒩 × ℝ_∞ satisfying ν₀ = 1, x₀ = I(x), and for all t ≥ 0,

(ν_{t+1}, x_{t+1}) = (β(ν_t, x_t), η(ν_t, x_t)).

The computation terminates if ν_t = N eventually, and the execution time T(x) is the smallest of such t. The terminating computation is said to accept the input x if the output O(x_{T(x)}) is positive and to reject it otherwise. The input-output map of M is the map Φ_M : x ↦ O(x_{T(x)}), and the halting set of M is the domain of definition of the input-output map.
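For illustration, the computation of Definition 2.2 can be traced with a toy interpreter. The sketch below is ours, not the paper’s formalism: nodes are 0-indexed, the input map is applied directly, the output map is truncated to the first coordinate, and the bi-infinite state is stored as a dictionary over its finite support.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Node:
        kind: str                       # 'compute', 'branch', 'shift' or 'output'
        op: Optional[Callable] = None   # next-state map g for computation nodes
        next: int = 0                   # next node (the map beta)
        next_neg: int = 0               # branch target when x_1 < 0

    def run(nodes, x, max_steps=10**6):
        """Exact computation (Definition 2.2): iterate the next-node and
        next-state maps until the output node is reached."""
        state = {i + 1: xi for i, xi in enumerate(x)}   # input in coordinates 1..n
        state[0] = len(x)                               # length at coordinate 0
        nu = 0                                          # current node (0-indexed)
        for t in range(max_steps):
            node = nodes[nu]
            if node.kind == 'output':
                return state.get(1, 0.0), t             # (output, execution time)
            if node.kind == 'compute':
                state[1] = node.op(state)               # result stored in x_1
                nu = node.next
            elif node.kind == 'branch':
                nu = node.next if state.get(1, 0.0) >= 0 else node.next_neg
            else:                                       # 'shift': sigma_L here
                state = {i - 1: v for i, v in state.items()}
                nu = node.next
        return None                                     # no termination observed

    # A two-node machine recognizing {x_1 >= 0}: it loops forever on x_1 < 0.
    M = [Node('branch', next=1, next_neg=0), Node('output')]
    print(run(M, [3.0]))   # (3.0, 1): positive output, i.e. YES, after one step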

Remark 2.3.

For an input of length n, a computation of length t implies that all but the coordinates x_{−t}, …, x_{n+t} of the state vanish. Also, termination at time T implies that the output has length at most n + T.

A Universal Machine over ℝ (actually over any ring) was constructed by BSS*Sec.8. Any machine M over ℝ can be described by a ‘program’ ⟨M⟩ so that Φ_U(⟨M⟩, x) = Φ_M(x). This holds whenever either of the two computations stops. Moreover, if any of these computations finishes in finite time, the running time for the universal machine is polynomially bounded in terms of the running time for the original machine.

The result below will be used later. It is a trivial modification of the argument proving the existence of the Universal Machine. Let Ω_T(M) denote the time-T halting set of a machine M, that is, the set of inputs accepted in time at most T.

Proposition 2.4.

There is a machine over ℝ with real constants and with the following property: for any machine M over ℝ, for any input x and any bound T, it halts with a positive output on input (⟨M⟩, T, x) if x ∈ Ω_T(M), and with a negative output otherwise.

Moreover, its running time is polynomially bounded in terms of the running time T.

3. Polynomial time

Recall that the input-output map for a machine M with input x is denoted by Φ_M(x) and the running time (number of steps) with input x is denoted by T_M(x). The length of an input x ∈ ℝⁿ is length(x) = n. We define the size of an instance x of (X, κ) by size(x) = length(x) (1 + log₂ κ(x)). In this paper, we make the convention that a positive output means YES and a negative output means NO.

In Turing and BSS complexity, the input size is its length, which is known. Therefore the two following definitions of the class of polynomial time decision problems are equivalent:

‘One-sided’ P_ℝ:

X ∈ P_ℝ iff there is a polynomial p and a machine M such that for any x ∈ X the machine with input x halts in time at most p(length(x)) and outputs a positive number, and for x ∉ X the machine does not halt.

‘Two-sided’ P_ℝ:

X ∈ P_ℝ iff there is a polynomial p and a machine M such that for any x, the machine with input x halts in time at most p(length(x)), and the output is positive if x ∈ X and negative otherwise.

The delicate part of the argument for proving the equivalence is the construction of the two-sided machine given the one-sided machine M. This is done as in Proposition 2.4, by introducing a ‘timer’ and halting with a NO (negative) answer when the time exceeds p(length(x)).
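In code, the timer trick is one comparison. In the sketch below, accepts_in_time(x, t) is a hypothetical stand-in for the time-t simulation provided by Proposition 2.4, and p is the polynomial bound of the one-sided machine.

    def two_sided(x, p, accepts_in_time):
        """Two-sided decision from a one-sided recognizer with known bound p."""
        t = p(len(x))                  # the time budget p(length(x))
        if accepts_in_time(x, t):      # M accepts within t steps: answer YES
            return +1
        return -1                      # safe NO: if M accepts at all, it does
                                       # so within p(length(x)) steps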

Our model is different because the input size depends on the condition, which is not assumed to be known. Braverman-Yampolsky already explored the idea of a ‘one-sided’ class in their computer model, see Example 3.10. They assumed that the condition of accepted inputs can be bounded conveniently. We do not make that explicit convention, but all of our examples admit an estimator. We start with the one-sided definition of P.

Definition 3.1 (Deterministic polynomial time).

The class P (reads one-sided P) of problems recognizable in polynomial time is the set of all pairs (X, κ) so that there is a BSS machine M over ℝ with input x ∈ ℝ^∞, output in ℝ, and with the following properties:

  1. There is a polynomial p such that whenever x ∈ X and κ(x) < ∞, the machine M with input x halts in time at most p(size(x)), with a positive output.

  2. If x ∉ X, then the machine M with input x never halts.

The last condition implies in particular that, for finite or infinite input size, if an answer is given, it is correct.

Definition 3.2.

The class of problems decidable in polynomial time (reads two-sided P) is the set of all pairs (X, κ) so that (X, κ) and (ℝ^∞ ∖ X, κ) are both in P.

In this sense, decidable in polynomial time implies recognizable in polynomial time: two-sided P ⊆ P.

Equivalently, one can remove the clause ‘whenever κ(x) < ∞’ from Definition 3.1(1). Notice also that a problem X in P_ℝ (resp. NP_ℝ) ‘projects’ into P (resp. NP) by setting κ(x) = 1 for all x.

Example 3.3.

Let κ(x) = 1 for all x. Then (X, κ) ∈ P if and only if X ∈ P_ℝ (one-sided).

Example 3.4.

For any X ⊆ ℝ^∞, let κ(x) = ∞ for all x. Then (X, κ) ∈ P.

The examples above are trivial. Below is a more instructive one. It is known that any BSS machine deciding ℤ with input x needs time at least of the order of log₂ |x|, so that ℤ ∉ P_ℝ. If we plug in the correct condition number, then the ‘bits’ of x can be found in polynomial time for all inputs x.

Example 3.5.

Let κ(x) = 2 + |x|. Then, (ℤ, κ) is in (two-sided) P.

Proof.

We consider the machine described by the following pseudo-code:

  • Input x.

  • If x < 0 then x ← −x.

  • k ← 1.

  • While k ≤ x,

    k ← 2k.

  • While k ≥ 1,

    If x ≥ k then x ← x − k.

    k ← k/2.

  • If x = 0 then output 1 else output −1.

When |x| ≥ 1, each of the while loops will be executed at most ⌈log₂ |x|⌉ + 1 times. At the end of the second while, 0 ≤ x < 1, and the fractional part of x is not changed after the first if. If |x| < 1 the loops will not be executed at all, and the only possibly accepted input is x = 0. ∎
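For illustration, here is a direct Python rendering of the pseudo-code above (our transcription; floating point stands in for exact BSS arithmetic):

    def is_integer(x):
        if x < 0:
            x = -x
        k = 1.0
        while k <= x:            # O(log|x|) doublings until k > x
            k = 2 * k
        while k >= 1:            # peel off the binary digits of the integer part
            if x >= k:
                x = x - k
            k = k / 2
        return 1 if x == 0 else -1   # positive output means YES

    print(is_integer(10.0), is_integer(2.5))   # 1 -1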

Figure 1. Construction of the middle-thirds Cantor set.
Example 3.6.

Let C be the Cantor middle-thirds set (Fig. 1).

It can also be constructed as C = ∩_{k≥0} C_k, with C_k = T^{−k}([0, 1]), where the tent map is T(x) = 3x for x ≤ 1/2 and T(x) = 3(1 − x) for x > 1/2.

Clearly, C is not a countable union of disjoint points and intervals and therefore it is not BSS computable. Membership in the complement ℝ ∖ C can be verified by iterating the tent map. The condition number for the Cantor set is defined as κ(x) = d(x, C)^{−1},

where d is the usual distance. Then κ is infinite on C and finite on ℝ ∖ C. Moreover, the tent map multiplies distances to C by 3, so for x ∉ C the iterates T^k(x) leave [0, 1] as soon as 3^k d(x, C) > 1, that is, for k > log₃ κ(x).

Since O(log κ(x)) iterates are sufficient to check that x ∉ C, it follows that (ℝ ∖ C, κ) ∈ P. If x ∈ C then κ(x) = ∞ and the machine never halts, so we also have that no wrong answer is ever given.
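A minimal sketch of the recognizer, under the reconstruction above: iterate the tent map until the orbit escapes [0, 1]. Floating point stands in for exact arithmetic, and the explicit budget max_iter stands in for the O(log κ(x)) bound; the sketch can answer YES (‘outside C’) or give no answer, but never a wrong answer.

    def tent(x):
        return 3 * x if x <= 0.5 else 3 * (1 - x)

    def outside_cantor(x, max_iter=200):
        """Return +1 if the orbit of x escapes [0, 1] (so x is not in C)."""
        for _ in range(max_iter):
            if x < 0 or x > 1:
                return +1        # certificate: the orbit escaped [0, 1]
            x = tent(x)
        return None              # inconclusive within the budget

    print(outside_cantor(0.5))   # +1: the midpoint of the removed interval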

A small modification of the example gives us a sharp separation:

Proposition 3.7.

The inclusion of two-sided P in one-sided P is strict.

Proof.

Define κ′(x) = κ(x) for x ∉ C and κ′(x) = 1 otherwise, with C and κ as in Example 3.6. We claim that (ℝ ∖ C, κ′) is in one-sided P but not in two-sided P. Otherwise, the decision machine would be supposed to recognize C in constant time. This is impossible since C is not a countable union of points and intervals. ∎


Figure 2. Koch’s snowflake: (a) general view; (b) construction: the piecewise linear mapping g; (c) approximation of the distance from an interior point to the border: near the base, the approximation is the distance to the bottom segment; for points on the red triangles, use self-similarity; (d) the distance from an exterior point to the border can be approximated by the distance to the solid triangle, except for the colored triangle; for points in that triangle, use self-similarity.
Example 3.8.

The Koch snowflake from Fig. 2(a) can be treated in a similar way. To simplify the presentation we will check computability of the region delimited by a Koch curve inside an equilateral triangle. Namely, let ω = e^{iπ/3} and let Δ be the solid triangle with vertices 0, 1 and ω, that is, the convex hull of 0, 1, ω. Subdivide Δ as in Fig. 2(b) and consider a piecewise linear map g which is continuous in each subdivision, maps each of the sub-triangles as in the picture, and is undefined in the remaining regions. Then define K₀ = Δ and, inductively, K_{j+1} = g^{−1}(K_j). The piece of snowflake is K = ∩_{j≥0} K_j.

We set κ(x) = d(x, ∂K)^{−1}. Since g multiplies distances by 3, it takes at most O(log κ(x)) iterations of g to decide whether x lies in the interior or in the exterior of K. This condition number is geometrically appealing. Another important property that we do not require in our model (but see Ex. 3.10) is the capacity of estimating the condition number a posteriori. In this example, estimating the distance to ∂K is easy. First assume that x is an interior point. Iterate g until the image falls in the bottom region, and assume without loss of generality that its imaginary part is non-negative. Then approximate the distance as in Fig. 2(c). For points outside K, iterate until the image leaves Δ and then estimate the distance as in Fig. 2(d).

Example 3.9.

The epigraph of the exponential, {(x, y) : y ≥ eˣ}, with condition number κ(x, y) = |y − eˣ|^{−1}, is in (two-sided) P. The supporting algorithm for this problem was described by Brent in the context of floating point computations: the cost of computing eˣ in a given interval, say x ∈ [0, 1), with absolute accuracy 2^{−n} is O(M(n) log n), where M(n) is the cost of n-bit multiplication (Th.6.1). Since we are using a model that allows for exact computations, the cost of the very same algorithm becomes polynomial in n. However, this bound is valid only for x in the interval.

In order to extend the result to the reals, assume first that x ≥ 0 and write x = 2^m s with m ∈ ℕ and s ∈ [0, 1). Then eˣ = (e^s)^{2^m}. This means that we should compute e^s with accuracy roughly 2^{−n−m}, at a polynomial cost. For x < 0, we just compare 1/y to e^{−x} as above (no extra accuracy is needed for the inverse). The following pseudo-code summarizes the procedure (a runnable sketch of the accuracy-doubling loop is given after the pseudo-code):

  • Input (x, y).

  • If y ≤ 0 then output −1.

  • If x ≥ 0,

    then t ← y, σ ← 1

    else x ← −x, t ← 1/y, σ ← −1.

  • m ← 0, ε ← 1/4.

  • While x ≥ 1, do x ← x/2, m ← m + 1.

  • Repeat

    Apply Brent’s algorithm to compute u ≃ eˣ with accuracy ε.

    Compute (u + ε)^{2^m} by repeated squaring.

    Compute (u − ε)^{2^m} by repeated squaring.

    If t ≥ (u + ε)^{2^m} then output σ.

    If t < (u − ε)^{2^m} then output −σ.

    ε ← ε/2.
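The accuracy-doubling loop can be sketched in exact rational arithmetic for x ∈ [0, 1), with a truncated Taylor series and its tail bound standing in for Brent’s algorithm (the 3·term tail bound is valid for 0 ≤ x < 1). As expected from the condition number, the sketch never answers on the graph y = eˣ itself.

    from fractions import Fraction

    def exp_bounds(x: Fraction, n: int):
        """Rational lower/upper bounds for exp(x), 0 <= x < 1, from n terms."""
        term, s = Fraction(1), Fraction(1)
        for k in range(1, n + 1):
            term *= x / k
            s += term
        return s, s + 3 * term      # the series tail is at most 3*term here

    def above_exp(x: Fraction, y: Fraction):
        """+1 if y > exp(x), -1 if y < exp(x); loops forever when y = exp(x)."""
        n = 4
        while True:
            lo, hi = exp_bounds(x, n)
            if y > hi:
                return +1
            if y < lo:
                return -1
            n *= 2                  # double the accuracy, as in the Repeat loop

    print(above_exp(Fraction(1, 2), Fraction(17, 10)))   # +1, since 1.7 > e^0.5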

Example 3.10.

A notion of weakly computable set S ⊆ ℝⁿ was explored by Braverman-Yampolsky. In their model a point x is represented by an ‘oracle’ function that, given n, produces a 2^{−n}-approximation of x, at a cost polynomial in n. A set S is (weakly) computable if there is an oracle Turing machine that, given a point x (represented by an oracle) and given n,

  • answers 1 (true) if x ∈ S,

  • answers 0 (false) if d(x, S) ≥ 2^{−n},

  • answers 0 or 1 otherwise.

Above, d is the Euclidean distance. If the answer is 0 we can infer that x ∉ S. An answer of 1 is inconclusive.

A set can be computed in polynomial time if and only if the machine terminates in time polynomial in n. Examples of polynomial time computable Julia sets are given in their book (for instance, Th 3.4 p.42).

Given any closed, bounded and weakly computable set S, we may define the condition number κ(x) = 2^{n(x)} for x ∉ S, where n(x) is minimal so that the machine with input n(x) returns 0. Of course, κ(x) = ∞ for x ∈ S. Then (ℝⁿ ∖ S, κ) ∈ P. Indeed, given any x, it is easy in the BSS model to obtain dyadic approximations of x in polynomial time. This replaces the oracle. Then we can simulate the Turing machine within the same time bound.

Because we assumed that S is closed, x ∉ S implies that d(x, S) > 0. Taking n with 2^{−n} ≤ d(x, S) already guarantees that the machine with input n answers 0.
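As a small illustration of this replacement, a dyadic 2^{−n}-approximation of a real number can be obtained by rounding at scale 2ⁿ; the helper below is ours (a BSS machine would extract the binary digits as in Example 3.5).

    def dyadic_oracle(x):
        """Return an oracle function: phi(n) is a dyadic 2**-n approximation."""
        def phi(n):
            scale = 2 ** n
            return round(x * scale) / scale
        return phi

    phi = dyadic_oracle(0.7234)
    print(phi(10))   # 0.7236328125 = 741/1024, within 2**-10 of 0.7234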

Remark 3.11.

A more natural definition for the example above would be κ(x) = d(x, S)^{−1}, and we would have (ℝⁿ ∖ S, κ) ∈ P anyway. This condition number can possibly be much larger than the original one.

Example 3.12 (canonical condition).

This is a generalization of the condition number of Example 3.10. Let M be an arbitrary BSS machine over ℝ. Let Ω_T(M) be the time-T halting set of M. Let Ω(M) = ∪_T Ω_T(M) be the halting set of M. Then, define

κ_M(x) = 2^{T(x)} for x ∈ Ω(M), where T(x) is the execution time, and κ_M(x) = ∞ otherwise.

Then (Ω(M), κ_M) is in P. We will call κ_M the canonical condition associated to a machine M. Given any machine M recognizing (X, κ) in polynomial time, one always has log₂ κ_M(x) ≤ p(size(x)) for the corresponding polynomial p. This is exactly the same trick as increasing the size of an input instance in discrete computability theory.
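Under the reconstruction κ_M(x) = 2^{T(x)} used above, the canonical condition can be evaluated with the toy interpreter run of Section 2 (our helper, with an explicit step budget standing in for non-termination):

    def canonical_condition(nodes, x, budget=10**6):
        result = run(nodes, x, max_steps=budget)  # `run` from the Section 2 sketch
        if result is None:
            return float('inf')                   # x not accepted within budget
        _, T = result
        return 2 ** T                             # kappa_M(x) = 2**T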

The previous examples of problems admit an estimator for κ(x) when κ(x) < ∞, up to a bounded relative error.

More generally, for machines not necessarily solving a decision problem, a function f can be estimated in polynomial time (with respect to some input size) if and only if, given ε > 0, there is a machine that produces an ε-approximation of f(x) in time polynomial in the input size and log(1/ε). For instance, the n-th root of x for x ≥ 1 can be approximated in polynomial time. An easy modification of our previous example yields:
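A minimal sketch of such an estimator: the n-th root of x ≥ 1 by bisection. The number of iterations is O(log(x/ε)), that is, polynomial in log x and log(1/ε).

    def nth_root(x, n, eps):
        """An eps-approximation of x**(1/n) for x >= 1, by bisection."""
        lo, hi = 1.0, x                 # the root lies in [1, x] when x >= 1
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if mid ** n <= x:
                lo = mid
            else:
                hi = mid
        return lo

    print(nth_root(2.0, 2, 1e-9))       # 1.414213562...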

Theorem 3.13.

Let (X, κ) ∈ P. Then there is κ′ so that (X, κ′) ∈ P and κ′ can be estimated in polynomial time with respect to the input size for (X, κ′).

Proof.

Let M be the machine in Definition 3.1. There are C, k > 0 so that for x ∈ X with κ(x) < ∞,

T_M(x) ≤ C size(x)^k.

We define κ′ by solving the equation

size′(x) = length(x) (1 + log₂ κ′(x)) = T_M(x),

that is,

κ′(x) = 2^{T_M(x)/length(x) − 1},

so that size′(x) = T_M(x) and (X, κ′) is recognized by the same machine M. ∎
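In code, κ′ is just the displayed equation solved for κ′, given the measured running time T and the input length n (hypothetical helper):

    def kappa_prime(T, n):
        """Solve length(x) * (1 + log2 kappa') = T for kappa' (Theorem 3.13)."""
        return 2.0 ** (T / n - 1)

    print(kappa_prime(40, 4))   # 512.0, so size'(x) = 4 * (1 + 9) = 40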

4. Non-deterministic polynomial time

Definition 4.1 (Non-deterministic polynomial time).

The class NP of problems recognizable in non-deterministic polynomial time is the set of all pairs (X, κ) so that there is a BSS machine M over ℝ with input (x, y) ∈ ℝ^∞ × ℝ^∞, output in ℝ, and with the following properties:

  1. There is a polynomial p such that whenever x ∈ X and κ(x) < ∞, there is a guess y ∈ ℝ^∞ such that length(y) ≤ p(size(x)) and M with input (x, y) accepts in time at most p(size(x)).

  2. If x ∉ X and y ∈ ℝ^∞, then M with input (x, y) does not accept.

The possibility of rejecting an unlucky guess y is irrelevant, and we can replace the machine in the definition by a machine that can either work forever or accept the input. Clearly, P ⊆ NP. A few trivial examples in NP are:

Example 4.2.

Let κ(x) = 1 for all x. Then (X, κ) ∈ NP if and only if X ∈ NP_ℝ.

Example 4.3.

For any X ⊆ ℝ^∞, let κ(x) = ∞ for all x. Then (X, κ) ∈ NP.

The following problem is in NP:

Example 4.4.

Let SA-Feas (Semi-Algebraic Feasibility) be the set of all f where f = (f₁, …, f_m) is a system of real polynomials in n variables codified in sparse representation, such that there is some y ∈ ℝⁿ with f(y) ≥ 0 (coordinatewise). The problem SA-Feas, endowed with the condition number κ ≡ 1, belongs to NP because a non-deterministic machine can guess y and compute f(y) [BCSS]*Prop.3 p.103. In particular, NP contains problems that are NP_ℝ-complete.
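A sketch of the verification step, under the sign convention reconstructed above: evaluate each sparse polynomial at the guessed witness y and check the sign coordinatewise. The pair-list encoding of the sparse representation is ours, for illustration.

    def eval_sparse(poly, y):
        """Evaluate a sparse polynomial: a list of (coefficient, exponents)."""
        total = 0.0
        for coeff, exps in poly:
            term = coeff
            for yi, e in zip(y, exps):
                term *= yi ** e
            total += term
        return total

    def verify(system, y):
        """Accept iff every polynomial of the system is >= 0 at the witness y."""
        return all(eval_sparse(f, y) >= 0 for f in system)

    # System: x1^2 - 2 >= 0 and x2 - x1 >= 0, with witness y = (1.5, 2.0).
    system = [[(1.0, (2, 0)), (-2.0, (0, 0))],
              [(1.0, (0, 1)), (-1.0, (1, 0))]]
    print(verify(system, (1.5, 2.0)))   # True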

Example 4.5.

A modification of the previous example: let SAE-Feas (Semi-Algebraic-Exponential Feasibility) be the set of all f where f codifies a system of real polynomial-exponential functions in n variables (polynomials in y₁, …, y_n and e^{y₁}, …, e^{y_n}) such that there is some y ∈ ℝⁿ with f(y) ≥ 0 (coordinatewise). The machine will compute each e^{y_i} approximately, up to relative error ε. There will be, for each f in the yes set, a best guess y and a value of ε such that ε-approximations of the e^{y_i} are sufficient to infer that f(y) ≥ 0. The condition number for such f will be deemed to be ε^{−1}, for the largest such ε.

Example 4.6.

This is a basic geometric example of a problem in NP. For definitions and references on the geometric concepts, we recommend the textbook by Berger. Let V ⊆ ℝⁿ be a smooth algebraic variety with maximal principal curvature at most c. Let d(·, ·) denote the Euclidean distance in ℝⁿ, while d_V(·, ·) is the distance along V,

d_V(x, y) = inf { length(γ) : γ a path in V from x to y }.

Let δ > 0 be such that V admits a δ-tubular neighborhood. This means that the neighborhood V_δ = { y ∈ ℝⁿ : d(y, V) < δ } is diffeomorphic to the normal bundle of V.

Let P be a fixed point of V and let R > 0. Then define

X = { x ∈ V : d_V(P, x) ≤ R },  with  log₂ κ(x) = (R − d_V(P, x))^{−1} for x ∈ X, and κ(x) = ∞ otherwise.

We claim that (X, κ) ∈ NP.

Proof.

The supporting algorithm, with non-deterministic guess (k, y₁, …, y_{k−1}), is as follows:

  • Input (x, k, y₁, …, y_{k−1}).

  • y₀ ← P, y_k ← x.

  • If x ∉ V or some y_i ∉ V then output −1.

  • If d(P, x) > R then output −1.

  • For i = 1, …, k,

    If d(y_{i−1}, y_i) > δ/2 then output −1.

  • ℓ ← Σ_i d(y_{i−1}, y_i), h ← max_i d(y_{i−1}, y_i).

  • Output R − ℓ/(1 − c h).

Before proving correctness of the algorithm, we notice the following fact: if γ is a minimizing geodesic between two points y and y′ of V, parametrized by arc length, then

(1)  d_V(y, y′) (1 − (c/2) d_V(y, y′)) ≤ d(y, y′) ≤ d_V(y, y′).

The upper bound above follows from the triangle inequality. To establish the lower bound on (1), we write

γ(s) = y + s γ̇(0) + ∫₀^s (s − u) γ̈(u) du.

Then we use ‖γ̈‖ ≤ c, and the triangular inequality. Now assume that x ∈ X, and let ℓ* = d_V(P, x) ≤ R. Pick

(2)  k ≥ ℓ* max( 2/δ, 2cR/(R − ℓ*) )

and set s = ℓ*/k. By the Hopf-Rinow theorem [Berger]*th.52, there is a minimizing geodesic γ between P and x. Set y_i = γ(i s). With those choices, the algorithm computes ℓ = Σ_i d(y_{i−1}, y_i) ≤ ℓ* and h ≤ s. Upon acceptation, equation (1) yields

d_V(P, x) ≤ ℓ/(1 − c h) ≤ R,

where the last inequality is a consequence of (2) and of the choice of δ. The following estimate for the condition also follows:

log₂ κ(x) ≤ (R − ℓ/(1 − c h))^{−1}. ∎

Remark 4.7.

In the example above, the subset X is contained in the connected component of V containing P. By taking R as an extra input to the supporting algorithm, one can also deduce that the problem { (x, R) : d_V(P, x) ≤ R }, with the same condition number, is in NP.

Remark 4.8.

In many problems of interest, the reciprocal of the condition number is equal or related to the distance to the set of degenerate inputs. The choice of the metric usually depends on the problem one wants to solve, and there may be several workable choices. In the context of this paper, one can make the subset of inputs with finite or infinite condition into a pseudo-metric space by defining d(x, y) = |κ(x)^{−1} − κ(y)^{−1}|. This setting has the inconvenience of attributing distance zero to different problems with the same condition. Another possibility is setting

d′(x, y) = ‖x − y‖ + |κ(x)^{−1} − κ(y)^{−1}|,

which provides an inequality κ(x)^{−1} ≤ κ(y)^{−1} + d′(x, y).

Remark 4.9.

A path metric space is a metric space where the distance between two points is the infimum of the lengths of the curves joining those two points. For instance, Riemannian manifolds are path metric spaces.

A necessary and sufficient condition for a complete metric space to be a path metric space is that for arbitrary points x and y and for each ε > 0, there is z such that

max( d(x, z), d(z, y) ) ≤ d(x, y)/2 + ε

[Gromov]*Th.1.8 p.7. It may be possible to generalize the example above to other path metric spaces. A common situation is to have a subset V (e.g. a manifold) embedded into another metric space (e.g. ℝⁿ or ℂⁿ). The subset inherits the metric d of the ambient space, but we can also define a path metric d_V along V.

In general d ≤ d_V, but it is hard to obtain upper bounds for d_V [Gromov]*Sec.1.15. The example above seems easier to generalize when such upper bounds are available.

5. NP-completeness

NP-hardness will be defined through one-sided Turing reductions. Informally, a Turing reduction from a problem (X, κ_X) to a problem (Y, κ_Y) is a BSS machine for X that is also allowed to repeatedly query a machine for Y. This reduction is said to be a polynomial time reduction if and only if, for all x ∈ X with κ_X(x) < ∞, the machine for X runs in polynomial time, produces polynomially many queries to the machine for Y, and the size of each query is polynomially bounded in the input size size(x). This definition ensures that, given a polynomial time Turing reduction, (Y, κ_Y) ∈ P implies (X, κ_X) ∈ P.

Remark 5.1 (Many-one vs Turing reductions).

A stricter notion of reducibility was used by BCSS*Sections 5.3, 5.4 through the use of p-morphisms. In their definition, the reduction machine can call the machine for Y only once. This is also called a many-one reduction. The reduction of a general problem to the canonical NP_ℝ-complete problem goes by a reduction to the register equations/inequations up to time T. This value of T can be bounded polynomially in terms of the input size, which is known in the BSS model. In this paper the input size depends on the condition, which is not assumed known. Hence the p-morphism based argument fails, and we need to consider Turing reductions instead.

Definition 5.2.

A BSS machine with a black box for (Y, κ_Y) (black box machine for short) is a BSS machine over ℝ with an extra node, the black box node, with one outgoing edge.

The black box can be thought of as a subroutine to compute a certain arbitrary function. This subroutine will have to satisfy certain properties regarding correctness and time cost. In modern programming this sort of routine is called an abstract method, while in traditional computer science it is called an oracle.

When the black box node is attained (say at time t₀), it interprets a fixed set of the state variables as an input of the form (y, T) to the ‘subroutine’. If y ∈ Y and the prescribed time T is sufficiently large with respect to size(y), the black box node will replace the designated state variable with a positive output. If the black box produces a positive output given (y, T), then y ∈ Y. The black box always produces an answer, and if T is too small it may ‘time out’ and replace the designated state variable by a negative value. A more precise description can be given through the register equations.

In classical complexity theory, black boxes are assumed to give an answer in unit time. The total computation time of a black box machine with input x includes the time for preparing each black box input, so the total length of the queries is still a lower bound for the running time. Whence, the size of each query is polynomially bounded with respect to the running time. If the black box is replaced by a routine that runs in time polynomial in the query size, the total running time will be polynomial in the input size. In this paper, the size of a query is not known at the time of the query, so the same trick is not available any more. This forces us to depart from classical complexity theory and to assume instead that black boxes answer in a certain prescribed time T. There are no false positives, but the black box may fail to accept if T is too small.

Definition 5.3.

The register equations for a machine with a black box for the problem (Y, κ_Y) are the same as the register equations for an ordinary machine, except when the black box node is attained. If the black box node is first attained at time t₀ and the machine is at internal state x_{t₀}, encoding a query (y, T), then the next state x_{t₀+1} is obtained from x_{t₀} by replacing the designated coordinate with the black box answer: a positive value is possible only if y ∈ Y, it is guaranteed if moreover T is sufficiently large, and a negative value is returned otherwise. The computation then resumes at the node following the black box node.

The computation of a machine with a black box and a given input is just the output associated to the register equations. While the size of each query to the black box is not assumed to be known, one strategy to build such a machine is to keep doubling the bound T until the query is hopefully accepted.
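The doubling strategy can be sketched as follows; black_box is a hypothetical stand-in for the black box node, which never returns a false positive but may answer negatively when the prescribed bound T is too small.

    def query_with_doubling(black_box, q, budget):
        """Query the black box about q, doubling the prescribed time bound."""
        T = 1
        while T <= budget:
            if black_box(q, T) > 0:   # positive answers are always correct
                return +1
            T *= 2                    # a negative answer may only mean that
                                      # T was smaller than the (unknown) size
        return -1                     # no acceptance within this budget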

Definition 5.4.

A polynomial time (one sided) Turing reduction from (X, κ_X) to (Y, κ_Y) is a BSS machine over ℝ with a black box for (Y, κ_Y).