Consider the integration and maximisation functionals on the space $C[-1,1]$ of univariate continuous functions over the compact interval $[-1,1]$:

$$\mathrm{INT}(f)(x) = \int_{-1}^{x} f(t)\,dt \qquad \text{and} \qquad \mathrm{MAX}(f)(x) = \max\{f(t) \mid -1 \leq t \leq x\}.$$

Both functionals are fundamental operations in numerical mathematics. They are considered easy to compute for functions that occur in practice. It was hence surprising that when Ko and Friedman introduced a rigorous formalisation of computational complexity in real analysis and analysed the computational complexity of these functionals within this model, they found that both problems are computationally hard in a well-defined sense. They constructed an infinitely differentiable polytime computable function $f$ such that the function $\mathrm{MAX}(f)$ is again polytime computable if and only if $\mathrm{P} = \mathrm{NP}$, and the function $\mathrm{INT}(f)$ is again polytime computable if and only if $\mathrm{FP} = \#\mathrm{P}$. Moreover, the real number $\mathrm{MAX}(f)(1)$ is polytime computable if and only if $\mathrm{P}_1 = \mathrm{NP}_1$, and the number $\mathrm{INT}(f)(1)$ is polytime computable if and only if $\mathrm{FP}_1 = \#\mathrm{P}_1$.
This obvious discrepancy between practical observations and theoretical predictions deserves further discussion. We will focus on two possible explanations for this observation:
Accuracy of results. Hardness in the theoretical results refers to how hard it is to compute the values of the function to an arbitrary accuracy. An algorithm for computing a real function $f$ takes as input a real number $x$, encoded as an oracle, and a natural number $n$, encoded in unary, and is required to output an approximation to $f(x)$ to accuracy $2^{-n}$. The running time of the algorithm is a function of $n$ which measures the number of steps the algorithm takes. By contrast, practitioners usually work at a fixed floating-point precision, which implies a fixed maximum accuracy. It hence may not be justified to measure the complexity in the output accuracy, and other complexity parameters should be considered more important. In fact, if one relaxes the definition of polytime computability so that on input $x$ and $n$ an algorithm has to produce an approximation to $f(x)$ only to a sufficiently coarse accuracy in $n$, then the range and integral of every polytime computable function are polytime computable. So maybe the theoretical infeasibility of these functionals is an artefact of a poorly chosen normalisation.
Representation of functions. Theoreticians use a simple representation (which we call Fun) that treats all continuous functions equally, in the sense that a function is polynomial-time computable if and only if it has a polynomial-time computable Fun-name. Practitioners, on the other hand, tend to work with a much more restricted class of functions: functions which are given symbolically or which can be approximated well by certain kinds of (piecewise) polynomial or rational functions. As not every polynomial-time computable function can be approximated by polynomials or rational functions in polynomial time, the implicit underlying representations favour a certain class of functions, for which it is easier to compute integral and range.
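The first explanation can be made concrete with a toy computation: a rigorous Riemann-sum integrator whose guaranteed error is $2^{-n}$ for a 1-Lipschitz integrand needs on the order of $2^n$ samples, so its running time is exponential in the accuracy parameter $n$, while any fixed floating-point precision corresponds to a fixed, modest $n$. A minimal sketch (function names and constants are illustrative, not taken from the paper):

```python
def riemann_integral(f, n):
    """Integrate a 1-Lipschitz f over [0, 1] with guaranteed error <= 2**-n.

    Midpoint sums of a 1-Lipschitz function deviate by at most h/2 on each
    cell of width h, so h = 2**-(n+1) suffices; the number of samples, and
    hence the running time, grows like 2**n in the accuracy parameter n.
    """
    samples = 2 ** (n + 1)
    h = 1.0 / samples
    return h * sum(f((i + 0.5) * h) for i in range(samples))

# f(x) = x is 1-Lipschitz with integral 1/2; n = 10 already needs 2048 samples.
assert abs(riemann_integral(lambda x: x, 10) - 0.5) <= 2 ** -10
```

At a fixed accuracy the loop is cheap; it is only the scaling in $n$ that is exponential.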
The aim of this paper is to discuss these different explanations both from a theoretical and a practical perspective and to resolve the apparent contradiction between the theoretical hardness results and practical observations. To this end we study the computational complexity of the maximisation and integration functionals with respect to various representations of continuous real functions within the uniform framework of second-order complexity theory, introduced by Kawamura and Cook, and compare the practical performance of algorithms which use these representations on a small family of benchmark problems.
Classes of feasibly approximable functions.
The dependency of the complexity of integration and maximisation on the choice of representation has been studied by various authors: Müller showed that if $f$ is a polytime computable analytic function, then the function $\mathrm{INT}(f)$ is again polytime computable (and analytic), and the function $\mathrm{MAX}(f)$ is again polytime computable (but not differentiable in general). This result was generalised by Labhalla, Lombardi, and Moutai to the strictly larger class of polytime computable functions in Gevrey’s hierarchy, a class of infinitely differentiable functions whose derivatives satisfy certain growth conditions. These functions are characterised in their work as those functions which can be approximated by a polynomial-time computable fast converging Cauchy sequence of polynomials with dyadic rational coefficients. It is also shown there that integral and maximum of such a function are uniformly polytime computable from such a sequence. These results were slightly strengthened and refined in various ways by Kawamura, Müller, Rösnick, and Ziegler, who studied the uniform complexity of maximisation and integration for analytic functions and functions in Gevrey’s hierarchy in dependence on certain parameters which control the growth of the derivatives or the proximity of singularities in the complex plane.
While these results already show that maximisation and integration are polytime computable for a large class of practically relevant functions, there are many practically relevant functions which are not contained in the class of infinitely differentiable functions with well-behaved derivatives:
For applications in control theory it is often necessary to work with functions which are constructed from smooth functions by means of pointwise minimisation or maximisation, and thus differentiability is usually lost.
It can also be shown that the class of polytime computable functions in Gevrey’s hierarchy is not uniformly polytime computably closed under division by functions which are uniformly bounded from below by $1$.
Also, while for any polytime computable $f$ in Gevrey’s hierarchy the function $\mathrm{MAX}(f)$ is again polytime computable, it is in general no longer smooth. Thus, assuming $\mathrm{P} \neq \mathrm{NP}$, the question arises whether $\mathrm{MAX}(f)$ is easy to maximise and, more generally, whether every function which is obtained from a polytime computable function in Gevrey’s hierarchy by repeatedly applying the parametric maximisation operator is again polytime computable.
One of our main contributions is to identify a larger class of feasibly approximable functions which supports polytime integration and maximisation and is closed under a larger set of operations, including division and pairwise and parametric maximisation.
Compositional evaluation strategies.
In practice, functions of interest are usually constructed from a small set of (typically analytic) basic functions by means of certain algebraic operations, such as arithmetic operations, taking primitives, or taking pointwise maxima. In other words, most functions of practical interest can be expressed symbolically as terms in a certain language. Our main observation is that there is such a language which is rich enough to arguably contain the majority of functions of practical interest, yet restrictive enough to ensure that all functions which are expressible in this language admit uniformly polytime computable integral, maximum, and evaluation.
To make this claim precise, we introduce the notion of a “compositional evaluation strategy” for a structure. To motivate this notion, consider how a user might specify a computational problem involving real numbers and functions. Typically, the user will specify the problem symbolically as a term in a certain language and the end result will be a real number which is expected to be produced to a certain accuracy. A library for exact real computation will translate the symbolic representation of the inputs into some internal representation, the details of which will be irrelevant to the user. It will operate on the internal representations — usually in a modular, compositional manner — to eventually produce a name of a real number in the standard representation, which can be queried for approximations to an arbitrary accuracy. Thus, there are certain types, such as real numbers in this example, whose representation is relevant to the user, as the user is interested in querying information about them according to a certain protocol, and other types, such as real functions in this example, which are only used internally and whose internal representation can be freely chosen by the library.
The structures we consider consist of:
Fixed spaces: A class of topological spaces with a given representation. These kinds of spaces correspond to the kinds of objects which are to be used, among other things, as inputs and outputs, so that the kind of information we can obtain on them is fixed.
Free spaces: A class of topological spaces without any given representation. These kinds of spaces correspond to the types of intermediate results, whose internal representation is irrelevant to the user.
A set of constants and operations on these spaces.
A compositional evaluation strategy provides representations for the free spaces of the structure and algorithms, in terms of these representations, for all of its constants and operations. It allows us to evaluate a term in the signature of the structure by applying the algorithms in a compositional manner. We can compare different evaluation strategies in terms of which constants and operations they render polytime computable. This partial order induces a partial order on representations, which takes into account both the complexity of constructing names and the complexity of extracting information from names.
We study various Cauchy representations of the space $C[-1,1]$ based on polynomial and rational approximations and study their relationship in terms of polytime reducibility. We show that the representation based on rational approximations is polytime equivalent to the representation based on piecewise polynomial approximations (Corollary 22). This result helps us prove that the class of functions which are representable by polynomial-time computable fast converging Cauchy sequences of piecewise polynomials is uniformly closed under all operations which are typically used in computing to construct more complicated functions from simpler ones.
In particular, we give a compositional evaluation strategy that uses the representation based on rigorous approximation by piecewise polynomials, which is optimal amongst all strategies for the structure whose constants are the functions in Gevrey’s hierarchy and whose operations include evaluation, range computation, integration, arithmetic operations (including division), pointwise and parametric maximisation, anti-differentiation, composition, square roots, and strong limits. Furthermore, this strategy evaluates every term in the signature of this structure whose leaves are polytime computable Gevrey functions in polynomial time (Corollary 29).
Whilst in the discrete setting the link between polytime computability and practical feasibility is, up to the usual caveats, well established and confirmed by countless examples of practical implementations, there has been, to our knowledge, little to no work linking the somewhat more controversial model of second-order complexity in analysis with practical implementation. Thus, in order to demonstrate the relevance of our theoretical results to practical computation, we have implemented compositional evaluation strategies based on the aforementioned representations for a small fragment of the aforementioned structure within AERN2, a Haskell library for exact real number computation. We observed that for the most part the benchmark results fit our theoretical predictions quite well. Our separation results translate to big differences in practical performance, which can be observed even for moderate accuracies. This seems to suggest that the infeasibility of maximisation and integration with respect to the “standard representation” of real functions is not a mere normalisation issue, and that the differences between theoretical predictions and practical observations are really due to the choice of representation. The proofs which establish polytime computability translate to algorithms which seem to be practically feasible, at least up to some common sense optimisations.
2 The Computational Model
Here we briefly review the basic aspects of the theory of computation with continuous data in the tradition of computable analysis, as well as the basics of second-order complexity theory. For background on computability in analysis see e.g. [19, 17, 21, 18]. Second-order computational complexity for computable analysis was developed by Kawamura and Cook, building on ideas from [10, 9].
Let $\Sigma = \{0,1\}$ and let $\Sigma^*$ denote the set of finite binary strings. Let $\mathcal{B} = (\Sigma^*)^{\Sigma^*}$ denote Baire space. (In computable analysis it is more common to use the computably isomorphic space $\mathbb{N}^{\mathbb{N}}$ of functions on the natural numbers, but this choice is of course inconsequential.) A partial function $F \colon \subseteq \mathcal{B} \to \mathcal{B}$ is called computable
if there exists an oracle Turing machine $M$ which on input $w \in \Sigma^*$ with oracle $\varphi \in \mathcal{B}$ computes $F(\varphi)(w)$. Sometimes, to emphasise the distinction, we will refer to $w$ as the “input string” and to $\varphi$ as the “input oracle” of $M$.
A represented space $(X, \delta_X)$ consists of a set $X$ together with a partial surjection $\delta_X \colon \subseteq \mathcal{B} \to X$ called the representation. We will usually write $X$ for $(X, \delta_X)$ if the representation is clear from the context. A partial multi-valued function $f \colon \subseteq X \rightrightarrows Y$ between represented spaces $X$ and $Y$ is just a relation on the underlying sets. We write $\operatorname{dom}(f)$ for its domain and $f(x)$ for the set of values the relation associates with $x$. If $f \colon \subseteq X \rightrightarrows Y$ and $g \colon \subseteq Y \rightrightarrows Z$ are partial multi-valued functions, then their composition is the partial multi-valued function $g \circ f$ with $\operatorname{dom}(g \circ f) = \{x \in \operatorname{dom}(f) \mid f(x) \subseteq \operatorname{dom}(g)\}$ and $(g \circ f)(x) = \bigcup_{y \in f(x)} g(y)$. If $X$ and $Y$ are represented spaces and $f \colon \subseteq X \rightrightarrows Y$ is a partial multi-valued function, we call $F \colon \subseteq \mathcal{B} \to \mathcal{B}$ a realiser of $f$ if $\delta_Y(F(\varphi)) \in f(\delta_X(\varphi))$ for all $\varphi$ with $\delta_X(\varphi) \in \operatorname{dom}(f)$. The map $f$ is called computable if it has a computable realiser. The composition of computable partial multi-valued functions is again computable. If $X$ carries a topology $\tau$ then $\delta_X$ is called admissible for $\tau$ if $\delta_X$ is continuous and every continuous map $\Phi \colon \subseteq \mathcal{B} \to X$ factors through $\delta_X$ via some continuous $g$, i.e., $\Phi = \delta_X \circ g$. One can show that if $X$ and $Y$ are represented spaces and their respective representations are admissible for topologies on $X$ and $Y$, then a partial function between $X$ and $Y$ is sequentially continuous with respect to these topologies if and only if it is computable relative to some oracle. It was shown by Matthias Schröder [19, 20] that the topological spaces which admit an admissible representation are precisely the qcb-spaces: topological quotients of countably based spaces. The qcb-spaces with (sequentially) continuous total functions form a Cartesian closed category. For further details see the references above.
Let us now turn to computational complexity, following the ideas of Kawamura and Cook. A string function $\varphi \colon \Sigma^* \to \Sigma^*$ is called length-monotone if
$|\varphi(u)| \leq |\varphi(v)|$ whenever $|u| \leq |v|$, for all $u, v \in \Sigma^*$. If $\varphi$ is a length-monotone function, we define its size $|\varphi| \colon \mathbb{N} \to \mathbb{N}$ via $|\varphi|(n) = |\varphi(0^n)|$.
Note that length-monotonicity implies that $|\varphi(u)| = |\varphi(v)|$ whenever $|u| = |v|$, which justifies the seemingly arbitrary choice of the string $0^n$ in the definition of the size. Let $\mathcal{B}_{\mathrm{lm}}$ denote the set of length-monotone string functions. Note that there is a computable retraction of $\mathcal{B}$ onto $\mathcal{B}_{\mathrm{lm}}$, so that computability theory remains unaffected by replacing $\mathcal{B}$ with $\mathcal{B}_{\mathrm{lm}}$. Thus, a mapping $F \colon \subseteq \mathcal{B}_{\mathrm{lm}} \to \mathcal{B}_{\mathrm{lm}}$ is computable if there is an oracle Turing machine which on input oracle $\varphi$ and input string $w$ outputs $F(\varphi)(w)$. The mapping is computable in time $T$ if there is such a machine which outputs $F(\varphi)(w)$ within time $T(|\varphi|, |w|)$.
We now introduce the class of “feasibly computable functions” within this setting. The set of second-order polynomials in a first-order variable $n$ and a second-order variable $L$ is defined inductively as follows:
The constant $1$ and the variable $n$ are second-order polynomials.
If $P$ and $Q$ are second-order polynomials then so are $P + Q$, $P \cdot Q$, and $L(P)$.
A partial mapping $F \colon \subseteq \mathcal{B}_{\mathrm{lm}} \to \mathcal{B}_{\mathrm{lm}}$ is called polytime computable if $F$ is computable in time $P(|\varphi|, |w|)$ for some second-order polynomial $P$, with $L$ interpreted as the size $|\varphi|$ of the oracle and $n$ as the length $|w|$ of the input string. The class of total second-order polytime computable functionals coincides with the class of basic feasible functionals.
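For illustration, second-order polynomials can be represented as small syntax trees and evaluated by structural recursion in a few lines; the tuple encoding below is an ad-hoc choice, not notation from the paper:

```python
def eval_sop(p, n, L):
    """Evaluate a second-order polynomial at a number n and a function L.

    A second-order polynomial is built from constants, the variable n,
    sums, products, and applications L(P) of the function variable L.
    Terms are encoded as nested tuples tagged by their top constructor.
    """
    tag = p[0]
    if tag == "const":      # a constant, e.g. the constant 1
        return p[1]
    if tag == "var":        # the first-order variable n
        return n
    if tag == "add":
        return eval_sop(p[1], n, L) + eval_sop(p[2], n, L)
    if tag == "mul":
        return eval_sop(p[1], n, L) * eval_sop(p[2], n, L)
    if tag == "app":        # application L(P) of the second-order variable
        return L(eval_sop(p[1], n, L))
    raise ValueError(tag)

# Example: P(L, n) = L(n * n) + 1, evaluated with L(k) = 2k at n = 3.
p = ("add", ("app", ("mul", ("var",), ("var",))), ("const", 1))
assert eval_sop(p, 3, lambda k: 2 * k) == 19
```

Since both constructors are monotone, such expressions are suitable as running-time bounds in the size of the oracle and the length of the input.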
These notions translate to represented spaces in the usual way: A point of a represented space is polytime computable if and only if it has a polytime computable name. A partial multi-valued function between represented spaces is polytime computable if and only if it has a polytime computable realiser. The composition of polytime computable functions is again a polytime computable function. If $X$ is a set with representations $\delta$ and $\delta'$ we say that $\delta$ reduces to $\delta'$ (in polynomial time) and write $\delta \leq \delta'$ if the identity on $X$ is polytime computable when the input is given as a $\delta$-name and the output is required as a $\delta'$-name. If $\delta \leq \delta'$ and $\delta' \leq \delta$ then we say that $\delta$ and $\delta'$ are (polytime) equivalent and write $\delta \equiv \delta'$.
We will need to introduce canonical representations of finite and countable products. Let $(\delta_i)_{i < k}$ be a finite family of representations, where $\delta_i \colon \subseteq \mathcal{B}_{\mathrm{lm}} \to X_i$. Our goal is to define the product representation $\delta_0 \times \dots \times \delta_{k-1}$. Encode the numbers $i < k$ in binary with a fixed number of digits ($\lceil \log_2 k \rceil$, say) and denote the resulting strings by $\bar{\imath}$. If $\varphi_i$ are length-monotone functions for $i < k$, define the monotone function $\langle \varphi_0, \dots, \varphi_{k-1} \rangle$ on strings of the form $\bar{\imath}u$ by $\langle \varphi_0, \dots, \varphi_{k-1} \rangle(\bar{\imath}u) = \varphi_i(u)$, padded so that the result is again length-monotone.
Extend this function to the remaining strings in a canonical way and define the product representation by declaring $\langle \varphi_0, \dots, \varphi_{k-1} \rangle$ to be a name of $(x_0, \dots, x_{k-1})$ if and only if $\varphi_i$ is a $\delta_i$-name of $x_i$ for all $i < k$.
In order to define countable products, consider a sequence $(\varphi_i)_{i \in \mathbb{N}}$ of monotone string functions. Define the monotone function $\langle \varphi_0, \varphi_1, \dots \rangle$ analogously, now encoding the index $i$ in a self-delimiting way.
Extend this to a total function similarly as in the finite case and define the representation of the countable product accordingly.
Finally, let us give some concrete examples of represented spaces that we will use in the rest of the paper. Countable discrete spaces such as the space $\mathbb{N}$ of natural numbers, the space $\mathbb{D}$ of dyadic rationals, or the space $\mathbb{Q}$ of rationals are represented via standard numberings. By identifying $\Sigma^*$ with $\mathbb{N}$, we can view such numberings as maps $\nu \colon \subseteq \mathcal{B} \to X$, which allows us to introduce representations of spaces such as $\mathbb{D}^{\mathbb{N}}$ via the product construction. As a more interesting example, consider the space $\mathbb{R}$ of real numbers. A name of $x \in \mathbb{R}$ is a sequence $(d_n)_{n \in \mathbb{N}}$ of dyadic rationals with $|x - d_n| \leq 2^{-n}$ for all $n$; using the canonical product construction, we obtain a representation $\rho$ of $\mathbb{R}$.
In this paper we will exclusively work over compact intervals of reals. In this case one can avoid the use of second-order complexity bounds by restricting the representation to a compact domain. If $x \in [-1,1]$ is any real number, then for every $n$ there exists a dyadic approximation to $x$ to error $2^{-n}$ which uses at most $n + 2$ bits. Hence, the interval $[-1,1]$ admits a representation $\rho_{[-1,1]}$ in which all names have polynomially bounded size. This is a general phenomenon for compact spaces. It is worth noting that we can restrict $\rho$ in a similar way to obtain a representation of all of $\mathbb{R}$, where the size of every name of $x$ is bounded polynomially in terms of the output accuracy and a single number measuring the magnitude of $x$, so that we can bound the running time of an algorithm in terms of the output accuracy and this single number alone, without having to resort to general second-order bounds.
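The Cauchy representation of the reals just described is easy to prototype: a name of $x$ is a procedure that, on input $n$, returns a dyadic rational within $2^{-n}$ of $x$, and operations on reals become oracle-to-oracle transformations that manage accuracies. A small illustrative sketch (not the AERN2 implementation):

```python
from fractions import Fraction

def dyadic_name(x):
    """Return a name of x: on input n it yields a dyadic rational a/2^n
    with |x - a/2^n| <= 2^-n (built here from an exact Fraction)."""
    x = Fraction(x)
    return lambda n: Fraction(round(x * 2 ** n), 2 ** n)

def add_names(phi, psi):
    """A realiser of addition: to produce the sum to error 2^-n,
    query both arguments at the finer accuracy 2^-(n+1)."""
    return lambda n: phi(n + 1) + psi(n + 1)

s = add_names(dyadic_name(Fraction(1, 3)), dyadic_name(Fraction(1, 7)))
assert abs(s(10) - Fraction(10, 21)) <= Fraction(1, 2 ** 10)
```

The accuracy bookkeeping in `add_names` is exactly the kind of reasoning that second-order time bounds make precise.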
3 Compositional Evaluation Strategies
In this section we introduce the notion of compositional evaluation strategy over an algebraic structure. This will allow us to state our main result on the polytime computability of maximisation and integration for all functions which can be expressed symbolically in a sufficiently simple language. We will also introduce new ways of comparing representations which are interesting in their own right.
For a class of spaces, consider the class of all finite and countable products of its members, i.e., of all spaces of the form $X_0 \times \dots \times X_{k-1}$ or $\prod_{i \in \mathbb{N}} X_i$ with the $X_i$ being members of the given class.
Consider structures which are given by the following data:
A set of represented spaces, containing at least the space of natural numbers with the standard representation induced by the binary notation.
A set of qcb-spaces without a distinguished representation.
A set of partial multi-valued operations between finite and countable products of the above spaces.
A subset of the disjoint union of all the above spaces, serving as constants.
The spaces in the first set are called the fixed spaces, those in the second set the free spaces; the third set is called the set of operations and the fourth the set of constants. An operation taking $n$ arguments will be called an $n$-ary operation for short.
A constant which is an element of a space $X$ will be called a constant of type $X$. For every space $X$ we introduce a countable set of free variables of type $X$. A term over the signature of the structure is defined inductively as follows:
Every free variable of type $X$ is a term of type $X$.
Every constant of type $X$ is a term of type $X$.
If $t_1, \dots, t_n$ are terms of types $X_1, \dots, X_n$, then $(t_1, \dots, t_n)$ is a term of type $X_1 \times \dots \times X_n$.
If $t$ is a term of type $X$ with a free variable $v$ of type $\mathbb{N}$, then $\lambda v.\, t$ is a term of type $X^{\mathbb{N}}$.
If $t$ is a term of type $X_1 \times \dots \times X_n$ and $f$ is an operation of type $X_1 \times \dots \times X_n \rightrightarrows Y$, then $f(t)$ is a term of type $Y$.
A term is called closed if it contains no free variables. If $t$ is a closed term of type $X$ we denote by $\llbracket t \rrbracket$ the set of elements of $X$ which it denotes under the obvious semantics. (The application of a partial operation could cause the semantics of a term to be undefined. It is however straightforward to define inductively what it means for a term to be well-defined, and we will henceforth assume that all terms are well-defined.) A term is called semi-closed if it contains no free variables of free space type. If $v_1, \dots, v_n$ are the free variables in a semi-closed term $t$, then on the semantic side $t$ defines an operation sending values of $v_1, \dots, v_n$ to the elements denoted by $t$.
Suppose we are given such a structure. A compositional evaluation strategy for it consists of:
For every free space, a representation of that space.
A subset of the set of operations of the structure, and for each operation in this subset an algorithm which computes a realiser of it with respect to the chosen representations.
A subset of the set of constants of the structure, and for each constant in this subset an algorithm which computes a name of it.
For a compositional evaluation strategy $E$, we call a term $E$-well-defined if its semantics is defined and the term contains no operations and no constants for which $E$ provides no algorithm.
A compositional evaluation strategy $E$ defines a map
which sends an $E$-well-defined closed term $t$ of type $X$ to a point $x$ with $x \in \llbracket t \rrbracket$. We define the running time of $E$ on $t$
as the time it takes to compute this point using the compositional evaluation strategy. The map extends to a map
which sends an $E$-well-defined semi-closed term to a realiser of the operation it defines. The running time of $E$ on a semi-closed term is then a second-order function
which measures the time it takes to compute this realiser using $E$. We say that a strategy is polytime if it evaluates every semi-closed term of fixed space type in polynomial time.
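As a toy instance of these definitions, take the reals as the only free space, represented by dyadic Cauchy names, a structure whose operations are addition, negation, and pointwise maximum, and constants given by name-producing closures. A strategy is then a table assigning each operation an algorithm on names, and the evaluation map is plain structural recursion (all identifiers below are illustrative, not from the paper):

```python
from fractions import Fraction

# A name of a real x is a function n -> dyadic rational with error <= 2^-n.
def const_name(q):
    q = Fraction(q)
    return lambda n: Fraction(round(q * 2 ** n), 2 ** n)

# The strategy: one oracle-level algorithm per operation of the structure.
# "add" queries its arguments at accuracy n+1; "neg" and "max" lose nothing.
STRATEGY = {
    "add": lambda phi, psi: lambda n: phi(n + 1) + psi(n + 1),
    "neg": lambda phi: lambda n: -phi(n),
    "max": lambda phi, psi: lambda n: max(phi(n), psi(n)),
}

def evaluate(term):
    """Compositional evaluation: a term is a constant name (a callable)
    or a tuple (operation, subterm, ...); the result is again a name."""
    if callable(term):
        return term
    op, *args = term
    return STRATEGY[op](*(evaluate(t) for t in args))

# The closed term max(1/3 + (-(1/4)), 1/8) denotes the real number 1/8;
# its name can be queried for approximations to any desired accuracy.
t = ("max",
     ("add", const_name(Fraction(1, 3)), ("neg", const_name(Fraction(1, 4)))),
     const_name(Fraction(1, 8)))
assert abs(evaluate(t)(20) - Fraction(1, 8)) <= Fraction(1, 2 ** 20)
```

The running time of the strategy on a term is the cost of this recursion together with the cost of the oracle queries it triggers.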
If $E$ and $E'$ are compositional evaluation strategies for the same structure, we say that $E'$ evaluates pointwise at least as fast as $E$ if:
Every operation which is computed in polynomial time by $E$ is computed in polynomial time by $E'$.
Every constant which is computed in polynomial time by $E$ is computed in polynomial time by $E'$.
We say that $E'$ evaluates uniformly at least as fast as $E$ if:
$E'$ evaluates pointwise at least as fast as $E$.
For every free space, if $\delta$ is the representation of this space which is used in $E$, if $\delta'$ is the corresponding representation which is used in $E'$, and if $C$ is the set of constants of this type for which $E$ provides an algorithm, then there exists a polytime reduction between the co-restrictions of $\delta$ and $\delta'$ to names of elements of $C$.
Note that in the definition of the “pointwise” preorder there is essentially no difference between constants and $0$-ary operations. However, the constants do play a special role in the definition of the “uniform” preorder, which is why we did not define them as $0$-ary operations in the first place.
We say that one strategy pointwise/uniformly dominates another if it evaluates pointwise/uniformly at least as fast as the other, while the other does not evaluate pointwise/uniformly at least as fast as it. We say that a strategy is pointwise/uniformly polynomially optimal if it evaluates pointwise/uniformly at least as fast as any other compositional evaluation strategy for the structure, i.e., if it is the greatest element in the respective preorder on evaluation strategies. We say that a strategy is pointwise/uniformly polynomially Pareto optimal if it is not pointwise/uniformly dominated by any other compositional evaluation strategy, i.e., if it is a maximal element in the respective preorder on evaluation strategies. These notions lift to families of representations in a straightforward manner: Let $\Delta$ and $\Delta'$ be families of representations of a family of qcb-spaces, and let $S$ be a structure whose set of free spaces is this family. We say that $\Delta'$ evaluates pointwise/uniformly at least as fast as $\Delta$ if there exists an evaluation strategy for $S$ which uses the representations in $\Delta'$ to represent the free spaces of $S$ and which evaluates pointwise/uniformly at least as fast as every evaluation strategy which uses the representations in $\Delta$ to represent the free spaces of $S$. We say that a family of representations is (pointwise/uniformly) polynomially (Pareto) optimal for $S$ if there exists a (pointwise/uniformly) polynomially (Pareto) optimal evaluation strategy for $S$ which uses the representations in the family to represent the free spaces of $S$. Note that Pareto optimality for representations is slightly stronger than maximality with respect to the induced preorder on representations. An evaluation strategy which uses a family of representations $\Delta$ to represent its free spaces will also be called a $\Delta$-evaluation-strategy.
The following trivial proposition shows that the notion of Pareto optimality is reasonably robust:
If $\Delta$ and $\Delta'$ are families of pairwise polytime equivalent representations, then for every evaluation strategy which uses the representations in $\Delta$ there exists an evaluation strategy which uses the representations in $\Delta'$ and evaluates uniformly at least as fast, and vice versa.
If we obtain one structure from another by removing any set of constants of fixed space type or any set of operations which involve fixed spaces only, then a family of representations evaluates (pointwise or uniformly) at least as fast as another with respect to the original structure if and only if it does so with respect to the reduced structure. In particular, we can add or remove any collection of operations or constants which involve only the fixed spaces and obtain the same notion of (uniform) (Pareto) optimality.
If a family of representations is (Pareto) optimal for a structure, then it stays so if we add finitely many constants or operations which are polytime computable with respect to this family of representations (and the representations of the fixed spaces).
Let two structures have the same sets of free spaces and fixed spaces. If a family of representations is (Pareto) optimal both for the first and for the second, then it is (Pareto) optimal for the structure which is obtained by adding to the first all constants and all operations of the second.
Let us conclude with a remark on the definition of Pareto optimality: For a strategy, consider the set of semi-closed terms of fixed space type which it evaluates in polynomial time. Note that if one strategy evaluates pointwise at least as fast as another, then its set of such terms contains that of the other. Consequently, if a strategy is pointwise polynomially optimal then its set of polytime-evaluable terms of fixed space type is the greatest such set over all compositional evaluation strategies for the structure. By contrast, if a strategy is pointwise polynomially Pareto optimal it does not necessarily follow that this set is maximal. It may therefore seem more natural to define “pointwise Pareto optimality” to mean that this set be maximal. This notion however does not seem to be sufficiently uniform to rule out certain artificial constructions that prevent it from being interesting. We give examples of such constructions in Section 7.
4 Representations of $C[-1,1]$
In this section we introduce a number of commonly used representations of the space of continuous functions over the interval $[-1,1]$ and study their relation in the polytime-reducibility lattice. Most of these representations and their relationships have already been studied by Labhalla, Lombardi, and Moutai, albeit in a slightly different framework. Nevertheless, many proofs from their work carry over easily to our chosen framework. The main new result is the equivalence of rational and piecewise-polynomial approximations, which is left as an open question in their paper.
We define representations Fun, Poly, PPoly, PAff, Frac, and PFrac of the space of continuous functions over the interval $[-1,1]$ as follows:
A Fun-name of a function $f$ is a length-monotone string function which encodes a sampling of $f$ on dyadic rational points and a modulus of uniform continuity of $f$. More explicitly, we require that the name maps (an encoding of) a dyadic rational $d \in [-1,1]$ and a precision $n$ to a dyadic rational $q$ with $|f(d) - q| \leq 2^{-n}$,
and that it encodes a function $m \colon \mathbb{N} \to \mathbb{N}$ such that for all $x, y \in [-1,1]$: $|x - y| \leq 2^{-m(n)}$ implies $|f(x) - f(y)| \leq 2^{-n}$.
A Poly-name of a function $f$ is a fast converging Cauchy sequence of polynomials in the monomial basis with dyadic rational coefficients. More explicitly, fix a standard notation of the polynomials with dyadic rational coefficients. A Poly-name of $f$ is a length-monotone string function which encodes a sequence $(p_n)_{n \in \mathbb{N}}$ of such polynomials with $\sup_{x \in [-1,1]} |f(x) - p_n(x)| \leq 2^{-n}$ for all $n$.
A PPoly-name of a function $f$ is a fast converging Cauchy sequence of piecewise polynomials in the monomial basis with dyadic rational breakpoints and coefficients. A piecewise polynomial with dyadic rational breakpoints and coefficients is a continuous function $g$ such that there exist dyadic rational numbers $-1 = x_0 < x_1 < \dots < x_m = 1$ such that the restriction of $g$ to each interval $[x_i, x_{i+1}]$ is a polynomial with dyadic rational coefficients.
A PAff-name of a function $f$ is a fast converging Cauchy sequence of piecewise affine functions with dyadic rational breakpoints and coefficients. Piecewise affine functions are defined analogously to piecewise polynomials.
A Frac-name of a function $f$ is a fast converging Cauchy sequence of rational functions with dyadic rational coefficients. A rational function here is a quotient $p/q$ of two polynomials whose denominator has no zeroes in $[-1,1]$. We choose our notation such that every such rational function is given as a quotient of two polynomials which is normalised such that $q(x) \geq 1$ for all $x \in [-1,1]$.
A PFrac-name of a function $f$ is a fast converging Cauchy sequence of piecewise rational functions with dyadic rational breakpoints and coefficients. Piecewise rational functions are defined analogously to piecewise polynomials and piecewise affine functions. We again assume that the denominator of every rational piece is bounded from below by $1$.
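These representations are easy to prototype: a name can be modelled as a function that, on input $n$, returns an approximant with uniform error at most $2^{-n}$ — here a list of pieces (left endpoint, right endpoint, coefficients). The absolute value function, for example, admits a name in the piecewise affine style of constant size, with every approximant exact (the encoding below is an illustrative choice, not the paper's):

```python
from fractions import Fraction

def eval_piecewise(pieces, x):
    """Evaluate a piecewise polynomial given as a list of pieces
    (a, b, coeffs), with coeffs in the monomial basis on [a, b]."""
    for a, b, coeffs in pieces:
        if a <= x <= b:
            return sum(c * x ** k for k, c in enumerate(coeffs))
    raise ValueError("point outside the domain")

def abs_paff_name(n):
    """A PAff-style name of |x| on [-1, 1]: the n-th approximant.
    Every approximant is exact, so the name has constant size."""
    return [(Fraction(-1), Fraction(0), [Fraction(0), Fraction(-1)]),
            (Fraction(0), Fraction(1), [Fraction(0), Fraction(1)])]

assert eval_piecewise(abs_paff_name(5), Fraction(-3, 4)) == Fraction(3, 4)
```

By contrast, a Poly-style name of |x| must use polynomials of rapidly growing degree (Proposition 12 below).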
The representation Fun is the most efficient representation which renders evaluation computable, in the sense that it satisfies the following universal property:
Proposition 4.
The following are equivalent for a representation $\delta$ of the continuous functions:
$\delta$ reduces to Fun in polynomial time.
Evaluation, i.e., the map $(f, x) \mapsto f(x)$, is polynomial-time computable with respect to $\delta$ (and the standard representations of the interval and of $\mathbb{R}$).
It is easy to see that evaluation is polytime computable with respect to Fun. Hence, if $\delta$ reduces to Fun, then evaluation is polytime computable with respect to $\delta$. Conversely, assume that $\delta$ renders evaluation polytime computable. Given a $\delta$-name of a function $f$ we can clearly evaluate $f$ on dyadic rational points in polynomial time, which yields “half” a Fun-name of $f$. It remains to show that a modulus of continuity of $f$ can be uniformly computed in polynomial time. Since $\delta$ renders evaluation polytime computable there exists a second-order polynomial which bounds the running time of some algorithm which computes the evaluation map. Since the domain is compact, we can assume that the running time of the algorithm on input $x$ and accuracy $n$ is bounded by a function $m(n)$ of $n$ alone (since the size of a name of $x$ can be bounded independently of $x$, cf. Remark 1). Since this function bounds the running time of an algorithm which computes $f(x)$ to accuracy $2^{-n}$, the algorithm can only inspect its argument $x$ to accuracy $2^{-m(n)}$, so that $m$ is a modulus of continuity of $f$. It is clearly second-order polytime computable in the name of $f$. ∎
Let $f$ be a continuous function. Then $f$ is polytime computable if and only if it has a polytime computable Fun-name.
On the other hand, the representation Poly is interesting since it allows for maximisation and integration in polynomial time. The following result is folklore, see e.g., [1, Algorithm 10.4]:
There exists a polytime algorithm which takes as input a dyadic polynomial $p$, a rational number $q$, and an accuracy requirement $n$ and outputs a list of disjoint intervals such that:
Every interval contains a solution to the equation $p(x) = q$.
Every solution to the equation $p(x) = q$ is contained in some interval.
Every interval has diameter at most $2^{-n}$.
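A simplified version of such a procedure can be sketched as exact bisection over dyadic endpoints. Unlike the algorithm referred to above, the sketch below only isolates solutions of $p(x) = q$ at which $p - q$ changes sign, and it prunes subintervals whose endpoint signs agree (so it can miss pairs of roots inside a pruned interval); it is meant only to convey the idea, with illustrative identifier names:

```python
from fractions import Fraction

def poly_eval(coeffs, x):
    """Evaluate a polynomial given by its monomial coefficients."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

def isolate_roots(coeffs, q, n, lo=Fraction(-1), hi=Fraction(1)):
    """Intervals of diameter <= 2^-n around sign-change solutions of p(x) = q."""
    g = lambda x: poly_eval(coeffs, x) - q
    out, stack = [], [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if g(a) * g(b) > 0:
            continue                 # no sign change: prune (see caveat above)
        if b - a <= Fraction(1, 2 ** n):
            out.append((a, b))       # small enough: report the interval
        else:
            m = (a + b) / 2
            stack.extend([(a, m), (m, b)])
    return out

# p(x) = x^3 satisfies p(x) = 1/4 exactly once in [-1, 1], at x = 4**(-1/3).
p = [Fraction(0), Fraction(0), Fraction(0), Fraction(1)]
ivs = isolate_roots(p, Fraction(1, 4), 8)
assert len(ivs) == 1 and ivs[0][1] - ivs[0][0] <= Fraction(1, 256)
```

All arithmetic is exact rational arithmetic, so the reported intervals are rigorous.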
Maximum and integral are uniformly polytime computable with respect to Poly.
Our goal is to fully understand the relationship between the representations we have just introduced with respect to polytime reducibility.
There exists a polytime algorithm which takes as input a piecewise rational function $f$ (in our standard notation) and returns as output a Lipschitz constant of $f$.
If $f = p/q$ is a rational function with $q(x) \geq 1$ for all $x \in [-1,1]$, then by the mean value theorem, a Lipschitz constant of $f$ is given by a bound on $|f'|$ over $[-1,1]$. Since $f' = (p'q - pq')/q^2$ and $q \geq 1$, it suffices to compute a bound on the absolute value of the polynomial $p'q - pq'$. If $g(x) = \sum_k a_k x^k$ then $|g(x)| \leq \sum_k |a_k|$ for all $x \in [-1,1]$. This is clearly computable in polynomial time. If $f$ is a piecewise rational function with finitely many pieces then a Lipschitz constant for $f$ is given by the maximum of the Lipschitz constants of its pieces. ∎
We have $\mathrm{Poly} \leq \mathrm{Frac}$, $\mathrm{PAff} \leq \mathrm{PPoly} \leq \mathrm{PFrac}$, $\mathrm{Frac} \leq \mathrm{PFrac}$, and $\mathrm{PFrac} \leq \mathrm{Fun}$.
The former reductions are immediate, since every polynomial is a rational function and a piecewise polynomial, and every rational function and piecewise polynomial is in particular piecewise rational. It hence suffices to show $\mathrm{PFrac} \leq \mathrm{Fun}$. We will use the universal property of Fun (Proposition 4) to do so, i.e., it suffices to prove that a piecewise rational function can be evaluated at a point in polynomial time.
Suppose we are given a piecewise rational function $f$, a point $x$ encoded as a name in the sense of Section 2, and an accuracy requirement $n$. By Proposition 8 we can compute a Lipschitz constant $L$ of $f$ in polynomial time. Query the name of $x$ for a dyadic rational approximation $d$ with $(L+1)\,|x - d| \leq 2^{-n-2}$. We can determine in polynomial time a piece of $f$ and a dyadic rational $e$ in its domain with $(L+1)\,|e - d| \leq 2^{-n-2}$. Now, a dyadic rational approximation $a$ to $f(e)$ to error $2^{-n-2}$ is computable in polynomial time. We have
$$|f(x) - a| \leq L\,|x - e| + 2^{-n-2} \leq 2^{-n-1} + 2^{-n-2} \leq 2^{-n}. \qquad ∎$$
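The coefficient-sum bound used in this proof is a few lines of exact rational arithmetic; the sketch below handles a single rational piece and uses illustrative helper names:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Product of two polynomials given by monomial coefficients."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    """Difference of two coefficient lists, padded to equal length."""
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) - (q[k] if k < len(q) else 0)
            for k in range(n)]

def poly_diff(p):
    """Formal derivative."""
    return [k * c for k, c in enumerate(p)][1:]

def lipschitz_bound(p, q):
    """Lipschitz constant of p/q on [-1, 1], assuming q(x) >= 1 there:
    |(p/q)'| = |p'q - pq'| / q^2 <= sum of |coefficients| of p'q - pq'."""
    num = poly_sub(poly_mul(poly_diff(p), q), poly_mul(p, poly_diff(q)))
    return sum(abs(c) for c in num)

# f(x) = x / (x^2 + 1): here p'q - pq' = 1 - x^2, so the bound is 2.
p, q = [Fraction(0), Fraction(1)], [Fraction(1), Fraction(0), Fraction(1)]
assert lipschitz_bound(p, q) == 2
```

The bound is crude (the true Lipschitz constant of this $f$ is $1$), but it is rigorous and computable in polynomial time, which is all the reduction needs.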
Remarkably, the reduction $\mathrm{Frac} \leq \mathrm{PFrac}$ reverses:
Theorem 10. We have $\mathrm{PFrac} \leq \mathrm{Frac}$.
The proof of Theorem 10 relies mainly on Newman’s theorem on the rational approximability of the absolute value function. To establish lower bounds in the reducibility lattice we need to employ Markov’s inequality. For a proof see e.g. the standard literature on approximation theory.
Lemma 11 (Markov’s inequality).
Let be a polynomial of degree on the interval . Then
On the interval we hence have
We have and .
The absolute value function is trivially polytime -computable. By Markov’s inequality, it is not polytime -computable: Assume that is a sequence of polynomials such that for all . Then on the interval we have and on the interval we have . Let denote the degree of . Applying Markov’s inequality to the polynomial on the interval yields:
Applying the inequality to on yields:
If then this implies that converges to and at the same time, which is absurd. It follows that the size of grows exponentially in . In particular, cannot be polytime computable.
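Markov's inequality can be checked numerically in its classical form on [-1, 1], where a degree-n polynomial p satisfies max|p'| ≤ n² max|p|, with equality attained by the Chebyshev polynomial T_n. This sketch only illustrates that classical statement, not the rescaled form used in the proof above.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Markov's inequality on [-1, 1]: max|p'| <= n^2 * max|p| for deg(p) = n.
# The Chebyshev polynomial T_n is extremal: max|T_n| = 1 and |T_n'(1)| = n^2.
n = 7
Tn = Chebyshev.basis(n)
xs = np.linspace(-1.0, 1.0, 20001)
max_p = np.max(np.abs(Tn(xs)))           # = 1
max_dp = np.max(np.abs(Tn.deriv()(xs)))  # = n^2 = 49, attained at x = 1
```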
For the converse direction we show that the polynomial  does not have a polynomial-size -name. Consider a piecewise linear approximation to  to error  with breakpoints  and values . We have , and hence for all :
We may hence assume without loss of generality that . Consider a segment . We have
Now, there exists a segment with . It follows that . ∎
Up to a result which is proved in the next section (Corollary 22), we arrive at a complete overview of the reducibility lattice:
The following diagram shows all reductions between the representations introduced, up to taking the transitive closure:
[Diagram: the reducibility lattice of the representations PPoly, Frac, PFrac, and FunPAff.]
No arrow reverses unless indicated.
Proposition 9 establishes the more obvious reductions. Proposition 12 implies that does not reduce to either or , for any such reduction would establish a reduction from to or vice versa. The reduction follows immediately from . The converse is Corollary 22 in Section 5. To see that , consider the family of functions . It is clearly uniformly polytime -computable in , but not uniformly polytime -computable, as any approximation to to error has a numerator of degree greater than . ∎
The class of polytime computable points with respect to the representation has a useful analytic characterisation which was proved by Labhalla, Lombardi, and Moutai  and strengthened by Kawamura, Müller, Rösnick, and Ziegler . For , , and let
denote the set of Gevrey  functions of level  with growth parameters  and . Note that  corresponds to the class of analytic functions. The results in [12, 8] imply in particular that the above hierarchy collapses on  for all fixed , , and :
It suffices to show that . Given a -name of a function , compute a polynomial approximation via Chebyshev interpolation. Since Chebyshev interpolation is a near-best approximation and functions in this class can be approximated efficiently by polynomials, the number of nodes we need in order to compute a polynomial approximation to error  is bounded polynomially in . Since we know the constants , , and , we can choose the right number of nodes in advance. See [8, Proposition 21 (e), Theorem 23 (b)] for details. Note also that the proof in  establishes a much stronger uniform result, where , ,  are not fixed but given as part of the input. ∎
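The geometric convergence behind this argument is easy to observe numerically in the analytic case, which the Gevrey classes above generalise. The sketch below interpolates exp at 16 Chebyshev nodes; the error is already near machine precision.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev, chebpts1

# Illustration (analytic case only): Chebyshev interpolation of exp on
# [-1, 1] converges geometrically, so accuracy 2^-n needs only O(n) nodes.
nodes = chebpts1(16)                                  # 16 Chebyshev nodes
p = Chebyshev.fit(nodes, np.exp(nodes), 15, domain=[-1, 1])
xs = np.linspace(-1.0, 1.0, 1001)
err = float(np.max(np.abs(p(xs) - np.exp(xs))))       # near machine precision
```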
Let for some constants . Then is polytime computable if and only if it has a polytime computable -name.
5 Bounded division for piecewise polynomials
We now establish the reduction by giving a polytime division algorithm for piecewise polynomials. Let be a continuous function. Let . A linear -interpolation of at is a piecewise linear function with breakpoints which satisfies .
There exists a polytime algorithm which takes as input a -name of a function , a list of points , and an error bound , and returns as output a linear -interpolation of at .
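A linear interpolation of this kind can be sketched as follows: given breakpoints and (approximate) values, build the piecewise linear function and evaluate it by binary search over the breakpoints. The names are illustrative, not the paper's notation.

```python
from bisect import bisect_right

# Sketch: piecewise linear function through points (x_i, y_i) with
# x_0 < ... < x_n, where y_i approximates f(x_i).

def make_pwl(xs, ys):
    def h(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect_right(xs, x) - 1            # segment containing x
        t = (x - xs[i]) / (xs[i + 1] - xs[i])  # position within the segment
        return (1 - t) * ys[i] + t * ys[i + 1]
    return h

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
h = make_pwl(xs, [x * x for x in xs])   # interpolate f(x) = x^2 exactly at xs
```

For f(x) = x² on a grid of width w, standard interpolation theory bounds the error by (w²/8)·max|f''| = 1/64 here.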
Algorithm 17 (Bounded Division).
Input: A non-constant polynomial with on . An accuracy requirement .
Output: A piecewise polynomial approximation to on to error .
Compute a Lipschitz constant of using Proposition 8 and use it to compute an upper bound on the range of of the form for some .
Use Theorem 6 to compute interval upper bounds on the solutions to the equations
to error .
Sort the intervals together with the boundary points (viewed as degenerate intervals) in ascending order to get a list
If two intervals overlap, refine them such that they are either disjoint or their union has diameter smaller than . In the latter case replace them with their union.
Compute a linear -interpolation of at the centres of the intervals.
The iteration employed in Algorithm 17 is the well-known Newton-Raphson division method.
In a practical implementation, the iteration should involve size-reduction to avoid blow-up of the degree.
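The scalar version of the Newton-Raphson division iteration can be sketched as follows; in the algorithm the same recurrence runs over polynomials, with the size-reduction mentioned above applied after each step. The names are ours.

```python
# Newton-Raphson reciprocal iteration: y <- y * (2 - a*y) converges
# quadratically to 1/a whenever the initial error e = 1 - a*y0 satisfies
# |e| < 1, since the error squares at each step.

def reciprocal(a, y0, steps):
    y = y0
    for _ in range(steps):
        y = y * (2 - a * y)   # new error = (old error)^2
    return y

y = reciprocal(3.0, 0.5, 6)   # approximates 1/3; initial error is -1/2
```

After 6 steps the error is (1/2)^64 / 3, far below double precision, so only logarithmically many steps are needed for accuracy 2^-n.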
Algorithm 17 is correct.
Let  be the union of the boundary points and the zeroes of  and , sorted in increasing order, so that  is monotone and convex or concave on each . On , let
be the solutions of the equations