1 Introduction
Checking or verifying a solution to a computational problem might be easier than computing a solution. In a certain sense, this is the content of the famous P ≠ NP hypothesis. In [39] Valiant made an attempt to clarify the principal relationship between the complexity of checking and evaluating. In particular, he asked whether any (string) function whose values can be checked in polynomial time can also be evaluated in polynomial time. Cryptographers hope that the answer to this question is negative, since it turns out to be intimately connected to the existence of one-way functions. Indeed, the inverse of a one-way function is not polynomial time computable, but membership in the graph of the function can be decided in polynomial time. The converse is also known to be true [20, 35] and equivalent to P ≠ UP.
The goal of this paper is to investigate the relationship between the complexity of computational and decisional tasks in an algebraic framework of computation, a line of research initiated by Lickteig [29, 30]. Unless stated otherwise, denotes a fixed field of characteristic zero. Are there families of polynomials over , for which checking the value can be done with a polynomial number of arithmetic operations and tests, but which cannot be evaluated with a polynomial number of arithmetic operations? We do not know the answer to this question. However, we will be able to show that the answer is negative under the restriction that the degree of grows at most polynomially in . Actually, our result is slightly weaker in the sense that we know it to be true only for a notion of approximative complexity.
1.1 Decision, Computation, and Factors
We discuss a basic relationship between the complexity of decision and computation in our algebraic framework of computation and raise some natural open questions.
By the straight-line complexity of a multivariate polynomial over we understand the minimal number of arithmetic operations sufficient to compute by a straight-line program without divisions from the variables and constants in . The decision complexity of is defined as the minimal number of arithmetic operations and tests sufficient for an algebraic computation tree to decide for given points in whether . If , we also allow tests. Clearly, (the accounts for the zero test). We define the exclusion complexity of similarly as in [30, 11]
We clearly have . For formal definitions, we refer to [10].
In the following, we assume that is the irreducible generator of a hypersurface in and either or . Let be a nonzero polynomial multiple of , say with a polynomial coprime to and . Then any straight-line program for can be used to exclude membership in the zero set of : implies , and the converse is also true, provided . Thus we may consider as a “generic decision complexity” of .
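To make the role of a polynomial multiple concrete, here is a small sketch in Python with SymPy (the polynomials `g` and `a` and the test points are hypothetical choices, not taken from the text): evaluating the multiple q = a·g excludes membership in the zero set of g wherever q does not vanish, and where the cofactor a is nonzero, the vanishing of q certifies membership.

```python
import sympy as sp

x, y = sp.symbols('x y')
g = x**2 + y**2 - 1      # irreducible generator of a hypersurface (the circle)
a = x + 3                # a cofactor coprime to g
q = sp.expand(a * g)     # a nonzero polynomial multiple of g

# q(p) != 0 implies g(p) != 0: a program for q "excludes" membership
p = {x: 2, y: 0}
assert q.subs(p) != 0 and g.subs(p) != 0

# conversely, where a does not vanish, q(p) == 0 forces g(p) == 0
p_on = {x: 0, y: 1}
assert q.subs(p_on) == 0 and a.subs(p_on) != 0 and g.subs(p_on) == 0
```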
The following well-known lemma provides a link between decisional and computational complexity (cf. [12, 3]). The proof is a rather straightforward consequence of the Nullstellensatz.
Lemma 1.1
Let be the irreducible generator of a hypersurface in or . Then .
Over the reals, we need both assumptions: that is irreducible (see the comment on question (2) below) and that the zero set of is a hypersurface (take over ). Over the complex numbers, one can relax these assumptions and show that if is squarefree with irreducible factors. Moreover, we remark that the conclusion of this lemma remains true over any infinite field if is the generator of the graph of a polynomial , that is, .
Under the assumption of the lemma we have . Asking about inequalities in the reverse direction, it is natural to raise the following questions:
(1)  
(2)  
(3) 
Again, denotes the irreducible generator of a hypersurface in or and denotes a polynomial. We have the following chain of implications: .
We believe that all three of these questions have negative answers, but we have been unable to prove this for irreducible . However, the following counterexamples are known for questions (1) and (2) when allowing for reducible polynomials , and assuming for question (2).
Referring to question (1), there exist univariate polynomials having reducible factors with a complexity exponential in the complexity of , a fact first discovered by Lipton and Stockmeyer [32]. The simplest known example illustrating this is as follows: Consider , where . By repeated squaring we get . On the other hand, one can prove that for almost all the random factor has a complexity which is exponential in , cf. [10, Exercise 9.8]. A similar reasoning can be made over the reals using Chebyshev polynomials.
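The repeated-squaring part of this example can be sketched as follows (Python with SymPy; the choice n = 4 is illustrative): n squarings compute a polynomial of degree 2^n, so its straight-line complexity is O(n) while its degree is exponential in n.

```python
import sympy as sp

x = sp.symbols('x')

def pow_by_squaring(base, n):
    # compute base**(2**n) with n squarings, i.e. a straight-line
    # program of length n for a polynomial of degree 2**n
    g = base
    for _ in range(n):
        g = sp.expand(g * g)
    return g

n = 4
p = pow_by_squaring(x, n) - 1   # x**(2**n) - 1, complexity O(n)
assert p == x**16 - 1
```

The point of the counterexample is that a typical factor of such a polynomial has no comparably short program.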
Commenting on question (2), we remark that the answer is negative if we drop the irreducibility assumption and assume . This is shown by the following trivial example from [12], demonstrating that may be exponentially larger than : Let have distinct real roots. Then using binary search, but if the roots of are algebraically independent over .
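The binary-search decision procedure can be sketched as follows (plain Python; the list of roots is a made-up example): for a polynomial with d distinct, sorted real roots, membership in the zero set is decided with O(log d) comparisons, independently of how expensive it is to evaluate the polynomial itself.

```python
from bisect import bisect_left

def is_root(roots_sorted, v):
    # decide membership in the zero set of f = prod (X - r_i)
    # with O(log d) comparisons via binary search on the sorted roots
    i = bisect_left(roots_sorted, v)
    return i < len(roots_sorted) and roots_sorted[i] == v

roots = sorted([0.5, 1.25, 2.0, 3.75])   # hypothetical distinct real roots
assert is_root(roots, 2.0)
assert not is_root(roots, 2.5)
```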
With regard to question (3), we note that its truth would imply that there are no “one-way functions” in the algebraic setting of computations with polynomials.
It would be interesting to find out whether the truth of the above questions is equivalent to the collapse of some complexity classes, as is the case in the bit model.
1.2 Main Results
The counterexamples discussed above established polynomials whose degree was exponential in the exclusion or decision complexity of , respectively. We restrict now our attention to factors having a degree polynomially bounded in the complexity of .
The Factor Conjecture from [7, Conj. 8.3] states that for polynomials
(4) 
A partial step towards establishing this conjecture is an older result due to Kaltofen [22], which can be seen as a byproduct of his work [23] on factoring polynomials given by straight-line program representations (see also [25]). Kaltofen proved that the complexity of any factor of is polynomially bounded in the complexity of and in the degree and the multiplicity of the factor .
Before stating the precise result, let us fix some notation. For the remainder of this paper, denotes an upper bound on the complexity of multiplying two univariate polynomials of degree over , that is, of computing the coefficients of the product polynomial from the coefficients of the given polynomials. It is well-known that for or , cf. [10]. We will assume that for and the subadditivity property .
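For concreteness, here is a naive sketch of the multiplication underlying this bound (plain Python, coefficient lists with the constant term first; the inputs are hypothetical): the school method uses O(d^2) coefficient operations, while FFT-based methods achieve the quasi-linear bounds cited above.

```python
def poly_mul(a, b):
    # school multiplication of coefficient lists: O(d^2) operations;
    # FFT-based algorithms bring M(d) down to O(d log d) over C
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + x)(1 - x) = 1 - x^2
assert poly_mul([1, 1], [1, -1]) == [1, 0, -1]
```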
Here is the precise statement of Kaltofen’s result, which was independently found by the author, compare [7, Thm. 8.14].
Theorem 1.2
Let with coprime polynomials . Let be the degree of . We suppose that is a field of characteristic zero. Then we have
Thus our Factor Conjecture claims that the dependence on the multiplicity can be omitted. It is known [22] that this is true in the case , in which case with , see Proposition 6.1 in the appendix.
The main result of this paper states that the dependence on the multiplicity can indeed be omitted when switching to an approximative complexity measure. The approximative complexity of a polynomial is the minimal cost of “approximative straight-line programs” computing approximations of with any precision required. A formal definition will be given in Section 2.
The precise formulation is as follows:
Theorem 1.3
Let be a field of characteristic zero. Assume that matrices over can be multiplied with arithmetic operations in . For of degree we have
Remark 1.4
An interesting consequence is the following degree bounded version of question (3):
Corollary 1.5
The approximative complexity of a polynomial is polynomially bounded in the decision complexity of the graph of and the degree of , namely . This remains true if we allow randomization with two-sided error.
Coming back to the discussion of one-way functions, we remark that Sturtivant and Zhang [38] obtained the following related result, which excludes the existence of certain one-way functions in the algebraic framework of computation. Let be bijective such that as well as are polynomial mappings. Then the complexity to evaluate is polynomially bounded in the complexity to evaluate the inverse and the maximal degree of the component functions of . Again, it is unknown whether the degree restriction can be omitted.
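A minimal illustration of the kind of maps considered by Sturtivant and Zhang (Python with SymPy; this triangular automorphism is a made-up example, not from the text): both the map and its inverse are polynomial, and composing them yields the identity.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# a triangular polynomial automorphism of the plane and its inverse,
# both polynomial mappings (hypothetical example)
F = (x1, x2 + x1**2)
Finv = (x1, x2 - x1**2)

# Finv composed with F is the identity map
comp = [sp.expand(c.subs({x1: F[0], x2: F[1]}, simultaneous=True)) for c in Finv]
assert comp == [x1, x2]
```

The degrees of the component functions are what enters the polynomial bound in their result.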
The paper is organized as follows: In Section 2 we introduce the concept of approximative complexity. Section 3 contains the proof of the main result. We then briefly discuss some applications in Section 4, where we also build the concept of approximative complexity into Valiant’s algebraic framework [40, 42] (see also [10, 7]). Section 5 is devoted to a more detailed analysis of the concept of approximative complexity. Finally, the appendix contains a proof of Theorem 1.2.
For some other aspects of the issues discussed in this paper see [9].
Acknowledgments: Thanks go to Erich Kaltofen for communicating to me his paper [22] and to an anonymous referee for pointing out the reference [38]. I am grateful to Alan Selman for answering my questions about the complexity of one-way functions.
2 Approximative Complexity
In complexity theory it has proven useful to study “approximative algorithms”, which use arithmetic with infinite precision and yet compute only an approximation of the desired solution, though with any precision required. This concept was systematically studied in the framework of bilinear complexity (border rank), where it has turned out to be one of the main keys to the currently best known fast matrix multiplication algorithms [13]. We refer to [10, Chap. 15] and the references there for further information.
Although approximative complexity is a very natural concept, it has been investigated in less detail for computations of polynomials or rational functions. Originally, it had been introduced by Strassen in a topological way [37]. Griesser [18] generalized most of the known lower bounds for multiplicative complexity to approximative complexity. Lickteig systematically studied the notion of approximative complexity with the goal of proving lower bounds [30]. In Grigoriev and Karpinski [19] the notion of approximative complexity is also employed for proving lower bounds.
It is not known how to meaningfully relate the complexity of trailing coefficients or of factors of a polynomial to the complexity of the polynomial itself. However, by allowing approximative computations, we are able to establish quite satisfactory reductions in these cases. The deeper reason why this is possible seems to be the lower semicontinuity of the approximative complexity, which allows a controlled passage to the limit and can be used in perturbation arguments.
Assume the polynomial is expanded with respect to :
We do not know whether the complexity of the trailing coefficient can be polynomially bounded in the complexity of . However, we can make the following observation. For the moment assume that is the field of real or complex numbers. We have and for all . Thus we can approximate with arbitrary precision by polynomials having complexity at most . We will say that has “approximative complexity” at most . In what follows, we will formalize this in an algebraic way; a topological interpretation will be given later. Throughout the paper, is a rational function field in the indeterminate over the field and denotes the local subring of consisting of the rational functions defined at . We write for the image of under the morphism induced by .
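The observation can be checked on a small example (Python with SymPy; the polynomial f and the trailing order e = 1 are hypothetical choices): substituting y → εy and dividing by ε^e yields polynomials of essentially the same complexity as f that converge to the trailing term as ε → 0.

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

f = (x + y)**3 - x**3       # expansion in y: 3*x**2*y + 3*x*y**2 + y**3
e = 1                       # order of the trailing term with respect to y

# the scaled polynomial has the same complexity as f up to O(1) operations
approx = sp.expand(f.subs(y, eps * y) / eps**e)

# degeneration epsilon -> 0 recovers the trailing term
assert approx.subs(eps, 0) == 3 * x**2 * y
```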
Definition 2.1
Let . The approximative complexity of the polynomial is the smallest natural number such that there exists in satisfying and . Here the complexity is to be interpreted with respect to the larger field of constants .
Even though refers to division-free straight-line programs, divisions may occur implicitly, since our model allows the free use of any elements of as constants (e.g., division by powers of ). In fact, the point is that even though is defined with respect to the morphism , the intermediate results of the computation may not be! Note that .
We remark that the assumption that any elements of are free constants is just made for conceptual simplicity. We may as well require to build up the needed elements of from and elements of . It is easy to see that this would not change our main result (i.e., Theorem 1.3).
Assume that over , say and . Let be the supremum of over all and with . Then we have for such and that . Therefore, for each we can compute on input an approximation of with absolute error less than using only arithmetic operations. If we additionally required in the definition of that the needed constants in be built up from , then would even mean that one can compute an approximation with error less than using only arithmetic operations on input and .
Example 2.2
Let us illustrate the notion of approximative complexity with an example. The convex hull of the support of a polynomial
is called the Newton polytope of
. To a supporting hyperplane
of we may assign the corresponding initial term polynomial. We claim that
Indeed, we may obtain as a “degeneration” of as follows. Assume that is the equation of , say on . We can always achieve that , . Then we have
using the convenient, intuitive big-Oh notation. Therefore, and , which proves our claim. (Recall that the powers of are considered as constants.)
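The degeneration can be verified on a small example (Python with SymPy; the polynomial, the normal vector a, and the value b are hypothetical choices): substituting x_i → ε^{a_i} x_i and dividing by ε^b leaves exactly the initial terms supported on the chosen face when ε → 0.

```python
import sympy as sp

x1, x2, eps = sp.symbols('x1 x2 epsilon')

f = x1**2 + x1 * x2 + x2**3     # support: (2,0), (1,1), (0,3)
a = (1, 1)                      # normal vector of the supporting hyperplane
b = 2                           # minimal value of a.x on the support

g = f.subs({x1: eps**a[0] * x1, x2: eps**a[1] * x2}, simultaneous=True)
init = sp.expand(g / eps**b).subs(eps, 0)   # the initial term polynomial

# only the monomials on the face a.x = b survive the limit
assert init == x1**2 + x1 * x2
```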
The next lemma states some of the basic properties of .
Lemma 2.3

(Semicontinuity) If is defined over and , then . Note that the quantity is well-defined for a polynomial over (adjoining a further indeterminate to ).

(Elimination of constants) Let be a field extension of of degree at most and be a polynomial over . Then , where denotes the approximative complexity of interpreted as a polynomial over (i.e., constants in may be used freely).

(Transitivity) The approximative complexity to compute from and the variables is defined in a natural way. We have , and an analogous inequality is true for the computation of several polynomials.
Proof. (1) We start with a general observation: Let be a rational function in two variables . We assume that , viewed as a rational function in over , is defined at with value . Moreover, we assume that the rational function is defined at with value . Then is defined at with value for sufficiently large . Indeed, if is the denominator of and , , then it is easy to check that it suffices to take .
Let now be such that and . An optimal computation of takes place in a finitely generated subring of . The morphism is defined on this subring if is chosen sufficiently large. Then we have for . If is chosen sufficiently large, we have by the observation at the beginning of the proof. This implies the claim.
(2) This follows easily from [7, Prop. 4.1(iii)].
(3) By definition there exists such that and . Moreover, there exists such that and
Let be an optimal straight-line program computing from , variables , and constants in . We replace by a new indeterminate and denote the element thus corresponding to by (abusing notation). If we replace the input by , then the program , using the same constants in as , will compute an element . Clearly, .
Since the computation of takes place in a finitely generated subring of , the morphism is defined on this subring if is chosen large enough. If we denote the image of under this morphism by and the image of by , then we have
Moreover, we clearly have . By the transitivity of , we get . From the observation at the beginning of the proof of part (1) of the lemma, we conclude that for sufficiently large . This implies the claim.
We proceed with a topological interpretation of approximative complexity, which points out the naturality of this notion from a mathematical point of view. It will not be needed for the proof of the main Theorem 1.3.
Assume to be an algebraically closed field. There is a natural way to put a Zariski topology on the polynomial ring as a limit of the Zariski topologies on the finite dimensional subspaces for . If is the field of complex numbers, we may define the Euclidean topology on in a similar way.
If satisfies , then it is easy to see that lies in the closure (Zariski or Euclidean) of the set . Indeed, we have for all but finitely many and . Alder [1] has shown that the converse is true and obtained the following topological characterization of the approximative complexity.
Theorem 2.4
Let be algebraically closed. The set is the closure of the set for the Zariski topology. If , this is also true for the Euclidean topology.
This essentially claims that is the largest lower semicontinuous function of bounded by . The proof of Theorem 5.7 in Section 5.2 implies the above result as a special case. We remark that this theorem can also be easily deduced from [10, Lemma 20.28]. One can show that the above statement is also true over the reals with the Euclidean topology, similarly as in Lehmkuhl and Lickteig [26].
3 Approximative Complexity of Factors
We will supply here the proof of our main result, Theorem 1.3. The outline of the proof is as follows: Let with coprime and and assume . After a suitable coordinate transformation one can interpret the zero set of the factor locally as the graph of some analytic function . In order to cope with a possibly large multiplicity of , we apply a small perturbation to the polynomial without affecting its complexity too much. This results in a small perturbation of . We now compute the homogeneous parts of the perturbed by a Newton iteration up to a certain order. Using efficient polynomial arithmetic, this gives us an upper bound on the approximative complexity of the homogeneous parts of up to a predefined order (Proposition 3.4). In the special case where the factor is the generator of the graph of a polynomial function, we are already done. This is essentially the content of Section 3.2.
In a second step, elaborated in Section 3.3, we view the factor as the minimal polynomial of in over the field . We show that the Taylor approximations up to order uniquely determine the factor and compute the bihomogeneous components of with respect to the degrees in the variables and by fast linear algebra.
3.1 Preliminaries
The following result is obtained by a straightforward application of a technique introduced by Strassen [36] for the computation of homogeneous components and avoiding divisions. A proof will be sketched in Section 5.1.
Proposition 3.1
Assume that is the bihomogeneous decomposition of the polynomial with respect to the total degree in the variables and the degree . Thus is a homogeneous polynomial in the variables of degree . Then we have for all
and the same is true if the complexity is replaced by the approximative complexity .
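Strassen's substitution behind this decomposition can be sketched as follows (Python with SymPy; f and the degree bound are made-up inputs): replacing each variable x_i by t·x_i turns the homogeneous part of degree j into the coefficient of t^j.

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
f = 1 + x1 + x1 * x2 + x2**3

def homogeneous_parts(f, gens, d):
    # substitute x_i -> t * x_i; the degree-j homogeneous component
    # of f becomes the coefficient of t**j in the result
    ft = sp.expand(f.subs({g: t * g for g in gens}, simultaneous=True))
    return [ft.coeff(t, j) for j in range(d + 1)]

parts = homogeneous_parts(f, [x1, x2], 3)
assert parts == [1, x1, x1 * x2, x2**3]
```

In a straight-line program one carries the coefficients of t along symbolically, which accounts for the polynomial blow-up in the complexity bound.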
Part (1) of the next lemma follows immediately from the well-known algorithms for the multiplication and division of univariate power series described in [10, §2.4] by interpreting the homogeneous components of a multivariate power series as the adic coefficients of the transformed series . Part (2) of this lemma is obtained from part (1) by applying Horner’s rule.
Lemma 3.2

We can compute the homogeneous parts up to degree of the product and of the quotient (if ) of multivariate power series and from the homogeneous parts of and up to degree by arithmetic operations.

Assume that the multivariate power series and are given by their homogeneous parts up to degree . Then we can compute from this data the homogeneous parts of up to degree by arithmetic operations.
We remark that nonscalar operations are needed for the composition problem (2) in the generic case. For proving this, we assume that we have just one variable and choose for a constant power series: . Let . The problem then reduces to the simultaneous evaluation of for , a problem known to be of nonscalar complexity , see [10, Exercise 6.2].
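Part (2) of Lemma 3.2 can be sketched as follows (Python with SymPy, one variable for simplicity; the inputs are hypothetical): the outer polynomial is evaluated at the power series by Horner's rule, truncating after every step so that only the parts up to the target degree are ever carried along.

```python
import sympy as sp

t = sp.symbols('t')

def compose_mod(f_coeffs, g, d):
    # Horner's rule: evaluate f (coefficients, constant term first) at the
    # power series g, truncating at degree d after each step
    acc = sp.Integer(0)
    for c in reversed(f_coeffs):
        acc = sp.rem(sp.expand(acc * g + c), t**(d + 1), t)
    return acc

# f(u) = 1 + u + u^2 composed with g = t + t^2, truncated at degree 2
res = compose_mod([1, 1, 1], t + t**2, 2)
assert sp.expand(res - (1 + t + 2 * t**2)) == 0
```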
3.2 Approximative Computation of Graph
We need the following lemma.
Lemma 3.3
Let be coprime, irreducible and . Then there is a field extension of degree at most over and a point such that
Moreover, we may assume in this statement that if either or if and is the irreducible generator of a hypersurface in .
Proof. The claim for is a straightforward consequence of the Nullstellensatz. In the case we apply Theorem 4.5.1 in [5], which tells us that is Zariski dense in the zero set of and that the vanishing ideal of this zero set is generated by . This implies the claim.
In the general case, we apply a linear coordinate transformation , for suitable , in order to achieve that is monic of degree with respect to the variable . From now on we write . Since is irreducible and are coprime, the resultants and in with respect to the variable are not the zero polynomials. We choose a point where these resultants do not vanish. From the properties of the resultant we conclude that the univariate polynomials and are coprime and that is squarefree. Let be a root of in some extension field of of degree at most . Then and do not vanish at and the point satisfies the claim of the lemma.
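The resultant argument in this proof can be illustrated concretely (Python with SymPy; g, h, and the chosen point are hypothetical): both resultants are nonzero polynomials, and at a point where neither vanishes, the specialized univariate polynomials are coprime and squarefree.

```python
import sympy as sp

x, y = sp.symbols('x y')
g = y**2 - x                             # a hypothetical irreducible factor
h = y + 1                                # a cofactor coprime to g

r1 = sp.resultant(g, h, y)               # nonzero <=> g, h coprime in y
r2 = sp.resultant(g, sp.diff(g, y), y)   # nonzero at a <=> g(a, y) squarefree
assert r1 == 1 - x
assert r2 == -4 * x

# at x = 2 both resultants are nonzero, so g(2, y) is squarefree and
# coprime to h(2, y), as required for the choice of the point
assert r1.subs(x, 2) != 0 and r2.subs(x, 2) != 0
```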
We assume now that we are in the situation of Theorem 1.3. Without loss of generality we may assume that is irreducible (apply Theorem 1.3 to the irreducible factors of and use the subadditivity and monotonicity of ). From now on we use the notations and .
Let , where and are coprime such that . We choose the field extension and the point according to Lemma 3.3. To simplify notation, we assume that , an assumption which will be eliminated at the end of Section 3.3 at the price of an additional factor in the complexity bound.
We are now going to transform the polynomials into a special form by suitable linear transformations. By a coordinate shift we can always achieve that
. By a substitution we may achieve that the degree of in equals and that does not vanish. Indeed, if denotes the homogeneous component of of degree , then the coefficient of in equals . Moreover, . Hence it suffices to choose such that this linear combination does not vanish and such that . By scaling, we may assume without loss of generality that is monic with respect to . In the following, we will assume that this transformation has already been done, i.e., , which results in a complexity increase of at most . Note that if all the variables occur in . Summarizing, we have achieved the following by a suitable choice of a linear transformation:
(5) 
The implicit function theorem implies that there exists a unique formal power series such that
(6) 
Moreover, this power series can be recursively computed by the following Newton iteration: if we put and define
(7) 
then we have quadratic convergence of the towards , in the sense that , where denotes the maximal ideal of (cf. [10, Theorem 2.31]).
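The Newton recursion (7) can be sketched in code (Python with SymPy, a single variable x for simplicity; the polynomial f here is a hypothetical example whose partial derivative with respect to y does not vanish at the origin, so no perturbation is needed): after s steps the iterate agrees with the implicit power series modulo terms of degree 2^s.

```python
import sympy as sp

x, y = sp.symbols('x y')

def newton_series(f, steps):
    # y_{s+1} = y_s - f(x, y_s) / (df/dy)(x, y_s), truncated at degree 2^(s+1);
    # quadratic convergence: y_s agrees with the implicit series mod x**(2**s)
    fy = sp.diff(f, y)
    phi = sp.Integer(0)
    prec = 1
    for _ in range(steps):
        prec *= 2
        delta = sp.series(f.subs(y, phi) / fy.subs(y, phi), x, 0, prec).removeO()
        phi = sp.rem(sp.expand(phi - delta), x**prec, x)
    return phi

f = y**2 + y - x             # f(0,0) = 0 and df/dy(0,0) = 1 != 0
phi = newton_series(f, 2)    # correct modulo x**4
assert sp.expand(phi - (x - x**2 + 2 * x**3)) == 0
```

Substituting back, f(x, φ) vanishes to order 4, as the quadratic convergence predicts.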
It is easy to see that if the partial derivative did not vanish, then the above power series could also be recursively computed by the Newton recursion (7) with replaced by . However, always vanishes for multiplicities . The key idea is now to enforce the nonvanishing of this partial derivative by a suitable perturbation of the given polynomial . By doing so, we have to content ourselves with an approximative computation of the factor .
Based on these ideas, we prove the following assuming the conditions (5):
Proposition 3.4
The homogeneous parts of of degree satisfy
Proof. Note that , viewed as a polynomial in over , is the minimal polynomial of over . W.l.o.g. we may assume that is not a rational function (otherwise , would be linear, and the claim obvious).
We define the perturbed polynomial over the coefficient ring . It is clear that and . By a straightforward calculation we get
Assumptions (5) tell us that with , hence
and we conclude that this partial derivative does not vanish ().
As in the reasoning before, the implicit function theorem implies that there exists a unique formal power series over the field such that and this power series can be recursively computed by the Newton iteration
(8) 
with quadratic convergence: .
Claim: is defined over the coefficient ring for all .
We prove this claim by induction on , the induction start being clear. So let us assume that is defined over and set . By applying the morphism we obtain
The first parenthesis maps under the substitution to , which is nonzero by our assumptions. The second factor can only vanish if , since the power series is uniquely determined by the conditions (6). In this case, would be a rational function, which we have excluded at the beginning of the proof. We have thus shown that is nonzero. By equation (8) this implies that is defined over , which proves the claim.
The claim implies that is defined over . From we get , hence , as . We conclude that . If denotes the homogeneous part of of degree , we have for . This implies for that
As a word of warning, we point out that a certain care is necessary in these arguments. For instance, Example 3.5 below shows that in general .
We turn now to the algorithmic analysis of the proof. First of all we note that . A moment’s thought shows that also . In order to prove the proposition it is enough to show that
(9) 
where . In fact, by the semicontinuity of (Lemma 2.3(1)), we only need to prove this estimate for approximative complexity on the lefthand side.
The following computation deals with polynomials in the variables, which are truncated at a certain degree and represented by their homogeneous parts up to this degree. We obtain from Proposition 3.1 for the bihomogeneous decomposition of that
(10) 
In the following, we assume that we have already computed the bihomogeneous components for .
Inductively, we suppose that we have computed the homogeneous parts of up to degree . The main work of one Newton step (8) consists in the computation of the substituted polynomials and . By Lemma 3.2 we can compute the homogeneous parts up to degree of by arithmetic operations. Analogously, we get the homogeneous parts up to degree of by the same number of arithmetic operations. By a division and a subtraction we obtain from this the homogeneous parts of up to degree using further arithmetic operations. Altogether, we obtain
by the monotonicity and subadditivity of . The assertion (9) follows from this estimate and equation (10) by the transitivity of approximative complexity (Lemma 2.3(3)).
Example 3.5
Consider the bivariate polynomial and put , . Then the conditions (5) are satisfied. The first Newton iterate according to (7) satisfies and the power series defined by (6) has the expansion
As in the proof of Proposition 3.4 we set . A straightforward computation (e.g., using a computer algebra system) yields for the first Newton approximation according to (8) that
Therefore, . On the other hand, we note that the expansion of starts as follows
and we see that . Note that the fourth order term of this expansion is not defined for even though is defined under this substitution!
3.3 Reconstruction of Minimal Polynomial
Consider the bihomogeneous decomposition . Let be an additional indeterminate and perform the substitution . The condition then translates to
for any . Moreover, we have and for , since is monic of degree in . The next lemma states that these conditions uniquely determine the bihomogeneous components of if we choose . The proof is based on well-known ideas from the application of the LLL algorithm to polynomial factoring [28] (see also [17, Lemma 16.20]), adapted to the setting of a polynomial ring.
Lemma 3.6
By comparing the coefficients of the powers of the indeterminate , one can interpret the conditions
as a system of linear equations over the field in the unknowns . (There are equations and unknowns). This linear system has as the unique solution the bihomogeneous components of .
Proof. We define the bivariate polynomial over and assign to a solution of the above linear system of equations the bivariate polynomial . Note that is an irreducible polynomial in over since we assume to be irreducible and monic with respect to . The polynomial is an approximative common root of and in the sense that
The resultant of and with respect to satisfies the degree estimate
which is easily seen from the description of the resultant as the determinant of the Sylvester matrix (cf. [17, §6.3]). It is well-known that there exist polynomials such that . Substituting the approximative common root for in this equation implies that , hence the resultant vanishes. Since is irreducible, it must be a factor of over . However, we assume and both to be monic with respect to