1 Introduction
Hybrid symbolic-numeric integration of rational functions is interesting for several reasons. First, a formula, not a number or a computer program or subroutine, may be desired, perhaps for further analysis such as by taking asymptotics. In this case one typically wants an exact symbolic answer, and for rational functions this is in principle always possible. However, an exact symbolic answer may be too cluttered with algebraic numbers or lengthy rational numbers to be intelligible or easily analyzed by further symbolic manipulation. See, e.g., Figure 1. Discussing symbolic integration, Kahan [7], in his typically dry way, gives an example “atypically modest, out of consideration for the typesetter”, and elsewhere has rhetorically wondered: “Have you ever used a computer algebra system, and then said to yourself, as screensful of answer went by, ‘I wish I hadn’t asked’?” Fateman has addressed rational integration [5], as have Noda and Miyahiro [8, 9], for this and other reasons.
Second, there is interest due to the potential to carry symbolic-numeric methods for rational functions forward to transcendental integration, since the rational function algorithm is at the core of more advanced algorithms for symbolic integration. Particularly in the context of exact input, which we assume, it can be desirable to have an intelligible approximate expression for an integral while retaining the exact expression of the integral for subsequent symbolic computation. The ability to do this is a feature of one of our algorithms that alternative approaches, particularly those based on partial fraction decomposition, do not share.
Besides intelligibility and retention of exact results, one might be concerned with numerical stability, or perhaps efficiency of evaluation. We consider stability issues in Sections 4 and 6. We remark that the algorithm we present here has quite superior numerical stability in many cases, with good structured backward error and highly accurate answers, while providing the more intelligible answers we desire.
We emphasize that the goal of this algorithm is not to produce numerical values of definite integrals of rational functions, although it can be used for such. The goal is to produce an intelligible formula for the antiderivative which is correct in an approximate sense: the derivative of the answer produced will be another rational function near to the input and, importantly, of the same form, in that the denominator will have the correct degrees of its factors in its squarefree factorization and the residues in its partial fraction decomposition will also have the same multiplicity.¹

¹ Note that strict preservation of the form of the integrand is not quite achieved for the PFD method described below, since the derivative cannot be simplified into this form without using approximate gcd. Thus, with exact computation, the degrees of the numerator and denominator are larger in general than those of the exact integrand.
1.1 Symbolic-Numeric Integration of Rational Functions
As indicated above, the combination of symbolic and numerical methods in the integration of rational functions is not new. Noda and Miyahiro [8, 9] developed a symbolic-numeric, or hybrid, method to integrate rational functions based on the use of approximate symbolic algorithms for noisy data, numerical rootfinding and exact symbolic integration methods. Fateman [5] advocates a simpler hybrid approach, largely to produce a fast method that makes symbolic results more useful and more palatable, avoiding the “surd” or “RootOf” notation in the results of symbolic integrations. Both approaches work with the assumption that the input rational function has floating point coefficients.
For the extant symbolic-numeric algorithms for rational function integration, the approach is to be as sparing as possible in the use of symbolic algorithms, to minimize their expense, in particular given that floating point input is assumed to be imprecise. In contrast, given that our working assumption is that the input rational function is exact, the present paper is dealing with a somewhat different problem, viz., the approach involves the injection of numerical methods into an otherwise purely symbolic context. As was pointed out above, the reasons such an approach is desirable include intelligibility, retention of exact results and stable or efficient evaluation. Since it is accuracy, speed and stability that matter in the context of scientific computing, a symbolic package that provides a suitable balance of these desiderata in a way that can be merged seamlessly with other scientific computations, as our implementation provides, has considerable advantages over CAS-style symbolic computation with exact roots.
The usual approach to symbolic integration here begins with a rational function $f = A/B \in \mathbb{Q}(x)$, with $\deg(A) < \deg(B)$ (ignoring any polynomial part, which can be integrated trivially), and computes an integral in two stages:

rational part: computes a rational function $C/D$ such that
(1) $\int \frac{A}{B}\,dx = \frac{C}{D} + \int \frac{G}{H}\,dx$,
where the integral on the right-hand side evaluates to a transcendental function (log and arctan terms);

transcendental part: computes the second (integral) term of the expression (1) above yielding, after post-processing,
(2) $\int \frac{G}{H}\,dx = \sum_i c_i \log\big(L_i(x)\big) + \sum_j d_j \arctan\big(T_j(x)\big)$,
with $c_i, d_j \in K$, $K$ being some algebraic extension of $\mathbb{Q}$.
In symbolic-numeric algorithms for this process some steps are replaced by numeric or quasi-symbolic methods. Noda and Miyahiro use an approximate Horowitz method (involving approximate squarefree factorization) to compute the rational part and either the Rothstein-Trager (RT) algorithm or (linear/quadratic) partial fraction decomposition (PFD) for the transcendental part (see Section 2 for a brief review of these algorithms). The algorithm considered by Fateman avoids the two-stage process and proceeds by numerical rootfinding of the denominator (with multiplicities) and PFD to compute both the rational and transcendental parts. In both cases, the working assumption is that the input uses floating point numbers that are subject to uncertainty or noise, and the numerical algorithms use double precision.
Part of the power of symbolic algorithms is their ability to preserve structural features of a problem that may be very difficult to preserve numerically. Particularly given our focus on exact input, then, we are interested in preserving as much structure of the problem as possible if we are to resort to the use of numerical methods for rootfinding. Our implementation relies on the sophisticated rootfinding package MPSolve, which provides a posteriori guaranteed bounds on the relative error of all the roots for a user-specified tolerance $\epsilon$. To balance efficiency of computation and structure-preservation, we use more efficient symbolic algorithms where possible, such as the Hermite method for the rational part, and consider two methods of computing the transcendental part: one that computes the exact integral using the Lazard-Rioboo-Trager (LRT) method followed by numerical approximation, and another that uses a multiprecision PFD method to compute a nearby integrand that splits completely and then performs a structured integration. For more details on the symbolic algorithms, see Section 2. Our symbolic-numeric algorithms are discussed in Section 3.
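For readers who want to experiment without MPSolve, its role can be imitated, minus the a posteriori certificates, by any multiprecision rootfinder. A minimal sketch using mpmath (the polynomial is an arbitrary illustration):

```python
import mpmath as mp

# Stand-in for MPSolve: multiprecision rootfinding with an error estimate.
# MPSolve additionally certifies the root error a posteriori; mpmath's
# estimate is not a guarantee.
mp.mp.dps = 50                      # work with 50 significant digits

coeffs = [1, 0, -2, 1]              # x^3 - 2x + 1 = (x - 1)(x^2 + x - 1)
roots, err = mp.polyroots(coeffs, error=True)
# err is mpmath's estimate of the maximum error in the returned roots
```

At this working precision the roots are accurate far beyond any tolerance a user would typically request, which is the regime the algorithms below assume.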
The advantage of combining multiprecision numerical software with symbolic algorithms is that it allows the user to specify a tolerance on the error of the symbolic-numeric computation. This, together with a structured backward and forward error analysis of the algorithm, then allows the result to be interpreted in a manner familiar to users of numerical software, but with additional guarantees on structure-preservation. We provide such an analysis of the structured error of our algorithm in Section 4.
An interesting feature of the backward stability of the algorithm is that the computed integral can be regarded as the exact integral of a slightly perturbed input, and, as stated previously, one of the correct form (modulo the need for approximate gcd). Insofar as the input rational function is the result of model construction or an application of approximation theory, it is subject to error. Thus, the input, though formally exact, is nevertheless still an approximation of the system it represents. Assuming the model approximation error is small, this means that the “true” rational function the input represents is some nearby rational function in a small neighbourhood of the input, for example in the sense of the space determined by variation of its coefficients. The backward stability therefore shows that the integral actually computed is the integral of another nearby rational function, within another small neighbourhood whose size we have some control over. In a manner similar to numerical analysis, then, by an appropriate choice of tolerance, we can ensure that the latter neighbourhood is smaller than the former, so that the numerical perturbation of the problem is smaller than the model approximation error. The upshot of this is that the use of backward error analysis shows how a symbolic-numeric algorithm can be compatible with the spirit of uncertain input, even if the input is assumed to be exact. That is, we are assuming that the modeling procedure got the “right kind” of rational function to give as input, even if its data is uncertain in other ways.
This shows how a backward error analysis can be useful even in the context of exact integration. In the general case of exact input, however, a backward error analysis alone is not enough. This is why we also provide a forward error analysis, giving a posteriori assurance of a small forward error sufficiently far away from singularities of the integrand. This analysis also appears in Section 4.
Our algorithm may be adapted to take truly uncertain input by using additional symbolic-numeric routines, such as approximate GCD and approximate squarefree factorization, in order to detect nearby problems with increased structure. Such an approach would shift the problem onto the pejorative manifold (the nearest most singular problem, see [6]), which protects against ill-conditioning of the problem. The symbolic-numeric structured PFD integration we propose already takes this approach. Moreover, the structured error analysis of our algorithm then entails that the problem stays on the pejorative manifold after the computation of the integral. Since there have been considerable advances in algorithms for approximate polynomial algebra since the time of writing of [9], such as the ApaTools package of Zeng [12], the combination of error control and singular problem detection could yield a considerable advance over the early approach of Noda and Miyahiro.
2 Methods for Exact Integration of Rational Functions
We begin by reviewing symbolic methods for integrating rational functions.² Let $f \in \mathbb{Q}(x)$ be a rational function over $\mathbb{Q}$ not belonging to $\mathbb{Q}[x]$. There exist polynomials $P, A, B \in \mathbb{Q}[x]$ such that $f = P + A/B$ with $\deg(A) < \deg(B)$ and $\gcd(A, B) = 1$. Since $P$ is integrated trivially, we ignore the general case and assume $f = A/B$ with $\deg(A) < \deg(B)$. Furthermore, thanks to Hermite reduction, one can extract the rational part of the integral, leaving a rational function with squarefree denominator remaining to integrate. For the remainder of this section, then, we will assume that the function to integrate is given in the form $A/B$, with $\deg(A) < \deg(B)$ and $B$ squarefree.

² The following review is based in part on the ISSAC 1998 tutorial [3] and the landmark textbook [2] of M. Bronstein.
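The shape of this decomposition, a rational part plus log and arctan terms, can be checked with SymPy's exact rational integrator (this is not the bpas implementation discussed later, and the integrand is chosen arbitrarily):

```python
import sympy as sp

x = sp.symbols('x')
# proper rational function with a repeated factor in the denominator
f = (x**3 + x + 1) / (x**2 + 1)**2

F = sp.integrate(f, x)
# mathematically, up to a constant,
# F = x/(2*(x**2 + 1)) + log(x**2 + 1)/2 + atan(x)/2:
# a rational part plus a transcendental (log and arctan) part
```

Differentiating the result and simplifying recovers the integrand exactly, which is the property the approximate algorithms below relax in a controlled way.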
Partial-fraction decomposition (PFD) algorithm. The partial fraction decomposition algorithm for rational functions in $\mathbb{Q}(x)$ can be presented in different ways, depending on whether one admits complex numbers in expressions. We present a method based upon a complete factorization of the denominator over $\mathbb{C}$, followed by its conversion into an expression containing only constants from $\mathbb{R}$.
Consider the splitting of $B$ expressed in the form
$B = c\,\prod_{i=1}^{m}(x - b_i)\,\prod_{j=1}^{n}(x - z_j)(x - \bar z_j),$
separating real roots $b_i$ from complex conjugate pairs $(z_j, \bar z_j)$, where $z_j = u_j + i\,v_j$. Then there exist $a_i \in \mathbb{R}$ and $c_j \in \mathbb{C}$ such that
(3) $\frac{A}{B} = \sum_{i=1}^{m} \frac{a_i}{x - b_i} + \sum_{j=1}^{n}\left(\frac{c_j}{x - z_j} + \frac{\bar c_j}{x - \bar z_j}\right).$
The numerator quantities corresponding to the roots we call residues, by analogy to complex analysis. Note that in the case here where $B$ is squarefree, the residues can be computed by the formula $a_i = A(b_i)/B'(b_i)$ (and likewise $c_j = A(z_j)/B'(z_j)$).
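Since $B$ is squarefree here, the residue formula pairs directly with numerical rootfinding; a small NumPy sketch (function name and example are illustrative):

```python
import numpy as np

def roots_and_residues(A, B):
    """Roots b_i of a squarefree denominator B and the residues
    a_i = A(b_i)/B'(b_i) of the proper rational function A/B,
    with A and B given as coefficient lists in the monomial basis."""
    b = np.roots(B)                                   # numeric roots of B
    a = np.polyval(A, b) / np.polyval(np.polyder(B), b)
    return b, a

# A/B = 1/(x**2 - 1): residues 1/2 at x = 1 and -1/2 at x = -1
b, a = roots_and_residues([1.0], [1.0, 0.0, -1.0])
```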
The real root terms are easily integrated to yield terms of the form $a_i \log(x - b_i)$. Extracting pairs of terms $\frac{c_j}{x - z_j} + \frac{\bar c_j}{x - \bar z_j}$ from (3), and writing $c_j = \alpha_j + i\beta_j$, we obtain pairs of complex log terms whose real parts combine to form a single real log term of the form $\alpha_j \log(x^2 - 2u_jx + u_j^2 + v_j^2)$. For the remaining imaginary parts, making use of the observation of Rioboo that $\frac{d}{dx}\arctan\!\big(\tfrac{X}{Y}\big) = \frac{d}{dx}\,\tfrac{i}{2}\log\!\big(\tfrac{X + iY}{X - iY}\big)$, for $X, Y \in \mathbb{R}[x]$ (see [2], pp. 59ff.), we obtain a term in the integral of the form $-2\beta_j \arctan\!\big(\tfrac{x - u_j}{v_j}\big)$.
Where there are repeated residues in the PFD it is possible to combine terms of the integral together. The combination of logarithms with a common residue $a$ simply requires computing the product of their arguments, since $a\log(V) + a\log(W) = a\log(VW)$. For the arctangent terms the combination of terms with a common coefficient can be accomplished by recursive application of the rule
(4) $\arctan\!\big(\tfrac{X}{Y}\big) + \arctan\!\big(\tfrac{V}{W}\big) = \arctan\!\big(\tfrac{XW + VY}{YW - XV}\big) \pmod{\pi},$
which is based on the fact that $\tan(\theta + \phi) = \frac{\tan\theta + \tan\phi}{1 - \tan\theta\tan\phi}$ and Rioboo's observation noted above.
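The rule (4) is easy to sanity-check numerically with polynomial arguments (chosen arbitrarily); away from the zeros of the new denominator the two sides agree exactly, not just in derivative:

```python
import numpy as np

# arctan(u) + arctan(v) equals arctan((u + v)/(1 - u*v)) wherever
# u*v < 1; elsewhere the two sides differ by a multiple of pi, which
# is why the symbolic combination must be applied with care.
u = lambda s: s            # illustrative polynomial arguments
v = lambda s: s**2

s0 = 0.3
lhs = np.arctan(u(s0)) + np.arctan(v(s0))
rhs = np.arctan((u(s0) + v(s0)) / (1 - u(s0) * v(s0)))
```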
A major computational bottleneck of the symbolic algorithms based on a PFD is the necessity of factoring polynomials into irreducibles over $\mathbb{R}$ or $\mathbb{C}$ (and not just over $\mathbb{Q}$), thereby introducing algebraic numbers even if the integrand and its integral are both in $\mathbb{Q}(x)$. Unfortunately, introducing algebraic numbers may be necessary: for example, any field containing an integral of $1/(x^2 - 2)$ contains $\sqrt{2}$ as well. A result of modern research is the class of so-called rational algorithms, which compute as much of the integral as can be kept within $\mathbb{Q}(x)$, and compute the minimal algebraic extension of $\mathbb{Q}$ necessary to express the integral.
The Rothstein-Trager theorem.
It follows from the PFD of $A/B$, i.e., $\frac{A}{B} = \sum_i \frac{a_i}{x - b_i}$, that
(5) $\int \frac{A}{B}\,dx = \sum_i a_i \log(x - b_i),$
where the $b_i$ are the zeros of $B$ in $\mathbb{C}$ and the $a_i$ are the residues of $A/B$ at the $b_i$. Computing those residues without splitting $B$ into irreducible factors is achieved by the Rothstein-Trager theorem, as follows. Since we seek the roots of $B$ and their corresponding residues, given by evaluating $A/B'$ at the roots, it follows that the $a_i$ are exactly the zeros of the Rothstein-Trager resultant $R(t) := \operatorname{res}_x(B,\, A - t\,B')$, where $t$ here is an indeterminate. Moreover, the splitting field of $R$ over $\mathbb{Q}$ is the minimal algebraic extension of $\mathbb{Q}$ necessary to express $\int A/B\,dx$ in the form given by Liouville's theorem, i.e., as a sum of logarithms, and we have
(6) $\int \frac{A}{B}\,dx = \sum_{i=1}^{k}\ \sum_{c \,\mid\, R_i(c) = 0} c \log\big(\gcd(A - c\,B',\, B)\big),$
where $R = \prod_{i=1}^{k} R_i$ is the irreducible factorization of $R$ over $\mathbb{Q}$.
The Lazard-Rioboo-Trager algorithm. Consider the subresultant pseudo-remainder sequence of $B$ and $A - t\,B'$ with respect to $x$, where $R = \operatorname{res}_x(B,\, A - t\,B')$ is the resultant (see p. 115 in [4]) of $B$ and $A - t\,B'$ w.r.t. $x$. Observe that the resultant is a polynomial in $t$ of degree $\deg(B)$, the roots of which are the residues of $A/B$. Let $R = \prod_{i=1}^{k} Q_i^{\,i}$ be a squarefree factorization of $R$. Then, we have
(7) $\int \frac{A}{B}\,dx = \sum_{i=1}^{k}\ \sum_{c \,\mid\, Q_i(c) = 0} c \log\big(S_i(c, x)\big),$
which groups together terms of the PFD with common residue, as determined by the multiplicity $i$ of the residue in the squarefree factorization. We compute the inner sums as follows. If all residues of $A/B$ are equal, there is a single nontrivial squarefree factor $Q_i$ with $i = \deg(B)$, yielding $S_i = B$; otherwise, that is, if $i < \deg(B)$, $S_i$ is the primitive part w.r.t. $x$ of the element of degree $i$ in $x$ of the subresultant pseudo-remainder sequence. Consequently, this approach requires isolating only the complex roots of the squarefree factors $Q_i$, whereas methods based on the PFD require isolating the real or complex roots of the polynomial $B$, where $\deg(Q_1 \cdots Q_k) \le \deg(B)$. However, the coefficients of the $Q_i$ (and possibly those of the $S_i$) are likely to be larger than those of $B$. Overall, depending on the example, the computational cost of root isolation may put any of these approaches at an advantage in comparison to the others.
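The pieces of the LRT construction are also available in SymPy; in this sketch (example chosen for illustration) the subresultant PRS of $B$ and $A - tB'$ terminates in the Rothstein-Trager resultant, whose squarefree factorization drives the grouping in (7):

```python
import sympy as sp

x, t = sp.symbols('x t')
A, B = sp.Integer(1), x**2 - 2     # example integrand A/B = 1/(x**2 - 2)

# subresultant pseudo-remainder sequence of B and A - t*B' w.r.t. x;
# its last (degree-0 in x) element is the resultant, up to sign
prs = sp.subresultants(B, A - t * sp.diff(B, x), x)
R = prs[-1]

# squarefree factorization of R as a polynomial in t
content, factors = sp.sqf_list(R, t)
```

Here all residues are simple, so the factorization has a single squarefree factor of multiplicity one; repeated residues would show up as higher multiplicities.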
3 The Algorithms
We consider two symbolic-numeric algorithms, both based on Hermite reduction for the rational part and using two distinct methods for the transcendental part, one based on partial fraction decomposition and the other on the Lazard-Rioboo-Trager algorithm, both reviewed in Section 2. Following the notation used in equation (1), we assume the rational part has been computed and we consider how the transcendental part is computed by the two methods. Both algorithms use MPSolve to control the precision of the root isolation step.
Following the notation used in equation (7), the LRT-based method proceeds by computing the subresultant chain and deciding how to evaluate each sum by applying the strategy of Lazard, Rioboo and Trager. However, we compute the complex roots of the polynomials $Q_i$ numerically instead of representing them symbolically as in [11, 10]. Then we evaluate each sum by an algorithm adapted to this numerical representation of the roots. This method is presented as Algorithm 1.
The PFD-based method begins by computing numerically the roots of the denominator and then computes exactly the resulting residues. The numerical rootfinding can break the structure of repeated residues, which we restore by detecting residues that differ by less than $\epsilon$, the user-supplied tolerance. The resulting partial fraction decomposition can then be integrated using the structure-preserving strategy presented in Section 2 above. This strategy allows the algorithm to replicate the structure of the final output of the LRT algorithm as a sum of real logarithms and arctangents. This method is presented as Algorithm 2.
We remark that there can be an issue here in principle as a result of roots of the denominator that are closer together than $\epsilon$. Given the properties of MPSolve, however, this is not an issue in practice, given the ability to compute residues exactly or with sufficiently high precision, because MPSolve isolates roots within regions where Newton's method converges quadratically. In the unlikely event of residues that are distinct but within $\epsilon$ of each other, the algorithm still results in a small error and is advantageous in terms of numerical stability. This is because identifying nearby roots shifts the problem onto the nearest most singular problem, the space of which Kahan [6] calls the pejorative manifold, which protects against ill-conditioning.
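The identification of nearby residues is essentially a clustering step; a simplified sketch of one way to do it (the representative choice and the plain $\epsilon$ test are simplifications of what Algorithm 2 actually does):

```python
def cluster_residues(residues, eps):
    """Group numerically computed residues that agree to within eps,
    restoring the repeated-residue structure broken by rootfinding.
    Returns (representative, indices) pairs; the representative is the
    first member seen, standing in for the exact repeated residue."""
    clusters = []
    for i, c in enumerate(residues):
        for rep, idx in clusters:
            if abs(c - rep) < eps:
                idx.append(i)
                break
        else:
            clusters.append((c, [i]))
    return clusters

# two residues that should be identified, plus one distinct residue
clusters = cluster_residues([0.5 + 0j, 0.5 + 1e-12j, -0.25 + 0j], 1e-9)
```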
Both methods take as input a univariate rational function $f = A/B$ over $\mathbb{Q}$ with $\deg(A) < \deg(B)$, and a tolerance $\epsilon$. Both $A$ and $B$ are expressed in the monomial basis. They yield as output an expression
(8) $\frac{C}{D} + \sum_i c_i \log\big(L_i(x)\big) + \sum_j d_j \arctan\big(T_j(x)\big),$
along with a linear estimate of the forward and backward error. The backward error on an interval $I$ is measured in terms of the relative difference between the input $f$ and the derivative $\tilde f$ of the computed integral, and the forward error on $I$ is measured in terms of the difference between the exact and computed integrals, which are assumed to have the same constant of integration. Where $f$ has no real singularities, we provide bounds over all of $\mathbb{R}$, and where $f$ has real singularities the bounds will be used to determine how close to a singularity the error exceeds the tolerance. The main steps of Algorithm 1 and Algorithm 2 are listed below, where the numbers between parentheses refer to lines of the pseudocode. Both algorithms begin with:
decompose $f$ into a rational part and a transcendental part using Hermite reduction.

Algorithm 1 then proceeds with:

compute symbolically the transcendental part using the Lazard-Rioboo-Trager algorithm; in the pseudocode, one vector holds the squarefree factors $Q_i$ of the resultant, while another holds the primitive parts of the elements of the subresultant pseudo-remainder sequence corresponding to the elements of the first, viz., the $S_i$ such that the element corresponding to $Q_i$ is $S_i$, with $\deg_x(S_i) = i$;

compute the roots of the $Q_i$ numerically using MPSolve to precision $\epsilon$;

symbolic post-processing in bpas: computation of the log and arctan terms.

After Hermite reduction, Algorithm 2 continues with:

compute the roots of the denominator of the transcendental part numerically using MPSolve to precision $\epsilon$;

compute the residues corresponding to the approximate roots and detect their identity within $\epsilon$;

identify identical residues within $\epsilon$ and then compute a correspondence (one-many relation) between a representative residue and its corresponding roots, recorded as a correlation between indices of selected residues and indices of roots;

compute symbolically the transcendental part from the resulting PFD.
Both algorithms complete the integration by processing the arctangent terms, which can be written as $\arctan(V/W)$ or $\arctan(V, W)$ for polynomials $V$ and $W$, after the integration is complete, using Rioboo's method (described in [2]) to remove spurious singularities. The result is the conversion of the arctangent of a rational function, or two-argument arctangent, into a sum of arctangents of polynomials.
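The effect of Rioboo's post-processing can be seen numerically: $2\arctan(x)$ and $\arctan\!\big(2x/(1 - x^2)\big)$ have identical derivatives, but the latter jumps by $\pi$ across its spurious singularity at $x = 1$, which would corrupt definite integrals evaluated across it:

```python
import numpy as np

xs = np.array([0.9, 1.1])          # straddle the spurious singularity at x = 1
jumpy  = np.arctan(2 * xs / (1 - xs**2))   # arctan of a rational function
smooth = 2 * np.arctan(xs)                 # equivalent polynomial-argument form

jump  = abs(jumpy[1] - jumpy[0])    # ~ pi, despite identical derivatives
drift = abs(smooth[1] - smooth[0])  # small, as expected for a continuous F
```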
4 Analysis of the Algorithm
We now consider the error analysis of the symbolic-numeric integration using LRT and PFD. We present a linear forward and backward error analysis for both methods.³

³ Note that throughout this section we assume that the error for the numerical rootfinding for a polynomial satisfies $|\hat b - b| \le \epsilon\,|\hat b|$, where $\hat b$ is the value of the computed root and $|\hat b - b|$ is the distance in the complex plane to the exact root $b$. This is accomplished using MPSolve by specifying an error tolerance of $\epsilon$. Given the way that MPSolve isolates roots, the bound is generally satisfied by several orders of magnitude.
Theorem 1 (Backward Stability)
Given a rational function $f = A/B$ satisfying $\deg(A) < \deg(B)$, and input tolerance $\epsilon$, Algorithm 1 and Algorithm 2 yield the exact integral of a rational function $\tilde f$ such that, for $x \in \mathbb{R}$, the backward error $|f - \tilde f|$ is bounded to first order by $\epsilon$ times a computable function of $x$, defined below, whose principal term ranges over the evaluated roots. This expression for the backward error is finite on any closed, bounded interval not containing a root of the denominator.
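The backward error in the theorem can be probed directly by differentiating the computed antiderivative and comparing with the input; a toy sketch where a single root is perturbed by hand (all names and numbers are illustrative):

```python
import numpy as np

def rel_backward_error(f, f_tilde, xs):
    """Max over sample points of |f - f_tilde| / |f|, where f_tilde is
    the derivative of the computed antiderivative."""
    return np.max(np.abs(f(xs) - f_tilde(xs)) / np.abs(f(xs)))

# integrand 1/(x - 1); suppose the root was computed as 1 + 1e-8, so the
# algorithm returns log(x - (1 + 1e-8)), whose derivative is f_tilde
delta = 1e-8
f       = lambda x: 1 / (x - 1)
f_tilde = lambda x: 1 / (x - 1 - delta)

# away from the singularity the backward error is O(delta)...
far  = rel_backward_error(f, f_tilde, np.linspace(2, 5, 50))
# ...but it grows without bound as we approach the root at x = 1
near = rel_backward_error(f, f_tilde, np.array([1 + 1e-6]))
```

This is exactly the behaviour near singularities analyzed below: the error is small on intervals bounded away from the roots of the denominator.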
The advantage of exact computation on an approximate result is that the symbolic computation commutes with the approximation, i.e., we obtain the same result from issuing a given approximation and then computing symbolically as we do from computing symbolically first and then issuing the same approximation.⁴ Thus, we will conduct the error analysis throughout this section by assuming that we have exact information and then approximating at the end.

⁴ Although this comment is meant to explain the proof strategy, computing the symbolic result and then approximating also describes an alternative algorithm. Since this method requires lengthy computation of algebraic numbers that we would then approximate numerically anyway, we do not consider it.
Proof
[PFD-based backward stability] The PFD method begins by using Hermite reduction to obtain
(9) $\int \frac{A}{B}\,dx = \frac{C}{D} + \int \frac{G}{H}\,dx,$
where $H$ is squarefree. Given the roots $b_i$ of $H$ we may obtain the PFD of $G/H$, yielding
(10) $\frac{G}{H} = \sum_i \frac{a_i}{x - b_i},$
where $a_i = G(b_i)/H'(b_i)$, with $\deg(G) < \deg(H)$. Taking account of identical residues, the expression (10) can then be integrated using the structured PFD algorithm described in Section 2. Since we approximate the roots of $H$, we replace the exact roots $b_i$ with the approximations $\hat b_i$. This breaks the symmetry of the exactly repeated residues, thus the (exact) $a_i$ are modified in two ways: by evaluating $G/H'$ at $\hat b_i$; and by restoring symmetry by adjusting the list of computed residues so that residues within $\epsilon$ of each other are identified. This strategy requires some method of selecting a single representative for the list of nearby residues; the error analysis then estimates the error on the basis of the error of this representative.⁵ We then represent this adjusted computed list of residues by $\hat a_i$. Since the Hermite reduction and PFD are equivalent to a rewriting of the input function as
$f = \frac{d}{dx}\frac{C}{D} + \sum_i \frac{a_i}{x - b_i},$
the modified input that Algorithm 2 integrates exactly is obtained from the above expression by replacing $b_i$ and $a_i$ with $\hat b_i$ and $\hat a_i$.

⁵ Note that we assume that $\epsilon$ is sufficiently small to avoid spurious identification of residues in this analysis. Even with spurious identification, however, the backward error analysis would only change slightly, viz., to use the maximum error among the nearby residues, rather than the error of the selected representative residue.
To compute the backward error we first must compute the sensitivity of the residues to changes in the roots. Letting $\Delta b_i = \hat b_i - b_i$, then to first order we find that
$\Delta a_i = \left.\frac{d}{db}\!\left(\frac{G(b)}{H'(b)}\right)\right|_{b = b_i}\,\Delta b_i,$
where $\Delta a_i = \hat a_i - a_i$. So the backward error for a given term of the PFD is
(11) $\frac{\hat a_i}{x - \hat b_i} - \frac{a_i}{x - b_i}$
(12) $\;=\; \frac{\Delta a_i\,(x - b_i) + a_i\,\Delta b_i}{(x - \hat b_i)(x - b_i)}$
(13) $\;=\; \frac{\Delta a_i}{x - b_i} + \frac{a_i\,\Delta b_i}{(x - b_i)^2} + O(\Delta^2).$
Since any identified residues all approximate the same exact residue, we use the error for the residue selected to represent the identical residues.
Now, because the rational part of the integral is computed exactly, only the PFD contributes to the backward error. Given that $b_i$ is an exact root of $H$, we have $\Delta b_i \ne 0$ unless the exact root is computed, and $H'(b_i) \ne 0$ (and hence $a_i \ne 0$) because $H$ is squarefree. Thus, we have that $|\Delta b_i| \le \epsilon\,|b_i|$ to first order, where $\epsilon$ is the rootfinding tolerance. We therefore find that
(14) $\tilde f - f = \sum_i \left(\frac{\Delta a_i}{x - b_i} + \frac{a_i\,\Delta b_i}{(x - b_i)^2}\right) + O(\epsilon^2).$
Since the summand is a rational function depending only on $b_i$ and $\Delta b_i$, for fixed $x$, the imaginary parts resulting from complex conjugate roots will cancel, so that only the real parts of the summand contribute to the backward error. We therefore find a first order expression of the backward error in the form of the theorem statement, with the computable function given by the real part of the sum in (14), which is $O(\epsilon)$ because each $\Delta b_i$ is $O(\epsilon)$.
Note that, to properly account for the adjusted residue, applying the formula for $\Delta a_i$ in the PFD case requires taking $\Delta b_i$ to be the root error used to evaluate the representative residue.
Proof
[LRT-based backward stability] The LRT algorithm produces an exact integral of the input rational function in the form
(15) $\int \frac{A}{B}\,dx = \frac{C}{D} + \sum_{i=1}^{k}\ \sum_{c \,\mid\, Q_i(c) = 0} c \log\big(S_i(c, x)\big).$
Given a list $c_{i1}, c_{i2}, \ldots$ of the roots of the $Q_i$, we can express the integral in the form
$\frac{C}{D} + \sum_{i=1}^{k} \sum_{j} c_{ij} \log\big(S_i(c_{ij}, x)\big),$
where $k$ is the number of nontrivial squarefree factors of the resultant. Taking the derivative of this expression we obtain an equivalent expression of the input rational function as
(16) $f = \frac{d}{dx}\frac{C}{D} + \sum_{i=1}^{k} \sum_{j} c_{ij}\,\frac{\partial_x S_i(c_{ij}, x)}{S_i(c_{ij}, x)}.$
The modified input that Algorithm 1 integrates exactly is obtained from this expression by replacing the exact roots $c_{ij}$ with their approximate counterparts $\hat c_{ij}$.
To compute the backward error, we must compute the sensitivity of (16) to changes of the roots. Considering $f$ as a function of the parameters $c_{ij}$, and letting $\Delta c_{ij} = \hat c_{ij} - c_{ij}$, the difference between the computed root and the exact root, we find by taking partial derivatives with respect to the $c_{ij}$ that
(17) $\tilde f - f = \sum_{i,j} \left[\frac{\partial}{\partial c}\!\left(c\,\frac{\partial_x S_i(c, x)}{S_i(c, x)}\right)\right]_{c = c_{ij}} \Delta c_{ij} + O(\Delta^2).$
Letting the rational function in square brackets, evaluated at $c = c_{ij}$, be denoted $\rho_{ij}(x)$, we have that $\tilde f - f = \sum_{i,j} \rho_{ij}(x)\,\Delta c_{ij}$ to first order. Given that $|\Delta c_{ij}| \le \epsilon\,|c_{ij}|$, the backward error is $O(\epsilon)$ to first order. Since, as for the PFD case, the imaginary terms from complex roots cancel, we therefore find a first order expression for the backward error in the form required by the theorem, with the computable function given by the real part of the sum, where the sum runs over the roots $c_{ij}$. This expression is $O(\epsilon)$ because each $\Delta c_{ij}$ is $O(\epsilon)$.
Note that the backward error is structured, because the manner in which the integral is computed preserves structure in the integrand for both the LRT-based Algorithm 1 and the PFD-based Algorithm 2. The use of Hermite reduction guarantees that the roots of the denominator of $\tilde f$ have the same multiplicity as the roots of the denominator of $f$. Then the identification of nearby computed residues in Algorithm 2, and the use of the Rothstein-Trager resultant in Algorithm 1, ensures that the multiplicity of residues in the PFD of $\tilde f$ is also preserved, so that the PFDs of $f$ and $\tilde f$ have the same structure. This translates into higher degree arguments in the log and arctan terms of the integral than would be obtained by a standard PFD algorithm, leading to structured forward error as well.
It is important to reflect on the behaviour of these error terms near singularities of the integrand, which correspond to real roots of the denominator. For both algorithms, the error expression contains a particular polynomial in the denominator that evaluates to zero at the real roots, specifically $x - b_i$ for the PFD method and $S_i(c_{ij}, x)$ for the LRT method. In both cases, the expression of the error has a term with the particular polynomial squared, which therefore asymptotically dominates the value of the error term near the singularity. This fact is important for efficient computation of the size of the error term near a singularity, since the scaling behaviour can be used to quickly locate the boundary around the singularity where the error starts to exceed the tolerance. Our implementation, discussed in Section 5, uses this scaling to compute such boundaries.
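Under the inverse-square scaling just described, the tolerance boundary around a real root can be located in closed form rather than by search; a sketch under that scaling assumption (the constant C stands for the coefficient of the dominant term and is hypothetical here):

```python
import math

def tolerance_boundary(C, tau):
    """Distance from a real root r at which an error term behaving like
    C / (x - r)**2 first exceeds the tolerance tau; valid only under
    the assumed dominant inverse-square scaling near the root."""
    return math.sqrt(C / tau)

# e.g. C = 1e-16 and tau = 1e-8 give a boundary at distance 1e-4 from r
d = tolerance_boundary(1e-16, 1e-8)
```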
We turn now to the consideration of the forward stability of the algorithms. We note that a full forward error analysis of this problem has subtleties on account of the numerical sensitivities of the log function. This does not affect the validity of the above analysis, because near singularities the log term is dwarfed by the pole in the error term, and so can be safely ignored in the computation of singularity boundaries. It is a concern when it comes to evaluation of the expressions of the integral. This issue is reflected in the mastery that went into Kahan's “atypically modest” expression in [7], which is written to optimize the numerical stability of evaluation. We can, however, sidestep such concerns through the careful use of multiprecision numerics where the value is needed.
Theorem 2 (Forward Stability)
Given a rational function $f = A/B$ and tolerance $\epsilon$, Algorithm 1 and Algorithm 2 yield an integral of a rational function in the form (2) such that the forward error is bounded to first order by $\epsilon$ times a computable function of $x$, whose leading term ranges over the real and imaginary parts of the evaluated roots, and whose components, defined below, correspond to the log and arctangent terms, respectively. This expression for the forward error is finite on any closed, bounded interval not containing a root of the denominator.
Proof
[LRT-based forward stability] We assume that we have computed the exact roots of the $Q_i$ so that we can express the integral of the input rational function in the exact form (15). Since the roots $c_{ij}$ are in general complex, to get a real expression for the integral we can convert the transcendental part into a sum of logarithms and arctangents using the real and imaginary parts of the $c_{ij}$.
For the remainder of the proof we will assume that $(c_\ell)$ is a subsequence of the roots of the squarefree factors of the Rothstein-Trager resultant such that each complex conjugate pair is included only once, and that each $c_\ell$ is mapped to the term of the integral corresponding to the residue $c_\ell$. For each $c_\ell$ we let $u_\ell$ and $v_\ell$ be its real and imaginary parts, respectively. This allows us to express the integral in terms of logarithm and arctangent terms such that
(18) $\int f\,dx = \frac{C}{D} + \sum_{\ell} \big(\mathcal{L}_\ell + \mathcal{T}_\ell\big),$
where the log terms $\mathcal{L}_\ell$ and the arctangent terms $\mathcal{T}_\ell$ are functions of $x$, $u_\ell$ and $v_\ell$.
Once again, since the rational part of the integral is computed exactly, it does not contribute to the forward error. The forward error is the result of the evaluation of the above expression at approximate values for the $u_\ell$ and $v_\ell$. Therefore, considering the variation of equation (18) with respect to changes in the $u_\ell$ and $v_\ell$, we obtain
(19) $\Delta\!\int f\,dx = \sum_{\ell} \left( \frac{\partial(\mathcal{L}_\ell + \mathcal{T}_\ell)}{\partial u_\ell}\,\Delta u_\ell + \frac{\partial(\mathcal{L}_\ell + \mathcal{T}_\ell)}{\partial v_\ell}\,\Delta v_\ell \right) + O(\Delta^2).$
We now consider how to determine the values of $\mathcal{L}_\ell$, $\mathcal{T}_\ell$ and their partials from information in the computed integral. To simplify notation we let $S = S_i(c_\ell, x)$. If $c_\ell$ is real, then we obtain a term of the form $c_\ell \log(S)$. In the complex case, each $c_\ell$ stands for a complex conjugate pair. As such, we obtain terms of the form
$c_\ell \log\big(S_i(c_\ell, x)\big) + \bar c_\ell \log\big(S_i(\bar c_\ell, x)\big).$
Expressing $S$ in terms of real and imaginary parts as $S = X + iY$, so that $S_i(\bar c_\ell, x) = X - iY$, the expression of the term in the integral becomes $u_\ell \log(X^2 + Y^2) + i\,v_\ell \log\!\big(\tfrac{X + iY}{X - iY}\big)$. The observation that $\tfrac{i}{2}\log\!\big(\tfrac{X + iY}{X - iY}\big)$ has the same derivative as $\arctan(X/Y)$ allows the term of the integral to be converted into the form of the summand in (18), with $\mathcal{L}_\ell = u_\ell \log(X^2 + Y^2)$ and $\mathcal{T}_\ell = 2\,v_\ell \arctan(X/Y)$.
We can express