Reduction-Based Creative Telescoping for Algebraic Functions

02/01/2016 · by Shaoshi Chen, et al. · Johannes Kepler University Linz · Austrian Academy of Sciences

Continuing a series of articles in the past few years on creative telescoping using reductions, we develop a new algorithm to construct minimal telescopers for algebraic functions. This algorithm is based on Trager's Hermite reduction and on polynomial reduction, which was originally designed for hyperexponential functions and is extended to the algebraic case in this paper.




1 Introduction

The classical question in symbolic integration is whether the integral of a given function can be written in "closed form". In its most restricted form, the question is whether for a given function f belonging to some domain D there exists another function g, also belonging to D, such that f = g'. For example, if D = C(x) is the field of rational functions, then for f = 1/x² we can find g = −1/x, while for f = 1/x no suitable g exists. When no g exists in D, there are several other questions we may ask. One possibility is to ask whether there is some extension E of D such that in E there exists some g with f = g'. For example, in the case of elementary functions, Liouville's principle restricts the possible extensions E, and there are algorithms which construct such extensions whenever possible. Another possibility is to ask whether for some modification of f there exists a g such that the modified function equals g'. Creative telescoping is a question of this type. Here we are dealing with domains D containing functions in several variables, say x and t, and the question is whether there is a linear differential operator L, nonzero and free of x and D_x, such that there exists a g ∈ D with L(f) = D_x(g), where D_x denotes the derivative with respect to x. Typically, g itself has the form g = Q(f) for some operator Q (which may be zero and need not be free of x and D_x). In this case, we call L a telescoper for f, and Q a certificate for L.
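As a concrete illustration of these notions (a toy example of ours, not taken from the paper): for the rational function f = 1/(x² + t), the operator L = D_t + 1/(2t) is a telescoper, with certificate g = −x/(2t(x² + t)). This is easy to verify with sympy:

```python
import sympy as sp

x, t = sp.symbols('x t')

f = 1/(x**2 + t)            # the integrand f(x, t)
g = -x/(2*t*(x**2 + t))     # the certificate

# apply L = D_t + 1/(2t) to f ...
Lf = sp.diff(f, t) + f/(2*t)

# ... and check the telescoping relation L(f) = D_x(g)
assert sp.simplify(Lf - sp.diff(g, x)) == 0
```

Integrating the relation L(f) = D_x(g) over a suitable contour in x makes the right-hand side vanish, which is what makes telescopers useful for definite integration.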

Creative telescoping is the backbone of definite integration. Readers not familiar with this technique are referred to the literature [17, 22, 24, 23, 15] for motivation, theory, algorithms, implementations, and applications. There are several ways to find telescopers for a given integrand f. In recent years, an approach has become popular which has the feature that it can find a telescoper without also constructing the corresponding certificate. This is interesting because certificates tend to be much larger than telescopers, and in some applications only the telescoper is of interest. This approach was first formulated for rational functions in [1] and later generalized to rational functions in several variables [3, 16], to hyperexponential functions [2] and, for the shift case, to hypergeometric terms [7] and binomial sums [4]. In the present paper, we extend the approach to algebraic functions.

The basic principle of the general approach is as follows. Assume that the D_x-constants of D form a field C and that D is a vector space over this field C of D_x-constants. Assume further that there is some C-linear map red: D → D such that for every f ∈ D there exists a g ∈ D with f = D_x(g) + red(f). Such a map is called a reduction. For example, in D = C(x), Hermite reduction [13] produces for every f ∈ D some g ∈ D such that f − D_x(g) is either zero or a rational function with a squarefree denominator. In this case, we can take red(f) = f − D_x(g). In order to find a telescoper, we can compute red(f), red(D_t(f)), red(D_t²(f)), …, until we find that they are linearly dependent over C. Once we find a relation c_0 red(f) + c_1 red(D_t(f)) + ⋯ + c_r red(D_t^r(f)) = 0, then, by linearity, red(c_0 f + c_1 D_t(f) + ⋯ + c_r D_t^r(f)) = 0, and then, by definition of red, there exists a g ∈ D such that c_0 f + c_1 D_t(f) + ⋯ + c_r D_t^r(f) = D_x(g). In other words, L = c_0 + c_1 D_t + ⋯ + c_r D_t^r is a telescoper.

There are two ways to guarantee that this method terminates. The first requires that we already know for other reasons that a telescoper exists. The idea is then to show that the reduction has the property that red(f) = 0 whenever f is such that there exists a g with f = D_x(g). If this is the case and L = c_0 + c_1 D_t + ⋯ + c_r D_t^r is a telescoper for f, then L(f) is integrable in D, so red(L(f)) = 0, and by linearity red(f), red(D_t(f)), …, red(D_t^r(f)) are linearly dependent over C. This means that the method won't miss any telescoper. In particular, this argument has the nice feature that we are guaranteed to find a telescoper of smallest possible order. This approach was taken in [7]. The second way consists in showing that red(D) = {red(f) : f ∈ D} is contained in a finite-dimensional vector space over C. This approach was taken in [1, 2]. It has the nice additional feature that every bound for the dimension of this vector space gives rise to a bound for the order of the telescoper. In particular, it implies the existence of a telescoper.

In this paper, we show that Trager’s Hermite reduction for algebraic functions directly gives rise to a reduction-based creative telescoping algorithm via the first approach (Section 4). We will combine Trager’s Hermite reduction with a second reduction, called polynomial reduction (Section 5), to obtain a reduction-based creative telescoping algorithm for algebraic functions via the second approach (Section 6). This gives a new proof of a bound for the order of the telescopers, and in particular an independent proof for their existence.

A few years ago, Chen et al. [9] already considered the problem of creative telescoping for algebraic functions. They pointed out that by canceling residues of the integrand, a given creative telescoping problem can be reduced to a creative telescoping problem for a function with no residues, which may be much smaller than the original function. For this smaller function, however, they still need to construct a certificate. The algorithms presented in the present paper are the first which can find telescopers for algebraic functions without also constructing corresponding certificates. By Theorem 6 of [9], our results also translate into a certificate-free creative telescoping algorithm for rational functions in three variables.

2 Algebraic Functions

Throughout the paper, let C be a field of characteristic zero, K = C(t), and K̄ the algebraic closure of K. We consider algebraic functions over K(x). For some absolutely irreducible polynomial m ∈ K[x, y], we consider the field A = K(x)[y]/⟨m⟩. If n = deg_y(m), then every element of A can be written uniquely in the form a_0 + a_1 y + ⋯ + a_{n−1} y^{n−1} for some a_0, …, a_{n−1} ∈ K(x).

The element y ∈ A is a solution of the equation m(x, y) = 0, because in A we have m(x, y) = 0 by construction. The polynomial m also admits n distinct solutions in the field

    PS_ξ = ⋃_{r≥1} K̄(((x − ξ)^{1/r}))

of formal Puiseux series around ξ ∈ K̄. There are also n distinct solutions in the field

    PS_∞ = ⋃_{r≥1} K̄((x^{−1/r}))

of formal Puiseux series around x = ∞. Since A and the fields PS_ξ, PS_∞ are fields, we can associate to every f ∈ A and every ξ ∈ K̄ ∪ {∞} in a natural way n distinct series objects with fractional exponents, by plugging any of the n distinct series solutions of m into the representation f = a_0 + a_1 y + ⋯ + a_{n−1} y^{n−1}. In other words, for every ξ there are n distinct natural ring homomorphisms from A to PS_ξ or PS_∞, respectively.
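For illustration (our sample polynomial, not from the paper): taking m = y² − x(1 + x), sympy can produce the n = 2 solutions of m as closed-form expressions and expand each of them into a Puiseux series around ξ = 0, exhibiting the fractional exponents 1/2, 3/2, 5/2, …:

```python
import sympy as sp

x, y = sp.symbols('x y')

m = y**2 - x*(1 + x)     # a sample defining polynomial with deg_y(m) = 2
sols = sp.solve(m, y)    # the two roots of m, as closed forms
assert len(sols) == 2

for s in sols:
    # each root really satisfies m(x, s) = 0 ...
    assert sp.simplify(s**2 - x*(1 + x)) == 0
    # ... and expands into a Puiseux series around xi = 0
    print(sp.series(s, x, 0, 3))
```

Plugging either of the two series into the representation of an element f ∈ A realizes one of the two ring homomorphisms mentioned above.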

In the field A as well as in the fields PS_ξ and PS_∞, we have natural differentiations with respect to x. For a series, differentiation is defined termwise using the usual rules ((x − ξ)^a)' = a (x − ξ)^{a−1} and (x^a)' = a x^{a−1}. For the elements of A, note first that m(x, y) = 0 implies

    m_x(x, y) + m_y(x, y) y' = 0,

so y' = −m_x(x, y)/m_y(x, y). Regarding m_y(x, y) as an element of A and observing that gcd(m, m_y) = 1 (as m is irreducible and hence squarefree), we have m_y(x, y) ≠ 0 in A, so m_y(x, y) is invertible in A. Note that we have (f + g)' = f' + g' and (f g)' = f' g + f g' for all f, g ∈ A, in particular also (y^i)' = i y^{i−1} y'. The derivative of an arbitrary element f ∈ A, say f = p(y) for some p ∈ K(x)[y] of degree less than n, is

    f' = (D_x p)(y) + p_y(y) y',

where D_x p denotes the polynomial obtained from p by differentiating its coefficients with respect to x. Thus we have an action of the algebra K(x)⟨D_x⟩ of differential operators on A.

The derivations on A and on the series domains are compatible in the sense that for every f ∈ A, the series associated to f' are precisely the derivatives of the series associated to f.

In the context of creative telescoping, we will also need to differentiate with respect to t. The action of K(x)⟨D_x⟩ on A and on the series domains is extended to an action of K(x)⟨D_x, D_t⟩ on A and on the series domains. On A, the action of D_t is defined as the unique derivation which restricts to ∂/∂t on K(x) and satisfies D_t(y) = −m_t(x, y)/m_y(x, y), analogously to the construction above. For the series domains, D_t acts on the coefficients (which are elements of K̄) in the natural way, and does not affect x. Since each particular coefficient belongs to a finite algebraic extension of K, to which the derivation ∂/∂t extends uniquely, the result is uniquely determined. The actions of the larger operator algebra on A and on the series domains are compatible with each other.

In this paper, the notation f' will always refer to the derivative of f with respect to x, not with respect to t.

Trager's Hermite reduction for algebraic functions rests on the notion of integral bases. Let us recall the relevant definitions and properties. Although the elements of a Puiseux series ring are formal objects, the series notation suggests certain analogies with complex functions. Terms (x − ξ)^a or x^{−a} are called integral if a ≥ 0. A series in PS_ξ or PS_∞ is called integral if it only contains integral terms. A non-integral series is said to have a pole at the reference point. Note that in this terminology also the polynomial x has a pole (at infinity). Note also that the terminology only refers to the variable x but not to t.

Integrality at a finite place ξ is not preserved by differentiation, but if f is integral at ξ, then so is (x − ξ) f'. Somewhat conversely, integrality at infinity is preserved by differentiation; we even have the stronger property that when f is integral at infinity, then not only f' but also x f' is integral at infinity.
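Termwise, both claims come down to the following one-line computations (in the notation above):

```latex
% at a finite place \xi (exponent a \ge 0): the derivative
% c\,a\,(x-\xi)^{a-1} need not be integral (e.g. a = 1/2), but
(x-\xi)\,\bigl(c\,(x-\xi)^{a}\bigr)' = c\,a\,(x-\xi)^{a}
% is integral again; at infinity (exponent -a \le 0), both
\bigl(c\,x^{-a}\bigr)' = -c\,a\,x^{-a-1}
\quad\text{and}\quad
x\,\bigl(c\,x^{-a}\bigr)' = -c\,a\,x^{-a}
% are integral at infinity.
```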

An element f ∈ A is called (locally) integral at ξ if every one of the series associated to f at ξ is integral. The element is called (globally) integral if it is locally integral at every ξ ∈ K̄ ("at all finite places"). This is the case if and only if the minimal polynomial of f in K[x][y] is monic with respect to y. Because of Chevalley's theorem [10, page 9, Corollary 3], any non-constant algebraic function has at least one pole. Equivalently, an element f ∈ A is integral at all ξ ∈ K̄ ∪ {∞} if and only if it is constant.

For an element f ∈ A to have a "pole" at ξ means that f is not locally integral at ξ; to have a "double pole" at ξ means that (x − ξ) f (or x^{−1} f if ξ = ∞) is not integral; to have a "double root" at ξ means that (x − ξ)^{−2} f (or x² f if ξ = ∞) is integral, and so on.

The set of all globally integral elements forms a module over K[x]. A basis of this module is called an integral basis for A. Such bases exist, and algorithms are known for computing them [20, 18, 21]. For a fixed ξ ∈ K̄, let O_ξ be the ring of all rational functions p/q ∈ K(x) with q(ξ) ≠ 0, and write O_∞ for the ring of all rational functions p/q ∈ K(x) with deg p ≤ deg q. Then the set of all f ∈ A which are locally integral at some fixed ξ ∈ K̄ ∪ {∞} forms an O_ξ-module. A basis of this module is called a local integral basis at ξ for A. Also local integral bases can be computed.
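A standard worked example (ours, for illustration): for m = y² − x³, the element y/x is globally integral, since it satisfies a monic equation with coefficients in K[x]:

```latex
\Bigl(\frac{y}{x}\Bigr)^{2} - x \;=\; \frac{y^{2} - x^{3}}{x^{2}} \;=\; 0 .
```

In fact, {1, y/x} is an integral basis for A = K(x)[y]/⟨y² − x³⟩ in this case, whereas the naive basis {1, y} is not: y/x is integral, but it is not a K[x]-linear combination of 1 and y.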

An integral basis is always also a K(x)-vector space basis of A. A key feature of integral bases is that they make poles explicit. Writing an element f ∈ A as a linear combination f = f_1 w_1 + ⋯ + f_n w_n of the basis elements w_1, …, w_n with f_1, …, f_n ∈ K(x), we have that f has a pole at a point ξ ∈ K̄ if and only if at least one of the f_i has a pole there.

Lemma 1.

Let {w_1, …, w_n} be a local integral basis of A at ξ ∈ K̄ ∪ {∞}. Let f ∈ A and f_1, …, f_n ∈ K(x) be such that f = f_1 w_1 + ⋯ + f_n w_n. Then f is integral at ξ if and only if each f_i is integral at ξ.


The direction "⇐" is obvious. To show "⇒", suppose that f is integral at ξ. Then there exist g_1, …, g_n ∈ O_ξ such that f = g_1 w_1 + ⋯ + g_n w_n. Thus (f_1 − g_1) w_1 + ⋯ + (f_n − g_n) w_n = 0, and then f_i = g_i for all i, because {w_1, …, w_n} is a vector space basis of A. As elements of O_ξ, the g_i are integral at ξ, and hence also all the f_i are integral at ξ.   ∎

The lemma says in particular that poles of the terms f_i w_i in a linear combination have no chance to cancel each other.

Lemma 2.

Let {w_1, …, w_n} be an integral basis of A. Let e ∈ K[x] and e_{i,j} ∈ K[x] be such that

    w_i' = Σ_{j=1}^n (e_{i,j}/e) w_j

for i = 1, …, n and gcd(e, e_{1,1}, e_{1,2}, …, e_{n,n}) = 1. Then e is squarefree.


Let ξ be a root of e. We show that ξ is not a multiple root. Since w_i is integral, it is in particular locally integral at ξ. Therefore (x − ξ) w_i' is locally integral at ξ. Since {w_1, …, w_n} is an integral basis, it follows that (x − ξ) e_{i,j}/e is integral at ξ for all i, j. Because of gcd(e, e_{1,1}, …, e_{n,n}) = 1, no factor of e can be canceled by all the e_{i,j}. Therefore the factor x − ξ can appear in e only once.   ∎

Lemma 3.

Let {w_1, …, w_n} be a local integral basis at infinity of A. Let e and the e_{i,j} be defined as in Lemma 2. Then deg e_{i,j} < deg e for all i, j.


Since every w_i is locally integral at infinity, so is every x w_i'. Since {w_1, …, w_n} is a local integral basis at infinity, it follows that x e_{i,j}/e ∈ O_∞ for all i, j. This means that deg(x e_{i,j}) ≤ deg e for all i, j, and therefore deg e_{i,j} < deg e, as claimed.   ∎

A K(x)-vector space basis {w_1, …, w_n} of A is called normal at ξ ∈ K̄ ∪ {∞} if there exist r_1, …, r_n ∈ K(x) such that {r_1 w_1, …, r_n w_n} is a local integral basis at ξ. Trager shows how to construct an integral basis which is normal at infinity from a given integral basis and a given local integral basis at infinity [20].

Although normality is a somewhat weaker condition on a basis than integrality, it also excludes the possibility that poles in the terms of a linear combination of basis elements cancel:

Lemma 4.

Let {w_1, …, w_n} be a basis of A which is normal at some ξ ∈ K̄ ∪ {∞}. Let f = f_1 w_1 + ⋯ + f_n w_n for some f_1, …, f_n ∈ K(x). Then f has a pole at ξ if and only if there is some i such that f_i w_i has a pole at ξ.


Let r_1, …, r_n ∈ K(x) be such that {r_1 w_1, …, r_n w_n} is a local integral basis at ξ. By f = (f_1/r_1)(r_1 w_1) + ⋯ + (f_n/r_n)(r_n w_n) and by Lemma 1, f is integral at ξ iff all f_i/r_i are integral at ξ, i.e., iff all f_i w_i = (f_i/r_i)(r_i w_i) are integral at ξ.   ∎

3 Hermite Reduction

We now recall the Hermite reduction for algebraic functions [20, 12, 6]. Let W = {w_1, …, w_n} be an integral basis for A. Further let e, e_{i,j} ∈ K[x] (i, j = 1, …, n) be such that w_i' = Σ_{j=1}^n (e_{i,j}/e) w_j and gcd(e, e_{1,1}, …, e_{n,n}) = 1. For describing the Hermite reduction we fix an integrand f ∈ A and represent it in the integral basis, i.e., f = (a_1 w_1 + ⋯ + a_n w_n)/d with a_1, …, a_n, d ∈ K[x]. The purpose is to find g, h ∈ A such that f = g' + h and h = (b_1 w_1 + ⋯ + b_n w_n)/d* with b_1, …, b_n ∈ K[x] and d* denoting the squarefree part of d. As differentiating the w_i can introduce denominators, namely the factors of e, it is convenient to consider those denominators from the very beginning on, which means that we shall assume e | d. Note that gcd(a_1, …, a_n, d) can then be nontrivial. Let v be a nontrivial squarefree factor of d of multiplicity m ≥ 2. Then d = u v^m for some u ∈ K[x] with gcd(u, v) = 1. One step of the Hermite reduction is as follows:


    f = (a_1 w_1 + ⋯ + a_n w_n)/(u v^m) = ((b_1 w_1 + ⋯ + b_n w_n)/v^{m−1})' + (c_1 w_1 + ⋯ + c_n w_n)/(u v^{m−1}),

where b_i, c_i ∈ K[x]. The existence of such b_i's and c_i's follows from the crucial fact that the elements v w_i' − (m − 1) v' w_i (i = 1, …, n) form a local integral basis at each root of v [20, page 46]. By a repeated application of such reduction steps, one can decompose any f ∈ A as f = g' + h, where the denominators of the coefficients of h are squarefree and the coefficients of g are proper rational functions (i.e., their numerators have smaller degree than their denominators).
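To see the shape of such a step in the simplest possible setting, consider the rational-function special case n = 1, w_1 = 1, e = 1 (our rendering of the classical Hermite step, stated here for orientation only). With f = a/(u v^m), m ≥ 2, gcd(u, v) = 1 and v squarefree, choose b with −(m − 1) u v' b ≡ a (mod v), which is possible because gcd(u v', v) = 1; then

```latex
\frac{a}{u\,v^{m}}
  \;=\; \Bigl(\frac{b}{v^{\,m-1}}\Bigr)'
        + \frac{c}{u\,v^{\,m-1}},
\qquad
c \;=\; \frac{a + (m-1)\,u\,v'\,b}{v} - u\,b' .
```

Each step lowers the multiplicity of v in the denominator by one, exactly as in the algebraic case.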

It was observed that the Hermite reduction itself often takes less time than the construction of an integral basis. If the Hermite reduction is applied with respect to some other basis, for instance the standard basis 1, y, …, y^{n−1}, it either succeeds or it runs into a division by zero. Bronstein [5] noticed that when a division by zero occurs, the basis can be replaced by some other basis that is a little closer to an integral basis, just as much as is needed to avoid this particular division by zero. After finitely many such basis changes, the Hermite reduction will come to an end and produce a correct output. This variant is known as lazy Hermite reduction.

4 Telescoping via reductions: first approach

Recall from the introduction that reduction-based creative telescoping requires some K-linear map red: A → A with the property that f − red(f) is integrable in A for every f ∈ A. This is sufficient for the correctness of the method, but additional properties are needed in order to ensure that the method terminates.

As also explained already in the introduction, one possibility consists in showing that red(f) = 0 whenever f is integrable in A. Trager showed that his Hermite reduction has this property [20, page 50, Theorem 1]. For the sake of completeness, we reproduce his proof here.

Lemma 5.

Let {w_1, …, w_n} be an integral basis for A that is normal at infinity. Let h = h_1 w_1 + ⋯ + h_n w_n ∈ A be such that all its coefficients h_1, …, h_n ∈ K(x) are proper rational functions. If an integral element f ∈ A has a pole at infinity, then also f + h has a pole at infinity.


Since f is assumed to be integral, we can write it as f = p_1 w_1 + ⋯ + p_n w_n with p_1, …, p_n ∈ K[x]. If f has a pole at infinity, there is at least one index i such that p_i w_i has a pole at infinity. There are two cases why this can happen.

  1. The polynomial p_i has positive degree. This means that p_i + h_i has a pole at infinity, because the h_i are proper rational functions. Thus (p_i + h_i) w_i has a pole at infinity, because w_i has no poles at finite places and therefore no root at infinity.

  2. The integral basis element w_i is not constant and p_i is not zero. Hence w_i has a pole at infinity, and this also implies that (p_i + h_i) w_i has a pole at infinity, again employing the fact that h_i is a proper rational function.

In both cases, therefore, f + h has a pole at infinity by Lemma 4.   ∎

Theorem 6.

Suppose that f ∈ A has a double root at infinity (i.e., every series in PS_∞ associated to f only contains monomials x^{−a} with a ≥ 2). Let W be an integral basis for A that is normal at infinity. If h is the result of the Hermite reduction of f with respect to W, then h = 0 if and only if f is integrable in A.


The direction "⇒" is trivial. To show the implication "⇐", assume that f is integrable in A. From f = g' + h it follows that then also h is integrable in A; let G ∈ A be such that h = G'. In order to show that h = 0, we show that G is constant. To this end, it suffices to show that it has neither finite poles nor a pole at infinity; the claim then follows from Chevalley's theorem.

It is clear that G has no finite poles, because h = G' has at most simple poles (i.e., all series associated to h have only exponents ≥ −1). This follows from the facts that the w_i are integral and that the coefficients of h have squarefree denominators.

If G has a pole at infinity, then by Lemma 5 also G + g must have a pole at infinity, because the Hermite reduction produces g with proper rational function coefficients. On the other hand, since f = (G + g)' has at least a double root at infinity by assumption, G + g must have at least a single root at infinity. This is a contradiction.   ∎

Note that the condition in Theorem 6 that f has a double root at infinity is not a restriction at all, as it can always be achieved by a suitable change of variables. Let c be a regular point; this means that all series in PS_c associated to f are formal power series. By the substitution x → c + 1/z the regular point c is moved to infinity. From

    ∫ f(x) dx = −∫ f(c + 1/z) z^{−2} dz

we see that the new integrand −f(c + 1/z) z^{−2} has a double root at infinity.
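A quick sympy check of this substitution on a sample rational integrand (our illustration; c = 0 is a regular point of f = 1/(x² + 1)):

```python
import sympy as sp

x = sp.symbols('x')
z = sp.symbols('z', positive=True)

f = 1/(x**2 + 1)                        # sample integrand, regular at c = 0
c = 0
xsub = c + 1/z                          # the substitution x -> c + 1/z
g = f.subs(x, xsub) * sp.diff(xsub, z)  # transformed integrand, g(z) dz

# double root at infinity: z**2 * g stays bounded as z -> oo
assert sp.limit(z**2 * g, z, sp.oo).is_finite
```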

Moreover, since the action of D_t on the series domains is defined coefficient-wise, it follows that when f has at least a double root at infinity (with respect to x), then this is also true for D_t(f), and then also for every K-linear combination c_0 f + c_1 D_t(f) + ⋯ + c_r D_t^r(f). Thus Theorem 6 implies that L = c_0 + c_1 D_t + ⋯ + c_r D_t^r is a telescoper for f if and only if the Hermite reduction of L(f) yields the remainder zero.

We already know for other reasons [23, 11, 9] that telescopers for algebraic functions exist, and therefore the reduction-based creative telescoping procedure, using as reduction function the Hermite reduction with respect to an integral basis that is normal at infinity, succeeds when applied to an integrand that has a double root at infinity. In particular, the method finds a telescoper of smallest possible order. Again, if f has no double root at infinity, we can produce one by a change of variables. Note that a change of variables which involves only x (and not t) has no effect on the telescoper.

Example 7.

We consider the algebraic function where is a solution of the third-degree polynomial equation . An integral basis for that is normal at infinity is given by , , . (This means that employing lazy Hermite reduction avoids completely the computation of an integral basis in this example.)

By solving Equation (1) for we obtain

Then for the differentiation matrix , a simple calculation yields

with . Thus we write with , , and . After a single step the Hermite reduction delivers the result

As the Hermite remainder  is nonzero, Theorem 6 tells us that is not integrable in . Hence we continue by applying Hermite reduction to

Note that we could as well take instead of , which in general should result in a faster algorithm. Again after a single reduction step, the decomposition is obtained, where

Since and are linearly independent over , we continue with . This time however, it is preferable to start the Hermite reduction with , which is given by

Setting and doing one reduction step, the Hermite remainder is found to be

The corresponding integrable part is not displayed here for space reasons.

Now one can find a linear dependence between that gives rise to the telescoper , which is indeed the minimal one for this example.

5 Polynomial Reduction

Recall that instead of requiring that red(f) = 0 if and only if f is integrable (first approach), we can also justify the termination of reduction-based creative telescoping by showing that the K-vector space red(A) is contained in a finite-dimensional space (second approach). If red is just the Hermite reduction, we do not have this property. We therefore introduce below an additional reduction, called polynomial reduction, which we apply after the Hermite reduction. We then show that the combined reduction (Hermite reduction followed by polynomial reduction) has the desired dimension property for the space of remainders. As a result, we obtain a new bound on the order of the telescopers, which is similar to those in [9, 8].

In this approach, we use two integral bases. First we use a global integral basis (not necessarily normal at infinity) in order to perform Hermite reduction. Then we write the remainder with respect to some local integral basis at infinity and perform the polynomial reduction on this representation.

Throughout this section let w_1, …, w_n ∈ A be such that W = {w_1, …, w_n} is a global integral basis of A, and let e, e_{i,j} ∈ K[x] be such that w_i' = Σ_{j=1}^n (e_{i,j}/e) w_j and gcd(e, e_{1,1}, …, e_{n,n}) = 1. The Hermite reduction described in Section 3 decomposes an input element f ∈ A into the form

    f = g' + (a_1 w_1 + ⋯ + a_n w_n)/d

with g ∈ A and a_1, …, a_n, d ∈ K[x] such that gcd(a_1, …, a_n, d) = 1 and d is squarefree.

Lemma 8.

If the Hermite remainder h = (a_1 w_1 + ⋯ + a_n w_n)/d is integrable in A, then h is in (1/e)(K[x] w_1 + ⋯ + K[x] w_n).


Suppose that h is integrable in A, i.e., there exist g_1, …, g_n ∈ K(x) such that h = (g_1 w_1 + ⋯ + g_n w_n)'. Then

    h = Σ_{i=1}^n (g_i' + Σ_{j=1}^n g_j e_{j,i}/e) w_i.

We show that the common denominator q of g_1, …, g_n is constant. Otherwise, for any irreducible factor p of q, we would have that h has a pole of multiplicity greater than 1 at the roots of p. This contradicts the fact that the denominators of the coefficients of h are squarefree. Thus, q is a constant, the g_i are polynomials, and the claim follows from the displayed equation.   ∎

By the extended Euclidean algorithm, we can split each coefficient of the Hermite remainder, written with respect to a local integral basis {v_1, …, v_n} at infinity, into a proper part and a polynomial part. Then the Hermite remainder h decomposes as

    h = h_1 + (1/b)(p_1 v_1 + ⋯ + p_n v_n),

where h_1 ∈ A has proper rational function coefficients with squarefree denominators, p_1, …, p_n ∈ K[x], and b ∈ K[x] is the common denominator of the derivatives v_1', …, v_n'.
We now introduce the polynomial reduction, whose goal is to confine the polynomial vector p = (p_1, …, p_n) to a finite-dimensional vector space over K. Similar reductions have been introduced and used in creative telescoping for hyperexponential functions [2] and hypergeometric terms [7]. Let V = (v_1, …, v_n) be such that its entries form a local integral basis of A at infinity, and let B ∈ K[x]^{n×n} and b ∈ K[x] be such that (v_1', …, v_n') = (1/b) V B and gcd(b, B_{1,1}, …, B_{n,n}) = 1. Let p ∈ K[x]^n, written as a column vector. Then

    (V p)' = (1/b) V (b p' + B p).

This motivates us to introduce the following definition.
This motivates us to introduce the following definition.

Definition 9.

Let the map φ_V : K[x]^n → K[x]^n be defined by φ_V(p) = b p' + B p for any p ∈ K[x]^n. We call φ_V the map for polynomial reduction with respect to V, and call the subspace φ_V(K[x]^n) ⊆ K[x]^n the subspace for polynomial reduction with respect to V.

Note that, by construction and because of Lemma 8, p is in the subspace for polynomial reduction with respect to V if and only if (1/b)(p_1 v_1 + ⋯ + p_n v_n) is integrable in A.

We can always view an element of K[x]^n (resp. K[x]^{n×n}) as a polynomial in x with coefficients in K^n (resp. K^{n×n}). In this sense we use the notation lc(·) for the leading coefficient and lt(·) for the leading term of a vector (resp. matrix). For example, if p ∈ K[x]^2 is of the form

    p = (3x² + 1, 5x),

then deg p = 2, lc p = (3, 0), and lt p = (3, 0) x². Let ε_1, …, ε_n be the standard basis of K^n. Then the module K[x]^n viewed as a K-vector space is generated by

    {x^k ε_i : k ∈ ℕ, 1 ≤ i ≤ n}.
We define N_V := φ_V(K[x]^n); as a K-vector space it is generated by

    {φ_V(x^k ε_i) : k ∈ ℕ, 1 ≤ i ≤ n}.
Any element of K[x]^n can be expressed in the basis {x^k ε_i : k ∈ ℕ, 1 ≤ i ≤ n} as a coordinate vector (in the following, an arrow decoration always indicates such a typecast).

Definition 10.

Let C_V be the K-subspace of K[x]^n generated by

    {x^k ε_i : k ∈ ℕ, 1 ≤ i ≤ n, and x^k ε_i ∉ lt(N_V)}.

Then K[x]^n = N_V ⊕ C_V. We call C_V the standard complement of N_V. For any p ∈ K[x]^n, there exist q ∈ K[x]^n and r ∈ C_V such that

    p = φ_V(q) + r.

This decomposition is called the polynomial reduction of p with respect to V.

Proposition 11.

Let B and b be such that (v_1', …, v_n') = (1/b)(v_1, …, v_n) B, as before. If deg B < deg b, then C_V is a finite-dimensional K-vector space.


In addition to the proof of the assertion, we also explain how to determine the dimension and a basis of C_V, for later use. For brevity, let β = deg b and δ = deg B. We distinguish two cases.

Case 1.  Assume that δ < β − 1. For any p ∈ K[x]^n of degree k ≥ 1, we have

    lt(φ_V(p)) = k lc(b) lc(p) x^{k+β−1}.

Thus all monomials x^k ε_i with k ≥ β and 1 ≤ i ≤ n are in lt(N_V) and hence not in C_V. Let u_1, …, u_n be the columns of B, expressed in the basis {x^k ε_i}. Let U be the K-subspace of K[x]^n generated by these column vectors. If p ∈ N_V is such that deg p < β, then p = φ_V(q) for some q ∈ K^n, which implies that p is a linear combination of the u_i's. Then N_V ∩ {p ∈ K[x]^n : deg p < β} = U. So dim C_V = nβ − dim U, and a basis of C_V can be computed by looking at the echelon form of the matrix (u_1, …, u_n).

Case 2.  Assume that δ = β − 1. For any p ∈ K[x]^n of degree k, we have

    lt(φ_V(p)) = (k lc(b) I_n + lc(B)) lc(p) x^{k+β−1}.

Let λ be the largest nonnegative integer such that −λ lc(b) is an eigenvalue of the matrix

    lc(B).

Then for any integer k > λ, the matrix k lc(b) I_n + lc(B) is invertible. So any monomial x^k ε_i with k > λ + β − 1 is not in C_V, for any i. Let