Algebraic Diagonals and Walks

10/14/2015 ∙ by Alin Bostan, et al. ∙ Inria

The diagonal of a multivariate power series F is the univariate power series Diag(F) generated by the diagonal terms of F. Diagonals form an important class of power series; they occur frequently in number theory, theoretical physics and enumerative combinatorics. We study algorithmic questions related to diagonals in the case where F is the Taylor expansion of a bivariate rational function. It is classical that in this case Diag(F) is an algebraic function. We propose an algorithm that computes an annihilating polynomial for Diag(F). Generically, it is its minimal polynomial and is obtained in time quasi-linear in its size. We show that this minimal polynomial has an exponential size with respect to the degree of the input rational function. We then address the related problem of enumerating directed lattice walks. The insight given by our study leads to a new method for expanding the generating power series of bridges, excursions and meanders. We show that their first N terms can be computed in quasi-linear complexity in N, without first computing a very large polynomial equation.




1 Introduction

Context. The diagonal of a multivariate power series with coefficients  is the univariate power series with coefficients . Particularly interesting is the class of diagonals of rational power series (i.e., Taylor expansions of rational functions). In particular, diagonals of bivariate rational power series are always roots of nonzero bivariate polynomials (i.e., they are algebraic series) [22, 15]. Since it is also classical that algebraic series are D-finite (i.e., they satisfy linear differential equations with polynomial coefficients), their coefficients satisfy linear recurrences, and this leads to an optimal algorithm for the computation of their first terms [11, 12, 3]. In this article, we determine the degrees of these polynomials, the cost of their computation and related applications.
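As a concrete illustration of the last point, consider F = 1/(1 - x - y), a standard example (not fixed by the text above): the diagonal coefficients are the central binomial coefficients, and the linear recurrence they satisfy yields the first N terms in O(N) arithmetic operations. A minimal sketch:

```python
from math import comb

# Illustration: for F = 1/(1 - x - y), the diagonal coefficients are the
# central binomial coefficients c_n = binomial(2n, n). Being algebraic,
# hence D-finite, they satisfy a linear recurrence, here
#   (n + 1) c_{n+1} = (4n + 2) c_n,
# so the first N terms cost O(N) arithmetic operations.
N = 10
c = [1]
for n in range(N - 1):
    c.append((4*n + 2) * c[-1] // (n + 1))

assert c == [comb(2*n, n) for n in range(N)]
```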

Previous work. The algebraicity of bivariate diagonals is classical. The same is true for the converse; moreover, the property persists for multivariate rational series in positive characteristic [15, 24, 13]. The first occurrence we are aware of in the literature is Pólya’s article [22], which deals with a particular class of bivariate rational functions; the proof uses elementary complex analysis. Along the lines of Pólya’s approach, Furstenberg [15] gave a (sketchy) proof of the general result, over the field of complex numbers; the same argument was later refined [18], [26, §6.3]. Three further proofs exist: a purely algebraic one that works over arbitrary fields of characteristic zero [17, Th. 6.1] (see also [26, Th. 6.3.3]), one based on non-commutative power series [14, Prop. 5], and a combinatorial proof [6, §3.4.1]. Despite the richness of the topic and the fact that most proofs are constructive in essence, we were not able to find in the literature any explicit algorithm for computing a bivariate polynomial that cancels the diagonal of a general bivariate rational function.

Diagonals of rational functions appear naturally in enumerative combinatorics. In particular, the enumeration of unidimensional walks has been the subject of recent activity; see [1] and the references therein. The algebraicity of generating functions attached to such walks is classical as well, and related to that of bivariate diagonals. Beyond this structural result, several quantitative and effective results are known. Explicit formulas give the generating functions in terms of implicit algebraic functions attached to the set of allowed steps in the case of excursions [8, §4],[17], bridges and meanders [1]. Moreover, if and denote the upper and lower amplitudes of the allowed steps, the bound on the degrees of equations for excursions has been obtained by Bousquet-Mélou, and shown to be tight for a specific family of step sets, as well as generically [7, §2.1]. From the algorithmic viewpoint, Banderier and Flajolet gave an algorithm (called the Platypus Algorithm) for computing a polynomial of degree that annihilates the generating function for excursions [1, §2.3].

Contributions. We design (Section 4) the first explicit algorithm for computing a polynomial equation for the diagonal of an arbitrary bivariate rational function. We analyze its complexity and the size of its output in Theorem 14. The algorithm has two main steps. The first step is the computation of a polynomial equation for the residues of a bivariate rational function. We propose an efficient algorithm for this task, that is a polynomial-time version of Bronstein’s algorithm [9]; corresponding size and complexity bounds are given in Theorem 10. The second step is the computation of a polynomial equation for the sums of a fixed number of roots of a given polynomial. We design an additive version of the Platypus algorithm [1, §2.3] and analyze it in Theorem 12. We show in Proposition 16 that generically, the size of the minimal polynomial for the diagonal of a rational function is exponential in the degree of the input and that our algorithm computes it in quasi-optimal complexity (Theorem 14).

In the application to walks, we show how to expand to high precision the generating functions of bridges, excursions and meanders. Our main message is that pre-computing a polynomial equation for them is too costly, since that equation might have exponential size in the maximal amplitude  of the allowed steps. Our algorithms have quasi-linear complexity in the precision of the expansion, while keeping the pre-computation step in polynomial complexity in  (Theorem 18).

Structure of the paper. After a preliminary section on background and notation, we first discuss several special bivariate resultants of broader interest in Section 3. Next, we consider diagonals, the size of their minimal polynomials and an efficient way of computing annihilating polynomials in Section 4.

2 Background and Notation

This section, which may be skipped at first reading, introduces notation and technical results that will be used throughout the article.

2.1 Notation

In this article, denotes a field of characteristic 0. We denote by the set of polynomials in of degree less than . Similarly, stands for the set of rational functions in with numerator and denominator in , and for the set of power series in truncated at precision .

If  is a polynomial in , then its degree with respect to (resp. ) is denoted (resp. ), and the bidegree of is the pair . The notation is used for univariate polynomials. Inequalities between bidegrees are component-wise. The set of polynomials in of bidegree less than is denoted by , and similarly for more variables.

The valuation of a polynomial  or a power series  is its smallest exponent with nonzero coefficient. It is denoted , with the convention .

The reciprocal of a polynomial  is the polynomial . If , the notation stands for the generating series of the Newton sums of :

A squarefree decomposition of a nonzero polynomial , where or , is a factorization , with squarefree, the ’s pairwise coprime and . The corresponding squarefree part of  is the polynomial . If is squarefree then .
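The decomposition above can be checked on a small instance with SymPy's `sqf_list`; the polynomial below is an arbitrary illustration, not one from the text.

```python
from sympy import symbols, sqf_list, expand

# SymPy sketch of a squarefree decomposition P = P1 * P2^2 * P3^3 * ...
# with the Pi squarefree and pairwise coprime.
y = symbols('y')
P = expand((y - 1)**2 * (y + 2)**3 * (y**2 + 1))
content, factors = sqf_list(P)
assert content == 1
assert dict(factors) == {y**2 + 1: 1, y - 1: 2, y + 2: 3}
# The squarefree part P* is the product of the distinct factors:
# (y**2 + 1)*(y - 1)*(y + 2).
```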

The coefficient of  in a power series is denoted . If , then denotes the polynomial . The exponential series is denoted . The Hadamard product of two power series  and  is the power series  such that for all .

If F = sum_{i,j >= 0} f_{i,j} x^i y^j is a bivariate power series in x and y, the diagonal of F, denoted Diag F, is the univariate power series in t defined by Diag F(t) = sum_{n >= 0} f_{n,n} t^n.

2.2 Bivariate Power Series

In several places, we need bounds on degrees of coefficients of bivariate rational series. In most cases, these power series belong to  and have a very constrained structure: there exists a polynomial  and an integer  such that the power series can be written

with and , for all . We denote by the set of such power series. Its main properties are summarized as follows.

Lemma 1

Let , and .

  1. The set is a subring of ;

  2. Let with , then ;

  3. The products obey


For (3), if and belong respectively to  and , then the th coefficient of their product is a sum of terms of the form . Therefore, the degree of the numerator is bounded by , whence (3) is proved. Property (1) is proved similarly. In Property (2), the condition on  makes  well-defined. The result follows from (1). ∎

As consequences, we deduce the following two results.

Corollary 2

Let  with be such that . Let be a squarefree part of . Then


Write with . Then the result when is squarefree () follows from Part (2) of Lemma 1, with . The general case then follows from Parts (1,3). ∎

Proposition 3

Let  and  be polynomials in , with , and . Then for all ,

with .


The Taylor expansion of has for coefficients the derivatives of . We consider it either in  or in  . Corollary 2 applies directly for the degree in . The saving on the degree in  follows from observing that in the first part of the proof of the corollary, the decomposition  has the property that . This is then propagated along the proof thanks to Part (3) of Lemma 1. ∎

2.3 Complexity Estimates

We recall classical complexity notation and facts for later use. Let  be again a field of characteristic zero. Unless otherwise specified, we estimate the cost of our algorithms by counting arithmetic operations in  (denoted “ops.”) at unit cost. The soft-O notation indicates that polylogarithmic factors are omitted in the complexity estimates. We say that an algorithm has quasi-linear complexity if its complexity is , where is the maximal arithmetic size (number of coefficients in  in a dense representation) of the input and of the output. In that case, the algorithm is said to be quasi-optimal.

Univariate operations. Throughout this article we will use the fact that most operations on polynomials, rational functions and power series in one variable can be performed in quasi-linear time. Standard references for these questions are the books [16] and [10]. The needed results are summarized in Fact 4 below.

Fact 4

The following operations can be performed in ops. in :

  1. addition, product and differentiation of elements in , and ; integration in and ;

  2. extended gcd, squarefree decomposition and resultant in ;

  3. multipoint evaluation in , at points in ; interpolation in  and  from (resp. ) values at pairwise distinct points in ;

  4. inverse, logarithm, exponential in (when defined);

  5. conversions between and .

Multivariate operations. Basic operations on polynomials, rational functions and power series in several variables are hard questions from the algorithmic point of view. For instance, no general quasi-optimal algorithm is currently known for computing resultants of bivariate polynomials, even though in several important cases such algorithms are available [4]. Multiplication is the most basic non-trivial operation in this setting. The following result can be proved using Kronecker’s substitution; it is quasi-optimal for fixed number of variables .

Fact 5

Polynomials in and power series in can be multiplied using ops.
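A minimal sketch of Kronecker's substitution for bivariate multiplication over the integers: encode a bivariate polynomial as a univariate one via y -> x^D, with D larger than the x-degree of the product, multiply, then decode. The dictionary encoding {(i, j): coeff} and the naive helper `poly_mul_1d` are illustrative choices, not from the text; a fast implementation would use quasi-linear univariate multiplication in place of the naive product.

```python
# Kronecker substitution: reduce bivariate multiplication to univariate
# multiplication by the substitution y -> x^D.

def poly_mul_1d(a, b):
    # naive univariate product (stand-in for a fast FFT-based routine)
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def kronecker_mul(A, B):
    """A, B: dicts {(i, j): coeff} representing sums of coeff * x^i * y^j."""
    # D must exceed the x-degree of the product so exponents do not collide
    D = max(i for i, _ in A) + max(i for i, _ in B) + 1
    def encode(P):
        v = [0] * (max(i + D*j for i, j in P) + 1)
        for (i, j), coeff in P.items():
            v[i + D*j] += coeff
        return v
    prod = poly_mul_1d(encode(A), encode(B))
    return {(k % D, k // D): coeff for k, coeff in enumerate(prod) if coeff}

# (1 + x*y) * (1 - x*y) = 1 - x^2 y^2
assert kronecker_mul({(0, 0): 1, (1, 1): 1},
                     {(0, 0): 1, (1, 1): -1}) == {(0, 0): 1, (2, 2): -1}
```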

A related operation is multipoint evaluation and interpolation. The simplest case is when the evaluation points form an -dimensional tensor product grid , where  is a set of cardinality .

Fact 6

[20] Polynomials in can be evaluated and interpolated from values that they take on points that form an -dimensional tensor product grid using ops.

Again, the complexity in Fact 6 is quasi-optimal for fixed .

A general (although non-optimal) technique to deal with more involved operations on multivariate algebraic objects (e.g., in ) is to use (multivariate) evaluation and interpolation on polynomials and to perform operations on the evaluated algebraic objects using Facts 4–6. To put this strategy into practice, the size of the output needs to be well controlled. We illustrate this philosophy on the example of resultant computation, based on the following easy variation of [16, Thm. 6.22].

Fact 7

Let and be bivariate polynomials of respective bidegrees and . Then,

Lemma 8

Let and be polynomials in . Then belongs to , where . Moreover, the coefficients of can be computed using ops. in .


The degree estimates follow from Fact 7. To compute , we use an evaluation-interpolation scheme: and are evaluated at points forming an -dimensional tensor product grid; univariate resultants in are computed; is recovered by interpolation. By Fact 6, the evaluation and interpolation steps are performed in ops. The second step has cost . Using the inequality concludes the proof. ∎
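The evaluation-interpolation scheme of the proof can be sketched on a toy input with SymPy (the polynomials P and Q below are arbitrary illustrations):

```python
from sympy import symbols, resultant, interpolate, expand

t, y = symbols('t y')

# By Fact 7 the t-degree of Res_y(P, Q) is at most
# d = deg_y(P)*deg_t(Q) + deg_y(Q)*deg_t(P), so d + 1 evaluations in t
# suffice (assuming the leading coefficients in y do not vanish at the
# chosen points).
P = y**2 + t*y + 1
Q = y - t
d = 2*1 + 1*1
vals = [(a, resultant(P.subs(t, a), Q.subs(t, a), y)) for a in range(d + 1)]
R = expand(interpolate(vals, t))
assert R == expand(resultant(P, Q, y))    # here R = 2*t**2 + 1
```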


We conclude this section by recalling a complexity result for the computation of a squarefree decomposition of a bivariate polynomial.

Fact 9

[19] A squarefree decomposition of a polynomial in can be computed using ops.

3 Special Resultants

3.1 Polynomials for Residues

We are interested in a polynomial that vanishes at the residues of a given rational function. It is a classical result in symbolic integration that in the case of simple poles, there is a resultant formula for such a polynomial, first introduced by Rothstein [23] and Trager [27]. This was later generalized by Bronstein [9] to accommodate multiple poles as well. However, as mentioned by Bronstein, the complexity of his method grows exponentially with the multiplicity of the poles. Instead, we develop in this section an algorithm with polynomial complexity.

Let be a nonzero element in , where are two coprime polynomials in . Let be a squarefree decomposition of . For , if is a root of  in an algebraic extension of , then it is simple and the residue of  at  is the coefficient of  in the Laurent expansion of at . If  is the polynomial , this residue is the coefficient of  in the Taylor expansion at  of the regular rational function , computed with rational operations only and then evaluated at . If this coefficient is denoted , with polynomials and , the residue at  is a root of . When , this is exactly the Rothstein-Trager resultant. This computation leads to Algorithm 1, which avoids the exponential blowup of the complexity that would follow from a symbolic pre-computation of the Bronstein resultants.
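In the simple-pole case the construction specializes to the Rothstein-Trager resultant mentioned above, which can be checked directly with SymPy; the rational function below is an arbitrary illustration.

```python
from sympy import symbols, resultant, diff, roots, Rational

y, z = symbols('y z')

# Simple-pole case: for A/B with A, B coprime and B squarefree, the
# residues of A/B at the roots of B are exactly the roots in z of the
# Rothstein-Trager resultant Res_y(B, A - z*B').
A = 1
B = y**2 - 1                              # simple poles at y = 1, y = -1
R = resultant(B, A - z*diff(B, y), y)     # = 1 - 4*z**2 up to sign
assert roots(R, z) == {Rational(1, 2): 1, Rational(-1, 2): 1}
```

Indeed, 1/(y**2 - 1) has residue 1/2 at y = 1 and -1/2 at y = -1.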

Algorithm AlgebraicResidues

Algorithm 1: Polynomial canceling the residues

Let be an integer, and let be the rational function . The poles have order . In this example, the algorithm can be performed by hand for arbitrary : a squarefree decomposition has  and , the other ’s being 1. Then  and the next step is to expand

Expanding the binomial series gives the coefficient of  as , with

The residues are then cancelled by , namely


Bounds. In our applications, as in the previous example, the polynomials and  have coefficients that are themselves polynomials in another variable . Let then , , and be the bidegrees in of , , and , where  is a squarefree part of . In Algorithm 1, has degree at most  in and total degree  in . Similarly, has degree  in  and total degree  in . When , by Proposition 3, the coefficient  in the power series expansion of has denominator of bidegree bounded by  and numerator of bidegree bounded by . Thus by Fact 7, is at most

while its degree in  is bounded by the number of residues . Summing over all  leads to the bound

If , a direct computation gives the bound .

Theorem 10

Let . Let be a squarefree part of wrt y. Let  be bounds on the bidegree of . Then the polynomial computed by Algorithm 1 annihilates the residues of , has degree in  bounded by  and degree in  bounded by

It can be computed in operations in .

Note that both bounds above (when and ) are upper bounded by , independently of the multiplicities. The complexity is also bounded independently of the multiplicities by .


The bounds on the bidegree of are easily derived from the previous discussion.

By Fact 9, a squarefree decomposition of can be computed using ops. We now focus on the computations performed inside the th iteration of the loop. Computing requires an exact division of polynomials of bidegrees at most ; this division can be performed by evaluation-interpolation in ops. Similarly, the trivariate polynomial can be computed by evaluation-interpolation wrt in time . By the discussion preceding Theorem 10, both and have bidegrees at most , where and . They can be computed by evaluation-interpolation in ops. Finally, the resultant has bidegree at most , and since the degree in  of and  is at most , it can be computed by evaluation-interpolation in ops by Lemma 8. The total cost of the loop is thus , where

Using the (crude) bounds , , and shows that is bounded by

which, by using the inequalities and , is seen to belong to .

Gathering together the various complexity bounds yields the stated bound and finishes the proof of the theorem. ∎

Remark. Note that one could also use Hermite reduction combined with the usual Rothstein-Trager resultant in order to compute a polynomial that annihilates the residues. Indeed, Hermite reduction computes an auxiliary rational function that admits the same residues as the input, while only having simple poles. A close inspection of this approach provides the same bound for the degree in of , but a less tight bound for its degree in , namely worse by a factor of . The complexity of this alternative approach appears to be (using results from [2]), to be compared with the complexity bound from Theorem 10.

3.2 Sums of roots of a polynomial

Given a polynomial  of degree  with coefficients in a field  of characteristic 0, let be its roots in the algebraic closure of . For any positive integer , the polynomial of degree defined by


has coefficients in . This section discusses the computation of  summarized in Algorithm 2, which can be seen as an additive analogue of the Platypus algorithm of Banderier and Flajolet [1].
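For a small instance, the defining product can be formed naively from the roots themselves, which gives an independent check of the object Algorithm 2 computes from the coefficients alone. A naive sketch (not the algorithm itself), for sums of k = 2 roots:

```python
from itertools import combinations
from sympy import symbols, expand, roots, Mul

y = symbols('y')

# Naive construction of the polynomial whose roots are the sums of k
# distinct roots of P; the cubic P below is an arbitrary illustration.
P = y**3 - 6*y**2 + 11*y - 6                 # roots 1, 2, 3
rts = [r for r, m in roots(P, y).items() for _ in range(m)]
k = 2
Sigma = expand(Mul(*[y - sum(S) for S in combinations(rts, k)]))
assert Sigma == expand((y - 3)*(y - 4)*(y - 5))  # pair sums 3, 4, 5
```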

Algorithm PureComposedSum

Algorithm 2: Polynomial canceling the sums of roots

We recall two classical formulas (see, e.g., [4, §2]), the second being valid for monic polynomials only:


Truncating these formulas at order  makes  a representation of the polynomial  (up to normalization), since both conversions above can be performed quasi-optimally by Newton iteration [25, 21, 4]. The key for Algorithm 2 is the following variant of [1, §2.3].
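Both conversions can be verified on a small example. The sketch below uses plain SymPy series expansion (the quasi-optimal versions rely on Newton iteration, as cited above); the cubic P is an arbitrary illustration.

```python
from sympy import symbols, series, log, exp, diff, expand

t, y = symbols('t y')

# The two conversions, for monic P of degree d with
# rev(P)(t) = t^d * P(1/t) = prod_i (1 - a_i*t):
#   sum_{n>=1} p_n t^n = -t * (log rev(P))'   (Newton sums p_n = sum_i a_i^n)
#   rev(P)(t) = exp(-sum_{n>=1} p_n t^n / n)
P = y**3 - 6*y**2 + 11*y - 6              # roots 1, 2, 3
d, N = 3, 4                                # p_1, ..., p_d determine P
revP = expand(t**d * P.subs(y, 1/t))       # 1 - 6*t + 11*t**2 - 6*t**3
Ns = series(-t*diff(log(revP), t), t, 0, N).removeO()
p = [Ns.coeff(t, n) for n in range(1, N)]
assert p == [6, 14, 36]                    # 1+2+3, 1+4+9, 1+8+27
back = series(exp(-sum(pn*t**n/n for n, pn in enumerate(p, start=1))), t, 0, N).removeO()
assert expand(back - revP) == 0
```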

Proposition 11

Let  be a polynomial of degree , let denote the generating series of its Newton sums and let be the series . Let be the polynomial in defined by

Then the following equality holds


By construction, the series is

When applied to the polynomial , this becomes

This expression rewrites:

and the last expression equals . ∎

The correctness of Algorithm 2 follows from observing that the truncation orders in and in of the power series involved in the algorithm are sufficient to enable the reconstruction of  from its first Newton sums by (3).

Bivariate case. We now consider the case where  is a polynomial in . Then, the coefficients of wrt may have denominators. We follow the steps of Algorithm 2 (run on viewed as a polynomial in  with coefficients in ) in order to compute bounds on the bidegree of the polynomial obtained by clearing out these denominators. We obtain the following result.

Theorem 12

Let , let be a positive integer such that and let . Let denote the leading coefficient of wrt and let be defined as in Eq. (2). Then is a polynomial in of bidegree at most that cancels all sums of roots of , with . Moreover, this polynomial can be computed in ops.

This result is close to optimal. Experiments suggest that for generic of bidegree the minimal polynomial of has bidegree . In particular, our degree bound is precise in , and overshoots by a factor of only in . Similarly, the complexity result is quasi-optimal up to a factor of only.


The Newton series has the form

with . Since both factors belong to , Lemma 1 implies that . Applying this same lemma repeatedly, we get that (stability under the integration of Algorithm 2 is immediate). Since has degree wrt , we deduce that is a polynomial that satisfies the desired bound. By evaluation and interpolation at points, and Newton iteration for quotients of power series in (Fact 4), the power series can be computed in ops. The power series is then computed from in ops. To compute we use evaluation-interpolation wrt at points, and fast exponentials of power series (Fact 4). The cost of this step is ops. Then, is computed for additional ops. The last exponential is again computed by evaluation-interpolation and Newton iteration using ops. ∎

4 Diagonals

4.1 Algebraic equations for diagonals

The relation between diagonals of bivariate rational functions and algebraic series is classical [15, 22]. We recall here the usual derivation when  while setting our notation.

Let be a rational function in , whose denominator does not vanish at . Then the diagonal of is a convergent power series that can be represented for small enough  by a Cauchy integral

where the contour is for instance a circle of radius  inside an annulus where  remains in the domain of convergence of . This is the basis of an algebraic approach to the computation of the diagonal as a sum of residues of the rational function

with  and  two coprime polynomials. For  small enough, the circle can be shrunk around 0 and only the roots of  tending to 0 when  lie inside the contour [18]. These are called the small branches. Thus the diagonal is given as


where the sum is over the distinct roots of  tending to 0. We call their number the number of small branches of  and denote it by .

Since the ’s are algebraic and finite in number and residues are obtained by series expansion, which entails only rational operations, it follows that the diagonal is algebraic too. Combining the algorithms of the previous section gives Algorithm 3 that produces a polynomial equation for . The correctness of this algorithm over an arbitrary field of characteristic 0 follows from an adaptation of the arguments of Gessel and Stanley [17, Th. 6.1],[26, Th. 6.3.3].

Let be an integer, and let be the rational function . The diagonal of is equal to

By the previous argument, it is an algebraic series, which is the sum of the residues of the rational function  of Example 3.1 over its small branches (with replaced by ). In this case, the denominator is . It has one solution tending to 0 with ; the other one tends to . Thus the diagonal is cancelled by the quadratic polynomial (1).
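The residue computation over the small branch can be carried out explicitly, for instance for F = 1/(1 - x - y) (an illustrative choice whose diagonal is the central binomial series):

```python
from sympy import symbols, sqrt, diff, simplify, series, binomial

x, t = symbols('x t')

# For F = 1/(1 - x - y), substituting y = t/x and dividing by x gives
# F(x, t/x)/x = 1/(x - x**2 - t), whose denominator has a single root
# x0(t) tending to 0. The residue at this simple pole is 1/Q'(x0), and it
# equals the diagonal 1/sqrt(1 - 4*t).
Q = x - x**2 - t
x0 = (1 - sqrt(1 - 4*t))/2                # the small branch
res = simplify(1/diff(Q, x).subs(x, x0))
assert simplify(res - 1/sqrt(1 - 4*t)) == 0
# Its expansion gives back the central binomial coefficients:
s = series(1/sqrt(1 - 4*t), t, 0, 5).removeO()
assert [s.coeff(t, n) for n in range(5)] == [binomial(2*n, n) for n in range(5)]
```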

Algorithm AlgebraicDiagonal()

Algorithm 3: Polynomial canceling the diagonal of a rational function

For an integer , we consider the rational function

of bidegree . The first step of the algorithm produces

whose denominator is irreducible with small branches. Running Algorithm 3 on this example, we obtain a polynomial  annihilating , which is experimentally irreducible and whose bidegrees for are . From these values, it is easy to conjecture that the bidegree is given by

of exponential growth in the bidegree of . In general, these bidegrees do not grow faster than in this example. In Theorem 14, we prove bounds that are barely larger than the values above.

4.2 Degree Bounds and Complexity

The rest of this section is devoted to the derivation of bounds on the complexity of Algorithm 3 and on the size of the polynomial it computes, which are given in Theorem 14.

Degrees. A bound on the bidegree of will be obtained from the bounds successively given by Theorems 10 and 12.

In order to follow the impact of the change of variables in the first step, we define the diagonal degree of a polynomial as the integer We collect the properties of interest in the following.

Lemma 13

For any  and  in ,

  1. ;

  2. ;

  3. there exists a polynomial , such that
    , with and

  4. .


Part (1) is immediate. The quantity is simply , which makes Parts (2) and (3) clear too. From there, we get the identity for arbitrary  and , whence and Part (4) is a consequence of Parts (1) and (3). ∎

Thus, starting with a rational function , with a bound on the bidegrees of and , and a bound on the bidegree of a squarefree part of , the first step of the algorithm constructs , with polynomials  and  and


These inequalities give bounds on the degrees in  of the numerator and denominator of .

The rest of the computation depends on the sign of . If , then the degrees in  of and are bounded by , while if , those of and are bounded by . Thus in both cases they are bounded by , where


A squarefree part of the denominator has degree in bounded by . From there, Theorem 10 yields , with


Small branches. It is classical that for a polynomial , the number of its solutions tending to 0 can be read off its Newton polygon. This polygon is the lower convex hull of the union of for such that . The number of solutions tending to 0 is given by the minimal -coordinate of its leftmost points. Since the number of small branches counts only distinct solutions, it is thus given by


The change of variables  changes the coordinates of the point corresponding to  into . This transformation maps the vertices of the original Newton polygon to the vertices of the Newton polygon of the Laurent polynomial . Multiplying by