Denominator Bounds and Polynomial Solutions for Systems of q-Recurrences over K(t) for Constant K

Johannes Middeke et al. ∙ Johannes Kepler University Linz ∙ 05/11/2017

We consider systems $A_\ell(t)\, y(q^\ell t) + \dots + A_0(t)\, y(t) = b(t)$ of higher order $q$-recurrence equations with rational coefficients. We extend a method for finding a bound on the maximal power of $t$ in the denominator of arbitrary rational solutions $y(t)$, as well as a method for bounding the degree of polynomial solutions, from the scalar case to the systems case. The approach is direct and does not rely on uncoupling or reduction to a first order system. Unlike in the scalar case, this usually requires an initial transformation of the system.


1. Introduction

Let $K$ be a field of characteristic $0$, and let $q \in K \setminus \{0\}$ not be a root of unity. Moreover, let $t$ be an indeterminate over $K$. We define an automorphism $\sigma$ on the rational function field $K(t)$ by setting $\sigma(t) = q t$. We will refer to $\sigma$ as the $q$-shift. Note that with this definition we have $\sigma(f(t)) = f(qt)$ for all $f \in K(t)$. This is an easy case of a homogeneous or $\Pi$-extension in the sense of (Karr81). In the following we will often let $\sigma$ act on (column) vectors in $K(t)^n$. This is to be understood as component-wise application of $\sigma$.
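To make the action of $\sigma$ concrete, the following small sympy snippet prototypes the $q$-shift. This is our own illustration, not code from the paper; the names t, q and sigma are ours.

```python
import sympy as sp

t, q = sp.symbols('t q')

def sigma(expr, j=1):
    """Apply the q-shift sigma**j, i.e. substitute t -> q**j * t;
    subs acts component-wise on sympy matrices."""
    return expr.subs(t, q**j * t)

f = (t**2 + 1) / t
print(sp.cancel(sigma(f)))     # (q**2*t**2 + 1)/(q*t)

y = sp.Matrix([t, t**2])
print(sigma(y, 2))             # Matrix([[q**2*t], [q**4*t**2]])
```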

We consider the recurrence equation

(rec)   $A_\ell(t)\, y(q^\ell t) + \dots + A_1(t)\, y(q t) + A_0(t)\, y(t) = b(t)$

where $A_0, \dots, A_\ell \in K[t]^{n \times n}$ are square polynomial matrices and $b \in K[t]^n$ is a polynomial vector. We will assume that the matrix $A = A_\ell \sigma^\ell + \dots + A_0$ has full (left row) rank over the operator ring (see Section 3 for details). Note that any $q$-recurrence system can be brought into this form using unimodular transformations (see, for example, (myPhDthesis, Lem. 7.5); it is easy to verify that the method discussed there works for general Ore operators – see Section 3 for the definition – and not just differential operators). We will briefly explain the idea at the end of Section 3.

Generally, one major area where recurrence equations arise is symbolic summation. See, for example, (Karr81), (SchneiderThesis), (AeqlB), or (ChyzakSalvy98). In particular, note that symbolic simplification of products in (Karr81) and (SchneiderThesis) is done within so-called homogeneous or $\Pi$-extensions, of which our field $K(t)$ is a special case. Examples which deal explicitly with $q$-hypergeometric sums are (PauleRiese:97) or (Riese:03).

Our motivating goal is to find new algorithms to compute all rational solutions of (rec); that is, we want to know all $y \in K(t)^n$ such that the equation holds.

A first step will be to determine a so-called denominator bound: we are looking for a polynomial $d \in K[t]$ such that $d\,y \in K[t]^n$ for every possible rational solution $y$. There are comparatively easy ways to determine all the factors in $d$ except for powers of $t$; cf., for example, (AbramovBarkatou1998), (AbramovKhmelnov2012), (CPS:08), or (SchneiderMiddeke2017). See (Karr81), (Bron:00) or (SchneiderThesis) for a discussion of why this distinction is necessary. Thus, in this work we will concentrate only on finding a bound on the power of $t$ in $d$ in a computationally cheap way. The next step will then be to compute polynomial solutions of the system. For this, we compute a bound on the degree of polynomial solutions $y \in K[t]^n$. Once such a bound is determined, a solution to (rec) can be found by making an ansatz with unknown coefficients and solving the resulting linear system over $K$.
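For illustration, here is a minimal sympy sketch of that final ansatz step. It is our own toy example, not the paper's implementation; the matrices, the degree bound D, and the choice q = 2 are made-up data.

```python
# Find all polynomial solutions of  A1(t) y(q t) + A0(t) y(t) = b(t)
# up to an assumed degree bound D, by undetermined coefficients.
import sympy as sp

t = sp.symbols('t')
qval = sp.Integer(2)     # toy choice of q
D = 3                    # assumed degree bound

A0 = sp.Matrix([[1, 0], [0, -1]])
A1 = sp.Matrix([[-sp.Rational(1, 2), 0], [0, 1]])
b = sp.Matrix([0, 0])

# ansatz y = sum_k y_k t**k with unknown constant vectors y_k
c = [[sp.Symbol(f'c{i}_{k}') for k in range(D + 1)] for i in range(2)]
y = sp.Matrix([sum(c[i][k] * t**k for k in range(D + 1)) for i in range(2)])

# collect the coefficient of every power of t in the residual
residual = A1 * y.subs(t, qval * t) + A0 * y - b
eqs = []
for entry in residual:
    eqs.extend(sp.Poly(sp.expand(entry), t).all_coeffs())

unknowns = [sym for row in c for sym in row]
print(sp.solve(eqs, unknowns, dict=True))
# -> c0_1 (coefficient of t in the first component) and c1_0 (constant term of
#    the second component) remain free; all other coefficients must vanish.
```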

Methods for determining such a denominator bound (or universal denominator, as some authors prefer to call it) have already been discussed in the literature. See, for example, (AbramovBarkatou1998), (Barkatou1999), (CPS:08), or (AbramovKhmelnov2012). Most of these approaches rely on the transformation of the system into first order; the method in (AbramovKhmelnov2012) seems to be the only one which deals with higher order systems directly (albeit for shift recurrences and not the $q$-case; although it might be possible to adapt the method using ideas from (Abramov2002)). Another way of determining a denominator bound would be to uncouple the system using cyclic vector methods, the Danilevski–Barkatou–Zürcher algorithm (see (Danilevski37), (Barkatou1993), (Zurcher94), or (BCP13)), or Smith–Jacobson form computations.

In this paper, however, we want to avoid uncoupling and conversion to first order systems as these often introduce an additional cost and also tend to blur the structure of the input system. Instead we aim at computing the denominator bound and degree bound directly from the matrices of the higher order system (rec). Our hope here is that this will lead to more efficient algorithms.

This paper is structured as follows: In Section 2, we discuss how the powers of $t$ in the denominator can be bounded and how we may determine a bound for the degree of polynomial solutions. The results require certain determinants not to be identically zero. Since this cannot be guaranteed for an arbitrary system, we need to develop an algorithm which transforms a given system into an equivalent one where the method of Section 2 is applicable. This is done in Section 4, which constitutes the major result of this paper. The section also includes a comparison with competing methods. In between, Section 3 recalls some basic facts about operators and matrix normal forms which are needed for the transformation.

2. Bounding the Power of the Indeterminate and the Degree of Polynomial Solutions

Consider (rec) where $A_0, \dots, A_\ell$ are matrices in $K[t]^{n \times n}$ and where $b \in K[t]^n$ is a column vector. For technical reasons, the usefulness of which will become apparent in Section 4, we consider the slightly modified system

(rec')   $A_\ell(t)\, y(q^\ell t) + \dots + A_0(t)\, y(t) = t^{-\nu} b(t)$

where $\nu \in \mathbb{N}$ and $b \in K[t]^n$; that is, we allow the right hand side to be a rational vector, but only with powers of $t$ in the denominator.

We start by considering possible denominators. The following method is folklore; similar ideas can be found, for example, in (Karr81). Let $y \in K(t)^n$ be an arbitrary solution of (rec'). Using an (entry-wise) partial fraction decomposition, we may write $y = t^{-m} v + w$ where $v \in K[t]^n$, where the denominator of $w \in K(t)^n$ is coprime to $t$, and where $m = 0$ or $t \nmid v$ (that is, $t$ does not divide every entry of $v$). In the first case, $0$ is a candidate for the power of $t$ in the denominator bound. In the other case, substituting this representation into (rec') yields

$$t^{-m} \sum_{j=0}^{\ell} q^{-jm} A_j(t)\,\sigma^j(v) + \sum_{j=0}^{\ell} A_j(t)\,\sigma^j(w) = t^{-\nu} b$$

since $\sigma^j(t^{-m}) = q^{-jm} t^{-m}$ for all $j$. We may transform the equation into

$$\sum_{j=0}^{\ell} q^{-jm} A_j(t)\,\sigma^j(v) = t^{m-\nu} b - t^m \sum_{j=0}^{\ell} A_j(t)\,\sigma^j(w).$$

Note that the left hand side is in $K[t]^n$, that is, a polynomial vector. If $m > \nu$, then both summands on the right hand side are divisible by $t$ (in the localisation of $K[t]$ at the prime $t$), and hence so is the left hand side. Thus, either $m \le \nu$, or else $t$ divides $\sum_{j=0}^{\ell} q^{-jm} A_j(t)\,\sigma^j(v)$. Assume that the latter case holds. Then, since $\sigma(t) = qt$, we also have $\sigma^j(v) \equiv v \pmod{t}$ for all $j$; hence $t$ cannot divide any single $\sigma^j(v)$, or else $t$ would divide $v$. We obtain

$$\sum_{j=0}^{\ell} q^{-jm} A_j(t)\,\sigma^j(v) \equiv 0 \pmod{t}.$$

In particular, the coefficient vector of $t^0$ in the sum must vanish.

Write now $A_j = \sum_{k} A_{j,k}\, t^k$ with $A_{j,k} \in K^{n \times n}$ being constant matrices for all $j$ and $k$; and write $v = v_0 + v_1 t + v_2 t^2 + \dots$ with $v_k \in K^n$. From the assumption that $t \nmid v$ we know that in particular $v_0 \neq 0$. Using these representations in the sum above, we find that the constant term is

$$\sum_{j=0}^{\ell} q^{-jm} A_{j,0}\, v_0 = 0.$$

Now, $\sum_{j=0}^{\ell} q^{-jm} A_{j,0}$ is just a matrix over $K$, and since $v_0 \neq 0$ we see that it must have a non-trivial kernel. This implies that the determinant of $\sum_{j=0}^{\ell} q^{-jm} A_{j,0}$ vanishes. We may regard $\det\bigl(\sum_{j=0}^{\ell} x^j A_{j,0}\bigr)$ as a polynomial expression in $x$. Therefore, assuming that this polynomial does not identically vanish, we can obtain candidates for $m$ by checking whether it has roots of the form $q^{-m}$.

This observation motivates the following definition.

Definition 2.1 ($t$-tail regular).

We call the system (rec') $t$-tail regular if $\det\bigl(\sum_{j=0}^{\ell} x^j A_{j,0}\bigr)$ is not the zero polynomial.

Summarising the argumentation so far, we obtain Theorem 2.2 below. It remains to deal with the case of systems which are not $t$-tail regular. We will do so in Section 4.

Theorem 2.2.

Assume that the system (rec') is $t$-tail regular. If $y \in K(t)^n$ is a rational solution of (rec') and if $t^m$ divides the denominator of $y$, then

  1. $m = 0$,

  2. or $m \le \nu$,

  3. or $q^{-m}$ is a root of $\det\bigl(\sum_{j=0}^{\ell} x^j A_{j,0}\bigr)$.

Consequently, we obtain a bound on $m$ by taking the maximum of those values. ∎
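For a toy system, the candidates from Theorem 2.2 can be computed in a few lines. The following sympy sketch is our own illustration: the matrices and the choice q = 2 are made-up data, and we simply test a finite range of exponents instead of using the systematic root-finding methods cited below.

```python
import sympy as sp

x = sp.symbols('x')
qval = sp.Integer(2)

A00 = sp.Matrix([[1, 0], [0, 4]])       # constant term A_{0,0} of A_0(t)
A10 = sp.Matrix([[-2, 0], [0, -1]])     # constant term A_{1,0} of A_1(t)

# det(sum_j x**j A_{j,0}); here it factors as (1 - 2*x)*(4 - x)
p = sp.expand((A00 + x * A10).det())

# candidate exponents m with q**(-m) a root (finite search range for illustration)
candidates = [m for m in range(0, 21) if p.subs(x, qval**(-m)) == 0]
print(candidates)                       # -> [1], since 1 - 2*(1/2) = 0
```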

If $K$ happens to be of the form $K = k(q)$ where $k$ is a computable field and $q$ is transcendental over $k$, then the method of (AbramovPaulePetkovsec, Ex. 1) can be applied to find all roots of the required form $q^{-m}$. In the remaining cases, we can use the method of (BauerPetkovsek1999, Sect. 3.3) for that.

Example 2.3.

Let and . We consider the system

Computing the polynomial $\det\bigl(\sum_{j} x^j A_{j,0}\bigr)$ yields

The roots of this polynomial are and . That yields the upper bound for $m$. Note that this fits the actual solutions well, which are generated by

(We can easily check that the two vectors are indeed solutions; and – after row reduction – (AbramovBarkatou2014, Thm. 6) gives the dimension of the solution space as $2$.)

We turn our attention to polynomial solutions now. The approach is rather similar to the denominator bounding and has been discussed in the literature before, for example, in (AbramovPaulePetkovsec, Sect. 2). Assume that $y \in K[t]^n$ is a solution of (rec'). We make an ansatz

$$y = y_d t^d + y_{d-1} t^{d-1} + \dots + y_1 t + y_0$$

where $y_0, \dots, y_d \in K^n$. We assume that $y_d \neq 0$, that is, that $d$ is the degree of $y$. In addition, let $N = \max\{\deg A_0, \dots, \deg A_\ell\}$; and write once more $A_j = \sum_{k=0}^{N} A_{j,k}\, t^k$ with $A_{j,k} \in K^{n \times n}$ for all $j$ and $k$. We assume that the $A_j$ are not all zero and that $A_{j,N} \neq 0$ for at least one $j$. Substituting all of this into (rec') gives

$$\sum_{j=0}^{\ell} \sum_{k=0}^{N} \sum_{i=0}^{d} q^{ji} A_{j,k}\, y_i\, t^{k+i} = t^{-\nu} b.$$

If the right hand side contains negative powers of $t$, then there cannot be a polynomial solution. Else, the degree (in $t$) of the left hand side is at most $N + d$. We compute the coefficient of $t^{N+d}$ on both sides. On the left hand side only the terms with $k = N$ and $i = d$ contribute (since $\sigma^j(t^d) = q^{jd} t^d$), and this yields

$$\sum_{j=0}^{\ell} q^{jd} A_{j,N}\, y_d = \bigl(\text{coefficient of } t^{N+d} \text{ in } t^{-\nu} b\bigr).$$

There are two cases:

  1. Either $N + d = \deg(t^{-\nu} b)$ (that is, $d = \deg b - \nu - N$),

  2. or the coefficient of $t^{N+d}$ in $t^{-\nu} b$ is zero.

In the first case, we do already have a bound on $d$ (namely $\deg b - \nu - N$). In the second case, the right hand side of the equation for the coefficient of $t^{N+d}$ is zero. Just as for the denominator bound, we see that the left hand side contains a matrix with a non-trivial kernel. Again, its determinant is a polynomial expression – this time in $q^d$. We make an analogous definition.

Definition 2.4 ($t$-head regular).

We call the system (rec') $t$-head regular if the polynomial $\det\bigl(\sum_{j=0}^{\ell} x^j A_{j,N}\bigr)$ is not identically zero.

The computations above have shown that for a $t$-head regular system, $q^d$ is a root of $\det\bigl(\sum_{j=0}^{\ell} x^j A_{j,N}\bigr)$. Thus, we can again obtain all the candidates for the degree $d$ of $y$ by computing the roots of this determinant. The following theorem summarises these findings.

Theorem 2.5.

Assume that the system (rec') is $t$-head regular. Let $N$ be the maximum of $\deg A_0, \dots, \deg A_\ell$. If $y \in K[t]^n$ is a polynomial solution of degree $d$, then

  1. either $d = \deg b - \nu - N$,

  2. or $q^d$ is a root of $\det\bigl(\sum_{j=0}^{\ell} x^j A_{j,N}\bigr)$.

As in Theorem 2.2, we obtain a finite number of candidates for $d$.
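The head-regular case is computationally identical to the tail-regular one; only the leading instead of the trailing coefficient matrices enter, and we look for roots of the form $q^d$ instead of $q^{-m}$. Again a sympy sketch of our own with made-up data:

```python
import sympy as sp

x = sp.symbols('x')
qval = sp.Integer(2)

A0N = sp.Matrix([[1, 0], [0, 1]])                     # leading coefficient of A_0(t)
A1N = sp.Matrix([[-sp.Rational(1, 4), 0], [0, -1]])   # leading coefficient of A_1(t)

# det(sum_j x**j A_{j,N}); here it factors as (1 - x/4)*(1 - x)
h = sp.expand((A0N + x * A1N).det())

degrees = [d for d in range(0, 21) if h.subs(x, qval**d) == 0]
print(degrees)   # -> [0, 2], so the candidate degrees are 0 and 2
```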

Example 2.6.

We continue Example 2.3. We have already seen that rational solutions have the shape for some . (We did not explicitly show that the denominator of $y$ does not have any other factors apart from powers of $t$; however, this is the case here since we do know the complete solution space.) Substituting this into the equation and clearing denominators yields

The maximal degree in $t$ is and thus we have $\det\bigl(\sum_{j=0}^{\ell} x^j A_{j,N}\bigr) = 0$; that is, the system is not $t$-head regular. Thus, Theorem 2.5 is not applicable here. We will revisit the example in Section 4.

3. Ore Operators and Matrix Normal Forms

This section is devoted to introducing some concepts which are necessary in order to deal with the cases where $\det\bigl(\sum_j x^j A_{j,0}\bigr)$ or $\det\bigl(\sum_j x^j A_{j,N}\bigr)$ vanishes identically. It introduces the algebraic model which we are going to employ as well as some necessary background information and further results.

In Section 4, we will try to transform the system (rec') into an equivalent form such that the new determinant $\det\bigl(\sum_j x^j A_{j,0}\bigr)$ or $\det\bigl(\sum_j x^j A_{j,N}\bigr)$ becomes non-zero. During this, we are allowed to use the following transformations:

  1. multiply a row of the system by a non-zero rational function in $t$;

  2. add a $K(t)$-linear combination of shifted versions of one row to another row.

We will apply the transformations in such a way that the coefficients on the left hand side of (rec') always remain polynomial matrices. However, the right hand side might contain rational functions in $t$.

It will be convenient to express the transformations using the language of Ore operators. Roughly speaking, these are a class of non-commutative polynomials. They are named after Ø. Ore, who first introduced them in (Ore33); a first interpretation of them as operators seems to be (JacPLT). Ore operators have been used to model differential and difference equations (see (BP96) for an introduction) and have been applied to problems in symbolic summation (cf. (ChyzakSalvy98)). Implementations exist for Maple (for example, (AbramovLeLi2005)) and Mathematica (for example, (KoutschanPhD)).

We will only require a special case where the derivative is the zero map. These rings are sometimes also referred to as skew polynomials. For us, this is simply the (non-commutative) ring

$$K[t][\sigma] = \bigl\{ a_k \sigma^k + \dots + a_1 \sigma + a_0 \bigm| k \in \mathbb{N},\ a_0, \dots, a_k \in K[t] \bigr\}$$

which consists of polynomial expressions in $\sigma$ with coefficients being polynomials in $t$. The addition in this ring is the same as for regular polynomials. The multiplication, however, is governed by the commutation rule

$$\sigma\, t = q\, t\, \sigma$$

(and consequently $\sigma\, f(t) = f(qt)\, \sigma$ in the extensions which we introduce below). We will usually refer to the elements of $K[t][\sigma]$ as operators. These operators act on $K(t)$ in the usual way: denoting the action of an operator $P$ on $f \in K(t)$ by $P(f)$, we have $t(f) = t\, f$ and $\sigma(f)(t) = f(qt)$.
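The commutation rule is easy to prototype. The following dictionary-based multiplication is our own sketch of arithmetic in $K[t][\sigma]$ (not code from the paper): an operator $\sum_j c_j(t)\,\sigma^j$ is stored as a dict mapping $j$ to $c_j(t)$.

```python
import sympy as sp

t, q = sp.symbols('t q')

def skew_mul(P, Q):
    """Multiply operators in K[t][sigma] stored as dicts {j: c_j(t)}.
    The rule sigma * f(t) = f(q*t) * sigma gives
    (a*sigma**i) * (b*sigma**j) = a * b(q**i * t) * sigma**(i+j)."""
    R = {}
    for i, a in P.items():
        for j, b in Q.items():
            c = sp.expand(a * b.subs(t, q**i * t))
            R[i + j] = sp.expand(R.get(i + j, 0) + c)
    return {j: c for j, c in R.items() if c != 0}

# sanity check of the commutation rule: sigma * t = q*t*sigma, while t * sigma stays t*sigma
print(skew_mul({1: sp.Integer(1)}, {0: t}))   # {1: q*t}
print(skew_mul({0: t}, {1: sp.Integer(1)}))   # {1: t}
```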

We will also need to consider a related ring where we allow rational functions in $t$ as the coefficients of our operators. The corresponding ring $K(t)[\sigma]$ may be constructed from $K[t][\sigma]$ since the non-zero polynomials in $K[t]$ form a so-called (left or right) Ore set in $K[t][\sigma]$ (for a definition see, for example, (cohnRT, p. 177)). We have $K[t][\sigma] \subseteq K(t)[\sigma]$. The action of $K[t][\sigma]$ on $K(t)$ can easily be extended to an action of $K(t)[\sigma]$.

We may now associate the system (rec') with a matrix $A = A_\ell \sigma^\ell + \dots + A_1 \sigma + A_0 \in K[t][\sigma]^{n \times n}$, writing it as

$$A\, y = t^{-\nu} b.$$

Doing so, we may identify the transformations above as multiplication of the system by a certain matrix from the left. More precisely, each of the allowed transformations corresponds to multiplication by a unimodular matrix $U \in K(t)[\sigma]^{n \times n}$, that is, a matrix which has an inverse in the same matrix ring. We denote the set of unimodular matrices by $\mathrm{GL}_n(K(t)[\sigma])$. Note that $U$ transforms the system (rec') into

$$(U A)\, y = U(t^{-\nu} b)$$

where both the left and the right hand side are modified. It is easy to see that the solutions of the original and the transformed system are exactly the same if $U$ is unimodular. We will call the two systems (and by extension also the matrices $A$ and $UA$) equivalent.

We would like to point out that being unimodular is a stronger property than merely having full (left row) rank, where we understand the rank as the maximal number of linearly independent rows over $K(t)[\sigma]$ – see also (BeckermannChengLabahn2006, Def. 2.1, Thm. A.2). For instance, the $1 \times 1$ matrix $(\sigma)$ has full rank, but no inverse in $K(t)[\sigma]^{1 \times 1}$. We will call a matrix of full rank regular in order to distinguish this case.

Before we are able to propose an algorithm for transforming the system (rec') into an equivalent system which is $t$-tail regular or $t$-head regular, we need to introduce one more concept. First, we would like to point out that the subring $K[\sigma]$ of $K[t][\sigma]$ may actually be regarded as a commutative polynomial ring because $\sigma$ commutes with all elements of $K$. Therefore, it makes sense to look at matrix normal forms for polynomial matrices over $K[\sigma]$. We will again use the terms unimodular and regular, this time referring to matrices which have an inverse in $K[\sigma]^{n \times n}$ or which have full rank over $K[\sigma]$, respectively.

The normal form which we are going to employ here is the so-called Popov normal form. The most succinct way to describe it is via Gröbner bases for modules. See (myPhDthesis) for a detailed explanation; or see (Villard96) or Popov's original paper (Popov72) for a more traditional definition. See, for example, (AdamsLoustaunau1994) for more details on Gröbner bases over modules. We introduce the term over position ordering on the polynomial module $K[\sigma]^n$ by saying that $\sigma^a e_i \prec \sigma^b e_j$ if $a < b$, or if $a = b$ and $i < j$. Here, $e_i$ denotes the $i$th unit vector. We say that a matrix is in Popov normal form if its rows form a minimal Gröbner basis with respect to $\prec$. It is possible to prove that for every matrix $M \in K[\sigma]^{n \times n}$ there exists a unimodular matrix $U$ over $K[\sigma]$ (here, unimodularity means that the inverse of $U$ is again a polynomial matrix over $K[\sigma]$) such that

$$U M = P$$

where $P$ is in Popov normal form. It can further be shown that a matrix in Popov normal form has maximal row rank. Also, using Gröbner basis division (for each row of $M$), we can write every matrix $M$ as $M = Q P + R$ where $Q, R \in K[\sigma]^{n \times n}$ and where $R$ has no rows which are reducible by the Popov normal form $P$ (again in the Gröbner basis sense). This offers an algorithmic way to determine whether $M$ is a left multiple of $P$, namely by checking whether $R = 0$. We are going to need this property in Algorithm 4.1 and Algorithm 4.5 below.
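For concreteness, here is a small sympy sketch of our own (we write the commutative variable as s) of the term-over-position ordering and of the division step that underlies this membership test:

```python
import sympy as sp

s = sp.symbols('s')   # commutative stand-in for sigma

def lead(row):
    """Leading monomial of a row over K[s] w.r.t. term over position:
    s**a*e_i < s**b*e_j  iff  a < b, or a == b and i < j.
    Returns (degree, position, leading coefficient) or None for a zero row."""
    best = None
    for i, entry in enumerate(row):
        p = sp.Poly(entry, s)
        if not p.is_zero and (best is None or (p.degree(), i) > best[:2]):
            best = (p.degree(), i, p.LC())
    return best

def reduce_row(row, basis):
    """Groebner-style division of `row` by the rows of `basis`; returns the
    remainder.  `row` is a left multiple of the basis rows iff it reduces to zero."""
    row = list(row)
    while True:
        lm = lead(row)
        if lm is None:
            return row
        d, pos, lc = lm
        for g in basis:
            gm = lead(g)
            if gm and gm[1] == pos and gm[0] <= d:      # lead(g) divides lead(row)
                f = (lc / gm[2]) * s**(d - gm[0])
                row = [sp.expand(r - f * e) for r, e in zip(row, g)]
                break
        else:
            return row

basis = [[s, sp.Integer(1)], [sp.Integer(0), s**2]]
print(reduce_row([s**2, s**2], basis))   # -> [0, -s], a nonzero remainder
```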

We would like to point out that, despite its definition in terms of Gröbner bases, the Popov normal form can easily be computed in polynomial time. In fact, even the naive method which is based on row reduction will be polynomial (cf. (myPhDthesis, Lem. 5.14)). See, for example, (BeckermannLabahnVillard2006, Cor. 6.1) for a faster method.

It is possible to use other row equivalent forms instead of the Popov normal form, as long as they have a similar division property. One possible example is the Hermite normal form, which is a Gröbner basis with respect to the position over term ordering (see, for example, (myPhDthesis) for details on why the Hermite normal form is a Gröbner basis). However, to the best of our knowledge, with current methods a Popov normal form can be computed more efficiently than a Hermite normal form.

We briefly recall the reasoning of (myPhDthesis, Lem. 7.5) to explain how the Popov normal form can also be used to remove redundancies from a system: Starting with a general matrix $A \in K[t][\sigma]^{m \times n}$ – that is, with a matrix which is not necessarily square or of full row rank – we first compute the column Popov normal form of $A$. It is defined analogously, but for right modules of column vectors instead of left modules of row vectors; moreover, algorithms to compute the (row) Popov normal form and the corresponding transformation matrix can easily be translated to the column Popov normal form. We obtain a representation

$$A V = \begin{pmatrix} \hat A & 0 \end{pmatrix}$$

where $V$ is unimodular and $\hat A$ has full column rank. Next, we compute the row Popov normal form of $\hat A$, which yields

$$U \hat A = \begin{pmatrix} \tilde A \\ 0 \end{pmatrix}$$

where $U$ is unimodular and $\tilde A$ has full row rank. Since the row transformations do not change the column rank and since (left) row and (right) column rank coincide by (cohn, Thm. 8.1.1), we conclude that $\tilde A$ must be square and of full row rank.

Now, as $U$ and $V$ are both unimodular, the system $A y = t^{-\nu} b$ has a solution if and only if the system $\tilde A z_1 = c_1$ has a solution and $c_2 = 0$, where

$$y = V \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \qquad\text{and}\qquad U(t^{-\nu} b) = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$$

(with the blocks matching those of $U \hat A$). Thus, either the compatibility condition $c_2 = 0$ does not hold and we know that the system is not solvable; or it suffices to concentrate on the system $\tilde A z_1 = c_1$, which is of the correct form for the method presented in this paper. After solutions have been found, we can easily translate them into solutions of the original system using the matrix $V$. (The vector $z_2$ does not have any conditions imposed on it and contributes free variables to the solution $y$.)

It is not necessary to employ the Popov normal form in order to remove the redundancies of the system; any pair of rank revealing column and row transformations would be sufficient. For instance, we could use the Hermite normal form. Since we do not need the division property here, row and column reduction (see (BeckermannChengLabahn2006) for a definition and an efficient algorithm) or EG-eliminations (see (Abramov:99)) could also be used.

4. Desingularising the Determinants

With the notation and concepts from Section 3, we are finally prepared to deal with the case of identically vanishing determinants in Section 2. This section introduces an algorithm for transforming any system into an equivalent one (using elementary row transformations over $K(t)[\sigma]$) where $\det\bigl(\sum_j x^j A_{j,0}\bigr)$ or $\det\bigl(\sum_j x^j A_{j,N}\bigr)$ does not vanish. This constitutes the main result of this paper.

We write system (rec') in operator form $A\, y = t^{-\nu} b$ where $A \in K[t][\sigma]^{n \times n}$ is an operator matrix as explained in Section 3. We express $A$ as

$$A = \hat A_0 + t \hat A_1 + \dots + t^s \hat A_s$$

where $\hat A_0, \dots, \hat A_s$ are matrices in $K[\sigma]^{n \times n}$ alone. We will call $\hat A_0$ the $t$-trailing matrix of $A$ and – assuming that $\hat A_s \neq 0$ – we will call $\hat A_s$ the $t$-leading matrix of $A$. It is easy to see that (using the notation of Section 2) we have

$$\hat A_0 = \sum_{j=0}^{\ell} A_{j,0}\, \sigma^j \qquad\text{and}\qquad \hat A_s = \sum_{j=0}^{\ell} A_{j,s}\, \sigma^j.$$

In other words, $\det \hat A_0$ and $\det \hat A_s$ are the same as the determinants of Section 2 – except that we used $\sigma$ instead of $x$ as the name for the variable. (As explained in Section 3, $K[\sigma]$ can be interpreted as a univariate commutative polynomial ring.) Thus, the task of transforming the system into an equivalent one (using a unimodular multiplier) with $\det\bigl(\sum_j x^j A_{j,0}\bigr)$ or $\det\bigl(\sum_j x^j A_{j,N}\bigr)$ non-zero can equivalently be described as the task of making the $t$-trailing or the $t$-leading matrix regular.
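In code, a similar dict encoding works for operator matrices: store $A = \sum_k t^k \hat A_k(\sigma)$ as a dict mapping the $t$-power $k$ to a matrix over $K[\sigma]$ (written s below). This is our own illustration with made-up data:

```python
import sympy as sp

s = sp.symbols('s')   # commutative stand-in for sigma

# A = Ahat_0 + t*Ahat_1, stored as {k: matrix over K[s]}
A = {
    0: sp.Matrix([[s - 1, 0], [s - 1, 0]]),   # t-trailing matrix (singular!)
    1: sp.Matrix([[0, 1], [s, 0]]),           # t-leading matrix
}

def t_trailing(A):
    return A[min(A)]

def t_leading(A):
    return A[max(A)]

print(t_trailing(A).det())   # 0   -> not t-tail regular; Algorithm 4.1 below applies
print(t_leading(A).det())    # -s  -> the t-leading matrix is regular
```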

The definition of the $t$-trailing matrix in particular is also the reason why we allowed a denominator on the right hand side of (rec'). Should the matrix

$$\hat A_0 = \sum_{j=0}^{\ell} A_{j,0}\, \sigma^j$$

be the zero matrix, then we simply divide the entire equation by a suitable power of $t$ in order to rectify that problem.

There are similar works which consider recurrence systems and place requirements on certain leading or trailing matrices. For instance, see the method in (AbramovKhmelnov2012) or the concept of strong row-reduction in (AbramovBarkatou2014, Def. 4). There is, however, an essential difference between those approaches and the one presented here: we consider the leading and trailing matrices with respect to $t$ as the main variable, while those other approaches consider $\sigma$ to be the main variable (and are consequently dealing with leading and trailing matrices that are in $K[t]^{n \times n}$).

The choice of variable does make a big difference. While we can easily do a non-commutative version of row-reduction (that is, working over $K(t)[\sigma]$) in order to work on the leading coefficient with respect to $\sigma$, say, simply switching the variables would catapult us into the ring $K(\sigma)[t]$. The latter ring, however, contains arbitrary denominators in $\sigma$ and therefore does not have a natural action on $K(t)$ which extends the action of $K[t][\sigma]$. (To see that, let $f \in K(t)$ and consider the operator $(\sigma - 1)^{-1}$. If we want to compute $(\sigma - 1)^{-1}(f)$, then we have to find $g \in K(t)$ such that $\sigma(g) - g = f$. However, not all equations of that form have rational solutions; for instance, $\sigma(g) - g = 1$ has none.)

We will thus need to develop a different approach which does not require division by expressions in $\sigma$.

Also note that our algorithm bears some similarities to EG-eliminations as described in (Abramov:99). In particular, the idea to reduce the trailing matrix and to shift down coefficients from the higher matrices whenever a row of the trailing matrix becomes zero is the same. The difference is again the choice of the main variable. The transformations used during EG-eliminations are unimodular over the Laurent skew polynomials $K[t][\sigma, \sigma^{-1}]$ (these are well-defined since the powers of $\sigma$ form an Ore set in $K[t][\sigma]$ – see, for example, (cohnRT, Chptr. 5)) and thus do not alter the solutions; however, switching the main variable does again expose us to arbitrary fractions in $\sigma$.

The need to keep all transformations unimodular over $K(t)[\sigma]$ also means that we cannot proceed row by row as (Abramov:99) does, where the trailing or leading matrix is brought into trapezoidal form one row at a time. Instead, we always have to consider the entire $t$-trailing or $t$-leading matrix for each elimination step. In particular, we cannot force the “width” (as defined in (Abramov:99)) of the lower rows to decrease.

We are going to deal with the $t$-trailing matrix first. Below we state an algorithm which we claim will transform the system into the desired shape. Note that while we do not explicitly compute the transformation matrix, this could easily be done by applying all the transformations done to $A$ in parallel to the identity matrix $I_n$. Alternatively, we may also apply them directly to the right hand side of (rec').

Algorithm 4.1.

Input:

A regular matrix $A \in K[t][\sigma]^{n \times n}$.

Output:

An equivalent matrix $B \in K[t][\sigma]^{n \times n}$ such that the $t$-trailing matrix of $B$ is regular.

Procedure:

  1. Compute the $t$-trailing matrix $T \in K[\sigma]^{n \times n}$ of $A$. If $T$ is regular, then return $A$.

  2. Else, compute the Popov normal form of $T$, let $U$ be the corresponding transformation matrix, and let $\rho$ be the rank of $T$ (that is, the number of nonzero rows of the Popov normal form). Set $A \leftarrow U A$ and write the new trailing matrix of $A$ as

    $\begin{pmatrix} P \\ 0 \end{pmatrix}$

    where $P \in K[\sigma]^{\rho \times n}$ is in Popov normal form.

  3. Let $D = \operatorname{diag}(1, \dots, 1, t^{-1}, \dots, t^{-1})$ with $\rho$ ones and set $A \leftarrow D A$. (This shifts the lower rows of $A$ by $t^{-1}$.) The new trailing matrix is now

    $\begin{pmatrix} P \\ Q \end{pmatrix}$

    with the same $P$ as before and some $Q \in K[\sigma]^{(n - \rho) \times n}$.

  4. Use Gröbner basis reduction by $P$, trying to eliminate all the rows in $Q$. Write the result as $Q = F P + R$ with a matrix $F \in K[\sigma]^{(n - \rho) \times \rho}$ and a matrix $R \in K[\sigma]^{(n - \rho) \times n}$. Let

    $V = \begin{pmatrix} I & 0 \\ -F & I \end{pmatrix}$

    and set $A \leftarrow V A$.

  5. If $R = 0$, then continue the inner loop with Step 3. Else, go back to the outer loop in Step 1.

We would like to point out that for any matrix $M \in K[\sigma]^{n \times n}$ we have $M t^k = t^k M_k$ for some $M_k \in K[\sigma]^{n \times n}$, since $\sigma$ is the $q$-shift. Thus, in Step 2 of Algorithm 4.1 the product $U A$ has the form

$$U A = U \hat A_0 + t\, U_1 \hat A_1 + \dots + t^s U_s \hat A_s$$

for some matrices $U_1, \dots, U_s \in K[\sigma]^{n \times n}$. In other words, $U$ acts on the $t$-trailing matrix of $A$ without interference from the other parts. Consequently, if after applying the transformation by $U$ the lower rows of the $t$-trailing matrix become zero, this means that the lower rows of the entire matrix must be divisible by $t$. This explains why the transformation by $D$ in Step 3 will still result in a matrix over $K[t][\sigma]$. A similar reasoning as in Step 2 also holds true for Step 4: again, the transformation of $A$ by $V$ acts on the $t$-trailing matrix without disturbance from the other coefficients. Its effect is to replace the lower block $Q$ of the trailing matrix by the remainder $R$.

Note that this reasoning only works because of our special choice of $\sigma$ and would not be possible for more general automorphisms. In particular, the algorithm is not applicable in the case of the ordinary shift.
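To make Step 3 concrete, here is how the row shift looks on the dict representation from the previous sketch (again our own illustration; with the $t$-powers written on the left, multiplying a row by $t^{-1}$ simply lowers its $t$-power by one):

```python
import sympy as sp

s = sp.symbols('s')

def shift_lower_rows(A, rows):
    """Step 3 of Algorithm 4.1 on the dict representation {k: matrix over K[s]}
    of A = sum_k t**k * Ahat_k(sigma): multiply the given rows by t**(-1).
    Only legal if those rows of the t-trailing matrix are zero, i.e. the
    corresponding rows of A are divisible by t."""
    assert all(A[min(A)].row(r).is_zero_matrix for r in rows)
    B = {}
    for k, M in A.items():
        keep, moved = M.copy(), sp.zeros(*M.shape)
        for r in rows:
            moved[r, :] = M.row(r)
            keep[r, :] = sp.zeros(1, M.cols)
        for kk, part in ((k, keep), (k - 1, moved)):
            if not part.is_zero_matrix:
                B[kk] = B.get(kk, sp.zeros(*M.shape)) + part
    return B

# toy data: suppose the Popov step already zeroed row 1 of the trailing matrix
A = {0: sp.Matrix([[s - 1, 0], [0, 0]]), 1: sp.Matrix([[0, 1], [s, 0]])}
print(shift_lower_rows(A, [1]))
# -> {0: Matrix([[s - 1, 0], [s, 0]]), 1: Matrix([[0, 1], [0, 0]])}
```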

Theorem 4.2.

Algorithm 4.1 is correct and terminates.

Before we can prove the theorem, we state two simple remarks:

Remark 4.3.

As long as the inner loop runs, we always consider the trailing matrix $\binom{P}{Q}$ and compute a matrix $F$ (if possible) such that $Q = F P$. (The inner loop terminates if that is not possible any more.) We then apply the transformations $D$ and $V$ from Steps 3 and 4 to $A$. Thus, we see that the transformation matrix for a single step has the form

$$V D = \begin{pmatrix} I & 0 \\ G & \tfrac{1}{t} I \end{pmatrix}$$

for some $G$ (where we write $\tfrac{1}{t}$ instead of $t^{-1}$, and $G$ instead of $-F$ in the lower left block). Moreover,

$$\begin{pmatrix} I & 0 \\ G_2 & \tfrac{1}{t^{k}} I \end{pmatrix} \begin{pmatrix} I & 0 \\ G_1 & \tfrac{1}{t^{m}} I \end{pmatrix} = \begin{pmatrix} I & 0 \\ G_2 + \tfrac{1}{t^{k}} G_1 & \tfrac{1}{t^{k+m}} I \end{pmatrix}$$

for all $k, m \in \mathbb{N}$ and all matrices $G_1, G_2$. Thus, we easily see that all accumulated transformation matrices in the inner loop must be of this shape.

Remark 4.4.

Let $M \in K[t][\sigma]^{n \times n}$ have full (left) row rank. We want to prove that $M$ has a (two-sided) inverse over the quotient skew field of $K(t)[\sigma]$. We first remark that we can embed $K[t][\sigma]$ into the non-commutative Euclidean domain $K(t)[\sigma]$. Over $K(t)[\sigma]$, we can compute the Jacobson normal form

$$U M V = \operatorname{diag}(d_1, \dots, d_n)$$

of $M$ where $U, V$ are unimodular (see, for example, (cohn, Thm. 8.1.1)). The diagonal entries $d_i$ cannot be zero since otherwise $M$ would not have full row rank. Thus, the inverse $\operatorname{diag}(d_1^{-1}, \dots, d_n^{-1})$ exists over the quotient skew field. We obtain

$$M^{-1} = V \operatorname{diag}(d_1^{-1}, \dots, d_n^{-1})\, U.$$

Since $M^{-1}$ is the product of (from both sides) invertible matrices, we conclude that $M$ is invertible.

Proof of Theorem 4.2.

If the algorithm terminates, then the trailing matrix of the output is regular over $K[\sigma]$; that is, its determinant does not vanish identically as a polynomial in $\sigma$, and the transformed system is $t$-tail regular.

It remains to reason about the termination of the algorithm. First, we note that the outer loop starting in Step 1 cannot be entered infinitely often: a new Popov normal form is only computed if the lower rows of the trailing matrix are not in the row space of the upper block $P$. This implies that we either gain new nonzero rows in the Popov normal form or that the degrees or positions of the leading monomials decrease. In both cases, the module generated by the rows of the trailing matrix becomes strictly larger, which can happen only finitely often since $K[\sigma]$ is Noetherian.

Second, we have to show that the inner loop starting in Step 3 cannot run infinitely often. For this, assume that the matrix $\binom{P}{Q}$ with $P \in K[\sigma]^{\rho \times n}$ and $Q \in K[\sigma]^{(n - \rho) \times n}$ has full (row) rank, and assume that the inner loop repeats infinitely; that is, assume that for every $k$ there are a matrix $G_k$ and a polynomial matrix $C_k$ over $K[t][\sigma]$ such that

$$\begin{pmatrix} I & 0 \\ G_k & \tfrac{1}{t^k} I \end{pmatrix} \begin{pmatrix} P \\ Q \end{pmatrix} = C_k$$

(where we write $G_k$ instead of the accumulated lower left block). We explained why the transformation matrices always look like this in Remark 4.3 above.

We now form the quotient (skew) field of $K(t)[\sigma]$. Since $\binom{P}{Q}$ has full row rank, it does have a (two-sided) inverse by Remark 4.4. Thus, the equation above is equivalent to the identity

$$\begin{pmatrix} I & 0 \\ G_k & \tfrac{1}{t^k} I \end{pmatrix} = C_k \begin{pmatrix} P \\ Q \end{pmatrix}^{-1}.$$

In particular, we obtain an identity for the lower right block $\tfrac{1}{t^k} I$. Note that $\binom{P}{Q}^{-1}$ does not depend on $k$; that is, the denominator in $t$ of its entries is the same for every $k$. In addition, $C_k$ is a polynomial matrix; that is, its denominator is always $1$. Thus, the denominator in $t$ of the right hand side is bounded. However, the denominator of the left hand side is not. This is a contradiction. Hence, there cannot be infinitely many steps where $Q$ is in the row space of $P$. ∎

An analogous method works for the $t$-leading matrix. The only differences to Algorithm 4.1 are that we have to work with the Popov normal form of the $t$-leading matrix in Step 2 and that we shift the lower rows by $t$ instead of $t^{-1}$ in Step 3. Again, we state the algorithm without explicit computation of the transformation matrix.

Algorithm 4.5.

Input:

A regular matrix $A \in K[t][\sigma]^{n \times n}$.

Output:

An equivalent matrix $B \in K[t][\sigma]^{n \times n}$ such that the $t$-leading matrix of $B$ is regular.

Procedure:

  1. Compute the $t$-leading matrix $L \in K[\sigma]^{n \times n}$ of $A$. If $L$ is regular, then return $A$.

  2. Else, compute the Popov normal form of $L$, let $U$ be the corresponding transformation matrix, and let $\rho$ be the rank of $L$. Set $A \leftarrow U A$ and write the new leading matrix of $A$ as

    $\begin{pmatrix} P \\ 0 \end{pmatrix}$

    where $P$ is in Popov normal form.

  3. Let $D = \operatorname{diag}(1, \dots, 1, t, \dots, t)$ with $\rho$ ones and set $A \leftarrow D A$. (This shifts the lower rows of $A$ by $t$.) The new leading matrix is now

    $\begin{pmatrix} P \\ Q \end{pmatrix}$

    with the same $P$ as before and some matrix $Q \in K[\sigma]^{(n - \rho) \times n}$.

  4. Use Gröbner basis reduction by $P$, trying to eliminate all the rows in $Q$. Write the result as $Q = F P + R$ with a matrix $F$ and a matrix $R$. Let

    $V = \begin{pmatrix} I & 0 \\ -F & I \end{pmatrix}$

    and set $A \leftarrow V A$.

  5. If $R = 0$, then continue the inner loop with Step 3. Else, go back to the outer loop in Step 1.

Theorem 4.6.

Algorithm 4.5 is correct and terminates.

Proof.

The proof is mostly analogous to that of Theorem 4.2. The only noticeable change is that for the termination of the inner loop we have to deal with transformation matrices of the shape

$$\begin{pmatrix} I & 0 \\ G & t^{k} I \end{pmatrix}$$

(with some matrix $G$) instead of having fractions in $t$. However, we arrive at an analogous equation where the degree of the left hand side is bounded while the degree of the right hand side is not. (It is easy to check that the degree in $t$ of the system matrix during the execution of Algorithm 4.5 always remains equal to $s$.) Thus, we arrive at a similar contradiction as in the proof of Theorem 4.2. ∎

Example 4.7.

We continue Example 2.6. The $t$-leading matrix of the system was

which is not regular. Thus, we will apply Algorithm 4.5. The Popov normal form of the $t$-leading matrix is simply

(Actually, for the proper Popov normal form we would need to divide the first row by ; however, we want to avoid fractions in order to save some space.) We obtain the new system matrix

Next, we multiply the lower row by $t$, which leads to the new $t$-leading matrix

which does not have full rank yet. Gröbner basis reduction amounts to adding the upper row to the lower one. We obtain the new system matrix

It turns out that we have to shift the lower row by $t$ three times before the $t$-leading matrix changes again (since the lower row is only linear in $t$). The final result is the system matrix

with $t$-leading matrix

This yields the determinant with roots and . Thus, the degree of polynomial solutions is at most . From Example 2.3 we do know that the solution space is spanned by and ; so the bound is actually sharp in this case.

5. Conclusions

In this paper we have presented a method for determining degree bounds for the polynomial solutions and for determining the power of $t$ in the denominator bound for rational solutions of $q$-recurrence systems. Although the translation from the scalar case seems to be straightforward at first, we quickly discovered a problem when certain determinants vanish. This required us to develop a new method for transforming the system into an equivalent form where those determinants are non-zero.

There exist other ways of obtaining the same information, for instance uncoupling or transformation of the system into first order. However, those approaches are often computationally costly (since uncoupling is expensive and since conversion to first order usually introduces a lot of new variables). The method presented in this paper avoids those problems.

We do not have strict bounds on the complexity of the transformation algorithm yet; we hope to deliver those in the near future. However, in all the examples which we have computed, termination usually occurred within a very small number of steps. Moreover, the degree bounds found were usually tight. Although we have not yet carried out extensive comparisons with the existing methods mentioned above, we are therefore confident that this method will prove useful in practical applications.

References

  • (1) Sergei A. Abramov, EG-eliminations, Journal of Difference Equations and Applications 5 (1999), 393–433.
  • (2) Sergei A. Abramov, A direct algorithm to compute rational solutions of first order linear $q$-difference systems, Discrete Mathematics 246 (2002), 3–12.
  • (3) Sergei A. Abramov and Moulay A. Barkatou, Rational solutions of first order linear difference systems, ISSAC’98, 1998.
  • (4) Sergei A. Abramov and Moulay A. Barkatou, On solution spaces of products of linear differential or difference operators, ACM Communications in Computer Algebra 48 (2014), no. 4, 155–165.
  • (5) Sergei A. Abramov and Denis E. Khmelnov, Denominators of rational solutions of linear difference systems of an arbitrary order, Programming and Computer Software 38 (2012), no. 2, 84–91.
  • (6) Sergei A. Abramov, Ha Q. Le, and Ziming Li, Univariate Ore polynomial rings in computer algebra, Journal of Mathematical Sciences 131 (2005), no. 5, 5885–5903.
  • (7) Sergei A. Abramov, Peter Paule, and Marko Petkovšek, $q$-hypergeometric solutions of $q$-difference equations, Discrete Mathematics 180 (1998), 3–22.
  • (8) William W. Adams and Philippe Loustaunau, An introduction to Gröbner bases, Graduate Studies in Mathematics, vol. 3, American Mathematical Society, 1994.
  • (9) Moulay Barkatou, An algorithm for computing a companion block diagonal form for a system of linear differential equations, Appl. Algebra Engrg. Comm. Comput. 4 (1993), no. 3, 185–195.
  • (10) by same author, Rational solutions of matrix difference equations: The problem of equivalence and factorization, Proceedings of ISSAC (Vancouver, BC), 1999, pp. 277–282.
  • (11) Andrej Bauer and Marko Petkovšek, Multibasic and mixed hypergeometric Gosper-type algorithms, Journal of Symbolic Computation 28 (1999), no. 4–5, 711–736.
  • (12) Bernhard Beckermann, Howard Cheng, and George Labahn, Fraction-free row reduction of matrices of Ore polynomials, Journal of Symbolic Computation 41 (2006), 513–543.
  • (13) Bernhard Beckermann, George Labahn, and Gilles Villard, Normal forms for general polynomial matrices, Journal of Symbolic Computation (2006), no. 41, 708–737.
  • (14) Alin Bostan, Frédéric Chyzak, and Élie de Panafieu, Complexity estimates for two uncoupling algorithms, Proceedings of ISSAC'13 (Boston), June 2013.
  • (15) Manuel Bronstein, On solutions of linear ordinary difference equations in their coefficient field, J. Symbolic Comput. 29 (2000), no. 6, 841–877.
  • (16) Manuel Bronstein and Marko Petkovšek, An introduction to pseudo-linear algebra, Theoretical Computer Science (1996), no. 157, 3–33.
  • (17) William Y. Chen, Peter Paule, and Husam L. Saad, Converging to Gosper’s algorithm, Adv. in Appl. Math. 41 (2008), no. 3, 351–364. MR 2449596
  • (18) Frédéric Chyzak and Bruno Salvy, Non-commutative elimination in Ore algebras proves multivariate identities, Journal of Symbolic Computation 26 (1998), no. 2, 187–227.
  • (19) Paul Moritz Cohn, Free rings and their relations, 2 ed., Monographs, no. 19, London Mathematical Society, London, 1985.
  • (20) by same author, Introduction to ring theory, Springer Undergraduate Mathematics Series, Springer-Verlag, London, 2000.
  • (21) A. M. Danilevski, The numerical solution of the secular equation (Russian), Matem. Sbornik 44 (1937), no. 2, 169–171.
  • (22) Nathan Jacobson, Pseudo-linear transformations, The Annals of Mathematics 38 (1937), no. 2, 484–507.
  • (23) Michael Karr, Summation in finite terms, Journal of the Association for Computing Machinery 28 (1981), no. 2, 305–350.
  • (24) Christoph Koutschan, Advanced applications of the holonomic systems approach, Ph.D. thesis, Johannes Kepler University, September 2009.
  • (25) Johannes Middeke, A computational view on normal forms of matrices of Ore polynomials, Ph.D. thesis, Johannes Kepler University, Linz, July 2011, Research Institute for Symbolic Computation (RISC).
  • (26) Øystein Ore, Theory of non-commutative polynomials, The Annals of Mathematics 34 (1933), no. 3, 480–508.
  • (27) Peter Paule and A. Riese, A Mathematica q-analogue of Zeilberger's algorithm based on an algebraically motivated approach to $q$-hypergeometric telescoping, Special Functions, q-Series and Related Topics (M. Ismail and M. Rahman, eds.), vol. 14, AMS, 1997, pp. 179–210.
  • (28) Marko Petkovšek, Herbert S. Wilf, and Doron Zeilberger, A = B, A K Peters, April 1996, https://www.math.upenn.edu/~wilf/AeqB.html.
  • (29) Vasile Mihai Popov, Invariant description of linear, time-invariant controllable systems, SIAM Journal on Control 10 (1972), no. 2, 252–264.
  • (30) A. Riese, qMultiSum – A package for proving $q$-hypergeometric multiple summation identities, Journal of Symbolic Computation 35 (2003), 349–377.
  • (31) Carsten Schneider, Symbolic summation in difference fields, Ph.D. thesis, Research Institute for Symbolic Computation / Johannes Kepler University, May 2001, published as Technical report no. 01-17 in RISC Report Series.
  • (32) Carsten Schneider and Johannes Middeke, Denominator bounds for systems of recurrence equations using $\Pi\Sigma$-extensions, Waterloo Workshop on Computer Algebra, submitted, 2017, https://arxiv.org/abs/1705.00280.
  • (33) Gilles Villard, Computing Popov and Hermite forms of polynomial matrices, Proceedings of ISSAC’96 (Zurich, Switzerland), ACM, 1996, pp. 250–258.
  • (34) B. Zürcher, Rationale Normalformen von pseudo-linearen Abbildungen, Master’s thesis, ETH Zürich, 1994.