Fast Computation of Shifted Popov Forms of Polynomial Matrices via Systems of Modular Polynomial Equations

by   Vincent Neiger, et al.

We give a Las Vegas algorithm which computes the shifted Popov form of an m × m nonsingular polynomial matrix of degree d in expected Õ(m^ω d) field operations, where ω is the exponent of matrix multiplication and Õ(·) indicates that logarithmic factors are omitted. This is the first algorithm in Õ(m^ω d) for shifted row reduction with arbitrary shifts. Using partial linearization, we reduce the problem to the case d ∈ O(⌈σ/m⌉), where σ is the generic determinant bound, with σ/m bounded from above by both the average row degree and the average column degree of the matrix. The cost above becomes Õ(m^ω ⌈σ/m⌉), improving upon the cost of the fastest previously known algorithm for row reduction, which is deterministic. Our algorithm first builds a system of modular equations whose solution set is the row space of the input matrix, and then finds the basis in shifted Popov form of this set. We give a deterministic algorithm for this second step supporting arbitrary moduli in Õ(m^{ω−1} σ) field operations, where m is the number of unknowns and σ is the sum of the degrees of the moduli. This extends previous results with the same cost bound in the specific cases of order basis computation and M-Padé approximation, in which the moduli are products of known linear factors.




1 Introduction

In this paper, we consider two problems of linear algebra over the ring K[x] of univariate polynomials, for some field K: computing the shifted Popov form of a matrix, and solving systems of modular equations.

1.1 Shifted Popov form

A polynomial matrix F is row reduced [22, Section 6.3.2] if its rows have some type of minimal degree (we give precise definitions below). Besides, if F satisfies an additional normalization property, then it is said to be in Popov form [22, Section 6.7.2]. Given a matrix A, the efficient computation of a (row) reduced form of A and of the Popov form of A has received a lot of attention recently [14, 28, 16].

In many applications one rather considers the degrees of the rows of F shifted by some integers s = (s_1, …, s_m) which specify degree weights on the columns of F, for example in list-decoding algorithms [2, 7], robust Private Information Retrieval [12], and more generally in polynomial versions of the Coppersmith method [9, 10]. A well-known specific shifted Popov form is the Hermite form; there has been recent progress on its fast computation [17, 15, 35]. The case of an arbitrary shift has been studied in [6].

For a shift s = (s_1, …, s_m) ∈ Z^m, the s-degree of p = [p_1, …, p_m] ∈ K[x]^{1×m} is deg_s(p) = max_j (deg(p_j) + s_j); the s-row degree of F ∈ K[x]^{k×m} is rdeg_s(F) = (d_1, …, d_k) with d_i the s-degree of the i-th row of F. Then, the s-leading matrix of F is the matrix in K^{k×m} whose entry (i, j) is the coefficient of degree d_i − s_j of the entry (i, j) of F.
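As an illustration (our own sketch, not part of the paper: polynomials over Q stored as coefficient lists, index j holding the coefficient of x^j), the shifted quantities above can be computed directly from the definitions:

```python
def deg(p):
    """Degree of a coefficient list; -infinity for the zero polynomial."""
    nz = [j for j, c in enumerate(p) if c != 0]
    return nz[-1] if nz else float("-inf")

def coeff(p, j):
    """Coefficient of x^j, with 0 outside the stored range."""
    return p[j] if 0 <= j < len(p) else 0

def s_row_degree(F, s):
    """rdeg_s(F): entry i is max_j (deg(F[i][j]) + s[j])."""
    return [max(deg(p) + sj for p, sj in zip(row, s)) for row in F]

def s_leading_matrix(F, s):
    """Entry (i, j) is the coefficient of degree d_i - s_j of F[i][j]."""
    d = s_row_degree(F, s)
    return [[coeff(F[i][j], d[i] - s[j]) for j in range(len(s))]
            for i in range(len(F))]

# F = [[x^2 + 1, x], [x, 1]] with shift s = (0, 2):
F = [[[1, 0, 1], [0, 1]],
     [[0, 1], [1]]]
s = [0, 2]
print(s_row_degree(F, s))      # [3, 2]
print(s_leading_matrix(F, s))  # [[0, 1], [0, 1]]
```

Here the s-leading matrix is singular, so this F is not s-reduced for s = (0, 2).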

Now, we assume that k ≤ m and F has full rank. Then, F is said to be s-reduced [22, 6] if its s-leading matrix has full rank. For a full rank A ∈ K[x]^{k×m}, an s-reduced form of A is an s-reduced matrix F whose row space is the same as that of A; by row space we mean the K[x]-module generated by the rows of the matrix. Equivalently, F is left-unimodularly equivalent to A and the tuple rdeg_s(F) sorted in nondecreasing order is lexicographically minimal among the s-row degrees of all matrices left-unimodularly equivalent to A.

Specific s-reduced matrices are those in s-Popov form [22, 5, 6], as defined below. One interesting property is that the s-Popov form is canonical: there is a unique s-reduced form of A which is in s-Popov form, called the s-Popov form of A.

Definition 1.1 (Pivot)

Let p = [p_1, …, p_m] ∈ K[x]^{1×m} be nonzero and let s ∈ Z^m. The s-pivot index of p is the largest index j such that deg(p_j) + s_j = deg_s(p). Then we call p_j and deg(p_j) the s-pivot entry and the s-pivot degree of p.

We remark that adding a constant to the entries of s does not change the notion of s-pivot. For example, we will sometimes assume min(s) = 0 without loss of generality.
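A hypothetical helper illustrating Definition 1.1 (same coefficient-list convention as above; ties in the s-degree are broken by taking the largest index):

```python
def deg(p):
    """Degree of a coefficient list; -infinity for the zero polynomial."""
    nz = [j for j, c in enumerate(p) if c != 0]
    return nz[-1] if nz else float("-inf")

def s_pivot(row, s):
    """(s-pivot index, s-pivot entry, s-pivot degree) of a nonzero row:
    the LARGEST index j attaining max_j (deg(row[j]) + s[j])."""
    sdeg = max(deg(p) + sj for p, sj in zip(row, s))
    j = max(k for k, (p, sk) in enumerate(zip(row, s)) if deg(p) + sk == sdeg)
    return j, row[j], deg(row[j])

# p = [x^2, 1 + x]: with s = (0, 1) both entries reach s-degree 2,
# and the tie goes to the larger index.
p = [[0, 0, 1], [1, 1]]
print(s_pivot(p, [0, 1]))   # (1, [1, 1], 1)
# Adding a constant to every entry of s leaves the pivot unchanged:
assert s_pivot(p, [5, 6])[0] == s_pivot(p, [0, 1])[0]
```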

Definition 1.2 (Shifted Popov form)

Let k ≤ m, let F ∈ K[x]^{k×m} be of full rank, and let s ∈ Z^m. Then, F is said to be in s-Popov form if the s-pivot indices of its rows are strictly increasing, the corresponding s-pivot entries are monic, and in each column of F which contains a pivot the nonpivot entries have degree less than that of the pivot entry.

In this case, the s-pivot degree of F is δ = (δ_1, …, δ_k), with δ_i the s-pivot degree of the i-th row of F.

Here, although we will encounter Popov forms of rectangular matrices in intermediate nullspace computations, our main focus is on computing shifted Popov forms of square nonsingular matrices. For the general case, studied in [6], a fast solution would require further developments. A square matrix P in s-Popov form has its s-pivot entries on the diagonal, and its s-pivot degree is the tuple of degrees of its diagonal entries and coincides with its column degree.
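For square matrices, Definition 1.2 can be checked mechanically; the following sketch (ours, under the same coefficient-list convention) makes the three conditions explicit:

```python
def deg(p):
    """Degree of a coefficient list; -infinity for the zero polynomial."""
    nz = [j for j, c in enumerate(p) if c != 0]
    return nz[-1] if nz else float("-inf")

def s_pivot_index_and_degree(row, s):
    sdeg = max(deg(p) + sj for p, sj in zip(row, s))
    j = max(k for k, (p, sk) in enumerate(zip(row, s)) if deg(p) + sk == sdeg)
    return j, deg(row[j])

def is_s_popov_square(P, s):
    """For a square matrix, strictly increasing pivot indices means that
    the s-pivot entries must lie on the diagonal."""
    m = len(P)
    for i in range(m):
        j, d = s_pivot_index_and_degree(P[i], s)
        if j != i:                       # pivot on the diagonal
            return False
        if P[i][i][d] != 1:              # monic pivot entry
            return False
        for k in range(m):               # nonpivot entries of the pivot column
            if k != i and deg(P[k][i]) >= d:
                return False
    return True

# P = [[x^2, 0], [1, x]] is in 0-Popov form; replacing the 1 by x^2 breaks it.
x2, one, zero, x = [0, 0, 1], [1], [0], [0, 1]
print(is_s_popov_square([[x2, zero], [one, x]], [0, 0]))   # True
print(is_s_popov_square([[x2, zero], [x2, x]], [0, 0]))    # False
```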

Problem 1 (Shifted Popov normal form)
Input: the base field K, a nonsingular matrix A ∈ K[x]^{m×m}, a shift s ∈ Z^m.
Output: the s-Popov form of A.

Two well-known specific cases are the Popov form [27, 22] for the uniform shift s = (0, …, 0), and the Hermite form [19, 22] for a shift of the form h = (0, δ, 2δ, …, (m−1)δ) with δ sufficiently large [6, Lemma 2.6]. For a broader perspective on shifted reduced forms, we refer the reader to [6].

For such problems involving m × m matrices of degree d, one often wishes to obtain a cost bound similar to that of polynomial matrix multiplication in the same dimensions: Õ(m^ω d) operations in K. Here, ω is such that we can multiply m × m matrices over a commutative ring in O(m^ω) operations in that ring, the best known bound being ω < 2.38 [11, 25]. For example, one can compute 0-reduced [14, 16], 0-Popov [28], and Hermite [15, 35] forms of nonsingular matrices of degree d in Õ(m^ω d) field operations.

Nevertheless, d may be significantly larger than the average degree of the entries of the matrix, in which case the cost Õ(m^ω d) seems unsatisfactory. Recently, for the computation of order bases [30, 34], nullspace bases [36], interpolation bases [20, 21], and matrix inversion [37], fast algorithms do take into account some types of average degrees of the matrices rather than their degree. Here, in particular, we achieve a similar improvement for the computation of shifted Popov forms of a matrix.

Given A = [a_{ij}] ∈ K[x]^{m×m}, we denote by σ(A) the generic bound for deg(det(A)) [16, Section 6], that is,

σ(A) = max_{π ∈ S_m} Σ_{1 ≤ i ≤ m} d(a_{i,π(i)}),

where S_m is the set of permutations of {1, …, m}, and d is defined over K[x] as d(p) = deg(p) for nonzero p and d(0) = 0. We have deg(det(A)) ≤ σ(A), and σ(A) ≤ min(|rdeg(A)|, |cdeg(A)|) with |rdeg(A)| and |cdeg(A)| the sums of the row and column degrees of A. We note that σ(A) can be substantially smaller than |rdeg(A)| and |cdeg(A)|, for example if A has one row and one column of uniformly large degree and other entries of low degree.
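The formula above can be evaluated by brute force for tiny m (m! permutations); this sketch of ours also checks the example of one large row and one large column:

```python
# Generic determinant bound computed directly from its definition, with the
# convention d(0) = 0: the input is the matrix of entry degrees.
from itertools import permutations

def generic_det_bound(degs):
    """degs[i][j] = deg of entry (i, j), with 0 used for zero entries."""
    m = len(degs)
    return max(sum(degs[i][pi[i]] for i in range(m))
               for pi in permutations(range(m)))

# One row and one column of degree 5, all other entries constant:
degs = [[5, 5, 5],
        [5, 0, 0],
        [5, 0, 0]]
sigma = generic_det_bound(degs)
row_sum = sum(max(r) for r in degs)        # |rdeg(A)| = 15
col_sum = sum(max(c) for c in zip(*degs))  # |cdeg(A)| = 15
print(sigma)                               # 10: a permutation hits at most
                                           # two of the large entries
assert sigma <= min(row_sum, col_sum)
```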

Theorem 1.3

There is a Las Vegas randomized algorithm which solves Problem 1 in expected Õ(m^ω ⌈σ/m⌉) field operations, where σ = σ(A).

The ceiling function indicates that the cost is Õ(m^ω) when σ is small compared to m, in which case A has mostly constant entries. Here we are mainly interested in the case m ∈ O(σ): the cost bound may then be written Õ(m^{ω−1} σ) and is both in Õ(m^{ω−1} |rdeg(A)|) and Õ(m^{ω−1} |cdeg(A)|).

Ref. Problem
[18] Hermite form
[31] Hermite form
[33] Popov & Hermite forms
[1, 2] weak Popov form
[26] Popov & Hermite forms
[14] 0-reduction
[28] Popov form of a 0-reduced matrix
[17] Hermite form
[16] s-reduction
[35] Hermite form
[16][28] s-Popov form for any s
Here s-Popov form for any s
Table 1: Fast algorithms for shifted reduction problems.

Previous work on fast algorithms related to Problem 1 is summarized in Table 1. The fastest known algorithm for the 0-Popov form is deterministic and has cost Õ(m^ω d) with d = deg(A); it first computes a 0-reduced form of A [16], and then its 0-Popov form via normalization [28]. Obtaining the Hermite form in Õ(m^ω d) was first achieved by a probabilistic algorithm in [15], and then deterministically in [35].

For an arbitrary shift s, the algorithm in [6] is fraction-free and uses a number of operations that is, depending on s, at least quintic in m and quadratic in d.

When s is not uniform there is a folklore solution based on the fact that F is in s-Popov form if and only if F x^s is in 0-Popov form, with x^s = diag(x^{s_1}, …, x^{s_m}) and assuming min(s) = 0. Then, this solution computes the 0-Popov form Q of A x^s using [16, 28] and returns Q x^{−s}. This approach uses Õ(m^ω (d + amp(s))) operations where amp(s) = max(s) − min(s), which is not satisfactory when amp(s) is large. For example, for the Hermite form the shift h above has amplitude (m−1)δ, so the cost exceeds the target Õ(m^ω d) by a factor polynomial in m. This is essentially the worst case, since one can assume without loss of generality that the amplitude of s is suitably bounded [21, Appendix A].
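The identity underlying this folklore reduction, rdeg_s(F) = rdeg_0(F x^s), can be checked on a toy example (our own sketch, assuming min(s) = 0, with the same coefficient-list convention as before):

```python
def deg(p):
    """Degree of a coefficient list; -infinity for the zero polynomial."""
    nz = [j for j, c in enumerate(p) if c != 0]
    return nz[-1] if nz else float("-inf")

def rdeg(F, s):
    """Shifted row degree rdeg_s(F)."""
    return [max(deg(p) + sj for p, sj in zip(row, s)) for row in F]

def times_x_s(F, s):
    """Right-multiply F by diag(x^{s_1}, ..., x^{s_m}): prepend s_j zeros
    to every entry of column j."""
    return [[[0] * sj + p for p, sj in zip(row, s)] for row in F]

F = [[[1, 0, 1], [0, 1]],   # [[x^2 + 1, x],
     [[0, 1], [1]]]         #  [x,        1]]
s = [0, 2]
zero = [0] * len(s)
print(rdeg(F, s))           # [3, 2]
assert rdeg(F, s) == rdeg(times_x_s(F, s), zero)
```

Since the column scaling preserves pivots, monicity, and the column-degree condition, the s-Popov property of F matches the 0-Popov property of F x^s.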

Here we obtain, to the best of our knowledge, the best known cost bound for an arbitrary shift s. This removes the dependency in amp(s), which in some cases means a substantial speedup. Besides, this is also an improvement for both specific cases s = 0 and s = h when A has unbalanced degrees.

One of the main difficulties in row reduction algorithms is to control the size of the manipulated matrices, that is, the number of coefficients from K needed for their dense representation. A major issue when dealing with arbitrary shifts is that the size of an s-reduced form of A may be beyond our target cost. This is a further motivation for focusing on the computation of the s-Popov form of A: the sum of its column degrees is deg(det(A)) ≤ σ, and therefore its size is in O(m^2 + mσ), independently of s.

Consider for instance the 2 × 2 identity matrix A = I_2 and the shift s = (t, 0) for some t > 0. Then the matrix F with rows (1, q) and (0, 1) is an s-reduced form of A for any q ∈ K[x] with deg(q) < t; for q of degree t − 1 its size is about t, arbitrarily large independently of A, whose s-Popov form is I_2 itself.

Furthermore, the size of the unimodular transformation leading from A to its s-Popov form may be beyond the target cost, which is why fast algorithms for 0-reduction and Hermite form computation do not directly perform unimodular transformations on A to reduce the degrees of its entries. Instead, they proceed in two steps: first, they work on A to find some equations which describe its row space, and then they find a basis of solutions to these equations in 0-reduced form or Hermite form. We will follow a similar two-step strategy for an arbitrary shift.

It seems that some new ingredient is needed, since for both s = 0 and s = h the fastest algorithms use shift-specific properties at some point of the process: namely, the facts that a 0-reduced form of A has degree at most deg(A) and that the Hermite form of A is triangular.

As in [17], we first compute the Smith form S of A and partial information on a right unimodular transformation V; this is where the probabilistic aspect comes from. This gives a description of the row space of A as the set of row vectors p ∈ K[x]^{1×m} such that pV = qS for some q ∈ K[x]^{1×m}. Since S is diagonal, this can be seen as a system of modular equations: the second step is the fast computation of a basis of solutions in s-Popov form, which is our new ingredient.

1.2 Systems of modular equations

Hereafter, K[x]∖{0} denotes the set of nonzero polynomials. We fix some moduli M = (M_1, …, M_n) ∈ (K[x]∖{0})^n, and for p, q ∈ K[x] we write p ≡ q mod M_j if there exists r ∈ K[x] such that p − q = r M_j. Given F ∈ K[x]^{m×n} specifying the equations, we call solution for (M, F) any p ∈ K[x]^{1×m} such that p F_{∗,j} ≡ 0 mod M_j for 1 ≤ j ≤ n, where F_{∗,j} denotes the j-th column of F.
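A toy check of this definition (an illustration of ours, not the paper's algorithm: integer coefficient lists and monic moduli, so division is exact):

```python
def deg(p):
    """Degree of a coefficient list, with -1 for the zero polynomial."""
    nz = [j for j, c in enumerate(p) if c != 0]
    return nz[-1] if nz else -1

def polymul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def polymod(a, b):
    """Remainder of a modulo a monic polynomial b."""
    a, db = list(a), deg(b)
    while deg(a) >= db:
        da, c = deg(a), a[deg(a)]
        for j in range(db + 1):
            a[da - db + j] -= c * b[j]
    return a

def is_solution(p, F, moduli):
    """Does p * F[:, j] vanish modulo moduli[j] for every j?"""
    for j in range(len(moduli)):
        acc = [0]
        for i in range(len(F)):
            prod = polymul(p[i], F[i][j])
            acc = [x + y for x, y in
                   zip(acc + [0] * len(prod), prod + [0] * len(acc))]
        if deg(polymod(acc, moduli[j])) != -1:
            return False
    return True

# moduli (x^2 - 1, x^3) and F = [[1, 1], [1, 0]]:
moduli = [[-1, 0, 1], [0, 0, 0, 1]]
F = [[[1], [1]], [[1], [0]]]
print(is_solution([[0, 0, 0, 1], [0, -1]], F, moduli))  # True: p = (x^3, -x)
print(is_solution([[1], [0]], F, moduli))               # False: p = (1, 0)
```

Indeed x^3 − x = x(x^2 − 1) vanishes modulo x^2 − 1, and x^3 vanishes modulo x^3.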

The set of all such p is a K[x]-submodule of K[x]^{1×m} which contains M_1⋯M_n K[x]^{1×m}, and is thus free of rank m [24, p. 146]. Then, we represent any basis of this module as the rows of a matrix B ∈ K[x]^{m×m}, called a solution basis for (M, F). Here, for example for the application to Problem 1, we are interested in such bases that are s-reduced, in which case B is said to be an s-minimal solution basis for (M, F). The unique such basis which is in s-Popov form is called the s-Popov solution basis for (M, F).

Problem 2 (Minimal solution basis)
Input: the base field K, moduli M = (M_1, …, M_n) with each M_j ∈ K[x] nonzero, a matrix F ∈ K[x]^{m×n} such that deg(F_{∗,j}) < deg(M_j) for each j, a shift s ∈ Z^m.
Output: an s-minimal solution basis for (M, F).

Well-known specific cases of this problem are Hermite-Padé approximation, with a single equation modulo some power of x, and M-Padé approximation [3, 32], with moduli that are products of known linear factors. Moreover, an s-order basis for F ∈ K[x]^{m×n} and order (d_1, …, d_n) [34] is an s-minimal solution basis for (M, F) with M = (x^{d_1}, …, x^{d_n}).
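For instance, in the Hermite-Padé setting (our own toy instance) with the single modulus x^4 and F = [[f], [-1]] for f = 1 + x + x^2 + x^3, the vector p = (1 - x, 1) is a solution, since (1 - x) f - 1 = -x^4 vanishes modulo x^4:

```python
def polymul(a, b):
    """Product of two coefficient-list polynomials."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

f = [1, 1, 1, 1]          # f = 1 + x + x^2 + x^3
p1, p2 = [1, -1], [1]     # p = (1 - x, 1)
lhs = polymul(p1, f)      # (1 - x) f = 1 - x^4
lhs[0] += -p2[0]          # add p2 * (-1)
print(lhs)                # [0, 0, 0, 0, -1], i.e. -x^4
assert all(c == 0 for c in lhs[:4])   # vanishes modulo x^4
```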

An overview of fast algorithms for Problem 2 is given in Table 2. For M-Padé approximation, and thus in particular for order basis computation, there is an algorithm to compute the s-Popov solution basis using Õ(m^{ω−1} σ) operations, with σ = deg(M_1) + ⋯ + deg(M_n) [21]. Here, for n ∈ O(m), we extend this result to arbitrary moduli.

Theorem 1.4

Assuming n ∈ O(m), there is a deterministic algorithm which solves Problem 2 using Õ(m^{ω−1} σ) field operations, with σ = deg(M_1) + ⋯ + deg(M_n), and returns the s-Popov solution basis for (M, F).

We note that Problem 2 is a minimal interpolation basis problem [5, 20] in which the so-called multiplication matrix is block diagonal with companion blocks. Indeed, p is a solution for (M, F) if and only if p is an interpolant for (E, J) [20, Definition 1.1], where E ∈ K^{m×σ} is the concatenation of the coefficient vectors of the columns of F and J ∈ K^{σ×σ} is block diagonal with j-th block the companion matrix associated with M_j. In this context, the multiplication defined by J as in [5, 20] precisely corresponds to multiplication modulo the M_j.
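The companion-block correspondence can be sketched as follows (our own illustration, in a row-vector convention we choose here): right-multiplying the coefficient row vector of p by the companion matrix J of a monic modulus gives the coefficients of x·p modulo that modulus.

```python
def companion(m):
    """Companion matrix (row-vector convention) of monic
    m = m_0 + m_1 x + ... + x^d, given as a coefficient list."""
    d = len(m) - 1
    J = [[0] * d for _ in range(d)]
    for i in range(d - 1):
        J[i][i + 1] = 1     # x * x^i = x^{i+1} for i < d - 1
    for j in range(d):
        J[d - 1][j] -= m[j] # x * x^{d-1} = x^d = -(m_0 + ... + m_{d-1} x^{d-1})
    return J

def vecmat(v, J):
    """Row vector times matrix."""
    d = len(J)
    return [sum(v[i] * J[i][j] for i in range(d)) for j in range(d)]

M = [1, 2, 0, 1]                 # M = x^3 + 2x + 1
v = [3, 0, 1]                    # p = x^2 + 3
print(vecmat(v, companion(M)))   # [-1, 1, 0]: x * p = x - 1 mod M
```

Stacking one such block per modulus yields the block diagonal multiplication matrix J described above.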

In particular, Theorem 1.4 follows from [20, Theorem 1.4] when σ ∈ O(m). If some of the moduli have small degree, we use this result for the base cases of our recursive algorithm.

Ref. Moduli Particularities
[3, 32] split
[4] split, partial basis
[30] split, partial basis
any, returns a single small-degree solution
[20] split
[20] any, s-Popov
[21] split, s-Popov
Here any, s-Popov
Table 2: Fast algorithms for Problem 2 (σ = deg(M_1) + ⋯ + deg(M_n); "partial basis" means that only the small-degree rows of an s-minimal solution basis are returned; "split" means that the moduli are products of known linear factors).

In the case of M-Padé approximation, knowing the moduli as products of linear factors leads to rewriting the problem as a minimal interpolation basis computation with J in Jordan form [5, 20]. Since J is then upper triangular, one can rely on recurrence relations to solve the problem iteratively [3, 32, 4, 5]. The fast algorithms in [4, 14, 34, 20, 21], beyond the techniques used to achieve efficiency, are essentially divide-and-conquer versions of this iterative solution and are thus based on the same recurrence relations.

However, for arbitrary moduli the matrix J is not triangular and there is no such recurrence in general. Then, a natural idea is to relate solution bases to nullspace bases: Problem 2 asks to find p ∈ K[x]^{1×m} such that there is some quotient q ∈ K[x]^{1×n} with p F_{∗,j} = q_j M_j for 1 ≤ j ≤ n. More precisely, the solution basis can be obtained from a shifted minimal nullspace basis of the matrix formed by stacking F on top of −diag(M_1, …, M_n), for a shift which extends s.

Using recent ingredients from [17, 21] outlined in the next paragraphs, the main remaining difficulty is to deal with this nullspace problem when n = 1. Here, we give an algorithm to solve it using its specific properties: the matrix whose nullspace is sought is the column formed by stacking F ∈ K[x]^{m×1}, with deg(F) < deg(M), on top of the single modulus. First, when the amplitude of s is small we show that the basis can be efficiently obtained as a submatrix of a shifted Popov order basis for this column and a suitable order. Then, when the amplitude of s is large compared to σ and assuming s is sorted non-decreasingly, the basis has a lower block triangular shape. We show how this shape can be revealed, along with the s-pivot degree of the basis, using a divide-and-conquer approach which splits s into two shifts of amplitude about half that of s.

Then, for n > 1 we use a divide-and-conquer approach on n which is classical in such contexts: two solution bases B^{(1)} and B^{(2)} are computed recursively in shifted Popov form and are multiplied together to obtain an s-minimal solution basis B^{(2)} B^{(1)} for (M, F). However this product is usually not in s-Popov form and may have size beyond our target cost. Thus, as in [21], instead of computing the product, we use B^{(1)} and B^{(2)} to deduce the s-pivot degree of the s-Popov solution basis.

In both recursions above, we focus on finding the s-pivot degree of the s-Popov solution basis B. Using ideas and results from [17, 21], we show that this knowledge about the degrees in B allows us to complete the computation of B within the target cost.

2 Fast computation of the shifted Popov solution basis

Hereafter, we call s-minimal degree of (M, F) the s-pivot degree δ = (δ_1, …, δ_m) of the s-Popov solution basis for (M, F); δ coincides with the column degree of this basis. A central result for the cost analysis is that |δ| = δ_1 + ⋯ + δ_m is at most σ. This is classical for M-Padé approximation [32, Theorem 4.1] and holds for minimal interpolation bases in general (see for example [20, Lemma 7.17]).

2.1 Solution bases from nullspace bases and fast algorithm for known minimal degree

This subsection summarizes and slightly extends results from [17, Section 3]. We first show that the s-Popov solution basis for (M, F) is the m × m principal submatrix of a shifted Popov nullspace basis of the matrix formed by stacking F on top of the diagonal matrix of the moduli.

Lemma 2.1

Let , , with , , and be such that . Then, is the -Popov solution basis for if and only if is the -Popov nullspace basis of for some and . In this case, and has -pivot index .

Let . It is easily verified that is a solution basis for if and only if there is some such that is a nullspace basis of .

Now, having implies that any in the nullspace of satisfies , and since we get . In particular, for any matrix such that , we have . This implies that is in -Popov form if and only if is in -Popov form with -pivot index .

We now show that, when we have a priori knowledge about the pivot entries of a shifted Popov nullspace basis, it can be computed efficiently via a shifted Popov order basis.

Lemma 2.2

Let and let be of full rank. Let be the -Popov nullspace basis for , be its -pivot index, be its -pivot degree, and be a degree bound. Then, let with

Writing for the column degree of , let for and let be the -Popov order basis for and . Then, is the submatrix of formed by its rows at indices .

First, is in -Popov form with . Define whose -th row is if and if : we want to prove .

Let be a row of , and assume . This means for all , so that . Then, for all we have , and from we obtain , which is absurd by minimality of . As a result, componentwise.

Besides, and since has its -pivot entries on the diagonal, it is -reduced: by minimality of , we obtain . Then, it is easily verified that is in -Popov form, hence .

In particular, computing the shifted Popov nullspace basis, when its pivot index, its pivot degree, and a suitable degree bound are known, can be done within the target cost using the order basis algorithm in [21].

As for Problem 2, with Lemma 2.1 this gives an algorithm for computing the solution basis and the quotients when we know a priori the s-minimal degree δ of (M, F). However, a direct application may exceed our target cost Õ(m^{ω−1} σ): the order basis involved has size beyond that target when the solution basis has columns of large degree; yet here we are not interested in those high-degree parts. This can be solved using partial linearization to expand the columns of large degree into more columns of smaller degree, as in the next result, which holds in general for interpolation bases [21, Lemma 4.2].

Lemma 2.3

Let with entries having degrees . Let and . Furthermore, let denote the -minimal degree of .

Writing , let , and for write with and , and let . Define as


and the expansion-compression matrix as


Let and be the -Popov solution basis for . Then, has -pivot degree and the -Popov solution basis for is the submatrix of formed by its rows at indices .

This leads to Algorithm 1, which solves Problem 2 efficiently when the s-minimal degree δ is known a priori.

Algorithm 1 (KnownDegPolModSys)
Input: moduli M = (M_1, …, M_n), a matrix F ∈ K[x]^{m×n} with deg(F_{∗,j}) < deg(M_j) for each j, a shift s ∈ Z^m, the s-minimal degree δ of (M, F).
Output: the s-Popov solution basis for (M, F).
1. Build the expanded matrix as in (2) and the expansion-compression matrix as in (3).
2. Compute the corresponding shifted Popov order basis and extract its principal submatrix.
3. Return the submatrix formed by the rows at the indices given by Lemma 2.3.

Proposition 2.4

Algorithm KnownDegPolModSys is correct. Writing σ = deg(M_1) + ⋯ + deg(M_n) and assuming n ∈ O(m), it uses Õ(m^{ω−1} σ) operations in K.

By Lemmas 2.3 and 2.1, since and , the -Popov solution basis for is the principal submatrix of the -Popov nullspace basis for , and has -pivot index , -pivot degree , and . Then, by Lemma 2.2, is formed by the first rows of at Step 3, hence is the -Popov solution basis for . The correctness then follows from Lemma 2.3.

Since , has rows and can be computed in operations using fast polynomial division [13]. The cost bound of Step 3 follows from [21, Theorem 1.4] since .

2.2 The case of one equation

We now present our main new ingredients, focusing on the case n = 1. First, we show that when the shift s has small amplitude amp(s) = max(s) − min(s), one can solve Problem 2 via an order basis computation at small order.

Lemma 2.5

Let , , and with . Then, for any , the -Popov solution basis for is the principal submatrix of the -Popov order basis for and , with .

Let denote the -Popov order basis for and , where and . Consider the -Popov nullspace basis of : thanks to Lemma 2.1, it is enough to prove that .

First, we have by choice of , so that implies . Since , this gives . This also shows that the -pivot entries of are located in .

Then, since the sum of the -pivot degrees of is at most , the sum of the -pivot degrees of is at most ; with in -Popov form, this gives . We obtain , so that . Thus, the minimality of and gives the conclusion.

When the amplitude of s is small, this gives a fast solution to our problem. In what follows, we present a divide-and-conquer approach on the amplitude of s, whose base case relies on the result above.

We first give an overview, assuming s is non-decreasing. A key ingredient is that when the amplitude of s is large compared to σ = deg(M), the basis B has a lower block triangular shape, since it is in s-Popov form with sum of s-pivot degrees at most σ. Typically, if s_{i+1} − s_i > σ for some i, then the rows of B with s-pivot index at most i are zero in the columns with index more than i. Even though the block sizes are unknown in general, we show that they can be revealed efficiently, along with δ, by a divide-and-conquer algorithm, as follows.

First, we use a recursive call with the first i entries of s and of F, where i is such that the amplitude of (s_1, …, s_i) is about half that of s. This reveals the first i entries of δ and the first i rows of B. A central point is that the amplitude of the remaining shift is about half that of s as well, where the remaining shift is the tail of s starting at the entry s_{i+1}.

Then, knowing the degrees (δ_1, …, δ_i) allows us to set up an order basis computation that yields a residual, that is, a column F′ and a modulus M′ such that we can continue the computation using a second recursive call, which consists in computing the shifted Popov solution basis for (M′, F′) with respect to the remaining shift. From these two calls we obtain δ, and then we recover B using Algorithm 1.

Now we present the details. We fix a single modulus M ∈ K[x]∖{0}, a column F ∈ K[x]^{m×1} with deg(F) < deg(M), a shift s ∈ Z^m, the s-Popov solution basis B for (M, F), and its s-pivot degree δ. In what follows, π is any permutation of {1, …, m} such that (s_{π(1)}, …, s_{π(m)}) is non-decreasing.

Then, for a subset I of {1, …, m} we write s_I for the subtuple of s formed by its entries at the indices in I, and for a matrix B we write B_{I,J} for the submatrix of B formed by its rows at indices in I and its columns at indices in J. The main ideas in this subsection can be understood by focusing on the case of a non-decreasing s, taking