    # Computing Popov and Hermite forms of rectangular polynomial matrices

We consider the computation of two normal forms for matrices over the univariate polynomials: the Popov form and the Hermite form. For matrices which are square and nonsingular, deterministic algorithms with satisfactory cost bounds are known. Here, we present deterministic, fast algorithms for rectangular input matrices. The obtained cost bound for the Popov form matches the previous best known randomized algorithm, while the cost bound for the Hermite form improves on the previous best known ones by a factor which is at least the largest dimension of the input matrix.

07/18/2017


## 1. Introduction

In this paper we deal with (univariate) polynomial matrices, i.e. matrices in K[x]^{m×n}, where K is a field admitting exact computation, typically a finite field. Given such an input matrix whose row space is the real object of interest, one may ask for a “better” basis for the row space, that is, another matrix which has the same row space but also has additional useful properties. Two important normal forms for such bases are the Popov form (Popov, 1972) and the Hermite form (Hermite, 1851), whose definitions are recalled in this paper. The Popov form has rows which have the minimal possible degrees, while the Hermite form is in echelon form. A classical generalisation is the shifted Popov form of a matrix (Beckermann and Labahn, 2000), where one incorporates degree weights on the columns: with zero shift this is the Popov form, while under some extremal shift this becomes the Hermite form (Beckermann et al., 1999). We are interested in the efficient computation of these forms, which has been studied extensively along with the computation of the related but non-unique reduced forms (Forney, Jr., 1975; Kailath, 1980) and weak Popov forms (Mulders and Storjohann, 2003).

Hereafter, complexity estimates count basic arithmetic operations in K on an algebraic RAM, and asymptotic cost bounds omit factors that are logarithmic in the input parameters, denoted by ~O(·). We let ω be an exponent for matrix multiplication: two matrices in K^{m×m} can be multiplied in O(m^ω) operations. As shown in (Cantor and Kaltofen, 1991), the multiplication of two polynomials in K[x] of degree at most d can be done in ~O(d) operations, and more generally the multiplication of two polynomial matrices in K[x]^{m×m} of degree at most d uses ~O(m^ω d) operations.

Consider a square, nonsingular F ∈ K[x]^{m×m} of degree d. For the computation of a reduced form of F, the complexity ~O(m^ω d) was first achieved by a Las Vegas algorithm of Giorgi et al. (Giorgi et al., 2003). All the subsequent work mentioned in the next paragraph achieved the same cost bound, which was taken as a target: up to logarithmic factors, it is the same as the cost for multiplying two matrices with dimensions and degree similar to those of F.

The approach of (Giorgi et al., 2003) was de-randomized by Gupta et al. (Gupta et al., 2012), while Sarkar and Storjohann (Sarkar and Storjohann, 2011) showed how to compute the Popov form from a reduced form; combining these results gives a deterministic algorithm for the Popov form. Gupta and Storjohann (Gupta, 2011; Gupta and Storjohann, 2011) gave a Las Vegas algorithm for the Hermite form; a Las Vegas method for computing the shifted Popov form for any shift was described in (Neiger, 2016b). Then, a deterministic Hermite form algorithm was given by Labahn et al. (Labahn et al., 2017), which was one ingredient in a deterministic algorithm due to Neiger and Vu (Neiger and Vu, 2017) for the arbitrary shift case.

The Popov form algorithms usually exploit the fact that, by definition, this form has degree at most deg(F). While no similarly strong degree bound holds for shifted Popov forms in general (including the Hermite form), these forms still share a remarkable property in the square, nonsingular case: each entry outside the diagonal has degree less than the entry on the diagonal in the same column. These diagonal entries are called pivots (Kailath, 1980). Furthermore, their degrees sum to deg(det(F)) ≤ m·deg(F), so that these forms can be represented with O(m² deg(F)) field elements, just like F. This is especially helpful in the design of fast algorithms since this provides ways to control the degrees of the manipulated matrices.

These degree constraints exist but become weaker in the case of rectangular shifted Popov forms, say of matrices F ∈ K[x]^{m×n} of degree d with m < n. Such a normal form does have m columns containing pivots, whose average degree is at most the degree of the input matrix F. Yet it also contains n−m columns without pivots, which may all have large degree: up to md in the case of the Hermite form. As a result, a dense representation of the latter form may require on the order of m²(n−m)d field elements, a factor of about m larger than for F. Take for example some A ∈ K[x]^{m×m} of degree d which is unimodular, meaning that A^{−1} has entries in K[x]. Then, the Hermite form of F = [A | B] is [I | A^{−1}B], and the entries of A^{−1}B may have degree in Ω(md). However, the Popov form, having minimal degree, has size in O(mnd), just like F. Thus, unlike in the nonsingular case, one would set different target costs for the computation of Popov and Hermite forms, the latter being inherently larger (note that the exponent ω should affect the small dimension m).

For a rectangular matrix F ∈ K[x]^{m×n} of degree d, Mulders and Storjohann (Mulders and Storjohann, 2003) gave an iterative Popov form algorithm which costs O(rmnd²), where r is the rank of F. Beckermann et al. (Beckermann et al., 2006) obtain the shifted Popov form for any shift by computing a basis of the left kernel of the (m+n)×n matrix obtained by stacking F on top of −Iₙ. This approach also produces a matrix which transforms F into its normal form and whose degree can be significantly larger than that of F: efficient algorithms usually avoid computing this transformation. To find the sought kernel basis, the fastest known method is to compute a shifted Popov approximant basis of the matrix above, at an order which depends on the shift. (Beckermann et al., 2006) relies on a fraction-free algorithm for the latter computation, and hence lends itself well to cases where K is not finite. In our context, following this approach with the fastest known approximant basis algorithm (Jeannerod et al., 2016) yields the cost bounds ~O(n^{ω−1}md) for the Popov form and ~O(n^{ω−1}mδ) for the Hermite form, with δ as in Theorem 1.2 below. For the latter this is the fastest existing algorithm, to the best of our knowledge.

For F ∈ K[x]^{m×n} with full rank and m ≤ n, Sarkar (Sarkar, 2011) showed a Las Vegas algorithm for the Popov form achieving the cost ~O(m^{ω−1}nd). This uses random column operations to compress F into an m×m matrix, which is then transformed into a reduced form. Applying the same transformation on F yields a reduced form of F with high probability, and from there the Popov form can be obtained. Lowering this cost further seems difficult, as indicated in the square case by the reduction from polynomial matrix multiplication to Popov form computation described in (Sarkar and Storjohann, 2011, Thm. 22).

For a matrix F which is rank-deficient or has m > n, the computation of a basis of the row space of F was handled efficiently and deterministically by Zhou and Labahn (Zhou and Labahn, 2014); the output basis R has degree at most deg(F). This may be used as a preliminary step: the normal form of F is also that of R, and the latter has full rank r ≤ n.

We stress that, from a rectangular matrix F ∈ K[x]^{m×n}, it seems difficult in general to predict which columns of its shifted Popov form will be pivot-free. For this reason, there seems to be no obvious deterministic reduction from the rectangular case to the square case, even when n is only slightly larger than m. Sarkar’s algorithm is a Las Vegas reduction, compressing the matrix to an m×m nonsingular matrix; another Las Vegas reduction consists in completing the matrix to an n×n nonsingular matrix (see Section 3).

In the nonsingular case, exploiting information on the pivots has led to algorithmic improvements for normal form algorithms (Gupta and Storjohann, 2011; Sarkar and Storjohann, 2011; Jeannerod et al., 2016; Labahn et al., 2017). Following this, we put our effort into two computational tasks: finding the location of the pivots in the normal form (the pivot support), and using this knowledge to compute this form.

Our first contribution is to show how to efficiently find the pivot support of F. For this we resort to the so-called saturation of F, computed in a form which reveals the pivot support (Section 4.1), making use of an idea from (Zhou and Labahn, 2013). While this is only efficient for n ∈ O(m), using this method repeatedly on well-chosen submatrices of F with O(m) columns allows us to find the pivot support using ~O(m^{ω−1}nd) operations for any dimensions (Section 4.2).

In our second main contribution, we consider the shifted Popov form of , for any shift. We show that once its pivot support is known, then this form can be computed efficiently (Section 6 and Proposition 6.1). In particular, combining both contributions yields a fast and deterministic Popov form algorithm.

###### Theorem 1.1 ().

For a matrix F ∈ K[x]^{m×n} of degree at most d and with m ≤ n, there is a deterministic algorithm which computes the Popov form of F using ~O(m^{ω−1}nd) operations in K.

The second contribution may of course be useful in situations where the pivot support is known for some reason. Yet, there are even general cases where it can be computed efficiently, namely when the shift has very unbalanced entries. This is typically the case of the Hermite form, for which the pivot support coincides with the column rank profile of . The latter can be efficiently obtained via an algorithm due to Zhou (Zhou, 2012, Sec. 11), based on the kernel basis algorithm from (Zhou et al., 2012). This leads us to the next result.

###### Theorem 1.2 ().

Let F ∈ K[x]^{m×n} have full rank with m ≤ n. There is a deterministic algorithm which computes the Hermite form of F using ~O(m^{ω−2}nδ) operations in K, where δ is the minimum of the sum of column degrees of F and of the sum of row degrees of F.

Using this quantity δ (see Eq. 6 for a more precise definition), the mentioned cost for the kernel basis approach of (Beckermann et al., 2006) becomes ~O(n^{ω−1}mδ). Thus, when n ∈ O(m) the cost in the above theorem already gains a factor about n compared to this approach; when n is large compared to m, this factor becomes n^{ω−2}m^{3−ω}.

## 2. Preliminaries

### 2.1. Basic notation

If A is an m×n matrix and 1 ≤ j ≤ n, we denote by A_{∗,j} the jth column of A. If J ⊆ {1, …, n} is a set of column indices, A_{∗,J} is the submatrix of A formed by the columns at the indices in J. We use analogous row-wise notation. Similarly, for a tuple t = (t₁, …, tₙ), t_J is the subtuple of t formed by the entries at the indices in J.

When adding a constant to an integer tuple, for example t + 1 for some t ∈ Zⁿ, we really mean (t₁ + 1, …, tₙ + 1); when comparing a tuple to a constant, for example t ≤ c, we mean tᵢ ≤ c for all i. Two tuples of the same length will always be compared entrywise: s ≤ t stands for sᵢ ≤ tᵢ for all i. We use the notation max(t) = maxᵢ(tᵢ), min(t) = minᵢ(tᵢ), and |t| = t₁ + ⋯ + tₙ (note that the latter will mostly be used when t has nonnegative entries).

For a given nonnegative integer tuple t = (t₁, …, tₙ), we denote by x^t the diagonal matrix with entries x^{t₁}, …, x^{tₙ}.

### 2.2. Row spaces, kernels, and approximants

For a matrix F ∈ K[x]^{m×n}, its row space is the K[x]-module generated by its rows, that is, {λF, λ ∈ K[x]^{1×m}}. Then, a matrix B is a row basis of F if its rows form a basis of the row space of F, in which case the number of rows of B is the rank of F.

The left kernel of F is the K[x]-module {p ∈ K[x]^{1×m} | pF = 0}. A matrix is a left kernel basis of F if its rows form a basis of this kernel, in which case it has m − r rows, where r is the rank of F. Similarly, a right kernel basis of F is a matrix whose columns form a basis of the right kernel of F.

Given F ∈ K[x]^{m×n} and a tuple d = (d₁, …, dₙ) of positive integers, the set of approximants for F at order d is the K[x]-module of rank m defined as

 A_d(F) = { p ∈ K[x]^{1×m} | pF = 0 mod x^d }.

The identity pF = 0 mod x^d means that the jth entry of the vector pF ∈ K[x]^{1×n} is divisible by x^{d_j}, for all j.
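As a small illustrative sketch (the data below is ours, not from the paper), the approximant condition can be checked directly with sympy by testing column-wise divisibility of pF by x^{d_j}:

```python
# Hedged sketch: check that p is an approximant for F at order d,
# i.e. each entry j of the vector p*F is divisible by x^(d[j]).
import sympy as sp

x = sp.symbols('x')

def is_approximant(p, F, d):
    """p: list (row vector), F: list of rows, d: list of positive orders."""
    v = (sp.Matrix([p]) * sp.Matrix(F)).applyfunc(sp.expand)  # 1 x n vector
    return all(v[j] == 0 or sp.rem(v[j], x**d[j], x) == 0 for j in range(len(d)))

F = [[1], [x]]                            # hypothetical 2x1 example
print(is_approximant([x, -1], F, [2]))    # True: (x, -1)·(1, x)^T = 0
print(is_approximant([1, 0], F, [2]))     # False: p*F = 1 is not 0 mod x^2
```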

Two matrices F₁ and F₂ of the same dimensions have the same row space if and only if they are unimodularly equivalent, that is, there is a unimodular matrix U such that UF₁ = F₂. For F₁ ∈ K[x]^{m₁×n} and F₂ ∈ K[x]^{m₂×n} with m₁ ≤ m₂, F₁ and F₂ have the same row space exactly when F₁ padded with m₂ − m₁ zero rows is unimodularly equivalent to F₂.

### 2.3. Row degrees and reduced forms

For a matrix R ∈ K[x]^{m×n}, we denote by rdeg(R) the tuple of the degrees of its rows, that is, rdeg(R) = (deg(R_{1,∗}), …, deg(R_{m,∗})).

If R has no zero row, the (row-wise) leading matrix of R, denoted by lm(R), is the matrix in K^{m×n} whose entry (i, j) is equal to the coefficient of degree deg(R_{i,∗}) of the entry (i, j) of R.

For a matrix R ∈ K[x]^{m×n} with no zero row and m ≤ n, we say that R is (row) reduced if lm(R) has full rank. Thus, here a reduced matrix must have full rank (and no zero row), as in (Forney, Jr., 1975). For more details about reduced matrices, we refer the reader to (Wolovich, 1974; Forney, Jr., 1975; Kailath, 1980; Beckermann et al., 2006). In particular, we have the following characterizing properties:

• Predictable degree property (Forney, Jr., 1975) (Kailath, 1980, Thm. 6.3-13): we have

 deg(λR) = max{ deg(λᵢ) + rdeg(R_{i,∗}), 1 ≤ i ≤ m }

for any vector λ = (λ₁, …, λₘ) ∈ K[x]^{1×m}.

• Minimality of the sum of row degrees (Forney, Jr., 1975): for any nonsingular matrix U ∈ K[x]^{m×m}, we have |rdeg(UR)| ≥ |rdeg(R)|.

• Minimality of the tuple of row degrees (Zhou, 2012, Sec. 2.7): for any nonsingular matrix U ∈ K[x]^{m×m}, we have t ≤ u, where the tuples t and u are the row degrees of R and of UR sorted in nondecreasing order, respectively.

From the last item, it follows that two unimodularly equivalent reduced matrices have the same row degree up to permutation.

For a matrix F, we call reduced form of F any reduced matrix which is a row basis of F. The third item above shows that rdeg(R) is, up to permutation, the same for all reduced forms R of F.
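The definitions above can be made concrete with a short sympy sketch (example matrix ours): the row degrees, the row-wise leading matrix, and the reducedness test "lm(R) has full rank".

```python
# Hedged illustration of rdeg, lm, and reducedness; example data is ours.
import sympy as sp

x = sp.symbols('x')

def rdeg(R):
    """Tuple of row degrees of a polynomial matrix given as nested lists."""
    return [max(sp.degree(e, x) for e in row) for row in R]

def leading_matrix(R):
    """Row-wise leading matrix: coefficient of degree rdeg(R)_i in row i."""
    d = rdeg(R)
    return sp.Matrix([[sp.Poly(e, x).coeff_monomial(x**d[i])
                       if sp.degree(e, x) == d[i] else 0
                       for e in row] for i, row in enumerate(R)])

def is_reduced(R):
    return leading_matrix(R).rank() == len(R)

R = [[x**2, x + 1, 2], [2*x + 2, 2*x, 2]]
print(leading_matrix(R))  # Matrix([[1, 0, 0], [2, 2, 0]])
print(is_reduced(R))      # True: the leading matrix has full rank 2
```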

### 2.4. Pivots and Popov forms

For a nonzero vector p ∈ K[x]^{1×n}, the pivot index of p is the largest index j such that deg(p_j) = deg(p) (Kailath, 1980, Sec. 6.7.2). In this case we call p_j the pivot entry of p. For the zero vector, we define its degree to be −∞ and its pivot index to be 0. Further, the pivot index of a matrix P ∈ K[x]^{m×n} is the tuple (j₁, …, jₘ) such that jᵢ is the pivot index of P_{i,∗}. Note that we will only use the word “pivot” in this row-wise sense.

A matrix W ∈ K[x]^{m×n} is in weak Popov form if it has no zero row and the entries of the pivot index of W are all distinct (Mulders and Storjohann, 2003); a weak Popov form is further called ordered if its pivot index is in (strictly) increasing order. A weak Popov matrix is also reduced.
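The pivot index and the weak Popov property can be computed directly from these definitions; the following is a small sketch with example data of ours:

```python
# Hedged sketch: pivot index (largest column index attaining the row
# degree) and the weak Popov test (all pivot indices distinct).
import sympy as sp

x = sp.symbols('x')

def pivot_index(row):
    """1-based pivot index of a nonzero polynomial row vector."""
    degs = [sp.degree(e, x) for e in row]
    d = max(degs)
    return max(j + 1 for j, dj in enumerate(degs) if dj == d)

def is_weak_popov(R):
    idx = [pivot_index(r) for r in R]
    return len(set(idx)) == len(idx)

R = [[x**2, x + 1, 2], [2*x + 2, 2*x, 2]]
print([pivot_index(r) for r in R])  # [1, 2]: distinct and increasing
print(is_weak_popov(R))             # True
```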

The (ordered) weak Popov form is not canonical: a given row space may have many (ordered) weak Popov forms. The Popov form adds a normalization property, yielding a canonical form; we use the definition from (Beckermann et al., 1999, Def. 3.3):

A matrix P ∈ K[x]^{m×n} is in Popov form if it is in ordered weak Popov form, the corresponding pivot entries are monic, and in each column of P which contains a (row-wise) pivot the other entries have degree less than this pivot entry.

For a matrix F ∈ K[x]^{m×n} of rank r, there exists a unique P ∈ K[x]^{r×n} which is in Popov form and has the same row space as F (Beckermann et al., 2006, Thm. 2.7). We call P the Popov form of F. For a more detailed treatment of Popov forms, see (Kailath, 1980; Beckermann et al., 1999, 2006).

For example, consider the unimodularly equivalent matrices

 [ x²     x+1   2 ]       [ x²−x−1   1   1 ]
 [ 2x+2   2x    2 ]  and  [ x+1      x   1 ],

defined over Q; the first one is in weak Popov form and the second one is its Popov form. Note that any deterministic rule for ordering the rows would lead to a canonical form; we use that of (Beckermann et al., 1999, 2006), while that of (Kailath, 1980; Mulders and Storjohann, 2003) sorts the rows by degrees and would consider the second matrix not to be normalized.

Going back to the general case, we denote by piv(F) the pivot index of the Popov form of F, called the pivot support of F. In most cases, piv(F) differs from the pivot index of F. We have the following important properties:

• The pivot index of F is equal to the pivot support piv(F) if and only if F is in ordered weak Popov form.

• For any nonzero vector p in the row space of F, the pivot index of p appears in the pivot support piv(F); in particular each nonzero entry of the pivot index of F is in piv(F).

For the first item, we refer to (Beckermann et al., 2006, Sec. 2) (in this reference, the set formed by the entries of the pivot support is called “pivot set” and ordered weak Popov forms are called quasi-Popov forms). The second item is a simple extension of the predictable degree property (see for example (Neiger, 2016a, Lem. 1.17) for a proof).

### 2.5. Computational tools

We will rely on the following result from (Zhou et al., 2012, Cor. 4.6 and Thm. 3.4) about the computation of kernel bases in reduced form. Note that a matrix is column reduced if its transpose is reduced.

###### Theorem 2.1 ((Zhou et al., 2012)).

There is an algorithm MinimalKernelBasis which, given a matrix F ∈ K[x]^{m×n} with m ≤ n, returns a right kernel basis N of F in column reduced form using

 ~O(n^ω ⌈m·deg(F)/n⌉) ⊆ ~O(n^ω deg(F))

operations in K. Furthermore, |cdeg(N)| ≤ |cdeg(F)|.

For the computation of normal forms of square, nonsingular matrices, we use the next result (s-Popov forms will be introduced in Section 5; Popov forms as above correspond to s = 0).

###### Theorem 2.2 ((Neiger and Vu, 2017)).

There is an algorithm NonsingularPopov which, given a nonsingular matrix F ∈ K[x]^{m×m} and a shift s ∈ Zᵐ, returns the s-Popov form of F using

 ~O(m^ω ⌈|rdeg(F)|/m⌉) ⊆ ~O(m^ω deg(F))

operations in K.

This is (Neiger and Vu, 2017, Thm. 1.3) with a minor modification: we have replaced the so-called generic determinant bound by a larger quantity (the sum of row degrees), since this is sufficient for our needs here.

## 3. Popov form via completion into a square and nonsingular matrix

We now present a new Las Vegas algorithm for computing the (non-shifted) Popov form of a rectangular matrix F ∈ K[x]^{m×n} with full rank and m ≤ n, relying on algorithms for the case of square, nonsingular matrices. In the case m ∈ Θ(n), this results in a cost bounded by ~O(n^ω deg(F)), which has already been obtained by the Las Vegas algorithm of Sarkar (Sarkar, 2011); however, the advantage of our approach is that it becomes asymptotically faster if the average row degree of F is significantly smaller than deg(F).

The idea is to find a matrix C ∈ K[x]^{(n−m)×n} such that the Popov form of the stacked matrix [F; C] contains the Popov form P of F as an identifiable subset of its rows. We will show that if C is drawn randomly with sufficiently high degree, then this is true with high probability.

###### Definition 3.1 ().

Let F ∈ K[x]^{m×n} have full rank with m ≤ n and let P be the Popov form of F. A completion of F is any matrix C ∈ K[x]^{(n−m)×n} with rdeg(C) = (deg(F)+1, …, deg(F)+1) such that the stacked matrix [P; C] is reduced.

The next lemma shows that: 1) if C is a completion, then P will appear as a submatrix of the Popov form of [F; C]; and 2) we can easily check from that Popov form whether C is a completion or not. The latter is essential for a Las Vegas algorithm.

###### Lemma 3.2 ().

Let F ∈ K[x]^{m×n} have full rank with m ≤ n and with Popov form P, and let C ∈ K[x]^{(n−m)×n} be such that [F; C] has full rank and rdeg(C) = (deg(F)+1, …, deg(F)+1). Then, C is a completion of F if and only if rdeg(P̂) contains a permutation of rdeg(P), where P̂ is the Popov form of [F; C]. In this case, P is the submatrix of P̂ formed by its rows of degree less than deg(F)+1.

###### Proof.

First, we assume that C is a completion of F. Then [P; C] is reduced, and therefore it has the same row degree as its Popov form P̂ up to permutation. Hence, in particular, rdeg(P̂) contains a permutation of rdeg(P).

Now, we assume that rdeg(P̂) contains a permutation of rdeg(P), and our goal is to show that C is a completion and that P̂ contains P as a submatrix. Let P̂₁ be the submatrix of P̂ formed by its rows of degree less than deg(F)+1, and let P̂₂ be the submatrix of the remaining rows. By assumption, P̂₁ has at least m rows and P̂₂ has at most n−m rows. Since P̂ is also the Popov form of [P; C], there is a unimodular transformation

 (1)   [ U11  U12 ] [ P̂1 ]   [ P ]
       [ U21  U22 ] [ P̂2 ] = [ C ].

By the predictable degree property we obtain U₁₂ = 0; thus, since P = U₁₁P̂₁ has full rank m, P̂₁ has exactly m rows, and U₁₁ is unimodular. Therefore P = P̂₁ since both matrices are in Popov form. As a result, rdeg(P̂) is a permutation of rdeg([P; C]), so that [P; C] is reduced and C is a completion of F. ∎

###### Lemma 3.3 ().

Let F ∈ K[x]^{m×n} have full rank with m ≤ n. Let D ⊆ K be finite of cardinality q and let L ∈ K^{(n−m)×n} have entries chosen independently and uniformly at random from D. Then C = x^{deg(F)+1}L is a completion of F with probability at least ∏_{i=1}^{n−m}(1 − q^{−i}) if K is finite and D = K, and at least 1 − (n−m)/q otherwise.

###### Proof.

Let P be the Popov form of F. We first note that for C to be a completion of F, it is enough that the matrix

 lm([P; C]) = [lm(P); lm(C)] = [lm(P); L] ∈ K^{n×n}

be invertible. Indeed, this implies first that [P; C] is reduced; and second, that L has no zero row, hence each row of C has degree exactly deg(F)+1.

In the case of a finite field with q elements and D = K, the probability that the above matrix is invertible is ∏_{i=1}^{n−m}(1 − q^{−i}). If K is infinite or of cardinality larger than q, the Schwartz-Zippel lemma implies that the probability that the above matrix is singular is at most (n−m)/q. ∎

Thus, if K is infinite, it is sufficient to take D of cardinality at least 2(n−m) to ensure that C is a completion with probability at least 1/2. On the other hand, if K is finite of cardinality q and D = K, we have the following bounds on the probability:

 ∏_{i=1}^{n−m} (1 − q^{−i}) > 0.28 if q = 2,   0.55 if q = 3,   0.75 if q ≥ 5.

In Algorithm 1, we first test the nonsingularity of [F; C] before computing its Popov form, since the fastest known Popov form algorithms in the square case do not support singular matrices. Over a field with sufficiently many elements, a simple Monte Carlo test for this is to evaluate the polynomial matrix at a random point α ∈ K and test the resulting scalar matrix for nonsingularity; this falsely reports singularity only if the determinant is divisible by x − α. Alternatively, a deterministic check is as follows. First, apply the partial linearization of (Gupta et al., 2012, Sec. 6), yielding a matrix F̄, of dimensions O(n)×O(n) and degree at most the average row degree of [F; C], such that F̄ is nonsingular if and only if [F; C] is nonsingular. This does not involve arithmetic operations. Since F̄ is nonsingular if and only if its kernel is trivial, it then remains to compute a kernel basis of F̄ via the algorithm in (Zhou and Labahn, 2012). Instead of considering the kernel, one could also test the nonsingularity of F̄ using algorithms from (Gupta et al., 2012), as explained in (Sarkar, 2011, p. 24).
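The simple Monte Carlo nonsingularity test described above can be sketched as follows (over Q with random integer evaluation points; the data and the trial count are illustrative assumptions of ours):

```python
# Hedged sketch of the Monte Carlo test: evaluate the polynomial matrix
# at random points and test the resulting scalar matrices.
import random
import sympy as sp

x = sp.symbols('x')

def probably_nonsingular(M, trials=3, bound=10**6):
    """Returns True as soon as one evaluation is nonsingular (always
    correct); returns False only if det(M) vanished at every sampled
    point, which is wrong only with small probability when det(M) != 0."""
    for _ in range(trials):
        a = random.randint(0, bound)
        if sp.Matrix(M).subs(x, a).det() != 0:
            return True
    return False

M = [[x, 1], [x**2, x + 1]]      # det = x(x+1) - x^2 = x, nonzero
print(probably_nonsingular(M))   # True (with overwhelming probability)
```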

###### Proposition 3.4 ().

Algorithm 1 is correct, and the probability that a failure is reported at its checking steps is as indicated in Lemma 3.3. If NonsingularPopov is the algorithm of (Neiger and Vu, 2017), Algorithm 1 uses ~O(n^{ω−1}(|rdeg(F)| + (n−m)deg(F))) operations in K.

Indeed, from Theorem 2.2, the call to NonsingularPopov uses ~O(n^ω ⌈σ/n⌉) operations, where σ = |rdeg([F; C])| ≤ |rdeg(F)| + (n−m)(deg(F)+1).

While other Popov form algorithms could be used, that of (Neiger and Vu, 2017) allows us to take into account the average row degree of F. Indeed, if |rdeg(F)| ∈ o(m·deg(F)) and m ∈ Θ(n), the cost bound above is asymptotically better than ~O(n^ω deg(F)).

###### Remark 1 ():

As we mentioned in Section 2.4, the pivot index of F is a subset of the pivot support piv(F). Therefore, one can let L be zero at all columns where F has a pivot, or at indices one otherwise knows appear in piv(F). If F has uneven degrees (e.g. it has the form A·x^s for some shift s, see Section 5.1), then this can be particularly worthwhile. In the case where for some reason we know piv(F), then L can simply be taken such that its submatrix formed by the columns outside piv(F) is the identity matrix. In that case, Algorithm 1 becomes deterministic.

## 4. Computing the pivot support

We now consider a matrix F ∈ K[x]^{m×n} with m ≤ n, possibly rank-deficient, and we focus on the computation of its pivot support piv(F). In Section 4.1, we give a deterministic algorithm which is efficient when n ∈ O(m). In Section 4.2 we explain how this can be used iteratively to efficiently find the pivot support when n is large compared to m.

### 4.1. Deterministic pivot support computation via column basis factorization

Our approach stems from the fact (see Lemma 4.2) that piv(F) is also the pivot support of any basis of the saturation of the row space of F (Bourbaki, 1972, Sec. II.§2.4), defined as

 {λF, λ ∈ K(x)^{1×m}} ∩ K[x]^{1×n}.

This notion of saturation was already used in (Zhou and Labahn, 2013) in order to compute column bases of F by relying on the following factorization:

###### Lemma 4.1 ((Zhou and Labahn, 2013, Sec. 3)).

Let F ∈ K[x]^{m×n} have rank r, let N be a right kernel basis of F, and let S be a left kernel basis of N. Then, we have F = CS for some column basis C of F.

One can easily verify that the left kernel of N is precisely the saturation of the row space of F, and therefore the matrix S is a (row) basis of this saturation. Here, we are particularly interested in the following consequence of this result:

###### Lemma 4.2 ().

The matrices F and S in Lemma 4.1 have the same pivot support, that is, piv(F) = piv(S).

###### Proof.

Since F = CS, the row space of F is contained in that of S. Hence, by the properties at the end of Section 2.4, piv(F) ⊆ piv(S) as sets. But since F and S both have rank r, both pivot supports have exactly r distinct elements, and they must be equal. ∎

We will read off piv(F) from S by ensuring that this matrix is in ordered weak Popov form. First, we obtain a column reduced right kernel basis N of F using MinimalKernelBasis (see Theorem 2.1). However, the degree profile of N prevents us from using the same algorithm to compute a left kernel basis of N efficiently, since the average row degree of N may be much larger than its average column degree. To circumvent this issue, we combine the observations that deg(S) is bounded and that N has small average column degree to conclude that S can be efficiently obtained via an approximant basis (see Section 2).

###### Lemma 4.3 ().

Let F ∈ K[x]^{m×n} have rank r and let N be a right kernel basis of F. Then, any left kernel basis of N which is in reduced form must have degree at most deg(F). As a consequence, if B is a reduced basis of A_d(N), where d = cdeg(N) + deg(F) + 1 (entrywise), then the submatrix of B formed by its rows of degree at most deg(F) is a reduced left kernel basis of N.

###### Proof.

Let S be a left kernel basis of N in reduced form. By Lemma 4.1, F = CS for some matrix C. Then, the predictable degree property implies that deg(S) ≤ deg(F).

For the second claim (which is a particular case of (Zhou and Labahn, 2013, Lem. 4.2)), note that the submatrix is reduced, being formed by rows of the reduced matrix B. Besides, for any row p of this submatrix we have deg(pN) < d entrywise by construction, hence pN = 0 mod x^d implies pN = 0. It remains to show that this submatrix generates the left kernel of N. Indeed, there exists a basis of this kernel which has degree at most deg(F), and on the other hand any vector of degree at most deg(F) in this kernel is in particular in A_d(N) and therefore is a combination of the rows of B; using the predictable degree property, we obtain that this combination only involves rows from the submatrix. ∎

If we compute B in ordered weak Popov form, then the submatrix in the above lemma is in ordered weak Popov form as well, and therefore piv(F) can be directly read off from it. The computation of an approximant basis in ordered weak Popov form can be done via the algorithm of (Jeannerod et al., 2016), which returns one in Popov form.

###### Proposition 4.4 ().

Algorithm 2 is correct and uses ~O(n^ω deg(F)) operations in K.

###### Proof.

Note that we compute the rank of F as r = n minus the number of columns of N, by the indirect assignment in Algorithm 2. Besides, the considered submatrix is in ordered weak Popov form since it is a submatrix formed by rows of B, itself in ordered weak Popov form. This implies that the last step of Algorithm 2 indeed returns the pivot support of F. Then, the correctness directly follows from Lemmas 4.3 and 4.2.

By Theorem 2.1, the kernel basis computation costs ~O(n^ω ⌈m·deg(F)/n⌉), where N has n − r columns and |cdeg(N)| ≤ |cdeg(F)| ≤ n·deg(F). Thus, the sum of the approximation order d defined in Algorithm 2 is in O(n·deg(F)). Then, this step uses ~O(n^ω deg(F)) operations (Jeannerod et al., 2016, Thm. 1.4). ∎

Note that in this algorithm we do not require that F has full rank. The only reason why we assume m ≤ n is because the cost bound for the computation of a kernel basis at the first step of Algorithm 2 is not clear to us in the case m > n (the same assumption is made in (Zhou et al., 2012)).

Here, it seems more difficult to take average degrees into account than in Algorithm 1. While the average degree of the columns of F with largest degree could be taken into account by the kernel basis algorithm of (Zhou et al., 2012), it seems that the computation of the approximant basis remains in ~O(n^ω deg(F)) nevertheless.

### 4.2. The case of wide matrices

In this section we will deal with pivots of submatrices F_{∗,J}, where J ⊆ {1, …, n}. To relate column indices of F_{∗,J} to column indices of F, we introduce for any such J = {j₁ < ⋯ < j_k} the operator φ_J satisfying φ_J(i) = jᵢ. We abuse notation by applying φ_J element-wise to tuples, such as in φ_J(piv(F_{∗,J})).

The following simple lemma is the crux of the algorithm:

###### Lemma 4.5 ().

Let F ∈ K[x]^{m×n}, and consider any set of column indices J ⊆ {1, …, n}. Then piv(F) ∩ J ⊆ φ_J(piv(F_{∗,J})), with equality whenever piv(F) ⊆ J.

###### Proof.

If a vector λF in the row space of F has its pivot index in J, then the pivot index of the subvector λF_{∗,J} corresponds to it under φ_J. This implies piv(F) ∩ J ⊆ φ_J(piv(F_{∗,J})), since the pivot index of any vector in the row space of F (resp. F_{∗,J}) appears in piv(F) (resp. piv(F_{∗,J})), see Section 2.4. It also immediately implies the equality whenever piv(F) ⊆ J. ∎

These properties lead to a fast method for computing the pivot support when n is large compared to m, relying on a black box PivotSupport which efficiently finds the pivot support when n ∈ O(m): one first considers the leftmost O(m) columns of F and uses PivotSupport to compute their pivot support. Then, Lemma 4.5 suggests to discard all of these columns which are not in this pivot support, thus obtaining a matrix with fewer columns. Then, we append the next columns of F and repeat the same process, until all columns have been considered.
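The column-discarding loop just described can be sketched schematically as follows; this is our own sketch, not the paper's pseudocode, and the `pivot_support` black box in the usage example is a mock standing in for Algorithm 2:

```python
# Hedged sketch of the wide-matrix strategy: process batches of about
# 2m columns, keeping only the columns found in each batch's pivot support.
def pivot_support_wide(n, m, pivot_support):
    """Pivot support (global column indices) of an m x n matrix, given a
    black box pivot_support(cols) returning the pivot support of the
    submatrix formed by `cols`, as indices into `cols`."""
    kept = []
    j = 0
    while j < n:
        take = min(2 * m - len(kept), n - j)     # batch stays within ~2m columns
        batch = sorted(kept + list(range(j, j + take)))
        local = pivot_support(batch)
        kept = [batch[i] for i in local]         # discard non-pivot columns
        j += take
    return kept

# Mock black box for illustration only: pretend the pivots of any column
# subset are its first (at most m) columns lying in a fixed support S.
S, m, n = {1, 4, 7}, 2, 9
mock = lambda cols: [i for i, c in enumerate(cols) if c in S][:m]
print(pivot_support_wide(n, m, mock))  # [1, 4]
```

Each iteration keeps at most m columns, so at least m fresh columns are consumed per call, giving O(n/m) calls to the black box.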

###### Proposition 4.6 ().

Algorithm 3 is correct. It uses at most ⌈n/m⌉ calls to PivotSupport, each with a submatrix of F having at most 2m columns as input. If m ≤ n and PivotSupport is Algorithm 2, then Algorithm 3 uses ~O(m^{ω−1} n deg(F)) operations in K.

###### Proof.

The correctness follows from Lemma 4.5, and the operation count is obvious. If using Algorithm 2 for PivotSupport, the correctness and cost bound follow from Proposition 4.4. ∎

## 5. Preliminaries on shifted forms

### 5.1. Shifted forms

The notions of reduced and Popov forms presented in Sections 2.4 and 2.3 can be extended by introducing additive integer weights in the degree measure for vectors, following (Van Barel and Bultheel, 1992, Sec. 3): a shift is a tuple s = (s₁, …, sₙ) ∈ Zⁿ, and the shifted degree of a row vector p = (p₁, …, pₙ) ∈ K[x]^{1×n} is

 rdeg_s(p) = max( deg(p₁) + s₁, …, deg(pₙ) + sₙ ) = rdeg(p·x^s),

where x^s = diag(x^{s₁}, …, x^{sₙ}). Note that here p·x^s may be over the ring of Laurent polynomials if s has negative entries; below, actual computations will always remain over K[x]. Note that with s = 0 we recover the notion of degree used in the previous sections.
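The shifted degree is straightforward to compute from this definition; a minimal sketch (example vector and shifts ours):

```python
# Hedged sketch: shifted row degree rdeg_s(p) = max(deg(p_j) + s_j).
import sympy as sp

x = sp.symbols('x')

def rdeg_s(p, s):
    """Shifted degree of a nonzero row vector p under shift s."""
    return max(sp.degree(pj, x) + sj for pj, sj in zip(p, s))

p = [x**2 + 1, x, 3]
print(rdeg_s(p, [0, 0, 0]))  # 2: with zero shift, the usual degree
print(rdeg_s(p, [0, 2, 5]))  # 5: the constant entry now dominates
```

Choosing very unbalanced shifts is exactly how the Hermite form arises as a shifted Popov form later in this section.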

This leads to shifted reduced forms for cases where one is interested in matrices whose rows minimize the s-degree, instead of the usual x-degree. The generalized definitions from Section 2 can be concisely described as follows. For a matrix P ∈ K[x]^{m×n}, its s-row degree is rdeg_s(P) = rdeg(P·x^s). If P has no zero row, its s-leading matrix is lm_s(P) = lm(P·x^s), and the s-pivot index and s-pivot entries of P are the pivot index and entries of P·x^s. The s-pivot degree of P is the tuple of the degrees of its s-pivot entries; this is equal to rdeg_s(P) − s_J, where J is the s-pivot index of P and s_J the corresponding subshift.

If P has no zero row and s ≥ 0, then P is in s-reduced, s-(ordered) weak Popov or s-Popov form if P·x^s has the respective non-shifted form. Since adding a constant to all the entries of s simply shifts the s-degree of vectors by this constant, this does not change the s-leading matrix or the s-pivots, and thus does not affect the shifted forms. Therefore we can extend these definitions to also cover shifts s with negative entries; one may alternatively assume s ≥ 0 without loss of generality.

The s-Popov form of a matrix F is the unique row basis of F which is in s-Popov form. The s-pivot support of F is the s-pivot index of its s-Popov form and is denoted by piv_s(F), a tuple of length r, where r is the rank of F. For more details on shifted forms, we refer to (Beckermann et al., 2006).

Computationally, it is folklore that finding the shifted Popov form easily reduces to the non-shifted case: given a matrix F and a nonnegative shift s, the non-shifted Popov form of F·x^s has the form P·x^s, with P the s-Popov form of F. Since deg(F·x^s) ≤ deg(F) + max(s), if the computation of a non-shifted Popov form can be carried out in ~O(n^ω deg) operations, this approach yields P in ~O(n^ω (deg(F) + max(s))) operations. While this cost is satisfactory whenever max(s) ∈ O(deg(F)), one may hope for improvements especially when max(s) is large. Indeed, Eq. 5 in Lemma 5.1 gives a much smaller bound on the degrees in P, suggesting a smaller target cost for the computation of P.

### 5.2. Hermite form

A matrix H ∈ K[x]^{m×n} with m ≤ n is in Hermite form (Hermite, 1851; MacDuffee, 1933; Newman, 1972) if there are indices 1 ≤ j₁ < ⋯ < jₘ ≤ n such that:

• H_{i,j} = 0 for 1 ≤ i ≤ m and j < jᵢ,

• H_{i,jᵢ} is monic (therefore nonzero) for 1 ≤ i ≤ m,

• deg(H_{k,jᵢ}) < deg(H_{i,jᵢ}) for k < i.

We call (j₁, …, jₘ) the Hermite pivot index of H; note that it is precisely the column rank profile of H.

For a matrix F, its Hermite form is the unique row basis of F which is in Hermite form. We call Hermite pivot support of F the Hermite pivot index of its Hermite form. Note that this is also the column rank profile of F, since F is unimodularly equivalent to its Hermite form (up to padding with zero rows).

For a given F ∈ K[x]^{m×n}, the Hermite form can be seen as a specific shifted Popov form: defining the shift h = ((n−1)t, (n−2)t, …, 0) for any sufficiently large t, the h-Popov form of F coincides with its Hermite form (Beckermann et al., 2006, Lem. 2.6). Besides, the h-pivot index of F is then its Hermite pivot index; in other words, the Hermite pivot support is the column rank profile of F.

### 5.3. Degree bounds for shifted Popov forms

The next result states that the unimodular transformation between F and its s-Popov form P only depends on the submatrices of F and P formed by the columns in the s-pivot support. It also gives useful degree bounds for the matrices P and the transformation; for a more general study of such bounds, we refer to (Beckermann et al., 2006, Sec. 5).

###### Lemma 5.1 ().

Let F ∈ K[x]^{m×n} have full rank with m ≤ n, let s ∈ Zⁿ, let P be the s-Popov form of F, and let J be the s-pivot index of P. Then F_{∗,J} is nonsingular, P_{∗,J} is its s_J-Popov form, and the unique unimodular matrix U such that UF_{∗,J} = P_{∗,J} also satisfies UF = P.

Furthermore, we have the following degree bounds:

 (2) deg(P