A fast, deterministic algorithm for computing a Hermite Normal Form of a polynomial matrix

02/05/2016 ∙ by George Labahn, et al. ∙ University of Waterloo

Given a square, nonsingular matrix of univariate polynomials F ∈ K[x]^n×n over a field K, we give a fast, deterministic algorithm for finding the Hermite normal form of F with complexity O^∼(n^ω d) where d is the degree of F. Here soft-O notation is Big-O with log factors removed and ω is the exponent of matrix multiplication. The method relies on a fast algorithm for determining the diagonal entries of the Hermite normal form, having cost O^∼(n^ω s) operations with s the average of the column degrees of F.

1 Introduction

For a given square, nonsingular matrix polynomial F ∈ K[x]^n×n there exists a unimodular matrix U ∈ K[x]^n×n such that F·U = H, a matrix in (column) Hermite normal form. Thus

H  =  ⎡ h_11                ⎤
      ⎢ h_21  h_22          ⎥
      ⎢  ⋮     ⋮      ⋱     ⎥
      ⎣ h_n1  h_n2   ⋯  h_nn ⎦

is a lower triangular matrix where each h_ii is monic and deg h_ij < deg h_ii for all j < i. Other variations include specifying row rather than column forms (in which case the unimodular matrix multiplies on the left rather than the right) or upper rather than lower triangular forms. The Hermite form was first defined by Hermite in 1851 in the context of triangularizing integer matrices.
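For example (a small 2 × 2 illustration of our own), the matrix

    \[
      H \;=\; \begin{bmatrix} x^2 & 0 \\ x + 1 & x^3 \end{bmatrix}
    \]

is in column Hermite form: both diagonal entries are monic, the matrix is lower triangular, and deg h_21 = 1 < 3 = deg h_22.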

There has been considerable work on fast algorithms for Hermite form computation. This includes algorithms from Hafner and McCurley [9] and Iliopoulos [10] which control intermediate size by working modulo the determinant. Hafner and McCurley [9], Storjohann and Labahn [15] and Villard [16] gave new algorithms which reduced the cost to O^∼(n^ω+1 d) operations, with ω being the exponent of matrix multiplication. The second named algorithm worked with integer matrices, but the results carried over directly to polynomial matrices. Mulders and Storjohann [13] gave an iterative algorithm having complexity O(n^3 d^2), thus reducing the exponent of n at the cost of increasing the exponent of the degree d.

During the past decade the goal has been to give an algorithm that computes the Hermite form in the time it takes to multiply two polynomial matrices having the same size and degree as the input matrix, namely at a cost of O^∼(n^ω d). Such algorithms already exist for a number of other polynomial matrix problems. This includes probabilistic algorithms for linear solving [13], row reduction [6] and polynomial matrix inversion [11], and later deterministic algorithms for linear solving and row reduction [7]. In the case of Hermite normal form computation, Gupta and Storjohann [8] gave a Las Vegas randomized algorithm with expected running time O^∼(n^3 d). Their algorithm was the first to be both softly cubic in n and linear in d.

One natural technique for finding a Hermite form is to first determine a triangular form and to then reduce the lower triangular elements using the diagonals. The problem with this is that the best a priori bounds for the degrees of the unimodular multiplier U can become too large for efficient computation (since these bounds are determined from the degrees of F and H). On the other hand, simply looking for bounds on H has a similar problem since the best known a priori bound for the degree of the i-th column of H is n·d, and hence the sum of these degree bounds is n^2·d, a factor of n larger than the actual sum, which is bounded by deg det F ≤ n·d. Gupta and Storjohann make use of the Smith normal form of F in order to obtain accurate bounds for the degrees of the diagonal entries (and hence the degrees of the columns) of H. That, combined with some additional partial information on one of the right multipliers of this Smith form, is then used to find H.

In this paper we give a deterministic Hermite normal form algorithm having complexity O^∼(n^ω d). As with Gupta and Storjohann, ours is a two step process. We first determine the diagonal elements of H and then secondly find the remaining elements having reduced degrees. Our approach is to make use of fast, deterministic methods for shifted minimal kernel basis and column basis computation to find the diagonal entries, without the need for finding the associated unimodular multiplier. We do this with a cost of O^∼(n^ω s) field operations, where s is the average of the column degrees of F. The remaining entries are then determined making use of a second type of fast shifted minimal kernel basis computation, with special care required to reduce the computation to one having small degrees. The use of shifted minimal kernel bases for matrix normal form computation was previously used in [4, 5] in order to obtain efficient algorithms in the case where intermediate coefficient growth is a concern.

The remainder of this paper is organized as follows. In the next section we give preliminary information on shifted degrees and on kernel and column bases of polynomial matrices. Section 3 then contains the algorithm for finding the diagonal elements of a Hermite form, with the following section giving the details of the fast algorithm for the entire Hermite normal form computation. The paper ends with a conclusion and topics for future research.

2 Preliminaries

In this section we first describe the notations used in this paper, and then give the basic definitions and properties of shifted degree, kernel basis and column basis for a matrix of polynomials. These will be the building blocks used in our algorithm.

2.1 Shifted Degrees

Our methods make use of the concept of shifted degrees of polynomial matrices [4], basically shifting the importance of the degrees in some of the rows of a basis. For a column vector p = [p_1, …, p_n]^T of univariate polynomials over a field K, its column degree, denoted by cdeg p, is the maximum of the degrees of the entries of p, that is,

cdeg p = max_{1 ≤ i ≤ n} deg p_i.

The shifted column degree generalizes this standard column degree by taking the maximum after shifting the degrees by a given integer vector that is known as a shift. More specifically, the shifted column degree of p with respect to a shift s⃗ = [s_1, …, s_n] ∈ Z^n, or the s⃗-column degree of p, is

cdeg_s⃗ p = max_{1 ≤ i ≤ n} [ deg p_i + s_i ] = deg( x^s⃗ · p ),

where

x^s⃗ = diag( x^{s_1}, x^{s_2}, …, x^{s_n} ).

For a matrix P, we use cdeg P and cdeg_s⃗ P to denote respectively the list of its column degrees and the list of its shifted s⃗-column degrees. When s⃗ = [0, …, 0], the shifted column degree specializes to the standard column degree. Similarly, cdeg_{−ρ⃗} P ≤ 0 is equivalent to deg p_ij ≤ ρ_i for all i and j, that is, ρ⃗ bounds the row degrees of P.

The shifted row degree of a row vector q = [q_1, …, q_n] is defined similarly as

rdeg_s⃗ q = max_{1 ≤ i ≤ n} [ deg q_i + s_i ] = deg( q · x^s⃗ ).

Shifted degrees have been used previously in polynomial matrix computations and in generalizations of some matrix normal forms [5]. The shifted column degree is equivalent to the notion of defect commonly used in the literature.
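To make these definitions concrete, the following is a small computational sketch of our own (polynomials are represented as coefficient lists; the function names are ours, not from the literature):

    # A univariate polynomial is a list of coefficients, lowest degree
    # first; the zero polynomial is [] and has degree -infinity.
    def poly_deg(p):
        for i in range(len(p) - 1, -1, -1):
            if p[i] != 0:
                return i
        return float("-inf")

    # cdeg_s(p) = max_i (deg p_i + s_i) for a column p and shift s.
    def cdeg_shifted(col, shift):
        return max(poly_deg(p) + s for p, s in zip(col, shift))

    # p = [x^2 + 1, x]^T: cdeg p = 2, while with shift s = (0, 2) the
    # second entry dominates: cdeg_s p = max(2 + 0, 1 + 2) = 3.
    assert cdeg_shifted([[1, 0, 1], [0, 1]], [0, 0]) == 2
    assert cdeg_shifted([[1, 0, 1], [0, 1]], [0, 2]) == 3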

Along with shifted degrees we also make use of the notion of a matrix polynomial being column (or row) reduced. A matrix polynomial P is column reduced if its leading column coefficient matrix, that is the matrix

lcoef(P) = [ coeff(p_ij, x, d_j) ]_{1 ≤ i ≤ m, 1 ≤ j ≤ n},  with (d_1, …, d_n) = cdeg P,

has full rank. A matrix polynomial P is s⃗-column reduced if x^s⃗·P is column reduced. A similar concept exists for being shifted row reduced.
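This test can likewise be sketched computationally (our own illustration using sympy; lcoef_shifted and is_column_reduced are our names, the shift convention is the x^s⃗·P one above, and matrices are assumed to have no zero columns):

    import sympy as sp

    x = sp.symbols("x")

    # Leading column coefficient matrix of x^shift * F: entry (i, j) is
    # the coefficient of x^(d_j) in the (i, j) entry, where d_j is the
    # j-th shifted column degree.
    def lcoef_shifted(F, shift):
        m, n = F.shape
        Fs = (sp.diag(*[x**e for e in shift]) * F).applyfunc(sp.expand)
        d = [max(sp.degree(Fs[i, j], x) for i in range(m)) for j in range(n)]
        return sp.Matrix(m, n, lambda i, j: Fs[i, j].coeff(x, d[j]))

    def is_column_reduced(F, shift):
        # full (column) rank of the leading column coefficient matrix
        return lcoef_shifted(F, shift).rank() == F.shape[1]

    # F = [[x^2 + 1, x], [x, 1]] has lcoef [[1, 1], [0, 0]], which is
    # singular, so F is not column reduced.
    F = sp.Matrix([[x**2 + 1, x], [x, 1]])
    print(is_column_reduced(F, [0, 0]))  # False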

The usefulness of shifted degrees can be seen from their applications in polynomial matrix computation problems [17, 21]. One of their uses is illustrated by the following lemma from [18, Chapter 2], which can be viewed as a stronger version of the predictable-degree property [12, page 387]. For completeness we also include the proof.

Lemma 1.

Let A ∈ K[x]^{m×n} be a u⃗-column reduced matrix with no zero columns and with cdeg_u⃗ A = v⃗. Then a matrix B ∈ K[x]^{n×k} has v⃗-column degrees cdeg_v⃗ B = w⃗ if and only if cdeg_u⃗ (A·B) = w⃗.

Proof.

Being u⃗-column reduced with cdeg_u⃗ A = v⃗ is equivalent to the leading coefficient matrix of x^u⃗·A·x^{−v⃗} having linearly independent columns. Since

x^u⃗·A·B·x^{−w⃗} = ( x^u⃗·A·x^{−v⃗} ) · ( x^v⃗·B·x^{−w⃗} ),

the leading coefficient matrix of x^v⃗·B·x^{−w⃗} has no zero column if and only if the leading coefficient matrix of x^u⃗·A·B·x^{−w⃗} has no zero column. That is, B has v⃗-column degrees w⃗ if and only if A·B has u⃗-column degrees w⃗. ∎

An essential fact needed in this paper, also based on the use of shifted degrees, is the efficient multiplication of matrices with unbalanced degrees [21, Theorem 3.7].

Theorem 1.

Let A ∈ K[x]^{m×n} with m ≤ n, s⃗ a shift with entries bounding the column degrees of A and ξ a bound on the sum of the entries of s⃗. Let B ∈ K[x]^{n×k} with k ∈ O(m) and with the sum of its s⃗-column degrees in O(ξ). Then we can multiply A and B with a cost of O^∼(n^2 m^{ω−2} s), where s = ξ/n is the average of the entries of s⃗.

2.2 Kernel Bases and Column Bases

The kernel of F ∈ K[x]^{m×n} is the K[x]-module

{ p ∈ K[x]^n  |  F·p = 0 },

with a kernel basis of F being a basis of this module. Formally, we have:

Definition 1.

Given F ∈ K[x]^{m×n}, a polynomial matrix N ∈ K[x]^{n×k} is a (right) kernel basis of F if the following properties hold:

  1. N is full-rank.

  2. N satisfies F·N = 0.

  3. Any q ∈ K[x]^n satisfying F·q = 0 can be expressed as a linear combination of the columns of N, that is, there exists some polynomial vector p such that q = N·p.

It is easy to show that any pair of kernel bases N and M of F are unimodularly equivalent.
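As an illustration only, the following sketch of ours uses sympy's rational nullspace with denominators cleared. This produces polynomial vectors lying in the kernel that form a K(x)-basis of it, but it does not certify property 3 of Definition 1 or any shifted minimality; for that, the algorithms of [21] are required.

    import sympy as sp

    x = sp.symbols("x")

    # Clear denominators in each rational nullspace vector to obtain
    # polynomial vectors in ker(F).
    def kernel_vectors(F):
        vecs = []
        for v in F.nullspace():
            den = sp.lcm([sp.fraction(sp.cancel(e))[1] for e in v])
            vecs.append((v * den).applyfunc(lambda e: sp.expand(sp.cancel(e))))
        return vecs

    F = sp.Matrix([[x, x**2, 0], [0, x, 1]])
    for v in kernel_vectors(F):
        assert (F * v).applyfunc(sp.expand) == sp.zeros(2, 1)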

An s⃗-minimal kernel basis of F is just a kernel basis that is s⃗-column reduced.

Definition 2.

Given F ∈ K[x]^{m×n}, a polynomial matrix N ∈ K[x]^{n×k} is an s⃗-minimal (right) kernel basis of F if N is a kernel basis of F and N is s⃗-column reduced. We also call an s⃗-minimal (right) kernel basis of F an (F, s⃗)-kernel basis.

A column basis of F is a basis for the K[x]-module

{ F·p  |  p ∈ K[x]^n }.

Such a basis can be represented as a full rank matrix T whose columns are the basis elements. A column basis is not unique and indeed any column basis right multiplied by a unimodular polynomial matrix gives another column basis.

The cost of kernel basis computation is given in [21] while the cost of column basis computation is given in [20]. In both cases they make heavy use of fast methods for order bases (often also referred to as minimal approximant bases) [2, 6, 17].

Theorem 2.

Let F ∈ K[x]^{m×n} with m ≤ n and s⃗ = cdeg F. Then an (F, s⃗)-kernel basis can be computed with a cost of O^∼(n^ω s) field operations, where s is the average column degree of F.

Theorem 3.

There exists a fast, deterministic algorithm for the computation of a column basis of a matrix polynomial F ∈ K[x]^{m×n} having complexity O^∼(n m^{ω−1} s) field operations in K, with s being the average column degree of F.

2.3 Example

Example 1.

Let F be a matrix over K[x] having column degree s⃗ = cdeg F. Then a column basis T and a kernel basis N of F can be computed as described in Theorems 2 and 3. If f_1, f_2, … denote the columns of F, then each column t_j of T is a polynomial linear combination of the columns f_i. The kernel basis N has shifted leading coefficient matrix lcoef(x^s⃗·N). Since lcoef(x^s⃗·N) has full rank we have that N is an s⃗-minimal kernel basis. ∎

3 Determining the Diagonal Entries of a Hermite Normal Form

In this section we first show how to determine only the diagonal entries of the Hermite normal form of a nonsingular input matrix F ∈ K[x]^{n×n} having column degrees s⃗. The computation makes use of fast kernel and column basis computation.

Consider unimodularly transforming F to

F·U  =  G  =  ⎡ G_1   0  ⎤
              ⎣  ∗   G_2 ⎦ .        (1)

After this unimodular transformation, which eliminates the top right block, the matrix G is closer to being in Hermite normal form. Applying this procedure recursively to G_1 and G_2, until the matrices reach dimension 1, gives the diagonal entries of the Hermite normal form of F.

While such a procedure can be used to correctly compute the diagonal entries of the Hermite normal form of F, a major problem is that the degree of the unimodular multiplier U can be too large for efficient computation. Our approach is to make use of fast kernel and column basis methods to efficiently compute only G_1 and G_2 and so avoid computing U.

Partition F = [F_u ; F_d], with F_u and F_d consisting of the upper ⌈n/2⌉ and lower ⌊n/2⌋ rows of F, respectively. Then both the upper and lower parts are of full rank since F is assumed to be nonsingular. By partitioning U = [U_l , U_r], where the column dimension of U_l matches the row dimension of F_u, equation (1) becomes

⎡ F_u ⎤ [ U_l  U_r ]  =  ⎡ G_1   0  ⎤
⎣ F_d ⎦                  ⎣  ∗   G_2 ⎦ .

Notice that the matrix G_1 is nonsingular and is therefore a column basis of F_u. As such it can be efficiently computed as mentioned in Theorem 3. In order to compute G_2 = F_d·U_r, notice that the matrix U_r is a right kernel basis of F_u, which is what makes the top right block of G zero.

The following lemma states that the kernel basis U_r can be replaced by any other kernel basis of F_u to give another unimodular matrix that also works.

Lemma 2.

Partition F = [F_u ; F_d] and suppose G_1 is a column basis of F_u and N a kernel basis of F_u. Then there is a unimodular matrix U such that

F·U  =  ⎡ G_1   0  ⎤
        ⎣  ∗   G_2 ⎦ ,

where G_2 = F_d·N. If F is square and nonsingular, then G_1 and G_2 are also square and nonsingular.

Proof.

This follows from Lemma 3.1 in [20]. ∎

Note that we do not compute the blocks represented by the symbol ∗, which may have very large degrees and cannot be computed efficiently.

Lemma 2 allows us to determine G_1 and G_2 independently, without computing the unimodular matrix. This procedure for computing the diagonal entries gives Algorithm 1. Formally the cost of this algorithm is given in Theorem 4.

Algorithm 1: HermiteDiagonal(F)
Input: F ∈ K[x]^{n×n} nonsingular.
Output: the list of diagonal entries of the Hermite normal form of F.
1:  Partition F = [F_u ; F_d], with F_u consisting of the top ⌈n/2⌉ rows of F;
2:  if n = 1 then return F; endif;
3:  G_1 := ColumnBasis(F_u);
4:  N := MinimalKernelBasis(F_u, cdeg F);
5:  G_2 := F_d·N;
6:  d_1 := HermiteDiagonal(G_1);  d_2 := HermiteDiagonal(G_2);
7:  return [d_1, d_2];
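The recursion can be summarized in the following Python sketch of ours. Here column_basis and minimal_kernel_basis stand in for the algorithms of [20] and [21]; they are assumptions, not implementations:

    import sympy as sp

    def hermite_diagonal(F, column_basis, minimal_kernel_basis):
        # Diagonal entries of the Hermite form of a nonsingular sympy
        # Matrix F, given oracles for the two subroutines.
        n = F.rows
        if n == 1:
            return [F[0, 0]]
        t = (n + 1) // 2                   # top ceil(n/2) rows
        Fu, Fd = F[:t, :], F[t:, :]
        G1 = column_basis(Fu)              # Theorem 3
        N = minimal_kernel_basis(Fu)       # Theorem 2, shift cdeg(F)
        G2 = Fd * N                        # fast multiply, Theorem 1
        return (hermite_diagonal(G1, column_basis, minimal_kernel_basis)
                + hermite_diagonal(G2, column_basis, minimal_kernel_basis))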
Example 2.

Let F be a nonsingular matrix whose top three rows F_u form the matrix of Example 1, so that a column basis G_1 and a kernel basis N of F_u are as given there. If F_d denotes the bottom rows of F, then this gives the diagonal blocks G_1 and G_2 = F_d·N of equation (1). Recursively computing with G_1 gives a column basis and a kernel basis for its top rows, which in turn gives the two diagonal blocks of the triangularization of G_1. As these blocks are triangular we have triangularized G_1. Similarly, as G_2 is already in triangular form, we do not need to do any extra work. As a result we have that F is unimodularly equivalent to a lower triangular matrix, giving the diagonal elements of the Hermite form of F. ∎

Theorem 4.

Algorithm 1 costs O^∼(n^ω s) field operations to compute the diagonal entries of the Hermite normal form of a nonsingular matrix F ∈ K[x]^{n×n}, where s is the average column degree of F.

Proof.

The three main operations are computing a column basis of F_u, computing a kernel basis N of F_u, and multiplying the matrices F_d·N. Set ξ = s_1 + ⋯ + s_n, the sum of the column degrees of F, a scalar used to measure size for our problem.

For the column basis computation, by Theorem 3 (see also [20, Theorem 5.6]) we know that a column basis G_1 of F_u can be computed with a cost of O^∼(n^ω s). By [20, Lemma 5.1] the column degrees of the computed column basis G_1 are also bounded by the original column degrees s⃗. Similarly, from Theorem 2 (see also [21, Theorem 4.1]), computing an (F_u, s⃗)-kernel basis N also costs O^∼(n^ω s) operations.

By Theorem 3.4 of [21] we also know that the sum of the s⃗-column degrees of the output kernel basis N is bounded by ξ. For the matrix multiplication F_d·N, we have that the sum of the column degrees of F_d and the sum of the s⃗-column degrees of N are both bounded by ξ. Therefore Theorem 1 (see also [21, Theorem 3.7]) applies and the multiplication can be done with a cost of O^∼(n^ω s).

If we let D(ξ, n) denote the cost of Algorithm 1 for an input matrix of dimension n whose column degrees sum to ξ, then

D(ξ, n) ∈ O^∼(n^{ω−1} ξ) + D(ξ, ⌈n/2⌉) + D(ξ, ⌊n/2⌋).

As s depends on the dimension we use ξ = n·s, with ξ not depending on n. Then we solve the recurrence relation as

D(ξ, n) ∈ O^∼(n^{ω−1} ξ) + 2·D(ξ, n/2) ⊆ O^∼(n^{ω−1} ξ) = O^∼(n^ω s). ∎
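Unrolling this recurrence makes the claimed bound visible (a derivation sketch of our own; for ω = 2 the geometric sum instead contributes a log factor, which is absorbed by the soft-O):

    \[
      D(\xi, n) \;\le\; c\,n^{\omega-1}\xi + 2\,D(\xi, n/2)
                \;\le\; c\,\xi \sum_{i \ge 0} 2^{i}\Bigl(\tfrac{n}{2^{i}}\Bigr)^{\omega-1}
                \;=\; c\,\xi\,n^{\omega-1} \sum_{i \ge 0} \bigl(2^{\,2-\omega}\bigr)^{i}
                \;\in\; O\bigl(n^{\omega-1}\xi\bigr) \;=\; O\bigl(n^{\omega} s\bigr),
    \]

since the sum is a convergent geometric series when ω > 2.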

4 Hermite Normal Form

In Section 3 we showed how to efficiently determine the diagonal entries of the Hermite normal form of a nonsingular input matrix F. In this section we show how to determine the remaining entries for the complete Hermite form H of F. Our approach is similar to that used in the randomized algorithm of [8] but does not use the Smith normal form. Our algorithm has the advantage of being both efficient and deterministic.

4.1 Hermite Form via Minimal Kernel Bases

For simplicity, let us assume that F is already column reduced, something which we can do with complexity O^∼(n^ω s) using the column basis algorithm of [20]. Assume that F has column degrees s⃗ = (s_1, …, s_n) and that d⃗ = (d_1, …, d_n) are the degrees of the diagonal entries of the Hermite form H. Let F′ = [F , −I_n] and set u⃗ = (s⃗, d⃗), a vector with 2n entries. The following lemma implies that we can obtain the Hermite normal form of F from a u⃗-minimal kernel basis of F′.

Lemma 3.

Suppose N = [N_u ; N_d] is a u⃗-minimal kernel basis of F′ = [F , −I_n], partitioned so that each block is n × n. Then N_u is unimodular while N_d has row degrees d⃗ and is unimodularly equivalent to the Hermite normal form H of F.

Proof.

Let U be the unimodular matrix satisfying F·U = H. By the predictable-degree property (Lemma 1), U has s⃗-column degrees given by the column degrees of H, that is, cdeg_s⃗ U = cdeg H. Letting N* = [U ; H] we have that N* is a kernel basis of F′ whose u⃗-shifted leading coefficient matrix has full rank. Thus N* is u⃗-column reduced and hence is a u⃗-minimal kernel basis of F′.

Any u⃗-minimal kernel basis N of F′ is then unimodularly equivalent to N*. Thus the matrix N_u, making up the upper n rows of N, is unimodular and the matrix N_d, consisting of the lower n rows, is unimodularly equivalent to H. Minimality also ensures that cdeg_u⃗ N ≤ cdeg_u⃗ N*, implying rdeg N_d ≤ rdeg H = d⃗. Similarly, the minimality of N* ensures that cdeg_u⃗ N* ≤ cdeg_u⃗ N, or equivalently, rdeg N_d = d⃗. ∎
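In matrix terms, the construction rests on the following identity (our restatement of the proof above): the columns of [U ; H] lie in the kernel of F′ precisely because

    \[
      \begin{bmatrix} F & -I_n \end{bmatrix}
      \begin{bmatrix} U \\ H \end{bmatrix}
      \;=\; F\,U - H \;=\; 0 ,
    \]

so kernel bases of F′ simultaneously encode a right multiplier and the triangular form it produces.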

Knowing that the bottom rows N_d have the same row degrees as H and that N_d is unimodularly equivalent to H implies that the Hermite form can then be obtained from N_d using Lemma 8 from [8]. We restate this as follows:

Lemma 4.

Let T be a column basis of a matrix having the same row degrees d⃗ as the Hermite form H of F. If W is the matrix putting the leading coefficient matrix lcoef(x^{−d⃗}·T) into reduced column echelon form, and if H is in Hermite normal form, then H = T·W.

Proof.

This follows from the fact that T and H both have row degrees d⃗, so that x^{−d⃗}·T and x^{−d⃗}·H have all entries of nonpositive degree. This allows their relationship to be completely determined by the leading coefficient matrices of x^{−d⃗}·T and x^{−d⃗}·H, respectively. ∎
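The echelon reduction in Lemma 4 is a constant linear algebra computation. The following sketch of ours (using sympy) computes such a W, assuming the leading coefficient matrix L is nonsingular, in which case the reduced column echelon form is simply the identity and W = L^{−1}:

    import sympy as sp

    # Column operations are row operations on the transpose: the rref
    # of [L^T | I] is [E | R] with R * L^T = E, so W = R^T satisfies
    # L * W = E^T, a reduced column echelon form of L.
    def column_echelon_transform(L):
        n = L.rows
        red, _ = L.T.row_join(sp.eye(n)).rref()
        R = red[:, n:]
        return R.T

    L = sp.Matrix([[2, 0], [3, 4]])
    W = column_echelon_transform(L)
    assert L * W == sp.eye(2)   # echelon form of a nonsingular L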

Example 3.

Let F be a nonsingular, column reduced matrix over K[x] with column degrees s⃗; column reducedness follows since its leading column coefficient matrix is nonsingular. Using the method of Section 3 one can determine the degrees d⃗ of the diagonal elements of the Hermite form of F. Using [21] one then computes a kernel basis N of F′ = [F , −I]. With shift u⃗ = (s⃗, d⃗) we have that the shifted leading coefficient matrix lcoef(x^u⃗·N) has full rank, and hence N is a u⃗-minimal kernel basis. If N_u and N_d denote the first and last three rows of N, respectively, then N_u is unimodular and N_d has the same row degrees as the Hermite form. The constant matrix W which converts the leading coefficient matrix of N_d into column echelon form is then nonsingular, hence unimodular, and

H = N_d·W

is in Hermite normal form. ∎

4.2 Reducing the degrees and the shift

Although the Hermite normal form of F can be computed from a u⃗-minimal kernel basis of F′ = [F , −I_n], a major problem here is that the entries of d⃗, and hence of the shift u⃗, can be very large, so that this computation is not efficient enough for our needs. In this subsection we show how to convert our normal form computation into one where the bounds are smaller.

Since we know the row degrees d⃗ of H we can expand each of the high degree rows of H into multiple rows each having lower degrees. This allows us to compute an alternative matrix having lower degrees, but with a higher row dimension that is still in O(n), with H easily determined from this new matrix. Such an approach was used in the Hermite form computation algorithm in [8] and also in the case of the order basis algorithms from [19, 17]. Our task here is in fact easier as we already know the row degrees of H.

For each entry d_i of the shift d⃗, let q_i and r_i be the quotient and remainder of d_i divided by s, the average column degree of F. Any polynomial p of degree at most d_i can be written as

p = p_0 + p_1·x^s + p_2·x^{2s} + ⋯ + p_{q_i}·x^{q_i·s},

with each of the components p_j having degree less than s (the last component having degree at most r_i). Similarly each row of H can be written in terms of vector components each bounded in degree by s.
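For a single polynomial, this splitting is just a chunking of its coefficients (a sketch of our own; coefficient lists are lowest degree first):

    # Split p into components p_0, ..., p_q with p = sum_j p_j * x^(j*s);
    # chunk j holds the coefficients of x^(j*s), ..., x^(j*s + s - 1).
    def split_poly(coeffs, s):
        return [coeffs[i:i + s] for i in range(0, len(coeffs), s)]

    # p = 1 + 2x + 3x^2 + 4x^3 + 5x^4 with s = 2:
    # p_0 = 1 + 2x, p_1 = 3 + 4x (weight x^2), p_2 = 5 (weight x^4).
    assert split_poly([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]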

In terms of matrix identities let

E_i = [ e_i , x^s·e_i , … , x^{q_i·s}·e_i ],

where e_i is the i-th column of the identity matrix. For each i the matrix E_i has q_i + 1 columns, and these combine into a single matrix

E = [ E_1 , E_2 , … , E_n ]

with column dimension n̂ = Σ_i (q_i + 1). Then there exists a matrix polynomial Ĥ such that H = E·Ĥ. The shift in this case is given by û = (s⃗, û_1, …, û_n), where each û_i = (s, …, s, r_i) has q_i + 1 entries.

Since F is column reduced we have that Σ_i d_i = deg det F = Σ_i s_i = n·s, and hence the column dimension satisfies

n̂ = n + Σ_i q_i ≤ n + (Σ_i d_i)/s = 2n.

Rather than recovering H from a u⃗-minimal kernel basis of [F , −I_n], the following lemma implies that we can now recover the Hermite form from a û-minimal kernel basis of F̂ = [F , −E], where the degrees and shifts involved are much smaller.

Lemma 5.

Let N̂ be an (F̂, û)-kernel basis, partitioned as N̂ = [N̂_u ; N̂_d] with N̂_d of dimension n̂ × n̂. Let N̂_d′ be the submatrix consisting of the columns of N̂_d whose û-column degrees are minimal. Then E·N̂_d′ is a column basis of E·N̂_d having column degrees d⃗.

Proof.

Let U be the unimodular matrix satisfying F·U = H and let Ĥ be the matrix H expanded according to E, so that H = E·Ĥ. Then Ĥ has row degrees bounded by the entries of û and F̂·[U ; Ĥ] = F·U − E·Ĥ = 0. Set

N̂* = ⎡ U   0 ⎤
      ⎣ Ĥ   K ⎦ ,

where K is a kernel basis of E. Then N̂* is a kernel basis of F̂.