On computing root polynomials and minimal bases of matrix pencils

10/28/2021
by Vanni Noferini, et al.

We revisit the notion of root polynomials, thoroughly studied in [F. Dopico and V. Noferini, Root polynomials and their role in the theory of matrix polynomials, Linear Algebra Appl. 584:37–78, 2020] for general polynomial matrices, and show how they can efficiently be computed in the case of matrix pencils. The staircase algorithm implicitly computes so-called zero directions, as defined in [P. Van Dooren, Computation of zero directions of transfer functions, Proceedings IEEE 32nd CDC, 3132–3137, 1993]. However, zero directions generally do not provide the correct information on partial multiplicities and minimal indices. These indices are instead provided by two special cases of zero directions, namely, root polynomials and vectors of a minimal basis of the pencil. We show how to extract, starting from the block triangular pencil that the staircase algorithm computes, both a minimal basis and a maximal set of root polynomials in an efficient manner. Moreover, we argue that the accuracy of the computation of the root polynomials can be improved by making use of iterative refinement.



1 Introduction

Finding the eigenvalues of a polynomial matrix $P(\lambda)$ and their partial multiplicities is a problem that occurs naturally when one wants to describe the solution set of particular matrix equations involving polynomial matrices [12, 14, 16, 17]. From the theoretical point of view, the problem is completely solved by the existence of a local Smith form at any point [12]. Computationally, the local Smith form can be determined by finding certain important polynomial vectors associated with $P(\lambda)$, namely, left and right minimal bases [11, 19] and root polynomials [9, 12, 20]. It has long been known that the vectors in a minimal basis carry the information on the minimal indices [11]. That root polynomials have a similarly important role has been advocated more recently: for example, so-called maximal sets of root polynomials encode all the information on partial multiplicities [9], and they make it possible to properly define eigenvectors for singular polynomials [9], which has proved useful, for instance, to carry out probabilistic studies of the condition number of eigenvalues [18].

This motivates a natural algorithmic question: how can one stably compute the vectors in a minimal basis and a maximal set of root polynomials of a polynomial matrix? This paper focuses on the case of degree one, i.e., matrix pencils, and addresses the question by providing a robust algorithm which builds on the staircase algorithm [23]. There are at least two reasons to give special attention to pencils: first, the generalized eigenvalue problem is arguably the most common instance of polynomial eigenvalue problems; and second, even if the eigenvalue problem has higher degree to start with, it is common practice to linearize it as a first step towards its numerical solution. For many commonly used linearizations, it is known both how to recover minimal bases (a topic addressed by many papers in the literature, each focusing on different classes of linearizations: see for example [6, 7, 8, 21]) and how to recover maximal sets of root polynomials (a problem solved in [9] for many classes of linearizations) for the linearized polynomial matrix, starting from their counterparts for the linearizing pencils. Hence, in this precise sense, an algorithm that solves the problem for pencils can be easily extended to an algorithm that computes root polynomials and minimal bases for a polynomial matrix of any degree.

The structure of the paper is as follows. In Section 2 we recall the necessary background and definitions for zero directions, root polynomials and minimal bases of an arbitrary polynomial matrix. In Section 3 we consider the special case of a matrix pencil and show the link between zero directions, the rank profile of certain bidiagonal block Toeplitz matrices, and the construction of a so-called Wong sequence of subspaces. We also recall how the staircase algorithm for pencils of matrices constructs particular bases for such a Wong sequence. In Section 4 we show how a particular bidiagonalization procedure allows us to extract from this a maximal set of root polynomials, on one hand, and a minimal basis for the right null space, on the other hand. In Section 5 we then develop simple recurrences that compute, for a given pencil, a minimal basis for the right null space and a maximal complete set of $0$-independent root polynomials. We end with a few concluding remarks in Section 6.

2 Background and definitions

2.1 Zeros, the local Smith form and null spaces

A finite zero, or eigenvalue, of $P(\lambda)$ is an element $\lambda_0 \in \mathbb{C}$ such that $\operatorname{rank} P(\lambda_0) < r$, where $r$ denotes the normal rank of $P(\lambda)$. The structure at a point $\lambda_0$ which is a zero of a polynomial matrix is defined via the local Smith form of the polynomial matrix at the point $\lambda_0$:

$$P(\lambda) = M(\lambda) \begin{bmatrix} \operatorname{diag}\!\left((\lambda - \lambda_0)^{m_1}, \ldots, (\lambda - \lambda_0)^{m_r}\right) & 0 \\ 0 & 0 \end{bmatrix} N(\lambda), \qquad (1)$$

where $M(\lambda)$ and $N(\lambda)$ are polynomial and invertible at $\lambda_0$, whereas $r$ is the normal rank of $P(\lambda)$. Furthermore, the integers $m_i$ are known as the partial multiplicities, or structural indices, of $P(\lambda)$ at the zero $\lambda_0$; they satisfy $0 \le m_1 \le \cdots \le m_r$. The finite sequence $(m_1, \ldots, m_r)$, or rather its subsequence listing its positive elements, is sometimes also called the Segré characteristic at $\lambda_0$. The classical algorithm for the computation of the above decomposition is based on the Euclidean algorithm and Gaussian elimination over the ring of polynomials, which is in general numerically unreliable [5]. For this reason it can be replaced by a technique based on the Taylor expansion around the point $\lambda_0$ [25], as explained in the next sections.
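For instance, at $\lambda_0 = 0$ a valid local Smith decomposition (an illustrative example with these conventions, not taken from the references) is

$$\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \lambda & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & \lambda^2 \end{bmatrix} \begin{bmatrix} \lambda & 1 \\ -1 & 0 \end{bmatrix},$$

where both outer factors have constant nonzero determinant and are hence invertible at $0$; the partial multiplicities at $0$ are therefore $m_1 = 0$ and $m_2 = 2$.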

Other important sets of indices of a general polynomial matrix are related to its right nullspace $\mathcal{N}_r(P)$ and left nullspace $\mathcal{N}_l(P)$, which are rational vector spaces over the field of rational functions in $\lambda$. For this, we first need the following definition.

Definition 2.1

The columns of a polynomial matrix $V(\lambda)$ whose normal rank equals its number of columns are called a minimal polynomial basis if the sum of the degrees of its columns, called the order of the basis, is minimal among all bases of $\operatorname{span} V(\lambda)$. Its ordered column degrees are called the minimal indices of the basis.

A minimal basis is not uniquely determined by the subspace it spans, but it was shown in [11] that the minimal indices are. If we define the right nullspace $\mathcal{N}_r(P)$ and the left nullspace $\mathcal{N}_l(P)$ of an $m \times n$ polynomial matrix $P(\lambda)$ of normal rank $r$ as the vector spaces of rational vectors $x(\lambda)$ and $y(\lambda)$ annihilated by $P(\lambda)$,

$$\mathcal{N}_r(P) := \{ x(\lambda) : P(\lambda)\, x(\lambda) = 0 \}, \qquad \mathcal{N}_l(P) := \{ y(\lambda) : y(\lambda)^\top P(\lambda) = 0 \},$$

then the minimal indices of any minimal polynomial bases for these spaces are called the right and left minimal indices of $P(\lambda)$. Their respective dimensions are $n - r$ and $m - r$, and the respective indices are denoted by

$$\epsilon_1 \le \epsilon_2 \le \cdots \le \epsilon_{n-r} \qquad \text{and} \qquad \eta_1 \le \eta_2 \le \cdots \le \eta_{m-r}.$$

It was also shown in [11] that, for any minimal basis $V(\lambda)$, the constant matrix $V(\lambda_0)$ has full column rank for all $\lambda_0 \in \mathbb{C}$, and the highest column degree matrix of $V(\lambda)$ (or, equivalently, the columnwise reversal of $V(\lambda)$ evaluated at $0$ [22]) also has full column rank.
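Both conditions are easy to test numerically; the sketch below (an illustration with our own naming; sampling random points only gives a heuristic check of the first condition) stores $V(\lambda)$ as a list of coefficient matrices:

```python
import numpy as np

# Illustrative sketch (not from the paper): test the two rank conditions for a
# candidate minimal basis V(lambda), stored as coefficients [V0, V1, ...] with
# V(lam) = sum_i Vi * lam**i.

def eval_poly_matrix(coeffs, lam):
    return sum(Ci * lam**i for i, Ci in enumerate(coeffs))

def full_column_rank(M, tol=1e-10):
    return np.linalg.matrix_rank(M, tol) == M.shape[1]

def looks_minimal(coeffs, n_samples=20, seed=0):
    rng = np.random.default_rng(seed)
    # Condition 1 (heuristic): V(lambda0) has full column rank at sampled points.
    pts = rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)
    if not all(full_column_rank(eval_poly_matrix(coeffs, lam)) for lam in pts):
        return False
    # Condition 2: the highest-column-degree coefficient matrix has full column
    # rank (equivalently, the columnwise reversal of V evaluated at 0).
    n_cols = coeffs[0].shape[1]
    col_deg = [max(i for i, Ci in enumerate(coeffs) if np.abs(Ci[:, j]).max() > 0)
               for j in range(n_cols)]
    Vhc = np.column_stack([coeffs[col_deg[j]][:, j] for j in range(n_cols)])
    return full_column_rank(Vhc)
```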

2.2 Zero directions, root polynomials and null vectors

Let us assume that $\lambda_0$ is a zero of $P(\lambda)$ and let us express the latter polynomial matrix by its Taylor expansion around $\lambda_0$:

$$P(\lambda) = \sum_{i \ge 0} P_i\, (\lambda - \lambda_0)^i.$$

Let us then define, for $k = 0, 1, 2, \ldots$, the following associated Toeplitz matrices and their ranks as

$$T_k := \begin{bmatrix} P_0 & & & \\ P_1 & P_0 & & \\ \vdots & \ddots & \ddots & \\ P_k & \cdots & P_1 & P_0 \end{bmatrix}, \qquad r_k := \operatorname{rank} T_k,$$

where we implicitly have set the coefficients $P_i = 0$ for $i > \deg P$. Below, we will drop the suffix $\lambda_0$ when it is obvious from the context that we use an expansion about that point. The rank increments $r_k - r_{k-1}$ were shown in [25] to completely determine the partial multiplicities of $P(\lambda)$ at $\lambda_0$, and we can thus expect that the definition of root polynomials also should be related.
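To make these objects concrete, here is a small Python/NumPy sketch (our own code and naming, with the expansion point $\lambda_0 = 0$) that forms the block Toeplitz matrices $T_k$ and their rank increments:

```python
import numpy as np

def toeplitz_ranks(P_coeffs, k_max, tol=1e-10):
    """Ranks r_0, ..., r_{k_max} of the block lower triangular Toeplitz
    matrices T_k built from the Taylor coefficients P_0, P_1, ... at
    lambda_0 = 0 (with P_i = 0 for i > deg P, as in the text)."""
    m, n = P_coeffs[0].shape
    P = lambda i: P_coeffs[i] if i < len(P_coeffs) else np.zeros((m, n))
    ranks = []
    for k in range(k_max + 1):
        Tk = np.block([[P(i - j) if i >= j else np.zeros((m, n))
                        for j in range(k + 1)] for i in range(k + 1)])
        ranks.append(np.linalg.matrix_rank(Tk, tol))
    return ranks

# Example: P(lambda) = diag(lambda, lambda^2), partial multiplicities 1 and 2.
P0 = np.zeros((2, 2)); P1 = np.diag([1.0, 0.0]); P2 = np.diag([0.0, 1.0])
r = toeplitz_ranks([P0, P1, P2], 3)
s = np.diff([0] + r)      # rank increments; they stabilize at the normal rank
print(r, s.tolist())      # [0, 1, 3, 5] [0, 1, 2, 2]
```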

Throughout this paper, we extend the usual notation of modular arithmetic from scalars to matrices by applying it elementwise. Namely, given a scalar polynomial $p(\lambda)$ and two polynomial matrices of the same size, say, $A(\lambda)$ and $B(\lambda)$, the notation $A(\lambda) \equiv B(\lambda) \bmod p(\lambda)$ is shorthand to mean that there exists a third polynomial matrix $C(\lambda)$ such that $A(\lambda) - B(\lambda) = p(\lambda)\, C(\lambda)$. Therefore, for example, $\begin{bmatrix} \lambda^2 & \lambda \end{bmatrix} \equiv \begin{bmatrix} 0 & \lambda \end{bmatrix} \bmod \lambda^2$.

Let now $x(\lambda)$ and $y(\lambda)$ be polynomial vectors satisfying

$$x(\lambda_0) \neq 0, \qquad y(\lambda_0) \neq 0;$$

then, for $k \ge 1$, we say that the vectors $x(\lambda)$ and $y(\lambda)$, respectively, are right and left zero directions of order $k$ at $\lambda_0$ if

$$P(\lambda)\, x(\lambda) \equiv 0 \bmod (\lambda - \lambda_0)^k, \qquad (2)$$

$$y(\lambda)^* P(\lambda) \equiv 0 \bmod (\lambda - \lambda_0)^k. \qquad (3)$$

The fact that the respective vectors $x(\lambda_0)$ and $y(\lambda_0)$ are nonzero avoids trivial solutions [10, 16, 24] obtained by multiplication with a positive power of $\lambda - \lambda_0$. Similarly, one can define right zero directions of order $k$ at $\lambda_0 = \infty$ as polynomial vectors $x(\lambda)$ such that $\operatorname{rev} P(\lambda)\, x(\lambda) \equiv 0 \bmod \lambda^k$ and $x(0) \neq 0$, where $\operatorname{rev} P$ denotes the reversal of $P$; left zero directions of order $k$ at $\infty$ are defined analogously.

Using the Taylor expansion of $P(\lambda)$ around $\lambda_0$, we can rewrite (2) and (3) as

$$T_{k-1} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{k-1} \end{bmatrix} = 0, \qquad \begin{bmatrix} y_{k-1}^* & \cdots & y_1^* & y_0^* \end{bmatrix} T_{k-1} = 0,$$

where $x_i$ and $y_i$ denote the coefficients of the expansions of $x(\lambda)$ and $y(\lambda)$ around $\lambda_0$. So the problem of finding zero directions is apparently solved by computing the null spaces of these Toeplitz matrices [24]. Unfortunately, though, knowledge of zero directions alone is not sufficient to extract the information about the minimal indices and the partial multiplicities at the point $\lambda_0$. However, there are two important subsets of zero directions of a polynomial matrix that allow us to do this, and thus deserve more attention. For this we restrict ourselves to the right zero directions, since the problem for the left zero directions is obtained by just considering the conjugate transposed matrix $P(\lambda)^*$.
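In particular, a right zero direction of order $k$ at $\lambda_0 = 0$ can be extracted from the kernel of $T_{k-1}$, provided that the leading coefficient is nonzero; a hypothetical sketch (names and interface ours):

```python
import numpy as np

def right_zero_direction(P_coeffs, k, tol=1e-10):
    """Return coefficients x_0, ..., x_{k-1} of a right zero direction of
    order k at lambda_0 = 0, i.e. a kernel vector of T_{k-1} with x_0 != 0,
    or None if no such vector is found."""
    m, n = P_coeffs[0].shape
    P = lambda i: P_coeffs[i] if i < len(P_coeffs) else np.zeros((m, n))
    T = np.block([[P(i - j) if i >= j else np.zeros((m, n))
                   for j in range(k)] for i in range(k)])
    _, S, Vh = np.linalg.svd(T)
    kernel = Vh[int(np.sum(S > tol)):].conj().T   # basis of ker T_{k-1}
    for v in kernel.T:
        x = v.reshape(k, n)                       # rows are x_0, ..., x_{k-1}
        if np.linalg.norm(x[0]) > tol:            # enforce x(0) = x_0 != 0
            return x
    return None
```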

For a singular polynomial matrix the set of zero directions also contains the polynomial vectors in the right nullspace $\mathcal{N}_r(P)$. This follows easily from the fact that if $v(\lambda)$ is a column of a minimal basis matrix $V(\lambda)$, then $P(\lambda)\, v(\lambda) = 0$ and $v(\lambda_0) \neq 0$ for any $\lambda_0 \in \mathbb{C}$. It is thus a zero direction of order $k$, for every $k$, at any point $\lambda_0$. Moreover, if these zero directions are indeed the vectors of a minimal basis, their degrees provide all the information about the minimal indices.

Another special subset of zero directions is the set of root polynomials at a point . Root polynomials and their properties were studied in detail in [9], where it was advocated that they deserve an important role in the theory of polynomial matrices; they had previously appeared as technical tools for proving other results [12, 20]. Below we give a definition which is clearly equivalent to that given in [9], but rephrased in a way more convenient for us in this paper.

Definition 2.2

Let the columns of $V(\lambda)$ be a right minimal basis of $\mathcal{N}_r(P)$; then

  • $x(\lambda)$ is a root polynomial of order $k$ at $\lambda_0$ if it is a zero direction of order $k$ and $\begin{bmatrix} V(\lambda_0) & x(\lambda_0) \end{bmatrix}$ has full column rank;

  • $x_1(\lambda), \ldots, x_s(\lambda)$ is a set of $\lambda_0$-independent root polynomials of orders $k_1, \ldots, k_s$ if they are zero directions of orders $k_1, \ldots, k_s$ and $\begin{bmatrix} V(\lambda_0) & x_1(\lambda_0) & \cdots & x_s(\lambda_0) \end{bmatrix}$ has full column rank;

  • a $\lambda_0$-independent set is complete if there does not exist any larger $\lambda_0$-independent set;

  • such a complete set is ordered if $k_1 \ge k_2 \ge \cdots \ge k_s$;

  • such a complete ordered set is maximal if, for all $j$, there is no root polynomial $y(\lambda)$ of order greater than $k_j$ at $\lambda_0$ such that $\begin{bmatrix} V(\lambda_0) & x_1(\lambda_0) & \cdots & x_{j-1}(\lambda_0) & y(\lambda_0) \end{bmatrix}$ has full column rank.
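The rank condition in this definition is a purely numerical test on constant matrices; a minimal sketch (our names), where `V0` is $V(\lambda_0)$ and `X0` collects the columns $x_1(\lambda_0), \ldots, x_s(\lambda_0)$:

```python
import numpy as np

def is_lambda0_independent(V0, X0, tol=1e-10):
    """Full column rank test of [V(lambda0), x_1(lambda0), ..., x_s(lambda0)]."""
    M = np.hstack([V0, X0])
    return np.linalg.matrix_rank(M, tol) == M.shape[1]
```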

The importance of maximal sets of root polynomials is given by the following result [9, Theorem 4.1.3]:

Theorem 2.3

Let the nonzero partial multiplicities at $\lambda_0$ of $P(\lambda)$ be $m_1 \ge m_2 \ge \cdots \ge m_s > 0$ and suppose that $x_1(\lambda), \ldots, x_s(\lambda)$ are a complete ordered set of root polynomials at $\lambda_0$ for $P(\lambda)$, with orders $k_1 \ge k_2 \ge \cdots \ge k_s$. Then, the following are equivalent:

  1. $x_1(\lambda), \ldots, x_s(\lambda)$ are a maximal set of root polynomials at $\lambda_0$ for $P(\lambda)$;

  2. the orders of such a set are precisely $k_i = m_i$ for $i = 1, \ldots, s$;

  3. the sum of the orders of such a set is precisely $m_1 + \cdots + m_s$.

If the polynomial matrix $P(\lambda)$ is regular, then any zero direction is a root polynomial, and the definition of a maximal set can be applied to them too. However, in the singular case, zero directions generally do not provide the correct information on minimal indices and partial multiplicities. We illustrate this fact with the next simple example.

Example 2.4

The polynomial matrix

has a unique right minimal index, equal to , and a unique nonzero partial multiplicity at , equal to . Any right minimal basis has the form

although the minimal basis is not unique, any such basis has the appropriate degree and thus correctly encodes the information on the minimal index. Similarly, it is not hard to check that any root polynomial has the form

Any such root polynomial also forms a maximal set, as can be checked by the definition; in spite of the arbitrariness of the polynomials , the order is always since

and hence, in accordance with the theory, it corresponds to the partial multiplicity of the eigenvalue. On the other hand, a generic zero direction may have an order that does not correspond to any minimal index or partial multiplicity. Indeed, let $k$ be any nonnegative integer; then

and thus we obtain a zero direction of order $k$. In other words, in the case of singular polynomials the zero directions do not necessarily provide the correct information on the partial multiplicities.

The discussion above emphasizes that maximal sets of root polynomials are important bases that enclose the information on partial multiplicities, in a similar manner to how minimal bases enclose the information on minimal indices. (Although, unlike for minimal bases, the information on the partial multiplicity is not given by the degree but by the order.) A relevant question is therefore how to compute a maximal set of root polynomials, given a polynomial matrix $P(\lambda)$ and one point $\lambda_0$. If the calculation is performed numerically, it is also of interest to investigate the stability of any proposed algorithm.

A common approach to solve polynomial eigenvalue problems is to first linearize them: a pencil $L(\lambda)$ is called a linearization of $P(\lambda)$ if there exist unimodular (that is, invertible over the ring of polynomials) matrices $E(\lambda)$ and $F(\lambda)$ such that $E(\lambda)\, L(\lambda)\, F(\lambda) = P(\lambda) \oplus I$. If $L(\lambda)$ is a linearization of $P(\lambda)$, then all the finite zeros of $P(\lambda)$ and $L(\lambda)$ have the same nonzero partial multiplicities. In [9, Section 8], it was shown that for a very broad class of classical linearizations of polynomial matrices, including for example companion matrices, vector spaces of linearizations, Fiedler pencils, and block Kronecker linearizations, it is very easy to recover a maximal set of root polynomials at $\lambda_0$ for $P(\lambda)$ from a maximal set of root polynomials at $\lambda_0$ for its linearization $L(\lambda)$: indeed, extracting a certain block suffices in all those cases. For more details, see in particular [9, Theorem 8.5], [9, Theorem 8.9] and [9, Theorem 8.10].

For this reason, a robust algorithm for the computation of a maximal set of root polynomials at $\lambda_0$ for a pencil would immediately yield a robust algorithm for the computation of a maximal set of root polynomials at $\lambda_0$ for any polynomial matrix, namely:

  1. Linearize $P(\lambda)$ via one of the linearizations for which recovery of maximal sets is described in [9, Section 8], say, $L(\lambda)$;

  2. Compute a maximal set of root polynomials at $\lambda_0$ for $L(\lambda)$;

  3. Extract a maximal set of root polynomials at $\lambda_0$ for $P(\lambda)$.

This justifies a peculiar computational focus on the pencil case. The goal of this paper is to derive an algorithm for step 2 in the above, and to comment on the numerical stability of this algorithm's subroutines; our algorithm can in addition also compute minimal bases, which for many linearizations can also in turn be used to determine the minimal bases of the linearized polynomial matrix [6, 7, 8, 21]. In future work, we plan to investigate algorithms that work directly on the polynomial matrix $P(\lambda)$, and compare them with the approach described above.
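As an illustration of step 1, the following sketch (our code; the first companion form is one classical choice covered by the recovery results of [9, Section 8]) builds a linearizing pencil for a square $P(\lambda) = \sum_{i=0}^d P_i \lambda^i$:

```python
import numpy as np

def first_companion(P_coeffs):
    """First companion linearization: returns (A, B) with C(lambda) = lambda*B + A.
    P_coeffs = [P_0, P_1, ..., P_d], all of size n x n.  (Note: the paper's
    pencil convention is lambda*B - A; adapt signs as needed.)"""
    d, n = len(P_coeffs) - 1, P_coeffs[0].shape[0]
    A = np.zeros((d * n, d * n)); B = np.zeros((d * n, d * n))
    B[:n, :n] = P_coeffs[d]                              # leading coefficient
    for j in range(1, d):
        B[j * n:(j + 1) * n, j * n:(j + 1) * n] = np.eye(n)
    for j in range(d):
        A[:n, j * n:(j + 1) * n] = P_coeffs[d - 1 - j]   # P_{d-1}, ..., P_0
        if j >= 1:
            A[j * n:(j + 1) * n, (j - 1) * n:j * n] = -np.eye(n)
    return A, B
```

For a scalar quadratic $p_2\lambda^2 + p_1\lambda + p_0$ this returns the familiar $2 \times 2$ companion pencil, whose determinant reproduces the polynomial.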

3 Zero directions of pencils

As discussed above, the case of pencils is especially important because the existence of an algorithm to compute maximal sets of root polynomials and minimal bases for pencils immediately implies the existence of a general algorithm. We start in this section by considering the computation of zero directions: then, we will show how to extract root polynomials and minimal bases from them.

Finding the zero directions of an $m \times n$ pencil $\lambda B - A$ is a simpler problem than its analogue for a higher degree polynomial matrix, since one can use the generalized Schur form of an arbitrary pencil, which is related to the Kronecker canonical form. Since the expansion of the pencil around any point $\lambda_0$ is again a pencil, we can assume without loss of generality that the eigenvalue we are interested in is $\lambda_0 = 0$.

As a first step, we recall results from the literature concerning certain block Toeplitz matrices [10, 15, 25]. The first two allow us to retrieve the right minimal indices $\epsilon_i$ and the left minimal indices $\eta_i$ of the pencil $\lambda B - A$ of normal rank $r$.

Theorem 3.1 ([15, 16])

Let us denote the right nullity of the bidiagonal Toeplitz matrix

$$\begin{bmatrix} A & & & \\ B & A & & \\ & \ddots & \ddots & \\ & & B & A \\ & & & B \end{bmatrix} \qquad (k+2 \text{ block rows and } k+1 \text{ block columns})$$

by $\nu_k$, set $\nu_{-1} = \nu_{-2} = 0$, and let $\rho_k$ be the number of right minimal indices of the pencil $\lambda B - A$ equal to $k$. Then

$$\rho_k = \nu_k - 2\nu_{k-1} + \nu_{k-2}, \qquad k = 0, 1, 2, \ldots$$

Theorem 3.2 ([15, 16])

Let us denote the left nullity of the bidiagonal Toeplitz matrix

$$\begin{bmatrix} A & B & & & \\ & A & B & & \\ & & \ddots & \ddots & \\ & & & A & B \end{bmatrix} \qquad (k+1 \text{ block rows and } k+2 \text{ block columns})$$

by $\omega_k$, set $\omega_{-1} = \omega_{-2} = 0$, and let $\sigma_k$ be the number of left minimal indices of the pencil $\lambda B - A$ equal to $k$. Then

$$\sigma_k = \omega_k - 2\omega_{k-1} + \omega_{k-2}, \qquad k = 0, 1, 2, \ldots$$

The third block Toeplitz result specializes the result mentioned earlier and relates to the elementary divisors at the eigenvalue $0$.

Theorem 3.3 ([25])

Let us denote the rank of the bidiagonal Toeplitz matrix

$$T_k = \begin{bmatrix} A & & & \\ B & A & & \\ & \ddots & \ddots & \\ & & B & A \end{bmatrix} \qquad (k+1 \text{ block rows and columns})$$

by $r_k$, and set $r_{-1} = r_{-2} = 0$; then the number $d_k$ of elementary divisors $\lambda^k$ of degree $k$ is given by

$$d_k = r_k - 2 r_{k-1} + r_{k-2}, \qquad k = 1, 2, \ldots$$

One could in principle use the above Toeplitz matrices to compute the indices $\epsilon_i$, $\eta_i$ and $d_k$ and construct from these minimal bases and root polynomials, but this would be very inefficient. The output of the staircase algorithm [23] applied to the pencil $\lambda B - A$ in fact can be used to retrieve all the information to find these polynomial vectors, as well as their degrees or orders. However, while the minimal indices and the partial multiplicities can be read out directly from the staircase form, the computation of a maximal set of root polynomials and a minimal basis requires some extra work, which is the subject of this paper; this is not dissimilar to what happens after having computed the Schur form of a matrix, from which the eigenvalues can be read directly while computing eigenvectors requires some further computational effort. There exists a staircase form for reconstructing both the left and right root polynomials and minimal bases, but since they are just the conjugate transpose of each other, we restrict ourselves here to the right case.
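For illustration, the Toeplitz route of Theorem 3.1 can be coded directly (a Python/NumPy sketch with our own naming, following the block layouts and difference formulas displayed above; it is meant as a reference implementation, not as an efficient algorithm):

```python
import numpy as np

def right_minimal_index_counts(A, B, k_max, tol=1e-10):
    """For the pencil lambda*B - A, return [rho_0, ..., rho_{k_max}], where
    rho_k is the number of right minimal indices equal to k (Theorem 3.1).
    Block signs do not affect the nullities, so they are omitted here."""
    m, n = A.shape
    counts, nullities = [], [0, 0]            # nu_{-2} = nu_{-1} = 0
    for k in range(k_max + 1):
        # (k+2) x (k+1) block bidiagonal Toeplitz matrix: A on the diagonal,
        # B on the subdiagonal (cf. the display in Theorem 3.1).
        T = np.zeros(((k + 2) * m, (k + 1) * n))
        for j in range(k + 1):
            T[j * m:(j + 1) * m, j * n:(j + 1) * n] = A
            T[(j + 1) * m:(j + 2) * m, j * n:(j + 1) * n] = B
        nullities.append((k + 1) * n - np.linalg.matrix_rank(T, tol))
        counts.append(nullities[-1] - 2 * nullities[-2] + nullities[-3])
    return counts

# Example: lambda*B - A = [lambda, -1] has a single right minimal index 1.
A = np.array([[0.0, 1.0]]); B = np.array([[1.0, 0.0]])
print(right_minimal_index_counts(A, B, 2))    # [0, 1, 0]
```

Theorem 3.3 can be coded analogously from the rank profile $r_k$ of the square bidiagonal Toeplitz matrices.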

It is shown in [23] that there always exist unitary transformations $U$ and $V$ (these will be real and orthogonal when the matrices $A$ and $B$ are real) such that

$$U^* (\lambda B - A)\, V = \begin{bmatrix}
\lambda B_{1,1} & \lambda B_{1,2} - A_{1,2} & \cdots & \lambda B_{1,\ell} - A_{1,\ell} & \lambda B_{1,\ell+1} - A_{1,\ell+1} \\
 & \lambda B_{2,2} & \ddots & \vdots & \vdots \\
 & & \ddots & \lambda B_{\ell-1,\ell} - A_{\ell-1,\ell} & \vdots \\
 & & & \lambda B_{\ell,\ell} & \lambda B_{\ell,\ell+1} - A_{\ell,\ell+1} \\
 & & & & \lambda B_{\ell+1,\ell+1} - A_{\ell+1,\ell+1}
\end{bmatrix}, \qquad (4)$$

where
(i) the matrices $B_{j,j}$ are of dimension $\mu_j \times \nu_j$ and of full row rank $\mu_j$,
(ii) the matrices $A_{j,j+1}$ are of dimension $\mu_j \times \nu_{j+1}$ and of full column rank $\nu_{j+1}$,
(iii) $A_{\ell+1,\ell+1}$ is of full column rank.
Hence, it follows [23] that

$$\nu_1 \ge \mu_1 \ge \nu_2 \ge \mu_2 \ge \cdots \ge \nu_\ell \ge \mu_\ell,$$

and that the leading diagonal pencil has as structural elements associated with its local Smith form at $0$

$$\nu_j - \mu_j \quad \text{right minimal indices equal to } j - 1, \quad \text{and}$$

$$\mu_j - \nu_{j+1} \quad \text{elementary divisors of degree } j \text{ at } 0 \quad (\text{where } \nu_{\ell+1} := 0).$$

The transformations $U$ and $V$ are chosen to be unitary (or orthogonal in the real case) for reasons of numerical stability. The construction of the transformations is via the staircase algorithm, which recursively constructs growing orthonormal bases $\begin{bmatrix} V_1 & \cdots & V_j \end{bmatrix}$ and $\begin{bmatrix} U_1 & \cdots & U_j \end{bmatrix}$ for the so-called Wong chains (or sequences) [4, 26] defined as follows:

$$\mathcal{W}_0 := \{0\}, \qquad \mathcal{W}_j := A^{-1}(B\, \mathcal{W}_{j-1}), \quad j = 1, 2, \ldots$$

Here we have borrowed from [22] the following notation to indicate how a matrix acts on a vector space:

$$A\,\mathcal{S} := \{ A x \,:\, x \in \mathcal{S} \}, \qquad A^{-1}\mathcal{S} := \{ x \,:\, A x \in \mathcal{S} \}.$$

In other words, $A\,\mathcal{S}$ is the image of $\mathcal{S}$ under the transformation represented (in the canonical basis) by $A$, and $A^{-1}\mathcal{S}$ is the pre-image of $\mathcal{S}$ under the transformation represented (in the canonical basis) by $A$. Moreover, it is known [4, 22, 23, 26] that these spaces are nested and have the following dimensions and orthonormal bases:

$$\mathcal{W}_j = \operatorname{span} \begin{bmatrix} V_1 & \cdots & V_j \end{bmatrix}, \quad \dim \mathcal{W}_j = \nu_1 + \cdots + \nu_j, \qquad B\,\mathcal{W}_j = \operatorname{span} \begin{bmatrix} U_1 & \cdots & U_j \end{bmatrix}, \quad \dim B\,\mathcal{W}_j = \mu_1 + \cdots + \mu_j. \qquad (5)$$

Moreover, since these are constant transformations, they only transform the coordinate system in which we have to construct the coefficients of the zero directions $x(\lambda)$ and $y(\lambda)$. The above form (4) is appropriate for constructing $x(\lambda)$, but there exists a dual form where the role of columns and rows is interchanged, and which can thus be used to construct $y(\lambda)$. Below we focus on finding solutions in the coordinate system of (4), but this is no loss of generality, as the corresponding zero directions of $\lambda B - A$ are easily seen to be $V x(\lambda)$ (see also [9, Proposition 3.2] for a more general result on root polynomials, and note that its proof can be adapted to show an analogous result on any zero direction).
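For intuition, the Wong sequence can be computed directly from its definition with SVD-based kernels and images (an illustrative Python sketch, not the staircase implementation of [23]; in particular, it does not return the nested staircase bases $V_1, \ldots, V_j$):

```python
import numpy as np

def _kernel(M, tol=1e-10):
    _, S, Vh = np.linalg.svd(M)
    return Vh[int(np.sum(S > tol)):].conj().T     # orthonormal kernel basis

def _orth(M, tol=1e-10):
    if M.shape[1] == 0:
        return M
    Q, S, _ = np.linalg.svd(M, full_matrices=False)
    return Q[:, S > tol]                          # orthonormal range basis

def wong_sequence(A, B, tol=1e-10):
    """Orthonormal bases of W_1 = ker(A), W_{j+1} = A^{-1}(B W_j) for the
    pencil lambda*B - A, stopping when the chain stabilizes."""
    W = _kernel(A, tol)
    bases = [W]
    while True:
        R = _orth(B @ W, tol)                     # basis of B W_j
        P_perp = np.eye(A.shape[0]) - R @ R.conj().T
        W_next = _kernel(P_perp @ A, tol)         # x with A x in span(B W_j)
        if W_next.shape[1] == W.shape[1]:         # dimensions stabilized
            return bases
        W = W_next
        bases.append(W)
```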

As a next step, in Section 4, we will further simplify the pencil (4), eventually reaching a form that allows us to (a) reduce the problem of computing a maximal set of root polynomials to the case of a regular pencil with only eigenvalues at $0$ and (b) reduce the problem of computing a minimal basis to the case of a pencil having only right minimal indices.

4 Extracting the null space and root polynomials

Although the Wong chains described in the previous section are uniquely defined as nested subspaces, the corresponding bases are not unique. It was shown in [1] that an appropriate updating of the transformations $U$ and $V$ in (4) makes sure that the “stairs” $B_{j,j}$ and $A_{j,j+1}$ have the following quasi-triangular form

$$B_{j,j} = \begin{bmatrix} R_j & 0 \end{bmatrix}, \qquad A_{j,j+1} = \begin{bmatrix} S_j \\ 0 \end{bmatrix}, \qquad (6)$$

where $R_j$ and $S_j$ are both upper triangular and invertible. Such a form can be obtained by updating the transformations $U$ and $V$ to $U \hat U$ and $V \hat V$, using block-diagonal unitary matrices

$$\hat U = \operatorname{diag}(\hat U_1, \ldots, \hat U_{\ell+1}), \qquad \hat V = \operatorname{diag}(\hat V_1, \ldots, \hat V_{\ell+1}).$$

Note that these transformations have to be constructed backwards, starting with $\hat V_{\ell+1} = I$; then, for $j = \ell, \ell - 1, \ldots, 1$, the factor $\hat U_j$ is obtained from a QR-type decomposition putting $A_{j,j+1} \hat V_{j+1}$ in the form (6), after which $\hat V_j$ is obtained from an RQ-type decomposition putting $\hat U_j^* B_{j,j}$ in the form (6).

It is worth noting that the block columns of the updated transformations $U \hat U$ and $V \hat V$ are still orthonormal bases for the nested Wong spaces defined earlier. We have only updated the choice of basis vectors. One can then separate the right null space blocks from the structure at the eigenvalue $0$ by using the following result.

Lemma 4.1

Let $\hat L(\lambda)$ be the leading principal subpencil of (4), formed by its first $\ell$ block rows and columns, and assume it is in staircase form with “stairs” in the special form (6),

$$\hat L(\lambda) = \begin{bmatrix}
\lambda B_{1,1} & \lambda B_{1,2} - A_{1,2} & \cdots & \lambda B_{1,\ell} - A_{1,\ell} \\
 & \lambda B_{2,2} & \ddots & \vdots \\
 & & \ddots & \lambda B_{\ell-1,\ell} - A_{\ell-1,\ell} \\
 & & & \lambda B_{\ell,\ell}
\end{bmatrix}, \qquad (7)$$

indicating that this subpencil has only zero eigenvalues and right minimal indices. Then there exist unit upper triangular transformations $S$ and $T$ that eliminate all blocks except the rank carrying stairs:

$$S\, \hat L(\lambda)\, T = \begin{bmatrix}
\lambda B_{1,1} & -A_{1,2} & & \\
 & \lambda B_{2,2} & \ddots & \\
 & & \ddots & -A_{\ell-1,\ell} \\
 & & & \lambda B_{\ell,\ell}
\end{bmatrix}, \qquad (8)$$

without altering these stairs, and hence puts the pencil in block-bidiagonal form.

Proof. The transformations $S$ and $T$ can be constructed recursively as a product of unit upper triangular transformations. Indeed, we can work upwards from block row $\ell - 1$ till block row $1$, each time eliminating first the elements $A_{j,k}$ (for $k > j + 1$) by row transformations, then the elements $B_{j,k}$ (for $k > j$) by column transformations. The row transformations use the full column rank matrices $A_{k-1,k}$ under them as pivot. The column transformations use the full row rank matrices $B_{j,j}$ to the left of them as pivot. $\square$

In the proof above, the order in which the zero blocks of the constant and first-order coefficients are created is crucial in order to avoid destroying previously created zero blocks. For this reason it is necessary that this recurrence runs backwards. A permuted version of this lemma was already shown in [23], without insisting that this only requires unit upper triangular transformation matrices. By choosing unit upper triangular matrices for this elimination, we can interpret it as the back-substitution for solving a linear system of equations in the unknowns $S$ and $T$:

$$S\, \hat A\, T = \hat A_{\mathrm{bd}}, \qquad S\, \hat B\, T = \hat B_{\mathrm{bd}}, \qquad (9)$$

where $\hat A$ and $\hat B$ denote the constant and first-order coefficient matrices of the subpencil (7), and $\hat A_{\mathrm{bd}}$ and $\hat B_{\mathrm{bd}}$ are the submatrices of $\hat A$ and $\hat B$ that are kept in the bidiagonal pencil (8). The blocks of $\hat A$ and $\hat B$ lying above the rank carrying stairs are to be eliminated from the right-hand sides, and the strictly upper block triangular parts of $S$ and $T$ are the unknowns; they have as many nonzeros as the blocks they are supposed to eliminate. Their block structure follows from the block structure of the pivot blocks, as indicated in the proof of Lemma 4.1. This system of equations is therefore invertible, and we can apply iterative refinement [13] to make the result numerically stable in a particular sense. Since there is no factorization to be performed for the iterative refinement, its computational cost is very reasonable. Also, the choice of transformations is such that the rank carrying stairs are not altered.
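A generic sketch of such a refinement loop for a unit upper triangular system $Tx = b$ (our illustration with SciPy; as noted above, no factorization is needed, and each sweep is one more back-substitution with the current residual):

```python
import numpy as np
from scipy.linalg import solve_triangular

def refine_unit_upper(T, b, steps=2):
    x = solve_triangular(T, b, lower=False, unit_diagonal=True)
    for _ in range(steps):
        r = b - T @ x          # residual; ideally accumulated in higher precision
        x += solve_triangular(T, r, lower=False, unit_diagonal=True)
    return x
```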

We point out that the Wong sequences are now also spanned by the growing subblocks of the updated transformation matrices, but these bases are no longer orthonormal.

Corollary 4.2

The updated transformation matrices have non-orthonormal block columns that still span the nested Wong spaces defined in (5).

It is also useful to point out that the staircase form index sets $\{\nu_j\}$ and $\{\mu_j\}$ allow us to separate the pencil into its two separate structures, namely the right minimal indices and the Jordan structure at the eigenvalue $0$. Below is an example of a special case of (7) illustrating this separation. The red blocks correspond to the right minimal indices and the blue blocks correspond to the Jordan structure at $0$; one symbol denotes a complex number that is allowed to be nonzero, while the other denotes a complex number that is guaranteed to be nonzero. The sequences of indices for these two structures are obtained by starting from the staircase dimensions $\nu_j$ and $\mu_j$ and then equating, for decreasing $j$, the number $\nu_j - \mu_j$ of right minimal indices equal to $j - 1$ and the number $\mu_j - \nu_{j+1}$ of Jordan blocks of size $j$.

Unfortunately, these blocks are not completely decoupled in this coordinate system. This can be cured by a mere reordering of rows and columns (which is equivalent to multiplying by permutation matrices on the left and on the right) to separate the (red) Kronecker block structure from the (blue) Jordan structure.

Moreover, applying the same reordering to the bidiagonal pencil described in Lemma 4.1 yields the required block triangular form: the bidiagonal form for our example, once permuted, is block upper triangular.

Equivalently, a complete block diagonal decoupling is also obtained if we update the unit upper triangular matrices $S$ and $T$ so as to block diagonalize the upper triangular matrices in the bidiagonal form of Lemma 4.1.

5 Recurrences for the null space and root polynomials

The discussion in Section 4 shows that we can obtain a triangular block decomposition of the form

$$\begin{bmatrix} L_\epsilon(\lambda) & * & * \\ 0 & L_0(\lambda) & * \\ 0 & 0 & L_c(\lambda) \end{bmatrix}, \qquad (10)$$

where $L_\epsilon(\lambda)$ only has a right null space structure, $L_0(\lambda)$ only has a Jordan structure at $0$, and $L_c(\lambda)$ has full column rank and contains the rest of the pencil structure. Moreover, $L_\epsilon(\lambda)$ and $L_0(\lambda)$ are both in the bidiagonal staircase form given in (6) and (8), and the equivalence transformation pair was obtained as the product of a unitary equivalence transformation pair $(U, V)$, an upper triangular equivalence transformation pair $(S, T)$, and a permutation transformation pair. This decomposition is crucial in the analysis of the numerical stability of each step of the method, since the unitary similarity was analyzed in [23], and the upper triangular similarity pair can be viewed as a backsubstitution step for solving a linear system, which can be stabilized using iterative refinement [13]. However, a detailed analysis of the composed algorithm is beyond the scope of the present paper.

5.1 Calculating root polynomials: reduction to the regular case

We now argue that the structure of the pencil (10) allows us to just compute the root polynomials of the pencil $L_0(\lambda)$, which is guaranteed to be regular and to only have Jordan blocks with eigenvalue $0$. We start by defining what it means for a polynomial matrix to be right invertible.

Definition 5.1

We say that an $m \times n$ polynomial matrix $R(\lambda)$ is right invertible over $\mathbb{C}[\lambda]$ if there exists a polynomial matrix $R^R(\lambda)$ such that $R(\lambda)\, R^R(\lambda) = I_m$. If $m = 0$, by convention the empty matrix polynomial is right invertible and its right inverse is its transpose.

Linear equations whose coefficients are right invertible polynomial matrices always have polynomial solutions, in the sense that if $R(\lambda)$ is right invertible with right inverse $R^R(\lambda)$ then the $\mathbb{C}[\lambda]$-linear map associated with $R(\lambda)$ is surjective, and thus the equation $R(\lambda)\, x(\lambda) = b(\lambda)$, for any polynomial right-hand side $b(\lambda)$, has at least one polynomial solution. Indeed, a solution can be constructed as $x(\lambda) = R^R(\lambda)\, b(\lambda)$. Lemma 5.2 characterizes completely the set of right invertible matrix pencils in terms of their eigenvalues and minimal indices. We note that almost equivalent results had appeared in [2]; however, Lemma 5.2 below provides a slightly better bound on the degree of a right inverse, and for this reason we will include a proof.

Lemma 5.2

Let $L(\lambda)$ be an $m \times n$ matrix pencil. Then, $L(\lambda)$ is right invertible over $\mathbb{C}[\lambda]$ if and only if it has no finite eigenvalues and no left minimal indices. Moreover, in that case, there exists a right inverse having degree at most $m - 1$.

Proof. First, we point out that

  1. for any pair of square invertible matrices $E, F$, it holds that $L(\lambda)$ is right invertible with right inverse $X(\lambda)$ if and only if $E\, L(\lambda)\, F$ is right invertible with right inverse $F^{-1} X(\lambda)\, E^{-1}$;

  2. $L_1(\lambda)$ and $L_2(\lambda)$ are both right invertible with right inverses $X_1(\lambda)$ and $X_2(\lambda)$, respectively, if and only if $L_1(\lambda) \oplus L_2(\lambda)$ is right invertible with right inverse $X_1(\lambda) \oplus X_2(\lambda)$;

  3. The degree of a direct sum of polynomial matrices is the maximum of their degrees.

Hence, we can assume without loss of generality that $L(\lambda)$ is a single block within a Kronecker canonical form.

Assume first that $L(\lambda)$ is either a left Kronecker block or a Jordan block associated with a finite eigenvalue. Then there exist a nonzero vector $w$ and a scalar $\lambda_0$ such that $w^* L(\lambda_0) = 0$. Indeed, for a left Kronecker block $\lambda_0$ can be any element of $\mathbb{C}$ and $w$ can be constructed by evaluating a row of its left minimal basis at $\lambda_0$; while if $L(\lambda)$ is a Jordan block with finite eigenvalue, then we take $\lambda_0$ equal to that eigenvalue and $w$ equal to a left eigenvector. Assuming for a contradiction that $L(\lambda)$ is right invertible, we then have for any right inverse $X(\lambda)$

$$0 = w^* L(\lambda_0)\, X(\lambda_0) = w^* I = w^* \neq 0,$$

a contradiction.

We now give a constructive proof of the converse implication, i.e., we explicitly compute a right inverse that also satisfies the degree bound. This time, we can assume that $L(\lambda)$ is either an $\epsilon \times (\epsilon + 1)$ right Kronecker block or a $k \times k$ Jordan block with infinite eigenvalues. We treat each case separately.

  • Right singular Kronecker block. If $\epsilon = 0$, the statement is true by definition. Otherwise,

$$L_\epsilon(\lambda) = \begin{bmatrix} \lambda & -1 & & \\ & \ddots & \ddots & \\ & & \lambda & -1 \end{bmatrix},$$

    and it suffices to take the Toeplitz right inverse

$$X(\lambda) = -\begin{bmatrix} 0 & \cdots & \cdots & 0 \\ 1 & & & \\ \lambda & 1 & & \\ \vdots & \ddots & \ddots & \\ \lambda^{\epsilon-1} & \cdots & \lambda & 1 \end{bmatrix}.$$

  • Jordan block at infinity. In this case $L(\lambda) = I + \lambda N$, where $N$ is a nilpotent Jordan block, and we can take the Toeplitz right inverse

$$X(\lambda) = \sum_{j=0}^{k-1} (-\lambda)^j N^j = \begin{bmatrix} 1 & -\lambda & \cdots & (-\lambda)^{k-1} \\ & \ddots & \ddots & \vdots \\ & & 1 & -\lambda \\ & & & 1 \end{bmatrix}. \qquad \square$$
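Both right inverses can be verified mechanically by polynomial multiplication; a sketch (our code, which assumes the sign convention $L_\epsilon(\lambda) = \lambda\,[\,I\;\;0\,] - [\,0\;\;I\,]$ used in the display above):

```python
import numpy as np

def L_eps_coeffs(eps):
    """Constant and linear coefficients of the eps x (eps+1) Kronecker block,
    so that L(lambda) = C[0] + lambda * C[1] = lambda*B - A."""
    A = np.hstack([np.zeros((eps, 1)), np.eye(eps)])
    B = np.hstack([np.eye(eps), np.zeros((eps, 1))])
    return [-A, B]

def right_inverse_coeffs(eps):
    """Coefficients X_0, ..., X_{eps-1} of the Toeplitz right inverse."""
    X = [np.zeros((eps + 1, eps)) for _ in range(eps)]
    for d in range(eps):                  # coefficient of lambda**d is -1
        for j in range(eps - d):
            X[d][j + d + 1, j] = -1.0
    return X

def polymul(Pc, Xc):
    out = [np.zeros((Pc[0].shape[0], Xc[0].shape[1]))
           for _ in range(len(Pc) + len(Xc) - 1)]
    for i, Pi in enumerate(Pc):
        for j, Xj in enumerate(Xc):
            out[i + j] = out[i + j] + Pi @ Xj
    return out

eps = 3
prod = polymul(L_eps_coeffs(eps), right_inverse_coeffs(eps))
assert np.allclose(prod[0], np.eye(eps))          # constant term is I
assert all(np.allclose(C, 0) for C in prod[1:])   # all other terms vanish
```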

From now on we will assume that the pencil $L(\lambda)$ is in the form

$$L(\lambda) = \begin{bmatrix} L_r(\lambda) & * & * \\ 0 & L_0(\lambda) & * \\ 0 & 0 & L_c(\lambda) \end{bmatrix}, \qquad (11)$$

where $L_r(\lambda)$ is right invertible over $\mathbb{C}[\lambda]$, $L_c(\lambda)$ has no eigenvalues at $0$ and has no right minimal indices, and $L_0(\lambda)$ is regular and has only eigenvalues at $0$. Note also that what we had achieved in (10) is a special case of (11), with $L_r(\lambda) = L_\epsilon(\lambda)$ having only right minimal indices.

Proposition 5.3

Suppose that $L(\lambda)$ is as in the equation (11) and satisfies the assumptions stated immediately below it. Then, the columns of $M(\lambda)$ are a minimal basis for $\mathcal{N}_r(L_r)$ if and only if a minimal basis for $\mathcal{N}_r(L)$ is given by the columns of

$$\begin{bmatrix} M(\lambda) \\ 0 \\ 0 \end{bmatrix}.$$

Proof. Since $L_0(\lambda)$ and $L_c(\lambda)$ both have trivial right nullspaces, it is clear that any minimal basis for $\mathcal{N}_r(L)$ can only have nonzero top blocks. After this observation, the proof becomes trivial. $\square$

Proposition 5.4

Suppose that $L(\lambda)$ is as in the equation (11) and satisfies the assumptions stated immediately below it. Then, $x(\lambda)$ is a root polynomial of order $k$ for $L(\lambda)$ at