Strongly minimal self-conjugate linearizations for polynomial and rational matrices

We prove that we can always construct strongly minimal linearizations of an arbitrary rational matrix from its Laurent expansion around the point at infinity, an expansion that is directly available for polynomial matrices expressed in the monomial basis. If the rational matrix has a particular self-conjugate structure, we show how to construct strongly minimal linearizations that preserve it. The structures considered are the Hermitian and skew-Hermitian rational matrices with respect to the real line, and the para-Hermitian and para-skew-Hermitian rational matrices with respect to the imaginary axis. We pay special attention to the construction of strongly minimal linearizations for the particular case of structured polynomial matrices. The proposed constructions lead to efficient numerical algorithms for constructing strongly minimal linearizations. The fact that they are valid for any rational matrix improves on previous approaches for constructing other classes of structure-preserving linearizations, which do not apply to every structured rational or polynomial matrix. The use of the recent concept of strongly minimal linearization is the key to achieving such generality.

1 Introduction

In the seventies, Rosenbrock [45] introduced the concept of a polynomial system matrix of an arbitrary rational matrix $R(\lambda)$. Such a system matrix is partitioned in a quadruple of compatible polynomial matrices

$$P(\lambda) = \begin{bmatrix} A(\lambda) & B(\lambda) \\ -C(\lambda) & D(\lambda) \end{bmatrix} \qquad (1)$$

such that its Schur complement with respect to $A(\lambda)$ equals $R(\lambda)$. That is, $R(\lambda) = D(\lambda) + C(\lambda)A(\lambda)^{-1}B(\lambda)$. Then the quadruple $(A(\lambda), B(\lambda), C(\lambda), D(\lambda))$ is said to be a realization of $R(\lambda)$. Rosenbrock showed that one can retrieve from the polynomial matrices $A(\lambda)$ and $P(\lambda)$, respectively, the finite pole and zero structure of $R(\lambda)$, provided $P(\lambda)$ is irreducible or minimal, meaning that the matrices

$$\begin{bmatrix} A(\lambda) & B(\lambda) \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} A(\lambda) \\ -C(\lambda) \end{bmatrix} \qquad (2)$$

have, respectively, full row and column rank for all finite $\lambda$. It was shown recently in [17] that when the quadruple consists of polynomial matrices of degree at most one, i.e. pencils, then one can recover the complete eigenstructure of the rational matrix $R(\lambda)$, namely its finite and infinite polar and zero structure and its left and right null space structure, from the pencils $A(\lambda)$ and $P(\lambda)$, provided the pencils in (2) have full rank for all $\lambda$, infinity included. Moreover, in this situation, the eigenvectors and minimal bases of $R(\lambda)$ can be recovered very easily from those of $P(\lambda)$, and their minimal indices are the same. In such a case, $P(\lambda)$ is said to be strongly minimal [16, 17] or, also, a strongly minimal linearization of $R(\lambda)$. The main advantage of using pencils is that there are well-established stable algorithms to compute their eigenstructure using unitary transformations only, both in the regular [42] and in the singular [48] case. There are also algorithms available to derive strongly minimal linear polynomial system matrices from non-minimal ones. These algorithms are also based on unitary transformations only [49, 17].

In this paper, we show how to construct strongly minimal linearizations for rational matrices $R(\lambda)$ starting from a Laurent expansion around the point at infinity:

$$R(\lambda) = \sum_{k=-\infty}^{d} R_k \lambda^k, \qquad (3)$$

which is convergent for $|\lambda|$ sufficiently large. The approach we propose is also valid if, instead of the Laurent expansion of the strictly proper part of $R(\lambda)$, any minimal state-space realization of the strictly proper part of $R(\lambda)$ is given.
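For instance, for the $2\times 2$ rational matrix below (a small example of (3), chosen here for concreteness), the expansion has only three nonzero coefficients:

$$\begin{bmatrix} \lambda^2 & 1 \\ 1 & \lambda^{-1} \end{bmatrix} = \lambda^2 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \lambda^0 \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + \lambda^{-1} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix},$$

so that $R_2$, $R_0$ and $R_{-1}$ are the only nonzero coefficients, and the expansion converges for all $\lambda \neq 0$.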

If the rational matrix $R(\lambda)$ is square and has a particular type of self-conjugate structure, then the coefficients $R_k$ of its expansion (3) inherit the self-conjugate structure and the poles and zeros of $R(\lambda)$ appear in self-conjugate pairs. Such structures arise in many applications, as we comment below, and in these cases we also show how to construct strongly minimal linearizations preserving the structure. In particular, we consider here four types of self-conjugate rational matrices, two with respect to the real line and two with respect to the imaginary axis. The Hermitian and skew-Hermitian rational matrices $R(\lambda)$, with respect to the real line, satisfy

$$R(\lambda) = [R(\bar{\lambda})]^* \quad \text{and} \quad R(\lambda) = -[R(\bar{\lambda})]^*,$$

respectively. They have poles and zeros that are mirror images with respect to the real line, and have coefficient matrices that are Hermitian (i.e. $R_k = R_k^*$) and skew-Hermitian (i.e. $R_k = -R_k^*$), respectively. The para-Hermitian and para-skew-Hermitian rational matrices, with respect to the imaginary axis, satisfy

$$R(\lambda) = [R(-\bar{\lambda})]^* \quad \text{and} \quad R(\lambda) = -[R(-\bar{\lambda})]^*,$$

respectively. They have poles and zeros that are mirror images with respect to the imaginary axis, and have scaled coefficient matrices $i^k R_k$ that are Hermitian and skew-Hermitian, respectively. The nomenclature introduced above is used in the linear systems and control theory literature (see, for instance, [23, 43, 44] and the references therein). However, in standard references on structured polynomial matrices [36, 37], the para-Hermitian and para-skew-Hermitian structures are called alternating structures, because the matrix coefficients satisfy, respectively, $R_k^* = (-1)^k R_k$ and $R_k^* = (-1)^{k+1} R_k$ and, thus, alternate between being Hermitian and skew-Hermitian matrices. Specifically, para-Hermitian polynomial matrices are called $*$-even and para-skew-Hermitian polynomial matrices are called $*$-odd in [36, 37].
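As a concrete illustration (our own small example, not taken from the references), the pencil

$$R(\lambda) = \begin{bmatrix} 0 & \lambda \\ -\lambda & 1 \end{bmatrix} = \lambda \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

satisfies $[R(-\bar{\lambda})]^* = R(\lambda)$, so it is para-Hermitian ($*$-even): its coefficient of $\lambda$ is skew-Hermitian, while its constant coefficient is Hermitian, in agreement with $R_k^* = (-1)^k R_k$.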

There are, of course, analogous definitions for real rational matrices, where all coefficient matrices are real: namely, the (skew-)symmetric and para-(skew-)symmetric rational matrices. In these cases, the poles and zeros satisfy the same symmetries described above.

The symmetries in the zeros and poles of structured polynomial and rational matrices reflect specific physical properties, as they usually originate from the physical symmetries of the underlying applications [23, 30, 33, 36, 38]. Such special structures occur in numerous applications in engineering, mechanics, control, and linear systems theory. Some of the most common algebraic structures that appear in applications are the (skew-)symmetric (or Hermitian) and the para-(skew-)symmetric (or para-(skew-)Hermitian), i.e. alternating, structures considered in this work (see [30, 36, 43, 44] and the references therein). For instance, symmetric (or Hermitian) matrix polynomials arise in the classical problem of vibration analysis [24, 34], and alternating matrix polynomials find applications in the study of corner singularities in anisotropic elastic materials [41] and in the study of gyroscopic systems [33]. Rational matrices with the structures mentioned above have appeared, for instance, in the continuous-time linear-quadratic optimal control problem and in the spectral factorization problem [23, 43, 44, 50].

Because of the numerous applications where structured rational and polynomial matrices occur, there have been many attempts to construct linearizations for such structured matrices that display the same structure as the original rational or polynomial matrix (see [5, 10, 11, 15, 20, 23, 30, 34, 36] among many other references on this topic). An important motivation for this search is to preserve numerically, in floating point arithmetic, the symmetries of the zeros and poles of these structured problems, by applying structured algorithms for structured generalized eigenvalue problems to these structured linearizations [7, 32, 39, 40, 41, 46]. However, all these earlier attempts to construct structured linearizations encounter obstacles when they are applied to the structured problems considered in this paper, because they either cover only a subclass of the structures, or they impose certain conditions on the rational and polynomial matrices for their construction to apply, such as regularity, strict properness or invertibility of certain matrix coefficients. We emphasize that, for some polynomial matrices, the mentioned obstacles cannot be overcome in any way with the previously adopted definitions of linearization, because it has been proved in [36, 37] that there exist alternating polynomial matrices that cannot be linearized at all according to the standard definitions of linearization in [36, 37]. In contrast, in the present paper, we give a construction of structured strongly minimal linearizations valid for arbitrary rational and polynomial matrices with any of the above four structures. Moreover, the proof used for this construction is different from those in these earlier papers, and we claim that it is simpler as well.

The paper is organized as follows. In Section 2, we develop background material for the problem and introduce strongly minimal linearizations for polynomial and rational matrices. In Section 3, we show how to construct strongly minimal linearizations of arbitrary polynomial matrices, paying particular attention to quadratic polynomial matrices in Subsection 3.1. In Section 4, we extend this construction to structured strongly minimal linearizations of structured polynomial matrices. In Sections 5 and 6, we develop analogous results for strictly proper rational matrices. That is, we build strongly minimal linearizations for arbitrary and structured strictly proper rational matrices, respectively. In Section 7, we combine the results of the previous sections to construct strongly minimal linearizations for arbitrary and structured rational matrices. Finally, in Section 8, we comment on some algorithmic aspects and, in Section 9, we give some concluding remarks and point out some lines of possible future research.

2 Background and strongly minimal linearizations

This section recalls basic definitions that are used throughout the paper and discusses the recent concept of strongly minimal linearizations of rational matrices [16, 17], which is fundamental in this work. We refer to [31, 45] for more details.

We consider the field of complex numbers $\mathbb{C}$. Then $\mathbb{C}[\lambda]^{p\times m}$ and $\mathbb{C}(\lambda)^{p\times m}$ denote the sets of $p\times m$ matrices whose entries are in the ring of polynomials $\mathbb{C}[\lambda]$ and in the field of rational functions $\mathbb{C}(\lambda)$, respectively. Their elements are called polynomial and rational matrices.

A rational function $r(\lambda) = \frac{n(\lambda)}{d(\lambda)}$ is said to be proper if $\deg n(\lambda) \le \deg d(\lambda)$, and strictly proper if $\deg n(\lambda) < \deg d(\lambda)$, where $\deg$ stands for degree. A (strictly) proper rational matrix is a matrix whose entries are (strictly) proper rational functions. By the division algorithm for polynomials, any rational function $r(\lambda)$ can be uniquely written as $r(\lambda) = p(\lambda) + r_{sp}(\lambda)$, where $p(\lambda)$ is a polynomial and $r_{sp}(\lambda)$ is a strictly proper rational function. Therefore, any rational matrix $R(\lambda)$ can be uniquely written as

$$R(\lambda) = P(\lambda) + R_{sp}(\lambda), \qquad (4)$$

where $P(\lambda)$ is a polynomial matrix and $R_{sp}(\lambda)$ is a strictly proper rational matrix. Then, $P(\lambda)$ is called the polynomial part of $R(\lambda)$ and $R_{sp}(\lambda)$ the strictly proper part of $R(\lambda)$.
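As a scalar illustration of (4), dividing $\lambda^3 + \lambda + 1$ by $\lambda^2 - 1$ gives $\lambda^3 + \lambda + 1 = \lambda(\lambda^2 - 1) + (2\lambda + 1)$, so that

$$r(\lambda) = \frac{\lambda^3 + \lambda + 1}{\lambda^2 - 1} = \underbrace{\lambda}_{\text{polynomial part}} + \underbrace{\frac{2\lambda + 1}{\lambda^2 - 1}}_{\text{strictly proper part}}.$$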

A rational matrix $R(\lambda)$ is regular if it is square and its determinant is not identically equal to zero. Otherwise, $R(\lambda)$ is said to be singular. A square rational matrix $R(\lambda)$ is regular at a point $\lambda_0 \in \mathbb{C}$ if $R(\lambda_0)$ is invertible, and $R(\lambda)$ is regular at infinity, or biproper, if $R(1/\lambda)$ is regular at $\lambda_0 = 0$. If a square polynomial matrix is regular at all $\lambda_0 \in \mathbb{C}$, then it is said to be unimodular; equivalently, a unimodular matrix is a polynomial matrix with constant nonzero determinant. The normal rank of a rational matrix is the size of its largest not identically zero minor.

Poles and zeros of rational matrices are defined via the local Smith–McMillan form [52, 3]. Let $R(\lambda)$ be a rational matrix of normal rank $r$ and let $\lambda_0 \in \mathbb{C}$. Then there exist rational matrices $M_1(\lambda)$ and $M_2(\lambda)$, both regular at $\lambda_0$, such that

$$M_1(\lambda)\, R(\lambda)\, M_2(\lambda) = \begin{bmatrix} \operatorname{diag}\big((\lambda-\lambda_0)^{d_1}, \ldots, (\lambda-\lambda_0)^{d_r}\big) & 0 \\ 0 & 0 \end{bmatrix}, \qquad (5)$$

where $d_1 \le d_2 \le \cdots \le d_r$ are integer numbers. The diagonal matrix in (5) is unique and is called the local Smith–McMillan form of $R(\lambda)$ at $\lambda_0$. The exponents $d_1, \ldots, d_r$ are called the structural indices of $R(\lambda)$ at $\lambda_0$. If there are strictly positive indices in (5), say $0 < d_s \le \cdots \le d_r$, then $\lambda_0$ is a zero of $R(\lambda)$ with partial multiplicities $d_s, \ldots, d_r$. In this case, we also say that $(d_s, \ldots, d_r)$ is the zero structure of $R(\lambda)$ at $\lambda_0$. If there are strictly negative indices in (5), say $d_1 \le \cdots \le d_t < 0$, then $\lambda_0$ is a pole of $R(\lambda)$ with partial multiplicities $d_1, \ldots, d_t$. In this case, we also say that $(d_1, \ldots, d_t)$ is the pole (or polar) structure of $R(\lambda)$ at $\lambda_0$. If $\lambda_0 = \infty$, then the factor $(\lambda - \lambda_0)$ is replaced by $1/\lambda$ in (5), the matrices $M_1(\lambda)$ and $M_2(\lambda)$ are biproper, and the structural indices, zeros and poles, as well as their partial multiplicities, of $R(\lambda)$ at infinity are defined analogously. Observe that the structural indices and the pole and zero structures of $R(\lambda)$ at infinity are exactly those of $R(1/\lambda)$ at zero.
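For example, the rational matrix

$$R(\lambda) = \begin{bmatrix} \lambda^{-1} & 0 \\ 0 & \lambda \end{bmatrix}$$

is already in local Smith–McMillan form at $\lambda_0 = 0$, with structural indices $(-1, 1)$: it has a pole at $0$ with partial multiplicity $1$ and, simultaneously, a zero at $0$ with partial multiplicity $1$. This illustrates that, in contrast with scalar rational functions, a rational matrix may have a pole and a zero at the same point. Since $R(1/\lambda) = \operatorname{diag}(\lambda, \lambda^{-1})$, the same matrix also has both a pole and a zero at infinity, each with partial multiplicity $1$.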

The zero structure of a rational matrix $R(\lambda)$ consists of the set of its zeros (finite and infinite) together with their partial multiplicities. The sum of the partial multiplicities of all the zeros (finite and infinite) of $R(\lambda)$ is called the zero degree $\delta_z(R)$ of $R(\lambda)$. The pole (or polar) structure of a rational matrix $R(\lambda)$ consists of the set of its poles (finite and infinite) together with their partial multiplicities. The sum of the partial multiplicities of all the poles (finite and infinite) of $R(\lambda)$ is called the polar degree $\delta_p(R)$ of $R(\lambda)$ or, also, the McMillan degree of $R(\lambda)$ [31].

Remark 1

Polynomial matrices are particular cases of rational matrices. Therefore, the definitions above can be applied to polynomial matrices. However, the standard literature on polynomial matrices [22, 25] uses the term eigenvalues instead of zeros and poles, and defines the structure at infinity in a different way. We discuss these points in this remark. Note first that a polynomial matrix $P(\lambda)$ does not have finite poles, i.e., all the indices in (5) are nonnegative for any finite $\lambda_0$. The finite eigenvalues of $P(\lambda)$ and their partial multiplicities [25] are exactly the same as the finite zeros of $P(\lambda)$ and their partial multiplicities. However, in [25], a polynomial matrix $P(\lambda)$ of degree $d$ and normal rank $r$ is said to have an eigenvalue at infinity with partial multiplicities $0 < e_1 \le \cdots \le e_s$ if the reversal polynomial matrix $\operatorname{rev} P(\lambda) := \lambda^d P(1/\lambda)$ has an eigenvalue at $\lambda_0 = 0$ with partial multiplicities $e_1 \le \cdots \le e_s$. In this situation, since $P(1/\lambda) = \lambda^{-d}\operatorname{rev} P(\lambda)$, the structural indices (5) of $P(\lambda)$ at infinity, when viewed as a rational matrix, are

$$\underbrace{-d, \ldots, -d}_{r-s \text{ times}}, \; e_1 - d, \; \ldots, \; e_s - d. \qquad (6)$$

Thus, the pole-zero structure of a polynomial matrix at infinity is different from its “eigenvalue structure” at infinity defined through the reversal, but they are easily related through (6) and are completely equivalent to each other. From now on, we will make a clear distinction for any polynomial matrix $P(\lambda)$ of degree $d$: whenever we talk about its “eigenvalue structure at infinity”, we refer to the zero structure of $\operatorname{rev} P(\lambda)$ at $0$, and whenever we talk about its “pole or zero structures at infinity”, we refer to the pole or zero structures of $P(\lambda)$ at $\lambda_0 = \infty$. Recall that such a distinction is not necessary at finite points. Moreover, we emphasize that a polynomial matrix of degree $d$ may or may not have an eigenvalue at infinity, may or may not have a zero at infinity, but always has a pole at infinity with largest partial multiplicity (or order) $d$, because $P_d \neq 0$ implies that at least one structural index of $\operatorname{rev} P(\lambda)$ at $0$ is zero. More on this topic can be found in [4]. Finally, note that for pencils, i.e., polynomial matrices with degree $1$, the definition of “eigenvalue structure at infinity” via reversals is equivalent to that coming from the Kronecker canonical form [22], and that the relation (6) was pointed out in [49].
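For example, consider $P(\lambda) = \operatorname{diag}(\lambda^2, 1)$, of degree $d = 2$ and normal rank $r = 2$. Its reversal $\operatorname{rev} P(\lambda) = \lambda^2 P(1/\lambda) = \operatorname{diag}(1, \lambda^2)$ has an eigenvalue at $0$ with a single partial multiplicity $e_1 = 2$, so $P(\lambda)$ has an eigenvalue at infinity with partial multiplicity $2$. Viewed as a rational matrix, $P(1/\lambda) = \operatorname{diag}(\lambda^{-2}, 1)$ has structural indices $(-2, 0)$ at $0$, in agreement with (6). Hence $P(\lambda)$ has a pole of order $2$ at infinity and no zero at infinity, although it does have an eigenvalue at infinity.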

In addition to the pole and zero structures, a singular rational matrix has a singular structure or minimal indices. In order to define them, recall that every rational vector subspace $\mathcal{V} \subseteq \mathbb{C}(\lambda)^n$, i.e., every subspace over the field $\mathbb{C}(\lambda)$, has bases consisting entirely of polynomial vectors. We call them polynomial bases. Following Forney [21], a minimal basis of $\mathcal{V}$ is a polynomial basis of $\mathcal{V}$ consisting of polynomial vectors whose sum of degrees is minimal among all polynomial bases of $\mathcal{V}$. Though minimal bases are not unique, the ordered list of degrees of the polynomial vectors in any minimal basis of $\mathcal{V}$ is unique. These degrees are called the minimal indices of $\mathcal{V}$.

We now consider a rational matrix $R(\lambda) \in \mathbb{C}(\lambda)^{p\times m}$ and the rational vector subspaces

$$\mathcal{N}_r(R) := \{ x(\lambda) \in \mathbb{C}(\lambda)^{m} : R(\lambda)x(\lambda) = 0 \} \quad \text{and} \quad \mathcal{N}_\ell(R) := \{ y(\lambda) \in \mathbb{C}(\lambda)^{p} : y(\lambda)^T R(\lambda) = 0 \},$$

which are called the right and left null-spaces of $R(\lambda)$, respectively. If $R(\lambda)$ is singular, then at least one of these null-spaces is non-trivial. If $\mathcal{N}_\ell(R)$ (resp. $\mathcal{N}_r(R)$) is non-trivial, it has minimal bases and minimal indices, which are called the left (resp. right) minimal bases and minimal indices of $R(\lambda)$. Notice that a $p \times m$ rational matrix of normal rank $r$ has $p - r$ left minimal indices and $m - r$ right minimal indices.

The complete list of structural data of a rational matrix is formed by its zero structure, its pole structure, and its left and right minimal indices.

The following degree sum theorem [53] relates the structural data of a rational matrix $R(\lambda)$. In particular, it relates the McMillan degree $\delta_p(R)$ and the zero degree $\delta_z(R)$ of $R(\lambda)$ to the left null space degree $\mu_\ell(R)$ of $R(\lambda)$, that is, the sum of all left minimal indices, and to the right null space degree $\mu_r(R)$ of $R(\lambda)$, that is, the sum of all right minimal indices.

Theorem 1

Let $R(\lambda)$ be a rational matrix. Then

$$\delta_p(R) = \delta_z(R) + \mu_\ell(R) + \mu_r(R).$$
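As a quick check of Theorem 1, take $R(\lambda) = \begin{bmatrix} 1 & \lambda \end{bmatrix}$. It has no finite poles or zeros; since $R(1/\lambda) = \begin{bmatrix} 1 & \lambda^{-1} \end{bmatrix}$, it has a single pole at infinity of order $1$, so $\delta_p(R) = 1$ and $\delta_z(R) = 0$. Its right null-space is spanned by the polynomial vector $\begin{bmatrix} -\lambda \\ 1 \end{bmatrix}$, giving $\mu_r(R) = 1$, and there are no left minimal indices, so $\mu_\ell(R) = 0$. Indeed, $1 = 0 + 0 + 1$.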

2.1 Strongly minimal linearizations and their relation with other classes of linearizations

Linearizing rational matrices is one of the most competitive methods for computing their complete lists of structural data. This means constructing a matrix pencil such that the complete list of structural data of the corresponding rational matrix can be recovered from the structural data of the pencil. In this paper, we focus on the strongly minimal linearizations introduced in Definition 2. For the purpose of comparing our results with others available in the literature, we also review very briefly other notions of linearization.

Since the results in this paper are also relevant when applied to polynomial matrices, we start with a very popular notion of linearization of a polynomial matrix. A pencil $L(\lambda)$, i.e., a polynomial matrix of degree at most one, is a linearization in the sense of Gohberg, Lancaster and Rodman [25], or in the GLR-sense for short, of a polynomial matrix $P(\lambda)$ of degree $d$, if there exist unimodular matrices $U(\lambda)$ and $V(\lambda)$ such that

$$U(\lambda)\, L(\lambda)\, V(\lambda) = \begin{bmatrix} I_s & 0 \\ 0 & P(\lambda) \end{bmatrix},$$

where $I_s$ denotes the identity matrix of size $s$, for some integer $s \ge 0$. The key property of a GLR-linearization is that it has the same finite eigenvalues with the same partial multiplicities as $P(\lambda)$. Furthermore, $L(\lambda)$ is a strong linearization of $P(\lambda)$ in the GLR-sense if $L(\lambda)$ is a GLR-linearization of $P(\lambda)$ and $\operatorname{rev} L(\lambda) := \lambda L(1/\lambda)$ is a GLR-linearization of $\operatorname{rev} P(\lambda)$. Then, a GLR-strong linearization has the same finite and infinite eigenvalues with the same partial multiplicities as $P(\lambda)$. However, the minimal indices of a GLR (strong) linearization may be completely unrelated to those of $P(\lambda)$ [13, Section 4], except for the fact that the numbers of left (resp. right) minimal indices of $L(\lambda)$ and $P(\lambda)$ are equal. Nevertheless, the GLR-strong linearizations that are used in practice have minimal indices that are simply related to those of the polynomial through addition of a constant shift (see [14] and the references therein).
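For instance, for a polynomial matrix $P(\lambda) = \lambda^2 P_2 + \lambda P_1 + P_0$ of degree $2$, the classical first Frobenius companion form

$$C_1(\lambda) = \lambda \begin{bmatrix} P_2 & 0 \\ 0 & I \end{bmatrix} + \begin{bmatrix} P_1 & P_0 \\ -I & 0 \end{bmatrix}$$

is a GLR-strong linearization of $P(\lambda)$.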

In order to linearize a rational matrix $R(\lambda)$, we consider in this paper linear polynomial system matrices of $R(\lambda)$ [45]. This means that we consider block partitioned pencils

$$L(\lambda) = \begin{bmatrix} A(\lambda) & B(\lambda) \\ -C(\lambda) & D(\lambda) \end{bmatrix} = \lambda \begin{bmatrix} A_1 & B_1 \\ -C_1 & D_1 \end{bmatrix} + \begin{bmatrix} A_0 & B_0 \\ -C_0 & D_0 \end{bmatrix}, \qquad (7)$$

where $A(\lambda)$ is regular and the Schur complement of $A(\lambda)$ in $L(\lambda)$ is the rational matrix $R(\lambda)$, i.e., $R(\lambda) = D(\lambda) + C(\lambda)A(\lambda)^{-1}B(\lambda)$. In this situation, it is also said that $R(\lambda)$ is the transfer function matrix of $L(\lambda)$. These pencils are particular instances of Rosenbrock’s polynomial system matrices [45], which may have any degree.
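A familiar instance of (7) arises from state-space realizations: if $R(\lambda)$ is proper and $R(\lambda) = D + C(\lambda I_n - A)^{-1}B$, then

$$L(\lambda) = \begin{bmatrix} \lambda I_n - A & B \\ -C & D \end{bmatrix}$$

is a linear polynomial system matrix of $R(\lambda)$, with $A(\lambda) = \lambda I_n - A$ regular.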

A linear polynomial system matrix $L(\lambda)$ as in (7) contains the finite zero and pole structures of its transfer function matrix $R(\lambda)$, provided that $L(\lambda)$ satisfies the following minimality conditions. $L(\lambda)$ is minimal if the matrices

$$\begin{bmatrix} A(\lambda) & B(\lambda) \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} A(\lambda) \\ -C(\lambda) \end{bmatrix} \qquad (8)$$

have, respectively, full row and column rank for all $\lambda \in \mathbb{C}$. This is equivalent to saying that the pencils in (8) do not have finite eigenvalues. Then we have the following result.

Theorem 2

[45] Let $R(\lambda)$ be the transfer function matrix of $L(\lambda)$ in (7) and let $\lambda_0 \in \mathbb{C}$. If $L(\lambda)$ is minimal, then

  1. the zero structure of $R(\lambda)$ at $\lambda_0$ is the same as the zero structure of $L(\lambda)$ at $\lambda_0$, and

  2. the pole structure of $R(\lambda)$ at $\lambda_0$ is the same as the zero structure of $A(\lambda)$ at $\lambda_0$.

It is very easy to prove that the number of left (resp. right) minimal indices of a minimal polynomial system matrix is equal to the number of left (resp. right) minimal indices of its transfer function matrix, though their values may be different [53, 2].

Remark 2

We can combine Theorem 2 applied to a polynomial matrix $P(\lambda)$, together with the equality of the numbers of minimal indices of $P(\lambda)$ and $L(\lambda)$, with [13, Theorem 4.1] to prove that any minimal linear polynomial system matrix $L(\lambda)$ of a polynomial matrix $P(\lambda)$ is always a GLR-linearization of $P(\lambda)$. The reverse result is not true in general. Observe also that any minimal polynomial system matrix of a polynomial matrix $P(\lambda)$ must have the block $A(\lambda)$ in (7) unimodular, because $P(\lambda)$ does not have finite poles.

The minimal linear polynomial system matrices of an arbitrary rational matrix $R(\lambda)$ are particular cases of the linearizations of $R(\lambda)$ defined in [1, Definition 3.2], which were introduced with the idea of combining the concept of minimal polynomial system matrix with the extension of GLR-linearizations from polynomial to rational matrices.

Notice that Theorem 2 does not provide information about the structure at infinity. Recovering this structure requires the following concept: $L(\lambda)$ in (7) is minimal at infinity [16] if the matrices

$$\begin{bmatrix} A_1 & B_1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} A_1 \\ -C_1 \end{bmatrix} \qquad (9)$$

have, respectively, full row and column rank. This condition is equivalent to saying that the pencils in (8) have degree exactly $1$ and do not have eigenvalues at $\infty$. Then we have the next result, which follows from [54] and [17, Section 3].

Theorem 3

Let $R(\lambda)$ be the transfer function matrix of $L(\lambda)$ in (7). If $L(\lambda)$ is minimal at $\infty$, then

  1. the zero structure of $R(\lambda)$ at infinity is the same as the zero structure of $L(\lambda)$ at infinity, and

  2. the polar structure of $R(\lambda)$ at infinity is the same as the zero structure of the pencil

    (10)

    at infinity.

The polar structure of $R(\lambda)$ at $\infty$ can also be recovered without considering the extended pencil in (10). In particular, both the zero and polar structures of $R(\lambda)$ at infinity can be obtained from the eigenvalue structures of the pencils $A(\lambda)$ and $L(\lambda)$ at infinity, as Theorem 4 shows. We emphasize that the hypothesis of minimality at $\infty$ used in Theorem 4 implies that $L(\lambda)$ has degree exactly $1$. However, $A(\lambda)$ might have degree $0$ if $A_1 = 0$. In any case, we understand that $A(\lambda)$ is viewed as a pencil, so that its eigenvalue structure at infinity is defined through the reversal $\operatorname{rev} A(\lambda) = \lambda A(1/\lambda)$.

Theorem 4

[16, Theorem 3.13] Let $R(\lambda)$ be the transfer function matrix of $L(\lambda)$ in (7). Assume that $R(\lambda)$ has normal rank $r$. Let $e_1 \le e_2 \le \cdots$ be the partial multiplicities of $A(\lambda)$ at $\infty$ and let $g_1 \le g_2 \le \cdots$ be the partial multiplicities of $L(\lambda)$ at $\infty$. If $L(\lambda)$ is minimal at $\infty$, then the structural indices at infinity of $R(\lambda)$ are obtained by combining these two lists of partial multiplicities, as described in [16, Theorem 3.13].

A linear polynomial system matrix that is minimal (at finite points) and also minimal at $\infty$ is called strongly minimal [16, 17]. Related to this concept, we present the following definitions, which were introduced in [17, Section 3] for polynomial system matrices of any degree.

Definition 1

A linear polynomial system matrix $L(\lambda)$ as in (7) is said to be strongly E-controllable and strongly E-observable, respectively, if the pencils

$$\begin{bmatrix} A(\lambda) & B(\lambda) \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} A(\lambda) \\ -C(\lambda) \end{bmatrix} \qquad (11)$$

have degree exactly $1$ and have no finite or infinite eigenvalues. If both conditions are satisfied, $L(\lambda)$ is said to be strongly minimal.

The letter E in the definitions of strong E-controllability and E-observability refers to the condition that the matrices in (11) have no eigenvalues, finite or infinite, and emphasizes the differences with the concepts of “strong controllability, observability and irreducibility” used in [53, 54, 17]. As mentioned before, the degree-$1$ pencils in (11) do not have infinite eigenvalues if and only if the matrices in (9) have full row and full column rank, respectively. The ranks of the matrices in (9) will also be called the ranks at infinity of the pencils in (11), even in the case where the matrices in (9) do not have full rank.
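For example, for the state-space pencil $L(\lambda) = \begin{bmatrix} \lambda I_n - A & B \\ -C & D \end{bmatrix}$ displayed after (7), the matrices in (9) are $\begin{bmatrix} I_n & 0 \end{bmatrix}$ and $\begin{bmatrix} I_n \\ 0 \end{bmatrix}$, which always have full rank, while, by the Popov–Belevitch–Hautus test, the pencils in (11) have no finite eigenvalues exactly when the realization $(A, B, C, D)$ is controllable and observable. Hence minimal state-space realizations yield strongly minimal linear polynomial system matrices.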

Next, we formally introduce the definition of a strongly minimal linearization of a rational matrix, which is fundamental in this work. This definition is implicit in [17].

Definition 2

Let $R(\lambda)$ be a rational matrix. A linear polynomial system matrix $L(\lambda)$ as in (7) is said to be a strongly minimal linearization of $R(\lambda)$ if $L(\lambda)$ is strongly minimal and its transfer function matrix is $R(\lambda)$. Equivalently, $L(\lambda)$ is said to be a strongly minimal linear realization of $R(\lambda)$.

Strongly minimal linearizations of a rational matrix $R(\lambda)$ have been defined with the goal of constructing pencils that allow us to recover the complete pole and zero structures of $R(\lambda)$ through Theorems 2 and 4, or 3. Surprisingly, the condition of strong minimality also implies that the minimal indices of $R(\lambda)$ and $L(\lambda)$ are the same. This is proved in Theorem 5, which, together with Theorems 2 and 4, allows us to recover the complete list of structural data of a rational matrix from any of its strongly minimal linearizations.

Theorem 5

Let $L(\lambda)$ be a strongly minimal linearization of a rational matrix $R(\lambda)$. Then the left and right minimal indices of $L(\lambda)$ are the same as the left and right minimal indices of $R(\lambda)$.

Proof

By [17, Proposition 1], a strongly minimal linear polynomial system matrix is strongly irreducible according to the definition in [54]. Then, by [54, Result 2], the left and right minimal indices of $L(\lambda)$ and $R(\lambda)$ are the same.

As we have seen in the proof of Theorem 5, any strongly minimal linearization $L(\lambda)$ of a rational matrix $R(\lambda)$ is a strongly irreducible polynomial system matrix of $R(\lambda)$ (see the definition in [54]). Thus, [54, Result 2] establishes a simple bijection between the left (resp. right) minimal bases of $L(\lambda)$ and those of $R(\lambda)$ that allows us to recover a left (resp. right) minimal basis of $R(\lambda)$ from any left (resp. right) minimal basis of $L(\lambda)$, and conversely, without any computational cost. We only state the result for right minimal bases, since the result for left minimal bases is analogous.

Theorem 6

Let $L(\lambda)$ as in (7) be a strongly minimal linearization of a rational matrix $R(\lambda)$. If the columns of $\begin{bmatrix} Y(\lambda) \\ X(\lambda) \end{bmatrix}$, partitioned conformably to the blocks of $L(\lambda)$, form a right minimal basis for $L(\lambda)$, then the columns of $X(\lambda)$ form a right minimal basis for $R(\lambda)$. Conversely, if the columns of $X(\lambda)$ form a right minimal basis for $R(\lambda)$, then the columns of $\begin{bmatrix} -A(\lambda)^{-1}B(\lambda)X(\lambda) \\ X(\lambda) \end{bmatrix}$ form a right minimal basis for $L(\lambda)$.

Remark 3

Given $\lambda_0 \in \mathbb{C}$ with $A(\lambda_0)$ invertible, it is easy to prove that the same recovery rules of Theorem 6 hold for the bases of the left (resp. right) null space of the constant matrix $R(\lambda_0)$ in terms of those of the left (resp. right) null space of the constant matrix $L(\lambda_0)$, for any linear polynomial system matrix as in (7), without imposing strong minimality (see [15, Section 5.1]). In the case of regular rational matrices, $\lambda_0$ is an eigenvalue of $R(\lambda)$ when it is a zero but not a pole, and the finite poles of $R(\lambda)$ are the finite zeros of $A(\lambda)$ if $L(\lambda)$ is minimal. Then, by assuming minimality of $L(\lambda)$, the previous rule allows us to recover the eigenvectors of $R(\lambda)$ associated with $\lambda_0$ from those of $L(\lambda)$.

Remark 4

It follows from Theorems 2, 3 and 5 that, if $L(\lambda)$ is a strongly minimal linearization of a rational matrix $R(\lambda)$, then

$$\delta_z(L) = \delta_z(R), \qquad \mu_\ell(L) = \mu_\ell(R), \qquad \mu_r(L) = \mu_r(R),$$

and then from Theorem 1 that $\delta_p(L) = \delta_p(R)$. But the only pole of the pencil $L(\lambda)$ is the point at infinity, and its polar degree is equal to $\operatorname{rank}\begin{bmatrix} A_1 & B_1 \\ -C_1 & D_1 \end{bmatrix}$ [49, p. 126]. Therefore, the McMillan degree of $R(\lambda)$ equals the rank of the first-order coefficient of $L(\lambda)$ for any strongly minimal linearization $L(\lambda)$ of $R(\lambda)$, and no other pencil with the same zero structure and the same left and right minimal indices as $L(\lambda)$ can have a first-order coefficient with smaller rank. Thus, strongly minimal linearizations are optimal in this sense.
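For example, $R(\lambda) = 1/\lambda$ has McMillan degree $1$, and the strongly minimal linearization $L(\lambda) = \begin{bmatrix} \lambda & 1 \\ -1 & 0 \end{bmatrix}$, whose transfer function is $0 + 1\cdot\lambda^{-1}\cdot 1 = 1/\lambda$, has first-order coefficient $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, of rank $1$.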

By Remark 2, we have that strongly minimal linearizations of a polynomial matrix $P(\lambda)$ are always GLR-linearizations of $P(\lambda)$. However, the following example shows that they are not, in general, GLR-strong linearizations.

Example

(Strongly minimal linearizations of polynomial matrices are not strong linearizations in the sense of Gohberg, Lancaster and Rodman) Consider the polynomial matrix

$$P(\lambda) = \begin{bmatrix} \lambda^2 & 0 \\ 0 & \lambda \end{bmatrix}$$

and the partitioned pencil

$$L(\lambda) = \left[\begin{array}{c|cc} 1 & \lambda & 0 \\ \hline -\lambda & 0 & 0 \\ 0 & 0 & \lambda \end{array}\right].$$

The transfer function matrix of $L(\lambda)$ is $P(\lambda)$, and $L(\lambda)$ is minimal and minimal at infinity. Therefore, $L(\lambda)$ is a strongly minimal linearization of $P(\lambda)$ and also a GLR-linearization of $P(\lambda)$. However, $\operatorname{rev} L(\lambda)$ is not unimodularly equivalent to $\begin{bmatrix} I_1 & 0 \\ 0 & \operatorname{rev} P(\lambda) \end{bmatrix}$ and, thus, $L(\lambda)$ is not a GLR-strong linearization of $P(\lambda)$. In order to see this, observe that

$$\operatorname{rev} L(\lambda) = \begin{bmatrix} \lambda & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \det\big(\operatorname{rev} L(\lambda)\big) = 1, \quad \text{while} \quad \operatorname{rev} P(\lambda) = \begin{bmatrix} 1 & 0 \\ 0 & \lambda \end{bmatrix},$$

which makes it transparent that $\operatorname{rev} L(\lambda)$ does not have eigenvalues (or zeros) at zero, while $\operatorname{rev} P(\lambda)$ does. In general, it is possible to prove, by using Theorem 4, that strongly minimal linearizations of polynomial matrices of degree larger than $1$ with eigenvalues at infinity are not GLR-strong linearizations.

Despite not being GLR-strong linearizations, strongly minimal linearizations of a polynomial matrix $P(\lambda)$ always allow us to recover the complete list of structural data of $P(\lambda)$, including its minimal indices. Moreover, we will prove in this paper that they allow us to preserve structures of polynomial matrices that cannot always be preserved by GLR-strong linearizations.

Finally, note that, according to the definitions in [16], we can also say that a strongly minimal linearization of a rational matrix is a linearization of its transfer function matrix at all finite points and also at infinity. However, strongly minimal linearizations are not always strong linearizations in the sense of [1, Definition 3.4], since the first-degree coefficients of their $A(\lambda)$-blocks are not necessarily invertible.

3 Constructing strongly minimal linearizations of polynomial matrices

In this section, we focus on explicitly constructing a strongly minimal linearization for any given polynomial matrix of degree $d$:

$$P(\lambda) = P_d \lambda^d + P_{d-1}\lambda^{d-1} + \cdots + P_1 \lambda + P_0, \qquad P_i \in \mathbb{C}^{p\times m}. \qquad (12)$$

Such a strongly minimal linearization is constructed in Theorem 7, and we will prove in Section 4 that it inherits the structure of $P(\lambda)$ when $P(\lambda)$ possesses any of the self-conjugate structures considered in this work. The construction uses three pencils associated with $P(\lambda)$ that have appeared before in the literature. They are described in the following paragraphs.

The pencil

(13)

was used in the classical reference [51]. It is easy to see that (13) is a linear polynomial system matrix of $P(\lambda)$, since its transfer function matrix is $P(\lambda)$, and that it is minimal for all finite $\lambda_0$. For the point at $\infty$, E-controllability is clearly satisfied, but E-observability is satisfied only if the leading coefficient $P_d$ has full column rank. Thus, (13) is not a strongly minimal linearization of $P(\lambda)$ when $P_d$ does not have full column rank. However, note that (13) is always a GLR-strong linearization of $P(\lambda)$. This can be seen, for instance, by noting that if the two block rows in (13) are interchanged, we obtain one of the block Kronecker linearizations (with only one block column) associated with $P(\lambda)$ defined in [14, Section 4]. The pencil (13) has a structure similar to that of the classical first or row Frobenius companion form.

The pencil

(14)

is in some sense “dual” to (13). It is also a linear polynomial system matrix of $P(\lambda)$, since its transfer function matrix is $P(\lambda)$. Moreover, (14) is strongly E-observable, but not necessarily strongly E-controllable, unless $P_d$ has full row rank. As a consequence, (14) is a strongly minimal linearization of $P(\lambda)$ if and only if $P_d$ has full row rank. However, (14) is always a GLR-strong linearization of $P(\lambda)$. The pencil (14) has a structure similar to that of the classical second or column Frobenius companion form.

The pencil

(15)

was originally proposed by Lancaster in [34, pp. 58–59] for regular polynomial matrices with $P_d$ invertible. In this paper, we use it for arbitrary polynomial matrices, including rectangular ones. The pencil (15) has the advantage of preserving the Hermitian or skew-Hermitian nature of the coefficients of $P(\lambda)$, if $P(\lambda)$ happens to have coefficients with such properties. It has also been studied more recently in [30, 35], where it is seen as one of the pencils of the standard basis of the linear space of pencils related to $P(\lambda)$. It is well known that (15) is a GLR-strong linearization of $P(\lambda)$ if and only if $P_d$ is invertible [12, 35]. In fact, in this case, (15) is also a strongly minimal linearization of $P(\lambda)$, since it is strongly minimal and its transfer function matrix is $P(\lambda)$. However, if $P_d$ is not invertible, then (15) is not a linearization of $P(\lambda)$ in any of the senses considered in the literature and, even more, it is not a Rosenbrock polynomial system matrix of $P(\lambda)$, since its $A(\lambda)$-block is not regular. Despite this fact, (15) is our starting point for constructing the strongly minimal linearization of $P(\lambda)$ of interest in this work.
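To fix ideas, in the quadratic case $P(\lambda) = \lambda^2 P_2 + \lambda P_1 + P_0$, a pencil of this Lancaster type (shown here in one common convention; (15) may arrange the blocks differently) is

$$\mathcal{L}(\lambda) = \lambda \begin{bmatrix} 0 & P_2 \\ P_2 & P_1 \end{bmatrix} + \begin{bmatrix} -P_2 & 0 \\ 0 & P_0 \end{bmatrix},$$

which satisfies $\mathcal{L}(\lambda) \begin{bmatrix} \lambda I \\ I \end{bmatrix} = \begin{bmatrix} 0 \\ P(\lambda) \end{bmatrix}$ and has Hermitian (resp. skew-Hermitian) coefficients whenever all the $P_i$ are Hermitian (resp. skew-Hermitian).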

The constant block Hankel matrix $H$ defined in the next equation

(16)

plays a key role in the rest of the paper. To begin with, it allows us to obtain the following relations

(17)

between submatrices of the pencils (13), (14) and (15). The matrix $H$ is invertible if and only if $P_d$ is square and invertible. Otherwise, $H$ is singular, and this is the case that requires a careful analysis.
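In the quadratic case, one such block Hankel matrix (again in the convention of the sketch after (15)) is

$$H = \begin{bmatrix} 0 & P_2 \\ P_2 & P_1 \end{bmatrix},$$

the coefficient of $\lambda$ in $\mathcal{L}(\lambda)$ above. If $P_2 x = 0$ with $x \neq 0$, then $H \begin{bmatrix} x \\ 0 \end{bmatrix} = 0$, so $H$ is invertible exactly when $P_2$ is square and invertible, matching the general statement for $P_d$.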

In [51], it was shown how to derive from the linear polynomial system matrix (13) of $P(\lambda)$ a smaller linear polynomial system matrix that is both strongly E-controllable and strongly E-observable, and hence strongly minimal, by using only multiplications by constant unitary matrices. This was obtained by deflating the unobservable infinite eigenvalues from the pencil (13). Moreover, the obtained pencil allows us to recover the complete list of structural data of $P(\lambda)$. The reduction procedure in [51] has been recently extended to arbitrary linear polynomial system matrices of arbitrary rational matrices in [17], where it is proved that the obtained strongly minimal linear polynomial system matrix has as transfer function matrix $W_1 R(\lambda) W_2$, where $W_1$ and $W_2$ are constant invertible matrices. We emphasize that the procedures in [51, 17] lead to stable and efficient numerical algorithms, since both are based on unitary transformations.
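Although the paper proceeds analytically, the kind of unitary compression underlying these procedures is straightforward to realize numerically. The following NumPy sketch (our illustration; the cited procedures use unitary transformations but not necessarily the SVD, and the tolerance rule is a hypothetical choice) computes unitary matrices that compress a constant matrix so that it is zero outside an invertible leading block:

```python
import numpy as np

def unitary_compress(H, tol=1e-12):
    # Two-sided unitary compression of H via the SVD. Returns unitary
    # U, V and r = numerical rank of H such that U.conj().T @ H @ V is
    # zero outside its leading r x r block, and that leading block
    # (the diagonal of singular values) is invertible.
    U, s, Vh = np.linalg.svd(H)                # H = U @ diag(s) @ Vh
    cutoff = tol * (s[0] if s.size else 1.0)   # hypothetical rank tolerance
    r = int(np.sum(s > cutoff))
    return U, Vh.conj().T, r
```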

We show in Theorem 7 that a procedure similar to that in [51] can be applied to (15) in order to derive a strongly minimal linear polynomial system matrix of $P(\lambda)$, despite the fact that, if $P_d$ is not square or invertible, then (15) is not a Rosenbrock polynomial system matrix, since its $A(\lambda)$-block is then not regular. Moreover, we remark that the procedure in Theorem 7 is much simpler than those in [51, 17] and that, as said before, it yields a polynomial system matrix whose transfer function matrix is precisely $P(\lambda)$. Before stating and proving Theorem 7, we prove the simple auxiliary Lemma 1 and introduce some other auxiliary concepts.

A rational matrix $G(\lambda) \in \mathbb{C}(\lambda)^{k\times n}$ (with $k \le n$) is said to be a rational basis if its rows form a basis of the rational subspace they span, i.e., if it has full row normal rank. Two rational bases $G_1(\lambda) \in \mathbb{C}(\lambda)^{k_1\times n}$ and $G_2(\lambda) \in \mathbb{C}(\lambda)^{k_2\times n}$ are said to be dual if $k_1 + k_2 = n$ and $G_1(\lambda)G_2(\lambda)^T = 0$.

Lemma 1

Let

$$P(\lambda) = \begin{bmatrix} A(\lambda) & B(\lambda) \\ -C(\lambda) & D(\lambda) \end{bmatrix}$$

be a polynomial system matrix, where $A(\lambda)$ is assumed to be regular. Let $\begin{bmatrix} X(\lambda)^T & I \end{bmatrix}$ be a rational basis dual to $\begin{bmatrix} A(\lambda) & B(\lambda) \end{bmatrix}$, i.e., such that $A(\lambda)X(\lambda) + B(\lambda) = 0$; then $R(\lambda) = D(\lambda) - C(\lambda)X(\lambda)$ is the transfer function of $P(\lambda)$.

Proof

The equation $A(\lambda)X(\lambda) + B(\lambda) = 0$ implies $X(\lambda) = -A(\lambda)^{-1}B(\lambda)$ and, since $A(\lambda)$ is regular, this $X(\lambda)$ is unique. Thus $D(\lambda) - C(\lambda)X(\lambda) = D(\lambda) + C(\lambda)A(\lambda)^{-1}B(\lambda) = R(\lambda)$.

Theorem 7

Let $P(\lambda)$ be a polynomial matrix as in (12). Let $H$ be the block Hankel matrix in (16) and $\hat{n} := \operatorname{rank} H$. Let $U$ and $V$ be unitary matrices that “compress” the matrix $H$ as follows:

$$U^* H V = \begin{bmatrix} \hat{H} & 0 \\ 0 & 0 \end{bmatrix}, \qquad (18)$$

where $\hat{H}$ is of dimension $\hat{n} \times \hat{n}$ and invertible. Then, if $L(\lambda)$ is the matrix pencil in (15), the pencil obtained from $L(\lambda)$ by multiplying on the left by $U^*$ and on the right by $V$, each padded with an identity block of appropriate size, is equal to the “compressed” pencil

(19)

and

(20)

is a strongly minimal linearization of $P(\lambda)$, whose $A(\lambda)$-block is regular. In particular, its transfer function matrix is $P(\lambda)$.

Proof

It follows from (17) and the strong E-controllability of (13) that the relevant submatrix of $L(\lambda)$ has constant rank for all $\lambda_0$, infinity included, and that its left null space is spanned by the rows of the trailing block of $U^*$ in (18). Likewise, it follows from (17) and the strong E-observability of (14) that the corresponding submatrix has constant rank for all $\lambda_0$, infinity included, and that its right null space is spanned by the columns of the trailing block of $V$. This proves the compressed form (19).

We then prove that the matrix pencil in the $A(\lambda)$-block of (20) is regular. This follows from the identity