The landmark system realization theory of Mayo and Antoulas MA07 constructs a dynamical system that interpolates tangential frequency domain measurements of a multi-input, multi-output system. Central to this development is the matrix pencil composed of the Loewner and shifted Loewner matrices, $\mathbb{L}$ and $\mathbb{L}_s$, which encode the interpolation data. When this technique is used for exact system recovery (as opposed to data-driven model reduction), the eigenvalues of this pencil match the poles of the original dynamical system. However, other spectral properties, including the sensitivity of the eigenvalues to perturbation, can differ significantly; indeed, they depend upon the location of the interpolation points in the complex plane relative to the system poles, and how the points are partitioned. Since in many scenarios one uses the realized pencil to learn about the original system, such subtle differences matter.
Pseudospectra are sets in the complex plane that contain the eigenvalues but provide additional insight about the sensitivity of those eigenvalues to perturbation and the transient behavior of the underlying dynamical system. While most often used to analyze single matrices, pseudospectral concepts have been extended to matrix pencils (generalized eigenvalue problems).
This introductory note shows several ways to use pseudospectra to investigate spectral questions involving Loewner pencils derived from system realization problems. Using simple examples, we explore the following questions.
How do the locations of the interpolation points and their partition into “left” and “right” points affect the sensitivity of the eigenvalues of the pencil $(\mathbb{L}_s, \mathbb{L})$?
Do solutions of the realized dynamical system mimic solutions of the original system, especially in the transient regime? Does this agreement depend on the interpolation points?
How do noisy measurements affect the eigenvalues of the Loewner pencil?
We include an algorithm for computing pseudospectra of an $n$-dimensional Loewner pencil in $O(n^2)$ operations, improving the $O(n^3)$ cost for generic matrix pencils; the appendix gives a MATLAB implementation.
Throughout this note, we use $\sigma(\cdot)$ to denote the spectrum (eigenvalues) of a matrix or matrix pencil, and $\|\cdot\|$ to denote the vector 2-norm and the matrix norm it induces. (All definitions here can readily be adapted to other norms, as needed. The algorithm, however, is designed for use with the 2-norm.)
2 Loewner realization theory in a nutshell
Consider the linear time-invariant dynamical system $\dot{x}(t) = Ax(t) + Bu(t)$, $y(t) = Cx(t)$ for $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C \in \mathbb{R}^{p \times n}$, with which we associate, via the Laplace transform, the transfer function $H(s) = C(sI - A)^{-1}B$.
Given tangential measurements of $H$, we seek to build a realization of the system that interpolates the given data.
More precisely, consider
the right interpolation data
distinct interpolation points $\{\lambda_k\}_{k=1}^{\rho} \subset \mathbb{C}$;
interpolation directions $r_k \in \mathbb{C}^m$;
function values $w_k = H(\lambda_k)\, r_k \in \mathbb{C}^p$;
and left interpolation data
distinct interpolation points $\{\mu_j\}_{j=1}^{\nu} \subset \mathbb{C}$;
interpolation directions $\ell_j \in \mathbb{C}^p$;
function values $v_j^* = \ell_j^*\, H(\mu_j) \in \mathbb{C}^{1 \times m}$.
Assume the sets of left and right interpolation points are disjoint, $\{\lambda_k\}_{k=1}^{\rho} \cap \{\mu_j\}_{j=1}^{\nu} = \emptyset$; indeed, in our examples we will consider all the left and right points to be distinct.
The interpolation problem seeks system matrices $E$, $A$, $B$, $C$ for which the transfer function $\widehat{H}(s) = C(sE - A)^{-1}B$ interpolates the data: $\widehat{H}(\lambda_k)\, r_k = w_k$ for $k = 1, \ldots, \rho$, and $\ell_j^*\, \widehat{H}(\mu_j) = v_j^*$ for $j = 1, \ldots, \nu$.
Two structured matrices play a crucial role in the development of Mayo and Antoulas MA07. From the data, construct the Loewner matrix $\mathbb{L} \in \mathbb{C}^{\nu \times \rho}$ and the shifted Loewner matrix $\mathbb{L}_s \in \mathbb{C}^{\nu \times \rho}$, i.e., the entries of these matrices have the form
$$\mathbb{L}_{j,k} = \frac{v_j^* r_k - \ell_j^* w_k}{\mu_j - \lambda_k}, \qquad (\mathbb{L}_s)_{j,k} = \frac{\mu_j\, v_j^* r_k - \lambda_k\, \ell_j^* w_k}{\mu_j - \lambda_k}. \tag{1}$$
Now collect the data into matrices. The right interpolation points, directions, and data are stored in
$$\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_\rho) \in \mathbb{C}^{\rho \times \rho}, \quad R = [\, r_1 \ \cdots \ r_\rho \,] \in \mathbb{C}^{m \times \rho}, \quad W = [\, w_1 \ \cdots \ w_\rho \,] \in \mathbb{C}^{p \times \rho},$$
while the left interpolation points, directions, and data are stored in
$$M = \operatorname{diag}(\mu_1, \ldots, \mu_\nu) \in \mathbb{C}^{\nu \times \nu}, \quad L = \begin{bmatrix} \ell_1^* \\ \vdots \\ \ell_\nu^* \end{bmatrix} \in \mathbb{C}^{\nu \times p}, \quad V = \begin{bmatrix} v_1^* \\ \vdots \\ v_\nu^* \end{bmatrix} \in \mathbb{C}^{\nu \times m}.$$
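As a concrete illustration of this construction, the following Python sketch builds the Loewner and shifted Loewner matrices for the SISO case (unit tangential directions); the function name and sample data are assumptions for illustration, not part of the original development:

```python
import numpy as np

def loewner_matrices(mu, la, v, w):
    """Build the Loewner and shifted Loewner matrices from SISO data.

    mu, v : left interpolation points and values, v[j] = H(mu[j])
    la, w : right interpolation points and values, w[k] = H(la[k])
    All left and right points are assumed distinct.
    """
    mu = np.asarray(mu, dtype=complex)[:, None]   # left points (column)
    la = np.asarray(la, dtype=complex)[None, :]   # right points (row)
    v = np.asarray(v, dtype=complex)[:, None]
    w = np.asarray(w, dtype=complex)[None, :]
    L = (v - w) / (mu - la)                       # Loewner matrix
    Ls = (mu * v - la * w) / (mu - la)            # shifted Loewner matrix
    return L, Ls
```

For an order-one system such as $H(s) = 1/(s+1)$, a single left sample and a single right sample already recover the pole as the lone eigenvalue of the pencil $(\mathbb{L}_s, \mathbb{L})$.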
2.1 Selecting and arranging interpolation points
As Mayo and Antoulas observe, Sylvester equations connect these matrices:
$$M\mathbb{L} - \mathbb{L}\Lambda = VR - LW, \qquad M\mathbb{L}_s - \mathbb{L}_s\Lambda = MVR - LW\Lambda. \tag{2}$$
Just using the dimensions of the components, note that $\operatorname{rank}(VR - LW) \leq m + p$ and $\operatorname{rank}(MVR - LW\Lambda) \leq m + p$. Thus for modest $m$ and $p$, the Sylvester equations (2) must have low-rank right-hand sides. (For single-input, single-output (SISO) systems, $m = p = 1$, so the rank of the right-hand sides of the Sylvester equations cannot exceed two. The same applies for multi-input, multi-output systems with identical left and right interpolation directions: $r_k = r$ for all $k$ and $\ell_j = \ell$ for all $j$.)
This situation often implies the rapid decay of the singular values of solutions to the Sylvester equation ASZ02; BT17; Pen00a; Pen00b; Sab06. While this phenomenon is convenient in the context of balanced truncation model reduction (enabling low-rank approximations to the controllability and observability Gramians), it is less welcome in the Loewner realization setting, where the rank of $\mathbb{L}$ should reveal the order of the original system: fast decay of the singular values of $\mathbb{L}$ makes this rank ambiguous. Since $M$ and $\Lambda$ are diagonal, they are normal matrices, and hence Theorem 2.1 of BT17 gives
$$\sigma_{(m+p)k+1}(\mathbb{L}) \;\leq\; \left( \min_{\phi \in \mathcal{R}_{k,k}} \frac{\max_j |\phi(\mu_j)|}{\min_i |\phi(\lambda_i)|} \right) \sigma_1(\mathbb{L}), \tag{3}$$
where $\sigma_j(\cdot)$ denotes the $j$th largest singular value and $\mathcal{R}_{k,k}$ denotes the set of irreducible rational functions whose numerators and denominators are polynomials of degree $k$ or less. (The same bound holds for $\mathbb{L}_s$.) The right-hand side of (3) will be small when there exists some $\phi \in \mathcal{R}_{k,k}$ for which all $|\phi(\mu_j)|$ are small while all $|\phi(\lambda_i)|$ are large: a good separation of $\{\mu_j\}$ from $\{\lambda_i\}$ is thus sufficient to ensure the rapid decay of the singular values of $\mathbb{L}$ and $\mathbb{L}_s$. (Beckermann and Townsend give an explicit bound for the singular values of Loewner matrices when the interpolation points fall in disjoint real intervals (BT17, Cor. 4.2).)
In our setting, where we want the singular values of $\mathbb{L}$ and $\mathbb{L}_s$ to reveal the system’s order (without artificial decay of singular values as an accident of the arrangement of interpolation points), it is necessary that, for every $\phi \in \mathcal{R}_{k,k}$, the largest value of $|\phi(\mu_j)|$ be at least as big as the smallest value of $|\phi(\lambda_i)|$. Roughly speaking, we want the left and right interpolation points to be close together (even interleaved). While this arrangement is necessary for slow decay of the singular values, it does not alone prevent such decay, as evident in Figure 4 (since (3) is only an upper bound).
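The effect of the partition can be seen in a small experiment. The sketch below uses one set of sixteen sample points and a hypothetical order-8 SISO system (both illustrative assumptions); only the left/right partition changes between the two Loewner matrices:

```python
import numpy as np

# Hypothetical order-8 SISO system; only the partition of points changes.
poles = -np.arange(1.0, 9.0)                 # poles -1, ..., -8
H = lambda s: np.sum(1.0 / (s - poles))
pts = np.linspace(0.1, 3.0, 16)              # 16 shared sample points

def loewner(mu, la):
    v = np.array([H(m) for m in mu])[:, None]
    w = np.array([H(l) for l in la])[None, :]
    return (v - w) / (mu[:, None] - la[None, :])

L_int = loewner(pts[0::2], pts[1::2])        # interleaved left/right points
L_sep = loewner(pts[:8], pts[8:])            # separated left/right points
s_int = np.linalg.svd(L_int, compute_uv=False)
s_sep = np.linalg.svd(L_sep, compute_uv=False)
# |det| is the product of the singular values: the separated partition
# yields a far smaller determinant, i.e., faster singular value decay,
# which obscures the true order (8) of the underlying system
```

Comparing the singular value sequences `s_int` and `s_sep` (or the determinant magnitudes) illustrates the faster decay caused by separating the left and right points.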
Another heuristic, based on the Cauchy-like structure of $\mathbb{L}$ and $\mathbb{L}_s$, also suggests the left and right interpolation points should be close together. Namely, $\mathbb{L}$ and $\mathbb{L}_s$ are a more general form of the Cauchy matrix $\mathsf{C}$ with entries $\mathsf{C}_{j,k} = 1/(\mu_j - \lambda_k)$, whose determinant has the elegant formula (e.g., (HJ13, p. 38))
$$\det(\mathsf{C}) = \frac{\prod_{j < k} (\mu_j - \mu_k)(\lambda_k - \lambda_j)}{\prod_{j,k} (\mu_j - \lambda_k)}. \tag{4}$$
It is an open question whether $\det(\mathbb{L})$ and $\det(\mathbb{L}_s)$ have similarly elegant formulas. Nevertheless, $\det(\mathbb{L})$ and $\det(\mathbb{L}_s)$ do have the same denominator as (4) (which can be checked by recursively subtracting the first row from all other rows when computing the determinant). This observation suggests that to avoid artificially small determinants for $\mathbb{L}$ and $\mathbb{L}_s$ (which, up to sign, are the products of their singular values), it is necessary for the denominator of (4) to be small and, thus, for the left and right interpolation points to be close together.
In practice, we often start with initial interpolation points $z_1, \ldots, z_{2n}$ that we want to partition into left and right interpolation points to form $\mathbb{L}$ and $\mathbb{L}_s$. Our analysis of (4) suggests a simple way to arrange the interpolation points such that the denominator of $\det(\mathbb{L})$ and $\det(\mathbb{L}_s)$ is small: relabel the points so that each point is followed by the nearest point not yet chosen,
$$|z_{j+1} - z_j| = \min_{k > j} |z_k - z_j|, \qquad j = 1, \ldots, 2n - 1. \tag{5}$$
The greedy reordering in (5) ensures that consecutive points are close, so the denominator factors in (4) are small, and allows us to simply interleave the left and right interpolation points (e.g., $\mu_j = z_{2j-1}$ and $\lambda_j = z_{2j}$). Moreover, when the points are located on a line, the reordering in (5) simplifies to sorting, so one can directly interleave $\{\mu_j\}$ and $\{\lambda_j\}$ and the reordering step can be skipped. This ordering need not be optimal, as we do not visit all possible left/right partitions of the points; it simply seeks a partition that yields a large determinant (which must also depend on the interpolation data). We note its simplicity, effectiveness, and efficiency (requiring only $O(n^2)$ operations).
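A greedy ordering in the spirit of (5) can be sketched in a few lines of Python; the function name is hypothetical, and this heuristic makes no claim of optimality:

```python
import numpy as np

def partition_points(z):
    """Greedily order points so each is followed by the nearest remaining
    point (in the spirit of (5)), then interleave into left/right sets.
    A heuristic sketch; it need not yield the optimal partition."""
    z = [complex(p) for p in z]
    ordered = [z.pop(0)]                  # start from the first point
    while z:
        # append the remaining point nearest to the last chosen one
        j = min(range(len(z)), key=lambda i: abs(z[i] - ordered[-1]))
        ordered.append(z.pop(j))
    ordered = np.array(ordered)
    return ordered[0::2], ordered[1::2]   # left points, right points
```

The nearest-neighbor search costs $O(n^2)$ total, matching the operation count noted above; for collinear points the result reduces to interleaving the sorted points.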
2.2 Construction of interpolants
Throughout we make the fundamental assumption that, for all $z \in \{\lambda_k\}_{k=1}^{\rho} \cup \{\mu_j\}_{j=1}^{\nu}$,
$$\operatorname{rank}(z\mathbb{L} - \mathbb{L}_s) = \operatorname{rank}\, [\, \mathbb{L} \ \ \mathbb{L}_s \,] = \operatorname{rank} \begin{bmatrix} \mathbb{L} \\ \mathbb{L}_s \end{bmatrix} =: n,$$
and we presume the underlying dynamical system is controllable and observable.
When $\nu = \rho = n$ matches the order of the system, Mayo and Antoulas show that the transfer function defined by
$$\widehat{H}(s) = W(\mathbb{L}_s - s\mathbb{L})^{-1}V \tag{6}$$
interpolates the data values; that is, $E = -\mathbb{L}$, $A = -\mathbb{L}_s$, $B = V$, and $C = W$ give a realization.
When $\nu = \rho > n$, fix some $z \in \{\lambda_k\}_{k=1}^{\rho} \cup \{\mu_j\}_{j=1}^{\nu}$ and compute the (economy-sized) singular value decomposition
$$z\mathbb{L} - \mathbb{L}_s = Y \Sigma X^*$$
with $Y \in \mathbb{C}^{\nu \times n}$, $\Sigma \in \mathbb{C}^{n \times n}$, and $X \in \mathbb{C}^{\rho \times n}$. Then with
$$E = -Y^* \mathbb{L} X, \qquad A = -Y^* \mathbb{L}_s X, \qquad B = Y^* V, \qquad C = W X, \tag{7}$$
the transfer function $\widehat{H}(s) = C(sE - A)^{-1}B$ defines an $n$th order system that interpolates the data.
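The SVD-based projection just described can be sketched as follows (Python, SISO case with unit directions; the function name and demo data are illustrative assumptions, and the truncation order is assumed to equal the numerical rank of the pencil):

```python
import numpy as np

def loewner_truncated_realization(L, Ls, V, W, z, r):
    """Project the Loewner pencil to order r via the SVD of z*L - Ls.
    V stacks left data values as rows; W stacks right data values as
    columns.  A sketch of the construction above, not MA07 verbatim."""
    Y, _, Xh = np.linalg.svd(z * L - Ls)
    Y, X = Y[:, :r], Xh.conj().T[:, :r]
    E, A = -Y.conj().T @ L @ X, -Y.conj().T @ Ls @ X
    B, C = Y.conj().T @ V, W @ X
    return E, A, B, C

# demo data: H(s) = 1/(s+1), left points {1, 2}, right points {3, 4}
mu, la = np.array([1.0, 2.0]), np.array([3.0, 4.0])
v, w = 1.0 / (mu + 1.0), 1.0 / (la + 1.0)
L = (v[:, None] - w[None, :]) / (mu[:, None] - la[None, :])
Ls = (mu[:, None] * v[:, None] - la[None, :] * w[None, :]) / (mu[:, None] - la[None, :])
E, A, B, C = loewner_truncated_realization(L, Ls, v[:, None], w[None, :], z=mu[0], r=1)
# the order-1 realization recovers H: C (sE - A)^{-1} B reproduces 1/(s+1)
```

Since the true order here is one, the truncated realization recovers the pole $-1$ exactly as the eigenvalue of the projected pencil.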
3 Pseudospectra for matrix pencils
Though introduced decades earlier, in the 1990s pseudospectra emerged as a popular tool for analyzing the behavior of dynamical systems (see, e.g., TTRD93 ), eigenvalue perturbations (see, e.g., CF96 ), and stability of uncertain linear time-invariant (LTI) systems (see, e.g., HP90 ).
Definition 1. For a matrix $A \in \mathbb{C}^{n \times n}$ and $\varepsilon > 0$, the $\varepsilon$-pseudospectrum of $A$ is
$$\sigma_\varepsilon(A) = \{\, z \in \mathbb{C} : z \in \sigma(A + E) \text{ for some } E \in \mathbb{C}^{n \times n} \text{ with } \|E\| < \varepsilon \,\}. \tag{8}$$
For all $\varepsilon > 0$, $\sigma_\varepsilon(A)$ is a bounded, open subset of the complex plane that contains the eigenvalues of $A$. Definition 1 motivates pseudospectra via eigenvalues of perturbed matrices. A numerical analyst studying the accuracy of a backward stable eigenvalue algorithm might be concerned with $\varepsilon$ on the order of $\varepsilon_{\mathrm{mach}}\|A\|$, where $\varepsilon_{\mathrm{mach}}$ denotes the machine epsilon for the floating point system Ove01. An engineer or scientist might consider much larger values of $\varepsilon$, corresponding to uncertainty in parameters or data that contribute to the entries of $A$.
Via the singular value decomposition, one can show that (8) is equivalent to
$$\sigma_\varepsilon(A) = \{\, z \in \mathbb{C} : \|(zI - A)^{-1}\| > 1/\varepsilon \,\}; \tag{9}$$
see, e.g., (TE05, chap. 2). The presence of the resolvent $(zI - A)^{-1}$ in this definition suggests a connection to the transfer function $H(s) = C(sI - A)^{-1}B$ for the system.
Various extensions of pseudospectra have been proposed to handle more general eigenvalue problems and dynamical systems; see Emb14 for a concise survey. The first elaborations addressed the generalized eigenvalue problem (i.e., the matrix pencil $zB - A$) FGNT96; Rie94; Ruh95. Here we focus on the definition proposed by Frayssé, Gueury, Nicoud, and Toumazou FGNT96, which is ideally suited to analyzing eigenvalues of nearby matrix pencils. To permit the perturbations to $A$ and $B$ to be scaled independently, this definition includes two additional parameters, $\gamma$ and $\delta$.
Definition 2. Let $\gamma, \delta \geq 0$ with $\gamma + \delta > 0$. For a pair of matrices $A, B \in \mathbb{C}^{n \times n}$ and any $\varepsilon > 0$, the $(\gamma,\delta)$-$\varepsilon$-pseudospectrum of the matrix pencil $zB - A$ is the set
$$\sigma_\varepsilon^{\gamma,\delta}(A,B) = \{\, z \in \mathbb{C} : z \in \sigma(A + E_0,\, B + E_1) \text{ for some } \|E_0\| < \varepsilon\gamma,\ \|E_1\| < \varepsilon\delta \,\}.$$
Note that $\sigma_\varepsilon^{\gamma,\delta}(A,B)$ is an open, nonempty subset of the complex plane, but it need not be bounded.
If $zB - A$ is a singular pencil ($\det(zB - A) = 0$ for all $z \in \mathbb{C}$), then $\sigma_\varepsilon^{\gamma,\delta}(A,B) = \mathbb{C}$.
If $zB - A$ is a regular pencil but $B$ is not invertible, then $\sigma(A,B)$ contains an infinite eigenvalue.
If $B$ is nonsingular but $\varepsilon\delta$ exceeds the distance of $B$ to singularity (the smallest singular value of $B$), then $\sigma_\varepsilon^{\gamma,\delta}(A,B)$ contains the point at infinity.
Analogous to (9), one can show that
$$\sigma_\varepsilon^{\gamma,\delta}(A,B) = \{\, z \in \mathbb{C} : \sigma_{\min}(A - zB) < \varepsilon(\gamma + \delta|z|) \,\}. \tag{10}$$
This formula suggests a way to compute $\sigma_\varepsilon^{\gamma,\delta}(A,B)$: evaluate $\sigma_{\min}(A - zB)/(\gamma + \delta|z|)$ on a grid of points covering a relevant region of the complex plane (or the Riemann sphere) and use a contour plotting routine to draw boundaries of $\sigma_\varepsilon^{\gamma,\delta}(A,B)$. The accuracy of the resulting pseudospectra depends on the density of the grid. Expedient algorithms can be derived by computing a unitary simultaneous triangularization (generalized Schur form) of $A$ and $B$ in $O(n^3)$ operations, then using inverse iteration or inverse Lanczos, as described by Trefethen Tre99a and Wright Wri02b, to compute $\sigma_{\min}(A - zB)$ at each grid point in $O(n^2)$ operations. For the structured Loewner pencils of interest here, one can compute $\sigma_{\min}(\mathbb{L}_s - z\mathbb{L})$ in $O(n^2)$ operations without recourse to the preprocessing step, as proposed in Section 4.
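The grid-based computation just described can be sketched directly from the singular value characterization; in this Python sketch the function name and example matrices are illustrative assumptions:

```python
import numpy as np

def pencil_pseudospectrum_grid(A, B, gamma, delta, xs, ys):
    """Evaluate z -> sigma_min(A - z B) / (gamma + delta |z|) on a grid;
    the epsilon-level contour of the result bounds the
    (gamma,delta)-epsilon-pseudospectrum of the pencil z B - A."""
    sig = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = x + 1j * y
            smin = np.linalg.svd(A - z * B, compute_uv=False)[-1]
            sig[i, j] = smin / (gamma + delta * abs(z))
    return sig
```

The returned array can be handed to any contour plotter; each full SVD costs $O(n^3)$ per grid point, which is exactly the cost the triangularization-plus-inverse-iteration approach (and, for Loewner pencils, the structured algorithm of Section 4) improves upon.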
Definition 2 can be specialized by only perturbing $A$ (taking $\delta = 0$):
$$\sigma_\varepsilon^{\gamma,0}(A,B) = \{\, z \in \mathbb{C} : z \in \sigma(A + E_0,\, B) \text{ for some } \|E_0\| < \varepsilon\gamma \,\}.$$
This definition may be more suitable for cases where $B$ is fixed and uncertainty in the system only emerges, e.g., through physical parameters that appear in $A$.
Since we ultimately intend to study the pseudospectra of Loewner matrix pencils, one might question the use of generic perturbations in Definition 2. Should we restrict $E_0$ and $E_1$ to maintain Loewner structure, i.e., so that $\mathbb{L}_s + E_0$ and $\mathbb{L} + E_1$ maintain the coupled shifted Loewner–Loewner form in (1)? Such sets are called structured pseudospectra.
Three considerations motivate the study of generic perturbations: one practical, one speculative, and one philosophical. (a) Beyond repeatedly computing the eigenvalues of Loewner pencils with randomly perturbed data, no systematic way is known to compute the coupled Loewner structured pseudospectra, i.e., no analogue of the resolvent-like definition (10) is known. (b) Rump Rum06 showed that in many cases, preserving structure has little effect on the standard matrix pseudospectra. For example, the structured $\varepsilon$-pseudospectrum of a Hankel matrix allowing only complex Hankel perturbations exactly matches the unstructured $\varepsilon$-pseudospectrum based on generic complex perturbations (Rum06, Thm. 4.3). Whether a similar result holds for Loewner-structured pencils is an interesting open question. (c) If one seeks to analyze the behavior of dynamical systems (as opposed to eigenvalues of nearby matrices), then generic perturbations give much greater insight; see (TE05, p. 456) for an example where real-valued perturbations do not move the eigenvalues much toward the imaginary axis (hence the real structured pseudospectra are benign), yet the stable system still exhibits strong transient growth.
As we shall see in Section LABEL:sec:fullrank, Definition 2 provides a helpful tool for investigating the sensitivity of eigenvalues of matrix pencils. A different generalization of Definition 1 gives insight into the transient behavior of solutions of $\mathbb{L}\dot{x}(t) = \mathbb{L}_s x(t)$. This approach is discussed in (TE05, chap. 45), following Rie94; Ruh95, and has been extended to handle singular $\mathbb{L}$ in EK17 (for differential-algebraic equations and descriptor systems). Restricting our attention here to nonsingular $\mathbb{L}$, we analyze the conventional (single matrix) pseudospectra $\sigma_\varepsilon(\mathbb{L}^{-1}\mathbb{L}_s)$. From these sets one can develop various upper and lower bounds on the transient growth of $\|e^{t\mathbb{L}^{-1}\mathbb{L}_s}\|$ (TE05, chap. 15). Here we shall just state one basic result. If $z \in \sigma_\varepsilon(A)$ with $\operatorname{Re}(z) > 0$ (where $\operatorname{Re}(\cdot)$ denotes the real part of a complex number), then
$$\sup_{t \geq 0} \|e^{tA}\| \;\geq\; \frac{\operatorname{Re}(z)}{\varepsilon}. \tag{11}$$
This statement implies that there exists some unit-length initial condition $x(0)$ for $\dot{x}(t) = Ax(t)$ such that $\|x(t)\| \geq \operatorname{Re}(z)/\varepsilon$ at some time $t > 0$, even though $\sigma(A)$ may be contained in the left half-plane. (Optimizing this bound over $z$ and $\varepsilon$ yields the Kreiss Matrix Theorem (TE05, (15.9)).)
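The bound (11) is easy to test numerically. The following Python sketch uses a hypothetical stable but highly nonnormal $2 \times 2$ matrix (an illustrative assumption): for any right-half-plane point $z$ on the boundary of $\sigma_\varepsilon(A)$, the observed transient peak of $\|e^{tA}\|$ must exceed $\operatorname{Re}(z)/\varepsilon$:

```python
import numpy as np

# hypothetical stable, highly nonnormal matrix (illustrative assumption)
A = np.array([[-1.0, 40.0], [0.0, -2.0]])

# choose a right-half-plane point z and set eps = sigma_min(z I - A),
# so z lies on the boundary of the eps-pseudospectrum of A
z = 1.0
eps = np.linalg.svd(z * np.eye(2) - A, compute_uv=False)[-1]
bound = z / eps                       # guaranteed transient growth Re(z)/eps

# sup_t ||exp(t A)|| via the eigendecomposition (A has distinct eigenvalues)
evals, Vec = np.linalg.eig(A)
expmt = lambda t: (Vec * np.exp(t * evals)) @ np.linalg.inv(Vec)
growth = max(np.linalg.norm(expmt(t), 2) for t in np.linspace(0.0, 5.0, 501))
# growth exceeds bound even though both eigenvalues lie in the left half-plane
```

Here both eigenvalues are strictly stable, yet the pseudospectral protrusion into the right half-plane certifies substantial transient growth before eventual decay.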
Pseudospectra of matrix pencils provide a natural vehicle to explore the stability of the matrix pencil associated with the Loewner realization in (6). We shall thus investigate eigenvalue perturbations via $\sigma_\varepsilon^{\gamma,\delta}(\mathbb{L}_s, \mathbb{L})$ and transient behavior via $\sigma_\varepsilon(\mathbb{L}^{-1}\mathbb{L}_s)$.
4 Efficient computation of Loewner pseudospectra
We first present a novel technique for efficiently computing pseudospectra of large Loewner matrix pencils, using the equivalent definition given in (10). When the Loewner matrix $\mathbb{L}$ is nonsingular, we employ inverse iteration that exploits the structure of the Loewner pencil to compute $\sigma_{\min}(\mathbb{L}_s - z\mathbb{L})$ (in the two-norm) using only $O(n^2)$ operations. This avoids the need to compute an initial simultaneous unitary triangularization of $\mathbb{L}_s$ and $\mathbb{L}$ using the QZ algorithm, an $O(n^3)$ operation.
Inverse iteration (and inverse Lanczos) for $\sigma_{\min}(\mathbb{L}_s - z\mathbb{L})$ requires computing vectors of the form
$$u_k = (\mathbb{L}_s - z\mathbb{L})^{-*} (\mathbb{L}_s - z\mathbb{L})^{-1} u_{k-1}. \tag{12}$$
From the entrywise definitions (1) (or the Sylvester equations (2)), $\mathbb{L}_s = \mathbb{L}\Lambda + VR$, so $\mathbb{L}_s - z\mathbb{L} = \mathbb{L}(\Lambda - zI) + VR$. Thus the resolvent can be expressed using only $\mathbb{L}$ and not $\mathbb{L}_s$:
$$(\mathbb{L}_s - z\mathbb{L})^{-1} = \big( \mathbb{L}(\Lambda - zI) + VR \big)^{-1}.$$
We now use the Sherman–Morrison–Woodbury formula (see, e.g., GV12) to get
$$(\mathbb{L}_s - z\mathbb{L})^{-1} = \Theta_z (\Lambda - zI)^{-1} \mathbb{L}^{-1}, \tag{13}$$
where $U_z = (\Lambda - zI)^{-1}\mathbb{L}^{-1}V$ and $\Theta_z = I - U_z (I_m + R U_z)^{-1} R$. As a result, we can compute the inverse iteration vectors in (12) as
$$u_k = \mathbb{L}^{-*} (\Lambda - zI)^{-*} \Theta_z^* \, \Theta_z (\Lambda - zI)^{-1} \mathbb{L}^{-1} u_{k-1},$$
which requires solving several linear systems given by the same Loewner matrix $\mathbb{L}$, e.g., $\mathbb{L}y = u_{k-1}$ and $\mathbb{L}\widetilde{U} = V$.
Crucially, solving a linear system involving a Loewner matrix can be done efficiently in only $O(n^2)$ operations because $\mathbb{L}$ has low displacement rank. More precisely, $\mathbb{L}$ is a Cauchy-like matrix that satisfies the Sylvester equation (2) given by diagonal generator matrices $M$ and $\Lambda$ and a right-hand side of rank at most $m + p$, i.e., $M\mathbb{L} - \mathbb{L}\Lambda = VR - LW$. The displacement rank structure of $\mathbb{L}$ can be exploited to compute its LU factorization in only $O(n^2)$ flops (see GKO95b and (GV12, sect. 12.1)). Given the LU factorization of $\mathbb{L}$, solving $\mathbb{L}y = u$ in (13) via standard forward and backward substitution requires another $O(n^2)$ operations.
Next, multiplying $\Theta_z$ by the solution of $\mathbb{L}y = u$ requires a total of $O(mn^2 + m^2 n + m^3)$ operations, namely (to leading order on the factorizations):
$O(mn^2)$ operations to compute $U_z = (\Lambda - zI)^{-1}(\mathbb{L}^{-1}V)$, reusing the LU factors of $\mathbb{L}$;
$O(m^2 n)$ operations to compute $I_m + R U_z$;
$O(m^3)$ operations to solve with $I_m + R U_z$ via
an LU factorization followed by forward and backward substitutions;
$O(mn)$ operations to multiply $U_z$ by the result.
Finally, applying $\mathbb{L}^{-*}$, $(\Lambda - zI)^{-*}$, and $\Theta_z^*$ in (12) requires an additional $O(n^2)$ operations, bringing the total cost of computing (13) to $O(mn^2 + m^2 n + m^3)$ operations.
In practice, the sizes $m$ and $p$ of the right and left tangential directions are much smaller than the size of the Loewner pencil, i.e., $m, p \ll n$. For example, for scalar data (associated with SISO systems), $m = p = 1$. Therefore, in practice, computing (13) can be done in only $O(n^2)$ operations.
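For the SISO case the whole reduction collapses to a rank-one Sherman–Morrison update, which the following Python sketch verifies numerically (the order-3 system and sample points are illustrative assumptions; dense solves stand in for the fast structured solves used in practice):

```python
import numpy as np

# SISO, unit tangential directions: from the entrywise definitions,
# Ls - z L = L @ diag(la - z) + v 1^T, so a solve with Ls - z L reduces,
# via Sherman-Morrison, to solves with (a column-scaled) L alone.
poles = np.array([-1.0, -2.0, -3.0])          # hypothetical order-3 system
H = lambda s: np.sum(1.0 / (s - poles))
mu = np.array([0.1, 0.5, 0.9])                # left points
la = np.array([0.3, 0.7, 1.1])                # right points
v = np.array([H(m) for m in mu])
w = np.array([H(l) for l in la])
L = (v[:, None] - w[None, :]) / (mu[:, None] - la[None, :])
Ls = (mu[:, None] * v[:, None] - la[None, :] * w[None, :]) / (mu[:, None] - la[None, :])

z = 0.4 + 0.3j
X = L * (la - z)[None, :]                     # L @ diag(la - z)
assert np.allclose(Ls - z * L, X + np.outer(v, np.ones(3)))

b = np.array([1.0, 2.0, 3.0], dtype=complex)
y = np.linalg.solve(X, b)                     # fast structured solve in practice
u = np.linalg.solve(X, v.astype(complex))
x = y - u * (np.ones(3) @ y) / (1.0 + np.ones(3) @ u)   # Sherman-Morrison
# x solves (Ls - z L) x = b using only solves with the scaled Loewner matrix
```

Replacing the two dense solves with the structured LU factorization of $\mathbb{L}$ yields the advertised $O(n^2)$ cost per shift $z$.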
Partial pivoting can be included in the LU factorization of the Loewner matrix to overcome numerical difficulties. Adding partial pivoting maintains the $O(n^2)$ operation count for the LU factorization of $\mathbb{L}$ (see (GV12, sect. 12.1)), and hence computing (13) can still be done in $O(n^2)$ operations. The appendix gives a MATLAB implementation of this efficient inverse iteration.
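The generator-based LU factorization with partial pivoting can be sketched as follows (a simplified GKO-style Python sketch, not the paper's MATLAB routine; it works only with the generators, never forming intermediate Schur complements explicitly):

```python
import numpy as np

def gko_lu(d1, d2, G, Bg):
    """GKO-style LU with partial pivoting for a Cauchy-like matrix
    C[i, j] = G[i, :] @ Bg[:, j] / (d1[i] - d2[j]),
    costing O(r n^2) operations for r generator columns.
    Assumes d1 and d2 are disjoint.  Returns piv, L, U with C[piv] = L U."""
    n = len(d1)
    d1 = np.array(d1, dtype=complex); d2 = np.array(d2, dtype=complex)
    G = np.array(G, dtype=complex); Bg = np.array(Bg, dtype=complex)
    piv = np.arange(n)
    L = np.zeros((n, n), dtype=complex)
    U = np.zeros((n, n), dtype=complex)
    for k in range(n):
        # first column of the current Schur complement, from the generators
        col = (G[k:] @ Bg[:, k]) / (d1[k:] - d2[k])
        p = k + int(np.argmax(np.abs(col)))
        # partial pivoting: swap rows k and p everywhere row order matters
        d1[[k, p]] = d1[[p, k]]
        G[[k, p]] = G[[p, k]]
        piv[[k, p]] = piv[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        col[[0, p - k]] = col[[p - k, 0]]
        U[k, k:] = (G[k] @ Bg[:, k:]) / (d1[k] - d2[k:])
        L[k, k] = 1.0
        L[k+1:, k] = col[1:] / col[0]
        # update the generators to represent the next Schur complement
        G[k+1:] -= np.outer(L[k+1:, k], G[k])
        Bg[:, k+1:] -= np.outer(Bg[:, k], U[k, k+1:] / U[k, k])
    return piv, L, U
```

Each step touches only $O(rn)$ generator data, so the total work is $O(rn^2)$; for a Loewner matrix the generators come directly from the interpolation data via (2).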
We measure these performance gains for a Loewner pencil generated by sampling a transfer function at interpolation points uniformly spaced in a real interval. We compare our new Loewner pencil inverse iteration against a standard implementation (see (TE05, p. 373)) applied to a simultaneous triangularization of $\mathbb{L}_s$ and $\mathbb{L}$. (The simultaneous triangularization costs $O(n^3)$ operations but is fast, as MATLAB’s qz routine invokes LAPACK code. For a fair comparison, we test against a C++ implementation of the fast Loewner code, compiled into a MATLAB .mex file.) Table 1 shows timings for both implementations by computing $\sigma_{\min}(\mathbb{L}_s - z\mathbb{L})$ on a grid of points. As expected, exploiting the Loewner structure gives a significant performance improvement for large $n$.
We next examine two simple examples involving full-rank realization of SISO systems, to illustrate the kinds of insights one can draw from pseudospectra of Loewner pencils.
5 Example 1: eigenvalue sensitivity and transient behavior
We first consider a simple controllable and observable SISO system of order two. The system matrix $A$ is symmetric negative definite, with real negative eigenvalues. Since the system is SISO, the transfer function maps $\mathbb{C}$ to $\mathbb{C}$, and hence the choice of “interpolation directions” is trivial (though the division into “left” and “right” points matters). We take two left and two right interpolation points, $\{\mu_1, \mu_2\}$ and $\{\lambda_1, \lambda_2\}$, and study various choices for their locations.
This basic set-up makes it easy to focus on the influence of the interpolation points $\mu_1$, $\mu_2$, $\lambda_1$, $\lambda_2$. We will use pseudospectra to examine how the interpolation points affect the stability of the eigenvalues of the Loewner pencil.
Table 2 records four different choices of the interpolation points; Figure 1 shows the corresponding pseudospectra of the Loewner pencil. All four Loewner realizations match the eigenvalues of $A$ and satisfy the interpolation conditions. However, the pseudospectra show how the stability of the eigenvalues differs across these four realizations. These eigenvalues become increasingly sensitive from example (a) to (d), as the interpolation points move farther from the eigenvalues of $A$. Table 2 also shows the singular values of the Loewner matrix $\mathbb{L}$, demonstrating how the second singular value decreases as the eigenvalues become increasingly sensitive. (Taken to a greater extreme, it would eventually be difficult to determine if $\mathbb{L}$ truly is rank 2.)
Contrast these results with the standard pseudospectra of $A$ itself, shown at the top of Figure 2. Since $A$ is real symmetric (hence normal), $\sigma_\varepsilon(A)$ is the union of open $\varepsilon$-balls surrounding the eigenvalues. Figure 2 compares these pseudospectra to $\sigma_\varepsilon(\mathbb{L}^{-1}\mathbb{L}_s)$, which give insight about the transient behavior of solutions to $\mathbb{L}\dot{x}(t) = \mathbb{L}_s x(t)$, e.g., via the bound (11). The top plot shows $\sigma_\varepsilon(A)$, whose rightmost extent in the complex plane always falls $\varepsilon$ to the right of the rightmost eigenvalue: no transient growth is possible for this system. However, in all four of the Loewner realizations, $\sigma_\varepsilon(\mathbb{L}^{-1}\mathbb{L}_s)$ extends more than $\varepsilon$ into the right half-plane for the value of $\varepsilon$ marked by the orange level curve, indicating by (11) that transient growth must occur for some initial condition. Figure 3 shows this growth for all four realizations: the more remote interpolation points lead to Loewner realizations with greater transient growth.
6 Example 2: partitioning interpolation points and noisy data
To further investigate how the interpolation points influence the eigenvalue stability of the Loewner pencil, consider a SISO system of order 10.
Figure 4 shows pseudospectra of the Loewner pencil for six configurations of the interpolation points. Plots (a) and (b) use the same set of real interpolation points: in plot (a) the left and right points interlace, suggesting slower decay of the singular values of $\mathbb{L}$, as discussed in Section 2.1; in plot (b) the left and right points are separated, leading to faster decay of the singular values of $\mathbb{L}$ and considerably larger pseudospectra. (The Beckermann–Townsend bound (BT17, Cor. 4.2) would apply to this case.) Plot (c) further separates the left and right points, giving even larger pseudospectra. Plots (d) and (f) use complex interpolation points, interleaved (while keeping conjugate pairs together) in (d) and separated in (f): the latter significantly enlarges the pseudospectra. Plot (e) uses the same relative arrangement that gave such nice results in plot (a) (the singular value bound (3) is the same for (a) and (e)), but the locations of the points relative to the poles of the original system differ. The pseudospectra are now much larger, showing that a large upper bound in (3) is not alone enough to guarantee small pseudospectra. (Indeed, the pseudospectra are so large in (e) and (f) that the plots are dominated by numerical artifacts of computing $\sigma_{\min}(\mathbb{L}_s - z\mathbb{L})$.) Pseudospectra reveal the great influence interpolation point location and partition can have on the stability of the realized pencils.
Pseudospectra also give insight into the consequences of inexact measurement data. Consider the following experiment. Take the scenario in Figure 4(a), the most robust of these examples. Subject each right and left measurement $w_k$ and $v_j$ to random complex noise, then build the Loewner pencil from this noisy data. How do the badly polluted measurements affect the computed eigenvalues? Figure 5 shows the results of 1,000 random trials, in which the perturbed Loewner matrices can depart from the true matrices significantly.
Despite these large perturbations, the recovered eigenvalues are remarkably accurate: in the vast majority of cases, the eigenvalues are recovered to better accuracy than the measurements themselves. The pseudospectra in Figure 4(a) suggest good robustness (though the pseudospectral level curves are pessimistic by one or two orders of magnitude). Contrast this with the complex interleaved interpolation points used in Figure 4(d). Now we perturb the data by only a small amount, for which the perturbed Loewner matrices (over 1,000 trials) depart only slightly from the true matrices.
With this mere hint of noise, the eigenvalues of the recovered system erupt: only a small fraction of the eigenvalues are correct to two digits. (Curiously, two of the eigenvalues are always computed correctly, while three others never are.) The pseudospectra indicate that the leftmost eigenvalues are more sensitive, and again hint at the effect of the perturbation (though off by roughly an order of magnitude in $\varepsilon$); one could inspect the corresponding level curve of the pencil pseudospectrum. Measurements of real systems (or even numerical simulations of nontrivial systems) are unlikely to produce such high accuracy; pseudospectra can reveal the virtue or folly of a given interpolation point configuration.
In these simple experiments, pseudospectra have been most helpful for indicating the sensitivity of eigenvalues when the left and right interpolation points are favorably partitioned (e.g., interleaved). They seem to be less precise at predicting the sensitivity to noise of poor left/right partitions of the interpolation points. Figure 6 gives an example, based on the two partitions of the same interpolation points in Figure 4(d,f). The pseudospectra suggest that the eigenvalues for plot (f) should be much more sensitive to noise than those for the interleaved points in plot (d). In fact, the configuration in plot (f) appears to be only marginally less stable to noise of this size, over 10,000 trials. This is a case where one could potentially glean additional insight from structured Loewner pseudospectra.
Pseudospectra provide a tool for analyzing the stability of eigenvalues of Loewner matrix pencils. Elementary examples show how pseudospectra can inform the selection and partition of interpolation points, and bound the eigenvalues of Loewner pencils in the presence of noisy data. Using a different approach to pseudospectra, we showed that while the realized Loewner pencil matches the poles of the original system, it need not replicate the transient dynamics of the original system; pseudospectra can reveal potential transient growth, which varies with the interpolation points.
In this initial study we have intentionally used simple examples involving small, symmetric $A$. Realistic examples, e.g., with complex poles, nonnormal $A$, multiple inputs and outputs, and rank-deficient Loewner matrices, will add additional complexity. Moreover, we have only used the Mayo–Antoulas interpolation theory to realize a system whose order is known; we have not addressed pseudospectra of the reduced pencils (7) in the context of data-driven model reduction.
Structured Loewner pseudospectra provide another avenue for future study. Structured matrix pencil pseudospectra have not been much investigated, especially with Loewner structure. Rump’s results for standard pseudospectra Rum06 suggest the following problem; its positive resolution would imply that the unstructured Loewner pencil pseudospectrum matches the structured Loewner pseudospectrum.
Given any $\varepsilon > 0$, Loewner matrix $\mathbb{L}$, and associated shifted Loewner matrix $\mathbb{L}_s$, suppose $z \in \sigma_\varepsilon^{\gamma,\delta}(\mathbb{L}_s, \mathbb{L})$. Does there exist some Loewner matrix $\widetilde{\mathbb{L}}$ and associated shifted Loewner matrix $\widetilde{\mathbb{L}}_s$ with $\|\widetilde{\mathbb{L}}_s - \mathbb{L}_s\| < \varepsilon\gamma$ and $\|\widetilde{\mathbb{L}} - \mathbb{L}\| < \varepsilon\delta$ such that $z \in \sigma(\widetilde{\mathbb{L}}_s, \widetilde{\mathbb{L}})$?
Acknowledgements. This work was motivated by a question posed by Thanos Antoulas, whom we thank not only for suggesting this investigation, but also for his many profound contributions to systems theory and his inspiring teaching and mentorship. We also thank Serkan Gugercin for helpful comments. (Mark Embree was supported by the U.S. National Science Foundation under grant DMS-1720257.)
We provide a MATLAB implementation that computes the inverse iteration vectors in (12) in only $O(n^2)$ operations by exploiting the Cauchy-like displacement rank structure of the Loewner pencil, as shown in (13). Namely, we start from the general MATLAB code from (TE05, p. 373) and modify it to account for the Loewner structure, and hence achieve efficiency. This code computes $\sigma_{\min}(\mathbb{L}_s - z\mathbb{L})$ for a fixed $z$. To compute pseudospectra, one applies this algorithm on a grid of $z$ values. In that case, the structured LU factorization in the first line need only be computed once for all values of $z$ (just as the standard algorithm computes a simultaneous triangularization using the QZ algorithm once for all $z$).
[L1,U1,piv1] = LUdispPiv(mu,lambda,[V' L'],[R.' -W.']);
L1t = L1'; U1t = U1';
z = 1./(z-lambda);
Upz = z.*(U1\(L1\V(:,piv1)'));
[L2,U2,piv2] = lu(eye(m)-R*Upz,'vector');
R2 = R(piv2,:);
L2t = L2'; U2t = U2'; R2t = R2'; Upzt = Upz';
applyTheta = @(x) x+Upz*(U2\(L2\(R2*x)));
applyThetaTranspose = @(x) x+R2t*(L2t\(U2t\(Upzt*x)));
sigold = 0;
for it = 1:maxit
  u = U1\(L1\u(piv1));
  u = conj(z).*applyThetaTranspose(applyTheta(z.*u));
  u(piv1) = L1t\(U1t\u);
  sig = 1/norm(u);
  if abs(sigold/sig-1) < 1e-2, break, end
  u = sig*u; sigold = sig;
end
sigmin = sqrt(sig);
The function LUdispPiv computes the LU factorization (with partial pivoting) of the Loewner matrix in $O(n^2)$ operations. The Loewner matrix is not formed explicitly; instead, the function uses the raw interpolation data. The implementation details for LUdispPiv can be found in (GV12, sect. 12.1). The LU factorization of the small $m \times m$ matrix in the Sherman–Morrison–Woodbury update is given by L2 and U2, while the first three lines of the loop represent the computation of the inverse iteration vector in (12) via (13). Note the careful grouping of terms and the use of elementwise multiplication .* to keep the total operation count at $O(n^2)$.
- (1) A. C. Antoulas, S. Lefteriu, and A. C. Ionita, A tutorial introduction to the Loewner framework for model reduction, in Model Reduction and Approximation: Theory and Algorithms, P. Benner, A. Cohen, M. Ohlberger, and K. Willcox, eds., SIAM, Philadelphia, 2017, pp. 335–376.
- (2) A. C. Antoulas, D. C. Sorensen, and Y. Zhou, On the decay rate of Hankel singular values and related issues, Systems Control Lett., 46 (2002), pp. 323–342.
- (3) B. Beckermann and A. Townsend, On the singular values of matrices with displacement structure, SIAM J. Matrix Anal. Appl., 38 (2017), pp. 1227–1248.
- (4) F. Chaitin-Chatelin and V. Frayssé, Lectures on Finite Precision Computations, SIAM, Philadelphia, 1996.
- (5) M. Embree, Pseudospectra, in Handbook of Linear Algebra, L. Hogben, ed., CRC/Taylor & Francis, Boca Raton, FL, second ed., 2014, ch. 23.
- (6) M. Embree and B. Keeler, Pseudospectra of matrix pencils for transient analysis of differential–algebraic equations, SIAM J. Matrix Anal. Appl., 38 (2017), pp. 1028–1054.
- (7) V. Frayssé, M. Gueury, F. Nicoud, and V. Toumazou, Spectral portraits for matrix pencils, Tech. Rep. TR/PA/96/19, CERFACS, August 1996.
- (8) I. Gohberg, T. Kailath, and V. Olshevsky, Fast Gaussian elimination with partial pivoting for matrices with displacement structure, Math. Comp., 64 (1995), pp. 1557–1576.
- (9) G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, fourth ed., 2012.
- (10) N. J. Higham and F. Tisseur, More on pseudospectra for polynomial eigenvalue problems and applications in control theory, Linear Algebra Appl., 351–352 (2002), pp. 435–453.
- (11) D. Hinrichsen and A. J. Pritchard, Real and complex stability radii: a survey, in Control of Uncertain Systems, D. Hinrichsen and B. Mårtensson, eds., Birkhäuser, Boston, 1990.
- (12) R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, second ed., 2013.
- (13) P.-F. Lavallée, Nouvelles Approches de Calcul du ε-Spectre de Matrices et de Faisceaux de Matrices, PhD thesis, Université de Rennes 1, 1997.
- (14) A. J. Mayo and A. C. Antoulas, A framework for the solution of the generalized realization problem, Linear Algebra Appl., 425 (2007), pp. 634–662.
- (15) M. L. Overton, Numerical Computing with IEEE Floating Point Arithmetic, SIAM, Philadelphia, 2001.
- (16) T. Penzl, A cyclic low-rank Smith method for large sparse Lyapunov equations, SIAM J. Sci. Comput., 21 (2000), pp. 1401–1418.
- (17) T. Penzl, Eigenvalue decay bounds for solutions of Lyapunov equations: the symmetric case, Systems Control Lett., 40 (2000), pp. 139–144.
- (18) K. S. Riedel, Generalized epsilon-pseudospectra, SIAM J. Numer. Anal., 31 (1994), pp. 1219–1225.
- (19) A. Ruhe, The rational Krylov algorithm for large nonsymmetric eigenvalues — mapping the resolvent norms (pseudospectrum). Unpublished manuscript, March 1995.
- (20) S. M. Rump, Eigenvalues, pseudospectrum and structured perturbations, Linear Algebra Appl., 413 (2006), pp. 567–593.
- (21) J. Sabino, Solution of Large-Scale Lyapunov Equations via the Block Modified Smith Method, PhD thesis, Rice University, June 2006.
- (22) F. Tisseur and N. J. Higham, Structured pseudospectra for polynomial eigenvalue problems, with applications, SIAM J. Matrix Anal. Appl., 23 (2001), pp. 187–208.