1 Introduction
Consider a symmetric tridiagonal matrix
$$A = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & \ddots & \\ & \ddots & \ddots & b_{n-1} \\ & & b_{n-1} & a_n \end{pmatrix}$$
of size $n$ and denote by $\lambda_1(A) \geq \dots \geq \lambda_n(A)$ its eigenvalues. Assume that the entries of $A$ are not known precisely and the only information that we have is that $a_i$ comes from a given interval $a_i^I = [\underline{a}_i, \overline{a}_i]$, $i = 1, \dots, n$, and $b_i$ comes from a given interval $b_i^I = [\underline{b}_i, \overline{b}_i]$, $i = 1, \dots, n-1$.
By $A^I = [\underline{A}, \overline{A}]$ we denote the corresponding interval matrix, that is, the set of all symmetric tridiagonal matrices $A$ with $a_i \in a_i^I$ and $b_i \in b_i^I$. By $A^c = \frac{1}{2}(\underline{A} + \overline{A})$ we denote its midpoint. Next,
$$\lambda_i(A^I) = \{\lambda_i(A) : A \in A^I\}, \qquad i = 1, \dots, n,$$
stands for the corresponding eigenvalue sets. It was shown in [9] that they form real compact intervals, $\lambda_i(A^I) = [\underline{\lambda}_i, \overline{\lambda}_i]$. The problem investigated in this paper is to determine their endpoints. We focus on the upper endpoints $\overline{\lambda}_i$ since the lower ones can be determined analogously by the reduction $A^I \mapsto -A^I$.
Characterization of the extremal eigenvalues $\overline{\lambda}_1$ and $\underline{\lambda}_n$ of a general symmetric interval matrix is due to Hertz [7] by a formula involving computation of $2^{n-1}$ matrices. A partial characterization of the intermediate eigenvalue intervals was done in [6, 10]. Due to the NP-hardness of computing, or even tightly approximating, the eigenvalue sets [8, 19], various outer and inner approximation methods were developed [1, 9, 10, 11, 14, 15, 17, 20]. The tridiagonal case was particularly investigated by Commerçon [4], who proposed a method for calculating the exact eigenvalue bounds based on the Sturm algorithm. This method, however, comes with no time complexity analysis and relies heavily on the particular Sturm algorithm. Our aim is to have a finite reduction to real cases, which can be solved by any eigenvalue method for tridiagonal matrices. Another author investigating tridiagonal interval matrices was Jian [12]. He proposed a method for computing the extremal eigenvalues by a reduction to four real cases, and he also inspected tridiagonal interval Toeplitz matrices. Our approach generalizes this result and makes it possible to calculate the ranges of all eigenvalue sets under the eigenvector sign invariancy condition.
2 Preliminaries
Throughout this paper, inequalities such as "$A \leq B$" are applied entrywise. In particular, $A \geq 0$ means that $A$ is entrywise nonnegative.
Proposition 1.
Without loss of generality, we can assume that $A^I \geq 0$.
Proof.
The transformation $A \mapsto A + \alpha I_n$ increases all eigenvalues of $A$ by the amount of $\alpha$. So for any $\alpha \geq -\min_i \underline{a}_i$ this transformation yields a matrix with a nonnegative diagonal. Thus, we can assume that $\underline{a}_i \geq 0$ for every $i$.
Suppose now there is $k$ such that $b_k < 0$. Let $\lambda$ be any eigenvalue of $A$ and $x$ a corresponding eigenvector, that is, $Ax = \lambda x$. Let $A'$ be the matrix resulting from $A$ by putting $b'_k := -b_k$, and let $x'_j := x_j$ for $j \leq k$ and $x'_j := -x_j$ for $j > k$. Then for $j < k$ we have
$$(A'x')_j = b_{j-1} x'_{j-1} + a_j x'_j + b_j x'_{j+1} = (Ax)_j = \lambda x_j = \lambda x'_j.$$
For $j = k$ we have
$$(A'x')_k = b_{k-1} x_{k-1} + a_k x_k + (-b_k)(-x_{k+1}) = (Ax)_k = \lambda x_k = \lambda x'_k.$$
The remaining two cases are
$$(A'x')_{k+1} = (-b_k) x_k + a_{k+1}(-x_{k+1}) + b_{k+1}(-x_{k+2}) = -(Ax)_{k+1} = \lambda x'_{k+1}$$
and, for $j > k+1$,
$$(A'x')_j = b_{j-1}(-x_{j-1}) + a_j(-x_j) + b_j(-x_{j+1}) = -(Ax)_j = \lambda x'_j.$$
Thus, $A'$ has the same eigenvalues as $A$, and the eigenvectors of $A'$ can easily be derived from those of $A$. By repeating this process, we obtain all $b_i$s nonnegative. ∎
We can therefore assume that $A^I \geq 0$ for the interval matrix $A^I$. Nonnegativity of the diagonal can be achieved by the transformation $A^I \mapsto A^I + \alpha I_n$ with $\alpha := \max_i \max\{0, -\underline{a}_i\}$, and nonnegativity of the remaining entries by the transformation $b_i^I \mapsto -b_i^I$ applied to those $i$ with $b_i^I < 0$.
We will assume throughout the paper that $b_i^I \neq 0$ for all $i$; otherwise $A^I$ is block diagonal and we split the problem into the subproblems corresponding to the diagonal blocks of $A^I$.
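As a quick numerical sanity check of the normalization above, the following NumPy sketch (the helper names `tridiag` and `normalize` are ours, not from the paper) verifies that the diagonal shift moves the spectrum by $\alpha$ and the sign flips leave it untouched:

```python
import numpy as np

def tridiag(a, b):
    """Dense symmetric tridiagonal matrix with diagonal a and off-diagonal b."""
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

def normalize(a, b):
    """The two transformations of Proposition 1: shift the diagonal to be
    nonnegative, and flip the signs of negative off-diagonal entries."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    alpha = max(0.0, -a.min())      # A -> A + alpha*I shifts each eigenvalue by alpha
    return a + alpha, np.abs(b), alpha

a, b = [1.0, -2.0, 3.0, 0.5], [-4.0, 5.0, -0.5]
a2, b2, alpha = normalize(a, b)
# the sign flips amount to a similarity transform D A D with D = diag(+-1),
# so the spectrum is only shifted by alpha
ev_old = np.linalg.eigvalsh(tridiag(a, b))
ev_new = np.linalg.eigvalsh(tridiag(a2, b2))
print(np.allclose(ev_new, ev_old + alpha))  # True
```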
Proposition 2.
Suppose that $0 \notin b_i^I$ for all $i$. Then all eigenvalues of every $A \in A^I$ are simple.
Proof.
It is obvious, since a symmetric tridiagonal matrix is known to have simple eigenvalues provided its off-diagonal elements are nonzero [16]. ∎
3 Sign invariancy case
We say that eigenvectors of $A^I$ are sign invariant [5, 18] if to each eigenvalue $\lambda_i(A)$ we can associate an eigenvector $x^i(A)$ such that the signs of the entries of $x^i(A)$ are constant over all $A \in A^I$. In this section, we assume that sign invariancy is satisfied.
The derivative of a simple eigenvalue $\lambda$ of a symmetric matrix $A$ with respect to the entry $a_{ij}$ is equal to $(2 - \delta_{ij}) x_i x_j$, where $x$, $\|x\|_2 = 1$, is the corresponding eigenvector. The derivative is nonnegative with respect to the diagonal entries of $A$, so the largest eigenvalues of $A^I$ are attained for $a_i = \overline{a}_i$, $i = 1, \dots, n$. Notice that a similar result holds for general symmetric interval matrices, too [9, 13].
Due to sign invariancy of eigenvectors, we can easily determine the optimal off-diagonal entries, too. Let $\lambda_k(A^c)$ be the $k$th largest eigenvalue of $A^c$ and $x$ the corresponding eigenvector. If $x_i x_{i+1} > 0$, then $\overline{\lambda}_k$ is attained for $b_i = \overline{b}_i$. Otherwise, it is attained for $b_i = \underline{b}_i$. In particular, from the Perron theory and properties of nonnegative matrices, we have that $\overline{\lambda}_1$ is attained for $A = \overline{A}$. The following proposition summarizes the result.
Proposition 3.
$\overline{\lambda}_k$ is attained for $a_i := \overline{a}_i$, $i = 1, \dots, n$, and
$$b_i := \begin{cases} \overline{b}_i & \text{if } x_i x_{i+1} > 0, \\ \underline{b}_i & \text{otherwise,} \end{cases} \qquad i = 1, \dots, n-1,$$
where $x$ is the eigenvector of $A^c$ corresponding to $\lambda_k(A^c)$.
Remark 1.
Notice that, provided the problem is not sign invariant, the eigenvalues computed by Proposition 3 give an inner estimation of the eigenvalue intervals. That is, we obtain intervals $[\mu_i, \nu_i]$ satisfying $[\mu_i, \nu_i] \subseteq \lambda_i(A^I)$ for every $i$, with equality under sign invariancy. The resulting method for computing the right endpoints of the eigenvalue intervals is displayed in Algorithm 1; the left endpoints are computed analogously.
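A compact NumPy sketch of this method follows (the function names are ours; eigenvalues are indexed in ascending order to match `np.linalg.eigh`, which only permutes the roles of the indices). Under sign invariancy the output is exact; otherwise it is an inner estimation as in Remark 1:

```python
import numpy as np

def tridiag(a, b):
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

def upper_endpoints(a_lo, a_hi, b_lo, b_hi):
    """Right endpoints of the eigenvalue intervals via Proposition 3.
    Assumes the normalized form of Section 2 (nonnegative entries)."""
    a_lo, a_hi = np.asarray(a_lo, float), np.asarray(a_hi, float)
    b_lo, b_hi = np.asarray(b_lo, float), np.asarray(b_hi, float)
    Ac = tridiag((a_lo + a_hi) / 2, (b_lo + b_hi) / 2)   # midpoint matrix A^c
    _, V = np.linalg.eigh(Ac)
    ub = []
    for k in range(len(a_lo)):
        x = V[:, k]                      # eigenvector of the k-th eigenvalue of A^c
        agree = x[:-1] * x[1:] >= 0      # sign agreement of consecutive entries
        b = np.where(agree, b_hi, b_lo)  # agreement -> upper bound, else lower
        ub.append(np.linalg.eigvalsh(tridiag(a_hi, b))[k])  # diagonal at upper bounds
    return np.array(ub)

# degenerate (point) intervals must reproduce the exact eigenvalues
w = upper_endpoints([1, 2, 3], [1, 2, 3], [1, 1], [1, 1])
print(np.allclose(w, np.linalg.eigvalsh(tridiag([1., 2., 3.], [1., 1.]))))  # True
```

For instance, with $a_i \in [-1, 1]$ and $b_1 = 1$ fixed in the $2 \times 2$ case, the routine returns the endpoints $0$ and $2$, both attained at the all-ones matrix.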
As a side effect, we have the following interesting property.
Proposition 4.
$\overline{\lambda}_k$ is attained for $a := \overline{a}$ and $b$ such that the cardinality of
$$\{i \in \{1, \dots, n-1\} : b_i = \overline{b}_i\}$$
is $n - k$.
Proof.
Suppose $\underline{b} > 0$; the general case then follows by a limit transition due to continuity of eigenvalues. Let $A \in A^I$, let $\lambda$ be its $k$th largest eigenvalue and $x$ a corresponding eigenvector, normalized so that $x_1 > 0$. By [16, Thm. 7.9.2], the sign of $x_{i+1}$ is equal to the sign of
$$\frac{p_i(\lambda)}{b_1 \cdots b_i},$$
where $p_i(t) = \det(t I_i - A_{\{1,\dots,i\}})$ is the characteristic polynomial of the (top left) principal leading submatrix of $A$ of size $i$, and $p_0 \equiv 1$. Since $b > 0$, the signs of $x_{i+1}$ and $p_i(\lambda)$ coincide. The number of sign agreements between consecutive terms in the Sturm sequence $p_0(\lambda), p_1(\lambda), \dots$ gives the number of roots of $p_n$ which are less than $\lambda$, that is, $n - k$. A sign agreement between consecutive terms in the Sturm sequence corresponds to a sign agreement between consecutive entries of the eigenvector $x$, which in turn sets $b_i$ to be $\overline{b}_i$ by Proposition 3. Therefore, by the analysis of our method, $n - k$ is equal to the number of $b_i$, $i = 1, \dots, n-1$, that we set to the right endpoint. ∎
Time complexity of our algorithm is as follows. We need one computation of the eigenvalues and eigenvectors of the midpoint matrix $A^c$, and then $n$ computations of a certain eigenvalue of a matrix in $A^I$. The preprocessing carrying the matrix to the nonnegative form (Section 2) requires only linear time. Provided we employ a standard method for computing the eigenvalues of a real symmetric tridiagonal matrix running in $O(n^2)$ time, the overall complexity is $O(n^3)$.
4 General case
As a simple corollary of Proposition 4 we get that $\overline{\lambda}_1$ is attained for $A = \overline{A}$ (the case $k = 1$) and $\overline{\lambda}_n$ is attained for $a = \overline{a}$, $b = \underline{b}$ (the case $k = n$). This property, however, holds in the general case and no sign invariancy assumption is needed. By other means, this was observed by Jian [12].
Proposition 5.
$\overline{\lambda}_1$ and $\underline{\lambda}_n$ are attained for $b = \overline{b}$ (with $a = \overline{a}$ and $a = \underline{a}$, respectively), and $\overline{\lambda}_n$ and $\underline{\lambda}_1$ are attained for $b = \underline{b}$ (with $a = \overline{a}$ and $a = \underline{a}$, respectively).
Proof.
It follows from Proposition 4 with $k = 1$ and $k = n$, and from the reduction $A^I \mapsto -A^I$ for the lower endpoints. ∎
Each of the quantities $\overline{\lambda}_1$, $\underline{\lambda}_1$, $\overline{\lambda}_n$ and $\underline{\lambda}_n$ is computable just by solving one real eigenvalue problem. As a consequence, we have a method for testing the following properties of a symmetric tridiagonal interval matrix $A^I$, because they reduce to computation of eigenvalues of one or two real symmetric tridiagonal matrices:

- positive (semi)definiteness, i.e., whether each $A \in A^I$ is positive (semi)definite; one has to check $\underline{\lambda}_n > 0$ or $\underline{\lambda}_n \geq 0$, respectively;
- Schur or Hurwitz stability, i.e., whether each $A \in A^I$ is stable; for Schur stability one has to check $\overline{\lambda}_1 < 1$ and $\underline{\lambda}_n > -1$, and for Hurwitz stability $\overline{\lambda}_1 < 0$;
- the spectral radius, i.e., the largest spectral radius over $A \in A^I$; it has the value of $\max\{\overline{\lambda}_1, -\underline{\lambda}_n\}$.
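The checks above can be sketched in a few lines of NumPy (a sketch under the assumptions of this section: the matrix is in the normalized form with $b \geq 0$, and the corner matrices $(\overline{a}, \overline{b})$ and $(\underline{a}, \overline{b})$ yield the two extremal eigenvalues; the function names are ours):

```python
import numpy as np

def tridiag(a, b):
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

def spectral_bounds(a_lo, a_hi, b_hi):
    """Upper bound of the largest and lower bound of the smallest eigenvalue
    over A^I, via two corner matrices (normalized form, b >= 0 assumed)."""
    lam1 = np.linalg.eigvalsh(tridiag(a_hi, b_hi))[-1]  # max of lambda_1
    lamn = np.linalg.eigvalsh(tridiag(a_lo, b_hi))[0]   # min of lambda_n
    return lam1, lamn

# a 2x2 example: a_i in [2, 3], b_1 in [0.5, 1]
lam1, lamn = spectral_bounds([2., 2.], [3., 3.], [1.])
is_positive_definite = lamn > 0      # every A in A^I is positive definite
is_hurwitz_stable = lam1 < 0         # fails here, since lam1 > 0
spectral_radius = max(lam1, -lamn)   # largest spectral radius over A^I
```

In this example the minimal eigenvalue over the interval matrix is $1$ (attained at the matrix with diagonal $2$ and off-diagonal $1$), so positive definiteness holds, while Hurwitz stability clearly fails.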
5 Checking sign invariancy
Recall Theorem 7.9.3 from Parlett [16] stated in an adapted formulation.
Theorem 1.
If $A$ is a tridiagonal matrix with $b_i \neq 0$ for all $i$, then there is no eigenvector $x$ of $A$ such that $x_1 = 0$ or $x_n = 0$.
We can utilize this theorem also for the intermediate entries of eigenvectors.
Proposition 6.
If there is $A \in A^I$ with $b_j \neq 0$ for all $j$, and $x_i = 0$ for some eigenvector $x$ of $A$, then $x_{i-1} \neq 0$ and $x_{i+1} \neq 0$.
Proof.
From $x_i = 0$ we have that $(x_1, \dots, x_{i-1})^T$ is an eigenvector of the principal leading submatrix of $A$ of size $i-1$, and therefore $x_{i-1} \neq 0$ by Theorem 1. Similarly for $x_{i+1}$. ∎
The following observation is a basis for the method recognizing sign invariancy. We will denote by $A_S$ the principal submatrix of $A$ indexed by the set $S$.
Proposition 7.
Suppose that $0 \notin b_i^I$ for all $i$. Then the problem is not sign invariant if and only if there is $A \in A^I$ and an index $i$ such that both matrices $A_{\{1,\dots,i-1\}}$ and $A_{\{i+1,\dots,n\}}$ share a common eigenvalue.
Proof.
Since $0 \notin b_i^I$ for all $i$, by Proposition 2 the eigenvalues of all $A \in A^I$ are simple, and therefore the corresponding eigenvectors can be chosen in such a way that they constitute continuous mappings with respect to $A \in A^I$. Thus the problem is not sign invariant if and only if there is an eigenvector with a zero entry.
Let $\lambda$ be the eigenvalue corresponding to an eigenvector $x$ with $x_i = 0$. Then both matrices $A_{\{1,\dots,i-1\}}$ and $A_{\{i+1,\dots,n\}}$ have the common eigenvalue $\lambda$, and the corresponding eigenvectors are $(x_1, \dots, x_{i-1})^T$ and $(x_{i+1}, \dots, x_n)^T$, respectively.
On the other hand, let $A_{\{1,\dots,i-1\}}$ and $A_{\{i+1,\dots,n\}}$ have a common eigenvalue $\lambda$ corresponding to eigenvectors $y$ and $z$, respectively. Then $y_{i-1} \neq 0 \neq z_1$ by Theorem 1, and therefore $\lambda$ is the eigenvalue of $A$ corresponding to the eigenvector $(y^T, 0, \alpha z^T)^T$ for a suitable $\alpha \neq 0$. ∎
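The equivalence between a vanishing eigenvector entry and a common eigenvalue of the complementary blocks can be illustrated on a small point matrix (a sketch; the matrix is our own toy example):

```python
import numpy as np

# The matrix below has eigenvalues -sqrt(2), 0, sqrt(2); the eigenvector of
# lambda = 0 is proportional to (1, 0, -1), i.e., its middle entry vanishes.
# Accordingly, the complementary principal blocks A_{1..1} = [0] and
# A_{3..3} = [0] share the common eigenvalue 0, as in the proof above.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
w, V = np.linalg.eigh(A)        # ascending eigenvalues
k = int(np.argmin(np.abs(w)))   # index of the eigenvalue closest to 0
x = V[:, k]
middle_entry_is_zero = abs(x[1]) < 1e-8
```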
The method for checking sign invariancy
For any $S \subseteq \{2, \dots, n-1\}$ do the following. The index set $S$ represents the zero entries of an eigenvector. Consider the interval principal submatrices $A^I_{S_1}, \dots, A^I_{S_m}$ associated with $S$, where $S_1, \dots, S_m$ are the maximal blocks of consecutive indices in $\{1, \dots, n\} \setminus S$. Compute their inner estimation eigenvalue intervals by Remark 1. If there is a value $\lambda$ common to these inner intervals, then the problem is not sign invariant by Proposition 7.
If the test passes successfully through every $S$, then the problem is sign invariant. The reason is the following. Let $x$ be an eigenvector of some $A \in A^I$ with the most zero entries, and let $S$ be the index set of these zero entries. Then the problem becomes sign invariant on the principal submatrices $A^I_{S_1}, \dots, A^I_{S_m}$, and therefore we must find a common eigenvalue.
Notice that not all of the $2^{n-2}$ index sets are necessary to process. By Theorem 1 and Proposition 6, only the index sets containing no two consecutive indices need be considered. What is the number of such sets? Denote it by $F_n$. We easily find a Fibonacci-type recurrence relation $F_n = F_{n-1} + F_{n-2}$, since either $n-1 \notin S$, or $n-1 \in S$ and then $n-2 \notin S$. Therefore $F_n$ asymptotically grows as $\left(\frac{1+\sqrt{5}}{2}\right)^n \approx 1.618^n$, which is still exponential, but significantly less than $2^n$.
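The count of admissible index sets can be verified by brute force (a sketch; the helper name is ours, and the empty set, meaning no zero entry at all, is included for convenience):

```python
from itertools import combinations

def admissible_sets(n):
    """Candidate zero-entry index sets: by Theorem 1 and Proposition 6 they
    avoid indices 1 and n and contain no two consecutive indices (1-based)."""
    sets = []
    for r in range(0, n):
        for S in combinations(range(2, n), r):
            if all(j - i > 1 for i, j in zip(S, S[1:])):
                sets.append(S)
    return sets

counts = [len(admissible_sets(n)) for n in range(3, 10)]
print(counts)  # [2, 3, 5, 8, 13, 21, 34] -- Fibonacci-type growth
```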
The following gives a sufficient condition for sign invariancy. Denote by $\lambda_i^O$ any superset of the eigenvalue set $\lambda_i(A^I)$, that is,
$$\lambda_i(A^I) \subseteq \lambda_i^O, \qquad i = 1, \dots, n.$$
Methods for computing such outer estimations were addressed, e.g., in [9, 11, 14, 15, 17].
Proposition 8.
The problem is sign invariant if $0 \notin b_i^I$ for all $i$ and the outer estimations are mutually disjoint, that is, $\lambda_i^O \cap \lambda_j^O = \emptyset$ for every $i \neq j$.
Proof.
It follows from Proposition 7. ∎
6 Special case of disjoint eigenvalue sets
An interval matrix $A^I$ is called regular if every $A \in A^I$ is nonsingular; see [21]. Bar-On et al. [2, 3] showed that checking regularity of a tridiagonal interval matrix is a polynomial problem. Their algorithm works analogously even if we restrict ourselves to the symmetric matrices in $A^I$, so regularity of a symmetric tridiagonal interval matrix can be checked efficiently. As a direct consequence, checking whether a given $\lambda \in \mathbb{R}$ is an eigenvalue of at least one $A \in A^I$ is a polynomially solvable problem, too; it amounts to checking regularity of $A^I - \lambda I_n$.
We use this observation for computing the corresponding eigenvalue sets in the case the eigenvalue sets are mutually disjoint. Compute the inner estimation of the eigenvalue sets by Remark 1, yielding intervals $[\mu_i, \nu_i] \subseteq \lambda_i(A^I)$. If these intervals are mutually disjoint, then for each $i$ check by the above observation whether $\mu_i - \epsilon$ or $\nu_i + \epsilon$ belongs to the eigenvalue set for a sufficiently small $\epsilon > 0$. (To avoid numerical difficulties, one can consider $\epsilon$ as a parameter.) If it is not the case, then empty pairwise intersection of the eigenvalue sets is confirmed and the inner estimation is exact. Eventually, we have $\lambda_i(A^I) = [\mu_i, \nu_i]$ for every $i$.
7 Examples
Example 1.
Consider the example from [9, 11, 15, 17]:
First, we transform the matrix into a nonnegative one
The eigenvalues of the midpoint matrix are , , , , and the corresponding eigenvectors are
Based on the signs of the entries of these vectors we can directly conclude that is attained for , and similarly , , are attained as the corresponding eigenvalues of the matrices
respectively. Similarly we proceed for calculating the lower endpoints of the eigenvalue sets. Eventually, we obtain the following exact eigenvalue sets (by using outward rounding)
8 Conclusion
We presented a simple and fast algorithm for computing the eigenvalue ranges of symmetric tridiagonal interval matrices. Imprecision of measurements and other kinds of uncertainty are often represented in the form of intervals. Therefore, checking various kinds of stability of uncertain systems naturally leads to the problem of determining eigenvalues of interval matrices. In this short note, we improved the time complexity and the overall exposition of the known methods for the symmetric tridiagonal matrix case.
Acknowledgments
The author was supported by the Czech Science Foundation Grant P402/13-10660S.
References
 [1] H.-S. Ahn, K. L. Moore, and Y. Chen. Monotonic convergent iterative learning controller design based on interval model conversion. IEEE Trans. Autom. Control, 51(2):366–371, 2006.
 [2] I. Bar-On. Checking nonsingularity of tridiagonal matrices. Electron. J. Linear Algebra, 6:11–19, 2000.
 [3] I. Bar-On, B. Codenotti, and M. Leoncini. Checking robust nonsingularity of tridiagonal matrices in linear time. BIT, 36(2):206–220, 1996.
 [4] J. C. Commerçon. Eigenvalues of tridiagonal symmetric interval matrices. IEEE Trans. Autom. Control, 39(2):377–379, 1994.
 [5] A. Deif and J. Rohn. On the invariance of the sign pattern of matrix eigenvectors under perturbation. Linear Algebra Appl., 196:63–70, 1994.
 [6] A. S. Deif. The interval eigenvalue problem. ZAMM, Z. Angew. Math. Mech., 71(1):61–64, 1991.
 [7] D. Hertz. The extreme eigenvalues and stability of real symmetric interval matrices. IEEE Trans. Autom. Control, 37(4):532–535, 1992.
 [8] M. Hladík. Complexity issues for the symmetric interval eigenvalue problem. Open Math., 13(1):157–164, 2015.
 [9] M. Hladík, D. Daney, and E. Tsigaridas. Bounds on real eigenvalues and singular values of interval matrices. SIAM J. Matrix Anal. Appl., 31(4):2116–2129, 2010.
 [10] M. Hladík, D. Daney, and E. P. Tsigaridas. Characterizing and approximating eigenvalue sets of symmetric interval matrices. Comput. Math. Appl., 62(8):3152–3163, 2011.
 [11] M. Hladík, D. Daney, and E. P. Tsigaridas. A filtering method for the interval eigenvalue problem. Appl. Math. Comput., 217(12):5236–5242, 2011.
 [12] Y. Jian. Extremal eigenvalue intervals of symmetric tridiagonal interval matrices. Numer. Linear Algebra Appl., 24(2):e2083, 2017.
 [13] L. V. Kolev. Determining the positive definiteness margin of interval matrices. Reliab. Comput., 13(6):445–466, 2007.
 [14] L. V. Kolev. Eigenvalue range determination for interval and parametric matrices. Int. J. Circuit Theory Appl., 38(10):1027–1061, 2010.
 [15] H. Leng. Real eigenvalue bounds of standard and generalized real interval eigenvalue problems. Appl. Math. Comput., 232:164–171, 2014.
 [16] B. N. Parlett. The symmetric eigenvalue problem. SIAM, Philadelphia, unabridged, corrected republication of 1980 edition, 1998.
 [17] Z. Qiu, S. Chen, and I. Elishakoff. Bounds of eigenvalues for structures with an interval description of uncertainbutnonrandom parameters. Chaos Soliton. Fract., 7(3):425–434, 1996.
 [18] J. Rohn. Interval matrices: Singularity and real eigenvalues. SIAM J. Matrix Anal. Appl., 14(1):82–91, 1993.
 [19] J. Rohn. Checking positive definiteness or stability of symmetric interval matrices is NP-hard. Commentat. Math. Univ. Carol., 35(4):795–797, 1994.
 [20] J. Rohn. An algorithm for checking stability of symmetric interval matrices. IEEE Trans. Autom. Control, 41(1):133–136, 1996.
 [21] J. Rohn. Forty necessary and sufficient conditions for regularity of interval matrices: A survey. Electron. J. Linear Algebra, 18:500–512, 2009.