# An Overview of Polynomially Computable Characteristics of Special Interval Matrices

It is well known that many problems in interval computation are intractable, which restricts our attempts to solve large problems in reasonable time. This does not mean, however, that all problems are computationally hard. Identifying polynomially solvable classes is thus an important current trend. The purpose of this paper is to review some of these classes. In particular, we focus on several special interval matrices and investigate their convenient properties. We consider tridiagonal matrices, M-matrices, H-matrices, P-matrices, B-matrices, inverse M-matrices, inverse nonnegative matrices, nonnegative matrices, totally positive matrices and some others. We focus in particular on computing the range of the determinant, eigenvalues, singular values, and selected norms. Whenever possible, we also state formulae for determining the inverse matrix and the hull of the solution set of an interval system of linear equations. We survey not only the known facts, but we present some new views as well.


## 1 Introduction

Many problems in interval computation are computationally hard; see the complexity-theoretic surveys in [23, 18]. Nevertheless, matrices arising in practical problems are not random, but satisfy some special properties and have specific structures. Utilizing such particularities is often very convenient and can make tractable those problems that are hard in general. In this paper, we review such special matrices and their easily computable characteristics.

### General notation

For a symmetric matrix $A\in\mathbb{R}^{n\times n}$, we denote its eigenvalues as $\lambda_1(A)\geq\dots\geq\lambda_n(A)$. For any matrix $A$, we use $\rho(A)$ for the spectral radius, and $\sigma_{\min}(A)$ and $\sigma_{\max}(A)$ for the smallest and the largest singular value, respectively. Further, $\operatorname{diag}(v)$ stands for the diagonal matrix with entries $v_1,\dots,v_n$, the symbol $I_n$ is the identity matrix of size $n$, and $e$ stands for the all-ones vector of convenient dimension. The $i$th row and the $j$th column of a matrix $A$ are denoted by $A_{i*}$ and $A_{*j}$, respectively. Throughout the text, inequalities between vectors and matrices as well as the absolute values and min/max functions are understood entrywise.

The regularity radius [23, 31] of a nonsingular matrix $A\in\mathbb{R}^{n\times n}$ is the distance to the nearest singular matrix in the Chebyshev norm (componentwise maximum norm), denoted

$$r(A) := \min\{\delta\geq 0;\ \exists\ \text{singular } B\in\mathbb{R}^{n\times n}: |a_{ij}-b_{ij}|\leq\delta\ \forall i,j\}.$$

This value can be expressed as $r(A) = \|A^{-1}\|_{\infty,1}^{-1}$, where

$$\|M\|_{\infty,1} := \max_{\|x\|_\infty = 1}\|Mx\|_1$$

is the matrix norm induced by the vector $\infty$- and $1$-norms. Computing this norm is, however, an NP-hard problem on the set of symmetric rational M-matrices [11, 35]. The best known approximation is by means of semidefinite programming [14].
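For small matrices, the norm can be evaluated by enumerating sign vectors: the map $x\mapsto\|Mx\|_1$ is convex, so its maximum over the unit $\infty$-norm ball $[-1,1]^n$ is attained at a $\pm 1$ vertex. A minimal brute-force sketch (the helper name is ours; the exponential loop reflects exactly why the problem is NP-hard in general):

```python
import itertools

import numpy as np

def norm_inf_1(M):
    """Brute-force ||M||_{inf,1}: maximize the convex function
    x -> ||M x||_1 over the vertices {-1, 1}^n of the unit inf-ball."""
    n = M.shape[1]
    return max(np.abs(M @ np.array(x)).sum()
               for x in itertools.product((-1.0, 1.0), repeat=n))
```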

### Interval notation

An interval matrix is defined as

$$\mathbf{A} := \{A\in\mathbb{R}^{m\times n};\ \underline{A}\leq A\leq\overline{A}\},$$

where $\underline{A}$ and $\overline{A}$, $\underline{A}\leq\overline{A}$, are given matrices. The corresponding midpoint and radius matrices are defined respectively as

$$A_c := \tfrac{1}{2}(\underline{A}+\overline{A}),\qquad A_\Delta := \tfrac{1}{2}(\overline{A}-\underline{A}).$$

The set of all interval matrices is denoted by $\mathbb{IR}^{m\times n}$, and intervals and interval vectors are considered as special cases of interval matrices. For interval arithmetic, we refer the reader, e.g., to Neumaier [27]. Given $\mathbf{A}\in\mathbb{IR}^{n\times n}$ with $A_c$ and $A_\Delta$ symmetric, we denote by $\mathbf{A}^S := \{A\in\mathbf{A};\ A = A^T\}$ the corresponding symmetric interval matrix.
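The two representations of an interval matrix (by endpoints, or by midpoint and radius) convert into each other by the formulas above; a minimal sketch (helper names are ours, chosen for illustration):

```python
import numpy as np

def mid_rad(A_lo, A_hi):
    """Midpoint A_c and radius A_Delta of the interval matrix [A_lo, A_hi]."""
    A_lo, A_hi = np.asarray(A_lo, float), np.asarray(A_hi, float)
    return (A_lo + A_hi) / 2.0, (A_hi - A_lo) / 2.0

def endpoints(A_c, A_rad):
    """Inverse conversion: lower and upper endpoint matrices."""
    return A_c - A_rad, A_c + A_rad
```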

For a bounded set $S\subseteq\mathbb{R}^n$, the interval hull $\square S$ is the smallest enclosing interval vector, or more formally, $\square S := \bigcap\{\mathbf{v}\in\mathbb{IR}^n;\ S\subseteq\mathbf{v}\}$.

Consider an interval system of linear equations $\mathbf{A}x = \mathbf{b}$, where $\mathbf{A}\in\mathbb{IR}^{n\times n}$ and $\mathbf{b}\in\mathbb{IR}^n$. Its solution set is traditionally defined as the union of all solutions of realizations of the interval coefficients, that is,

$$\Sigma := \{x\in\mathbb{R}^n;\ \exists A\in\mathbf{A},\ \exists b\in\mathbf{b}: Ax = b\}.$$

Consider any matrix property $\mathcal{P}$. We say that an interval matrix $\mathbf{A}$ satisfies $\mathcal{P}$ if every $A\in\mathbf{A}$ satisfies $\mathcal{P}$. This applies in particular to regularity (every $A\in\mathbf{A}$ is nonsingular), positive definiteness, the M-matrix property, nonnegativity and others. Recall that checking whether an interval matrix is regular is a co-NP-hard problem [11, 23, 31].

For a real function $f$ and an interval matrix $\mathbf{A}$, the image of the interval matrix under the function is

$$f(\mathbf{A}) = \{f(A);\ A\in\mathbf{A}\}.$$

In general, $f(\mathbf{A})$ needn't be an interval, but it is the case provided $f$ is continuous. Thus, for instance, $\det(\mathbf{A})$ gives the range of the determinant of $\mathbf{A}$, and $\lambda_1(\mathbf{A}^S)$ gives the range of the largest eigenvalue of the symmetric interval matrix $\mathbf{A}^S$.

## 2 Tridiagonal matrices

Tridiagonal interval matrices have particularly nice properties, and some NP-hard problems become polynomial in this class. Let $\mathbf{A}\in\mathbb{IR}^{n\times n}$ be a tridiagonal interval matrix, that is, $\mathbf{a}_{ij} = 0$ for $|i-j|>1$. Checking regularity of $\mathbf{A}$ can be performed in linear time (Bar-On et al. [4]). However, there are still some open problems. Are the following tasks polynomially solvable?

• computing the exact range for the determinant,

• computing a tight enclosure of the solution set of an interval linear system $\mathbf{A}x = \mathbf{b}$,

• computing the eigenvalue sets of a symmetric tridiagonal interval matrix,

• computing .

## 3 M-matrices and H-matrices

Interval M-matrices and H-matrices are particularly convenient in the context of solving interval linear equations [27]. Recall that $A\in\mathbb{R}^{n\times n}$ is an M-matrix if $a_{ij}\leq 0$ for every $i\neq j$ and $A^{-1}\geq 0$. The condition $A^{-1}\geq 0$ can be equivalently formulated as any of the following conditions:

• all real eigenvalues of $A$ are positive,

• the real parts of all eigenvalues of $A$ are positive,

• there is $v>0$ such that $Av>0$.

Due to the statement below, interval M-matrices constitute an easily verifiable class of regular interval matrices.

###### Theorem 1.

An interval matrix $\mathbf{A}$ is an M-matrix if and only if $\underline{A}$ is an M-matrix and $\overline{a}_{ij}\leq 0$ for all $i\neq j$.
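Theorem 1 reduces the interval test to a single real M-matrix test plus a sign check of the upper bounds. A minimal sketch (function names are ours; the real test uses the inverse-nonnegativity characterization):

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Real M-matrix test: nonpositive off-diagonal entries and A^{-1} >= 0."""
    A = np.asarray(A, float)
    off = A - np.diag(np.diag(A))
    if (off > tol).any():
        return False
    try:
        return bool((np.linalg.inv(A) >= -tol).all())
    except np.linalg.LinAlgError:
        return False

def interval_is_m_matrix(A_lo, A_hi, tol=1e-12):
    """Theorem 1: [A_lo, A_hi] is an M-matrix iff the lower endpoint A_lo
    is an M-matrix and every off-diagonal upper bound is nonpositive."""
    A_hi = np.asarray(A_hi, float)
    off_hi = A_hi - np.diag(np.diag(A_hi))
    return is_m_matrix(A_lo, tol) and bool((off_hi <= tol).all())
```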

A matrix $A\in\mathbb{R}^{n\times n}$ is called an H-matrix if the so-called comparison matrix $\langle A\rangle$ is an M-matrix, where $\langle A\rangle_{ii} = |a_{ii}|$ and $\langle A\rangle_{ij} = -|a_{ij}|$ for $i\neq j$. Special subclasses of H-matrices were discussed, e.g., in Cvetković et al. [9].

Also interval H-matrices are easy to characterize; see Neumaier [27]. We have that $\mathbf{A}$ is an H-matrix if and only if $\langle\mathbf{A}\rangle$ is an M-matrix, where the notion of the comparison matrix is extended to interval matrices as follows:

$$\langle\mathbf{A}\rangle_{ii} = \operatorname{mig}(\mathbf{a}_{ii}) = \min\{|a|;\ a\in\mathbf{a}_{ii}\},\qquad \langle\mathbf{A}\rangle_{ij} = -\operatorname{mag}(\mathbf{a}_{ij}) = -\max\{|a|;\ a\in\mathbf{a}_{ij}\},\quad i\neq j.$$
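The interval comparison matrix is straightforward to compute entrywise; the H-matrix test then amounts to an M-matrix test on the result. A sketch (the helper name is ours):

```python
import numpy as np

def interval_comparison_matrix(A_lo, A_hi):
    """Comparison matrix <A> of the interval matrix [A_lo, A_hi]:
    mig of the diagonal intervals, minus mag of the off-diagonal ones."""
    A_lo, A_hi = np.asarray(A_lo, float), np.asarray(A_hi, float)
    mag = np.maximum(np.abs(A_lo), np.abs(A_hi))
    # mignitude: distance of the interval from zero (0 if it contains zero)
    mig = np.where((A_lo <= 0) & (A_hi >= 0), 0.0,
                   np.minimum(np.abs(A_lo), np.abs(A_hi)))
    C = -mag
    np.fill_diagonal(C, np.diag(mig))
    return C
```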

Each diagonally dominant matrix is an H-matrix. We therefore do not investigate diagonally dominant matrices separately, since what we show for H-matrices holds for diagonally dominant matrices as well.

Each M-matrix is also an H-matrix, so the following results apply to both classes. By Alefeld [1], for an H-matrix $\mathbf{A}$, the interval Gaussian elimination can be carried out without any pivoting and does not fail. Moreover, for any H-matrix $\mathbf{A}$ we always find an interval LU decomposition; that is, there are lower and upper triangular interval matrices $\mathbf{L}$ and $\mathbf{U}$ such that the diagonal of $\mathbf{L}$ consists of ones and $\mathbf{A}\subseteq\mathbf{L}\mathbf{U}$.

Provided that $\mathbf{A}$ is an M-matrix and one of $\mathbf{b}\geq 0$, $\mathbf{b}\leq 0$, or $0\in\mathbf{b}$ holds true, the interval Gaussian elimination yields the interval hull of the solution set, i.e., $\square\Sigma$; see [5, 6] and Section 4 for a more general result. For a general H-matrix this needn't be true; however, for any H-matrix $\mathbf{A}$, the interval hull of the solution set is polynomially computable by the so-called Hansen–Bliek–Rohn–Ning–Kearfott method; see, e.g., [11, 29, 28].

A link between regularity and H-matrix property was given by Neumaier [27, Prop. 4.1.7].

###### Theorem 2.

Let $A_c$ be an M-matrix. Then $\mathbf{A}$ is regular if and only if it is an H-matrix.

Notice that the assumption cannot be weakened to the assumption that $A_c$ is an H-matrix. For example, the interval matrix

$$\mathbf{A} = \begin{pmatrix} [0,10] & 1 \\ -1 & 10 \end{pmatrix}$$

is regular and its midpoint is an H-matrix. However, $\mathbf{A}$ itself is not an H-matrix, failing for the realization in which the top left entry vanishes.

As a consequence, we have a result related to positive definiteness. Checking positive definiteness of interval matrices is co-NP-hard [23, 33], so polynomially recognizable subclasses are of interest.

###### Theorem 3.

Let $\mathbf{A}$ be an H-matrix with $A_c$ positive definite. Then $\mathbf{A}$ is positive definite.

###### Proof.

By [23, 34], positive definiteness of $A_c$ and regularity of $\mathbf{A}$ imply positive definiteness of $\mathbf{A}$. ∎

###### Theorem 4.

Let $A_c$ be a (symmetric) positive definite M-matrix. Then $\mathbf{A}$ is positive definite if and only if it is an H-matrix.

###### Proof.

By [23, 34], under the assumption of positive definiteness of $A_c$, we have that $\mathbf{A}$ is positive definite if and only if it is regular, which is equivalent to the H-matrix property by Theorem 2. ∎

###### Theorem 5.

Let $\mathbf{A}$ be an M-matrix. Then $\det(\mathbf{A}) = [\det(\underline{A}),\ \det(\overline{A})]$.

###### Proof.

The derivative of the determinant is $\frac{\partial}{\partial a_{ij}}\det(A) = \det(A)\,(A^{-1})_{ji}$. For an M-matrix both the determinant and the inverse are nonnegative, so the determinant is a nondecreasing function in each component. ∎

Since each M-matrix is inverse nonnegative, Theorems 9 and 10 from Section 4 below are valid also for interval M-matrices.

## 4 Inverse nonnegative matrices

Besides the generalization to H-matrices, M-matrices can also be extended to inverse nonnegative matrices, that is, matrices $A$ such that $A^{-1}\geq 0$. Interval inverse nonnegativity is still easy to characterize just by a reduction to the two endpoint matrices $\underline{A}$ and $\overline{A}$; see Kuttler [24].

###### Theorem 6.

An interval matrix $\mathbf{A}$ is inverse nonnegative if and only if $\underline{A}^{-1}\geq 0$ and $\overline{A}^{-1}\geq 0$.
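Theorem 6 makes interval inverse nonnegativity a two-matrix test. A minimal sketch (function names are ours, for illustration):

```python
import numpy as np

def inverse_nonnegative(A, tol=1e-12):
    """True if A is nonsingular with A^{-1} >= 0."""
    try:
        return bool((np.linalg.inv(np.asarray(A, float)) >= -tol).all())
    except np.linalg.LinAlgError:
        return False

def interval_inverse_nonnegative(A_lo, A_hi, tol=1e-12):
    """Theorem 6 (Kuttler): the whole interval matrix [A_lo, A_hi] is
    inverse nonnegative iff the two endpoint matrices are."""
    return inverse_nonnegative(A_lo, tol) and inverse_nonnegative(A_hi, tol)
```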

For inverse nonnegative matrices we can easily determine the range of their inverses. The theorem below says that $\{A^{-1};\ A\in\mathbf{A}\}\subseteq[\overline{A}^{-1},\ \underline{A}^{-1}]$.

###### Theorem 7.

If $\mathbf{A}$ is inverse nonnegative, then $\overline{A}^{-1}\leq A^{-1}\leq\underline{A}^{-1}$ for every $A\in\mathbf{A}$.

When an interval matrix $\mathbf{A}$ is inverse nonnegative, interval systems $\mathbf{A}x = \mathbf{b}$ are efficiently solvable. The interval hull of the solution set reads:

• $\square\Sigma = [\overline{A}^{-1}\underline{b},\ \underline{A}^{-1}\overline{b}]$ when $\underline{b}\geq 0$,

• $\square\Sigma = [\underline{A}^{-1}\underline{b},\ \overline{A}^{-1}\overline{b}]$ when $\overline{b}\leq 0$,

• $\square\Sigma = [\underline{A}^{-1}\underline{b},\ \underline{A}^{-1}\overline{b}]$ when $\underline{b}\leq 0\leq\overline{b}$.

In the other cases, $\square\Sigma$ is still polynomially computable, but it has no such explicit formulation; see Neumaier [27].

For symmetric inverse nonnegative matrices we also have a simple formula for the smallest eigenvalue. Notice that an analogous formula for the largest eigenvalue is not valid in general.

###### Theorem 8.

Let $\mathbf{A}$ be inverse nonnegative with both $\underline{A}$ and $\overline{A}$ symmetric. Then $\lambda_n(\mathbf{A}^S) = [\lambda_n(\underline{A}),\ \lambda_n(\overline{A})]$.

###### Proof.

Let $A\in\mathbf{A}^S$. Then by the Perron theorem and the theory of nonnegative matrices, $\lambda_n(A) = 1/\rho(A^{-1})\geq 1/\rho(\underline{A}^{-1}) = \lambda_n(\underline{A})$, and similarly for the upper bound. ∎

Analogously, we obtain:

###### Theorem 9.

If $\mathbf{A}$ is inverse nonnegative, then $\sigma_{\min}(\mathbf{A}) = [\sigma_{\min}(\underline{A}),\ \sigma_{\min}(\overline{A})]$.

###### Theorem 10.

If $\mathbf{A}$ is inverse nonnegative, then $\det(\mathbf{A}) = [\min\{\det(\underline{A}),\det(\overline{A})\},\ \max\{\det(\underline{A}),\det(\overline{A})\}]$.

###### Proof.

Analogously to the proof of Theorem 5, we use that the derivative of the determinant is $\det(A)\,(A^{-1})_{ji}$. The determinant has a constant sign on $\mathbf{A}$, and $A^{-1}\geq 0$, so the minimal and maximal determinants are attained for $\underline{A}$ or $\overline{A}$. ∎

The above theorem can simply be extended to inverse sign stable matrices, which are those interval matrices for which each entry of $A^{-1}$ keeps a constant sign over all $A\in\mathbf{A}$; see Rohn and Farhadsefat [36]. The signs of these entries say whether the determinant is nonincreasing or nondecreasing in the corresponding entries of $A$. Therefore, each endpoint of $\det(\mathbf{A})$ is attained for a vertex matrix whose $(i,j)$th entry is $\underline{a}_{ij}$ or $\overline{a}_{ij}$ depending on these signs.

For the regularity radius, we have:

###### Theorem 11.

If $\mathbf{A}$ is inverse nonnegative, then $r(\mathbf{A}) = [r(\underline{A}),\ r(\overline{A})]$.

###### Proof.

Let $A\in\mathbf{A}$. By Theorem 7, $\|A^{-1}\|_{\infty,1} = e^TA^{-1}e\leq e^T\underline{A}^{-1}e = \|\underline{A}^{-1}\|_{\infty,1}$, whence $r(A)\geq r(\underline{A})$; similarly for the bound from the other side. ∎

## 5 Totally positive matrices

A matrix is totally positive if the determinants of all its square submatrices are positive. Despite the definition involving exponentially many minors, checking this property is a polynomial problem; see Fallat and Johnson [10].

Let $\mathbf{A}\in\mathbb{IR}^{n\times n}$. First we show a correspondence between total positivity and inverse nonnegativity. Denote $s := (1,-1,1,-1,\dots)^T$ of a convenient length.

###### Theorem 12.

If $A$ is totally positive, then $\operatorname{diag}(s)\,A\,\operatorname{diag}(s)$ is inverse nonnegative.

###### Proof.

The inverse of $A$ can be expressed as $A^{-1} = \frac{1}{\det(A)}\operatorname{adj}(A)$, where the entries of the adjugate matrix are defined as $\operatorname{adj}(A)_{ij} = (-1)^{i+j}\det(A_{ji})$, and $A_{ji}$ arises from $A$ by removing the $j$th row and the $i$th column. Thus the signs of the entries of $A^{-1}$ follow the checkerboard order, and therefore $\operatorname{diag}(s)\,A\,\operatorname{diag}(s)$ is inverse nonnegative. ∎

From the above theorem, we can easily derive many useful properties of totally positive interval matrices based on the results presented in Section 4.

Total positivity of an interval matrix can also be verified in polynomial time, just by reducing the problem to two vertex matrices defined by the checkerboard order. Define $\downarrow\! A$ and $\uparrow\! A$ as follows:

$$\downarrow\! A := A_c - \operatorname{diag}(s)\,A_\Delta\,\operatorname{diag}(s),\qquad \uparrow\! A := A_c + \operatorname{diag}(s)\,A_\Delta\,\operatorname{diag}(s).$$

In relation to Theorem 12, these matrices can also be expressed as $\downarrow\! A = \operatorname{diag}(s)\,\underline{B}\,\operatorname{diag}(s)$ and $\uparrow\! A = \operatorname{diag}(s)\,\overline{B}\,\operatorname{diag}(s)$, where $\mathbf{B} := \operatorname{diag}(s)\,\mathbf{A}\,\operatorname{diag}(s)$.

Then we have all the ingredients to state the result by Garloff [12]:

###### Theorem 13.

$\mathbf{A}$ is totally positive if and only if $\downarrow\! A$ and $\uparrow\! A$ are totally positive.
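The two vertex matrices of Theorem 13 are easy to construct: with the alternating sign vector $s$, the matrix $A_c - \operatorname{diag}(s)A_\Delta\operatorname{diag}(s)$ takes the lower endpoint on entries with $i+j$ even and the upper endpoint elsewhere, and the second matrix swaps the roles. A sketch for square matrices (the helper name is ours):

```python
import numpy as np

def checkerboard_vertices(A_lo, A_hi):
    """The two vertex matrices of an interval matrix defined by the
    checkerboard order, as used in Garloff's total positivity test."""
    A_lo, A_hi = np.asarray(A_lo, float), np.asarray(A_hi, float)
    Ac, Ar = (A_lo + A_hi) / 2.0, (A_hi - A_lo) / 2.0
    s = np.array([(-1.0) ** i for i in range(A_lo.shape[0])])
    S = np.diag(s)
    return Ac - S @ Ar @ S, Ac + S @ Ar @ S
```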

As consequences, we obtain the following properties.

###### Corollary 1.

If $\mathbf{A}$ is totally positive, then $\sigma_{\min}(\mathbf{A}) = [\sigma_{\min}(\downarrow\! A),\ \sigma_{\min}(\uparrow\! A)]$ and $\sigma_{\max}(\mathbf{A}) = [\sigma_{\max}(\underline{A}),\ \sigma_{\max}(\overline{A})]$.

###### Proof.

The formula for $\sigma_{\min}(\mathbf{A})$ follows from Theorems 9 and 12. The formula for $\sigma_{\max}(\mathbf{A})$ will be shown in Theorem 21 under weaker assumptions; notice that here $\mathbf{A}$ is componentwise nonnegative. ∎

###### Corollary 2.

If $\mathbf{A}$ is totally positive, then $\det(\mathbf{A}) = [\det(\downarrow\! A),\ \det(\uparrow\! A)]$.

###### Proof.

It follows from Theorems 10 and 12. ∎

###### Corollary 3.

If $\mathbf{A}$ is totally positive, then $r(\mathbf{A}) = [r(\downarrow\! A),\ r(\uparrow\! A)]$.

###### Proof.

It follows from Theorems 11 and 12. ∎

Totally positive matrices have distinct positive eigenvalues, the properties of which enable us to compute the eigenvalue ranges of interval matrices.

###### Theorem 14.

If $\mathbf{A}$ is totally positive, then $\lambda_n(\mathbf{A}) = [\lambda_n(\downarrow\! A),\ \lambda_n(\uparrow\! A)]$ and $\lambda_1(\mathbf{A}) = [\lambda_1(\underline{A}),\ \lambda_1(\overline{A})]$.

###### Proof.

Let $A\in\mathbf{A}$ and let $x$ and $y$ be the right and left eigenvectors of $A$ corresponding to the smallest eigenvalue, normalized such that $y^Tx = 1$. By Fallat & Johnson [10], the signs of both vectors $x$ and $y$ alternate, so we can assume that both have the sign vector $s$ defined above, that is, $\operatorname{sgn}(x) = \operatorname{sgn}(y) = s$. The derivative of $\lambda_n(A)$ with respect to $a_{ij}$ is $y_ix_j$, so the maximum is attained for $\uparrow\! A$, and similarly for the minimum.

The second formula follows from the Perron theory of eigenvalues of nonnegative matrices. For each $A\in\mathbf{A}$ we have $\lambda_1(A) = \rho(A)\leq\rho(\overline{A}) = \lambda_1(\overline{A})$, and similarly for the lower bound. ∎

Even more, we can easily compute the eigenvalue sets $\lambda_i(\mathbf{A})$ for any other $i$. By Fallat & Johnson [10], the signs of both left and right eigenvectors corresponding to $\lambda_i(A)$ are constant for every $A\in\mathbf{A}$ (eigenvalues of principal submatrices strictly interlace the eigenvalues of $A$, so no eigenvector has a zero entry). Therefore, we can proceed as follows. Let $x$ and $y$, $y^Tx = 1$, be the right and left eigenvectors corresponding to $\lambda_i(A_c)$. Then $\lambda_i(\mathbf{A}) = [\lambda_i(A^1),\ \lambda_i(A^2)]$, where $A^1$ and $A^2$ are defined as

$$A^1 = A_c - \operatorname{diag}(\operatorname{sgn}(x))\,A_\Delta\,\operatorname{diag}(\operatorname{sgn}(y)),\qquad A^2 = A_c + \operatorname{diag}(\operatorname{sgn}(x))\,A_\Delta\,\operatorname{diag}(\operatorname{sgn}(y)).$$

Consider now an interval system $\mathbf{A}x = \mathbf{b}$ with $\mathbf{A}$ totally positive. Denote by $\preceq$ the checkerboard order, that is, $u\preceq v$ iff $\operatorname{diag}(s)\,u\leq\operatorname{diag}(s)\,v$. Eventually, the interval vector with endpoints induced by the checkerboard order is defined as

$$[v^1, v^2]_* := \operatorname{diag}(s)\,[\operatorname{diag}(s)\,v^1,\ \operatorname{diag}(s)\,v^2].$$

Then the interval hull of the solution set is obtained by applying the formulas of Section 4 to the transformed system $(\operatorname{diag}(s)\,\mathbf{A}\,\operatorname{diag}(s))\,x' = \operatorname{diag}(s)\,\mathbf{b}$, which is inverse nonnegative by Theorems 12 and 13, and transforming back via $x = \operatorname{diag}(s)\,x'$; the three cases are now distinguished by the position of $\mathbf{b}$ with respect to the checkerboard order $\preceq$.

For an extension of totally positive matrices to the so-called sign regular matrices with a prescribed signature, see Garloff et al. [13].

Notice that totally positive matrices are componentwise nonnegative, so all results from Section 8 are valid for totally positive matrices, too.

## 6 P-matrices

A square real matrix is a P-matrix if all its principal minors are positive. The problem of checking whether a given matrix is a P-matrix is co-NP-hard [8, 23]. Fortunately, there are several effectively recognizable subclasses of P-matrices, such as positive definite matrices, totally positive matrices, (inverse) M-matrices, or more generally H-matrices with positive diagonal entries. By Białas and Garloff [7], an interval matrix $\mathbf{A}$ is a P-matrix if and only if $A_c - \operatorname{diag}(z)\,A_\Delta\,\operatorname{diag}(z)$ is a P-matrix for each $z\in\{\pm 1\}^n$.

Positive definiteness is easily verifiable for real matrices, but for interval ones it is co-NP-hard [23, 33], so positive definite interval matrices do not constitute a polynomial subclass of interval P-matrices. On the other hand, totally positive matrices, M-matrices and H-matrices with positive diagonal are such subclasses, as we already observed above. The following result shows that as long as the midpoint matrix of an interval P-matrix is an M-matrix, the interval matrix itself must be an H-matrix.

###### Theorem 15.

Let $A_c$ be an M-matrix. Then $\mathbf{A}$ is a P-matrix if and only if it is an H-matrix.

###### Proof.

“If.” This is obvious; notice that every matrix in $\mathbf{A}$ must have a positive diagonal.

“Only if.” Since $A_c$ is an M-matrix and $\mathbf{A}$ is regular (being a P-matrix), the interval matrix $\mathbf{A}$ must be an H-matrix in view of Theorem 2. ∎

In Hladík [16], it was shown that an interval matrix $\mathbf{A}$ with $A_c$ or $A_\Delta$ diagonal is a P-matrix if and only if a single associated vertex matrix is a P-matrix. This reduces the problem to just one case, which is however still hard to check in general.

Let us mention one more polynomially decidable subclass of interval P-matrices. A matrix $A\in\mathbb{R}^{n\times n}$ is a B-matrix if

$$\sum_{j=1}^n a_{ij} > 0\quad\text{and}\quad \frac{1}{n}\sum_{j=1}^n a_{ij} > a_{ik}\quad \forall i\neq k.$$

Any B-matrix is a P-matrix; see Peña [30]. For an interval matrix $\mathbf{A}$, the B-matrix property is easily checked by adapting the above characterization.

###### Theorem 16.

$\mathbf{A}$ is a B-matrix if and only if

$$\sum_{j=1}^n \underline{a}_{ij} > 0\quad\text{and}\quad \sum_{j\neq k}\underline{a}_{ij} > (n-1)\,\overline{a}_{ik}\quad \forall i\neq k.$$
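The conditions of Theorem 16 check the worst-case realizations directly and are verifiable in $O(n^2)$ time. A minimal sketch (the function name is ours):

```python
import numpy as np

def interval_is_b_matrix(A_lo, A_hi):
    """Theorem 16 test on the interval matrix [A_lo, A_hi]: worst-case
    realizations of the two B-matrix conditions."""
    A_lo, A_hi = np.asarray(A_lo, float), np.asarray(A_hi, float)
    n = A_lo.shape[0]
    if not (A_lo.sum(axis=1) > 0).all():
        return False           # sum_j a_ij > 0 fails for some realization
    for i in range(n):
        for k in range(n):
            if k != i and not (A_lo[i].sum() - A_lo[i, k]
                               > (n - 1) * A_hi[i, k]):
                return False   # (1/n) sum_j a_ij > a_ik fails
    return True
```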

## 7 Diagonally interval matrices

We say that an interval matrix $\mathbf{A}$ is diagonally interval if $A_\Delta$ is diagonal. These matrices are still intractable from many viewpoints. As shown in Rump [37], checking the P-matrix property, which is co-NP-hard, can be reduced to checking regularity of an interval matrix with $A_\Delta = I$. Therefore, checking regularity of a diagonally interval matrix is co-NP-hard as well. Similarly, many problems related to solving interval linear equations remain hard.

On the other hand, regularity turns out to be tractable as long as $A_c$ is symmetric. Moreover, we can effectively determine all eigenvalues of $\mathbf{A}^S$. The following theorem extends a result from Hladík [15].

###### Theorem 17.

Let $\mathbf{A}$ be diagonally interval with $A_c$ symmetric. Then $\lambda_i(\mathbf{A}^S) = [\lambda_i(\underline{A}),\ \lambda_i(\overline{A})]$ for every $i = 1,\dots,n$.

###### Proof.

By the Courant–Fischer theorem we have for every $i$

$$\lambda_i(A) = \max_{S:\dim(S)=i}\ \min_{x\in S,\,\|x\|=1} x^TAx \leq \max_{S:\dim(S)=i}\ \min_{x\in S,\,\|x\|=1} x^T\overline{A}x = \lambda_i(\overline{A}),$$

and similarly for the lower bound. ∎
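Theorem 17 thus reduces the eigenvalue ranges to two ordinary symmetric eigenvalue problems. A minimal sketch (the function name is ours; NumPy's `eigvalsh` returns eigenvalues in ascending order, so the $i$th pair below bounds the $i$th smallest eigenvalue):

```python
import numpy as np

def eigenvalue_ranges(A_lo, A_hi):
    """For a diagonally interval matrix with symmetric midpoint, the i-th
    smallest eigenvalue ranges exactly over [lambda_i(A_lo), lambda_i(A_hi)]."""
    lo = np.linalg.eigvalsh(np.asarray(A_lo, float))   # ascending order
    hi = np.linalg.eigvalsh(np.asarray(A_hi, float))
    return list(zip(lo, hi))
```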

As a simple consequence, we have:

###### Corollary 4.

Let $\mathbf{A}$ be diagonally interval with $A_c$ symmetric. Then $\mathbf{A}$ is regular if and only if $0\notin[\lambda_i(\underline{A}),\ \lambda_i(\overline{A})]$ for all $i = 1,\dots,n$.

Since the upper bounds of the eigenvalue intervals are all attained for the same matrix $\overline{A}$, and analogously the lower bounds for $\underline{A}$, we get as a consequence a simple formula for the range of the determinant, provided $\underline{A}$ is positive semidefinite. This is not the case for a general diagonally interval matrix.

###### Corollary 5.

Let $\mathbf{A}$ be diagonally interval with $A_c$ symmetric and $\underline{A}$ positive semidefinite. Then $\det(\mathbf{A}) = [\det(\underline{A}),\ \det(\overline{A})]$.

In Kosheleva et al. [22], it was shown that computing the cube of an interval matrix is an NP-hard problem. Here, we show that it is a polynomial problem provided $\mathbf{A}$ is diagonally interval. The cube is naturally defined as $\mathbf{A}^3 := \{A^3;\ A\in\mathbf{A}\}$. It needn't be an interval matrix, so practically the problem is to determine the interval hull $\square\mathbf{A}^3$.

We will compute the cube entrywise. Choose indices $i,j$ and suppose that $i\neq j$; the case $i = j$ is dealt with analogously. Then the problem is to determine the range of $(A^3)_{ij}$ subject to $a_{kk}\in\mathbf{a}_{kk}$, $k = 1,\dots,n$. This function is linear in $a_{kk}$ for $k\neq i,j$, so we can fix the values of these parameters at the lower or upper bounds, depending on the signs of the corresponding coefficients. Thus the function reduces to a quadratic function in the variables $a_{ii}$ and $a_{jj}$ only. This can be resolved by brute force or by utilizing optimality criteria from mathematical programming; notice that we minimize/maximize a quadratic function on a two-dimensional rectangle.

Therefore, we have:

###### Theorem 18.

Computing $\square\mathbf{A}^3$ is a polynomial problem for $\mathbf{A}$ diagonally interval.

## 8 Nonnegative matrices

For a (componentwise) nonnegative matrix $A$, the Perron theory says that its spectral radius is attained as an eigenvalue. Let $\mathbf{A}\in\mathbb{IR}^{n\times n}$. Obviously, it is nonnegative if and only if $\underline{A}\geq 0$. In some situations, however, it is not necessary to assume that all matrices in $\mathbf{A}$ are nonnegative; it is sufficient to assume that $A_c\geq 0$. First, we consider the spectral radius.

###### Theorem 19.

We have:

1. If $A_c\geq 0$, then $\max_{A\in\mathbf{A}}\rho(A) = \rho(\overline{A})$.

2. If $\mathbf{A}$ is nonnegative, then $\rho(\mathbf{A}) = [\rho(\underline{A}),\ \rho(\overline{A})]$.

###### Proof.

For every $A\in\mathbf{A}$ we have $|A|\leq\overline{A}$, whence $\rho(A)\leq\rho(|A|)\leq\rho(\overline{A})$. If in addition $\underline{A}\geq 0$, then $\rho(\underline{A})\leq\rho(A)$ for every $A\in\mathbf{A}$. ∎
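For a nonnegative interval matrix, the spectral radius range therefore requires just two eigenvalue computations; a minimal sketch (the helper name is ours):

```python
import numpy as np

def spectral_radius_range(A_lo, A_hi):
    """For A_lo >= 0, the spectral radius over the interval matrix
    [A_lo, A_hi] fills exactly [rho(A_lo), rho(A_hi)]."""
    rho = lambda M: float(max(abs(np.linalg.eigvals(np.asarray(M, float)))))
    return rho(A_lo), rho(A_hi)
```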

Analogously, we obtain:

###### Theorem 20.

We have:

1. If $A_c\geq 0$, then $\max_{A\in\mathbf{A}^S}\lambda_1(A) = \lambda_1(\overline{A})$.

2. If $\mathbf{A}$ is nonnegative, then $\lambda_1(\mathbf{A}^S) = [\lambda_1(\underline{A}),\ \lambda_1(\overline{A})]$.

###### Theorem 21.

If $\mathbf{A}$ is nonnegative, then $\sigma_{\max}(\mathbf{A}) = [\sigma_{\max}(\underline{A}),\ \sigma_{\max}(\overline{A})]$.

Recall that a matrix norm is monotone if $|A|\leq|B|$ implies $\|A\|\leq\|B\|$. This is satisfied for most commonly used norms; for instance, the induced $1$-norm and $\infty$-norm, the $\|\cdot\|_{\infty,1}$ norm, the Frobenius norm, and the Chebyshev norm are monotone.

###### Theorem 22.

For every monotone matrix norm we have

1. If $A_c\geq 0$, then $\max_{A\in\mathbf{A}}\|A\| = \|\overline{A}\|$.

2. If $\mathbf{A}$ is nonnegative, then $\{\|A\|;\ A\in\mathbf{A}\} = [\|\underline{A}\|,\ \|\overline{A}\|]$.

###### Proof.

For every $A\in\mathbf{A}$, we have $|A|\leq\overline{A}$, and therefore $\|A\|\leq\|\overline{A}\|$. If in addition $\underline{A}\geq 0$, then $\|\underline{A}\|\leq\|A\|$ for every $A\in\mathbf{A}$. ∎

Nonnegative matrices are also convenient for computing high powers. Recall that by definition, $\mathbf{A}^k := \{A^k;\ A\in\mathbf{A}\}$. Notice that not every matrix in $[\underline{A}^k,\ \overline{A}^k]$ is achieved as the $k$th power of some $A\in\mathbf{A}$, so $\mathbf{A}^k$ is not an interval matrix.

###### Theorem 23.

If $\mathbf{A}$ is nonnegative, then $\square\mathbf{A}^k = [\underline{A}^k,\ \overline{A}^k]$.

###### Proof.

Obviously, for every $A\in\mathbf{A}$, we have $\underline{A}^k\leq A^k\leq\overline{A}^k$. ∎
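Theorem 23 gives the hull of the powers by two ordinary matrix powers; a minimal sketch (the helper name is ours):

```python
import numpy as np

def power_hull(A_lo, A_hi, k):
    """For A_lo >= 0, the interval hull of {A**k : A in [A_lo, A_hi]}
    is [A_lo**k, A_hi**k], entrywise."""
    return (np.linalg.matrix_power(np.asarray(A_lo, float), k),
            np.linalg.matrix_power(np.asarray(A_hi, float), k))
```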

## 9 Inverse M-matrices

A matrix $A$ is an inverse M-matrix if $A$ is nonsingular and $A^{-1}$ is an M-matrix. This represents another easily recognizable subclass of P-matrices. Recall that a vertex matrix of $\mathbf{A}$ is a matrix $A$ such that $a_{ij}\in\{\underline{a}_{ij}, \overline{a}_{ij}\}$ for all $i,j$. Johnson and Smith [20, 21] showed that $\mathbf{A}$ is an inverse M-matrix if and only if all vertex matrices are. This reduces the problem to real matrices, albeit exponentially many of them. Neither a polynomial reduction is known, nor has NP-hardness been proved, so the computational complexity of checking whether an interval matrix is an inverse M-matrix is an open problem. It is also worth mentioning the result by Poljak and Rohn [31, 11], who showed that checking regularity of an interval matrix $\mathbf{A}$ is co-NP-hard even when $A_c$ is a symmetric inverse M-matrix.

Since an inverse M-matrix is nonnegative, all results from Section 8 are valid in this context, too.

For the componentwise range of inverse matrices, we have the following observation reducing the problem to real matrices.

###### Theorem 24.

If $\mathbf{A}$ is an inverse M-matrix, then

$$\min_{A\in\mathbf{A}} A^{-1} = \min\{(A_c + \operatorname{diag}(z^i)\,A_\Delta\,\operatorname{diag}(z^j))^{-1};\ i,j = 1,\dots,n\},$$

$$\max_{A\in\mathbf{A}} A^{-1} = \max\{(A_c - \operatorname{diag}(z^i)\,A_\Delta\,\operatorname{diag}(z^j))^{-1};\ i,j = 1,\dots,n\},$$

where the minimum and maximum are understood componentwise and $z^i$ has $-1$ in the $i$th entry and $1$ elsewhere.

###### Proof.

The derivative of the inverse is $\frac{\partial}{\partial a_{ij}}(A^{-1})_{kl} = -(A^{-1})_{ki}(A^{-1})_{jl}$, or in matrix form, $\frac{\partial}{\partial a_{ij}}A^{-1} = -A^{-1}_{*i}A^{-1}_{j*}$. It has constant signs, so the minimum value of $(A^{-1})_{kl}$ is attained for the matrix $A_c + \operatorname{diag}(z^k)\,A_\Delta\,\operatorname{diag}(z^l)$, and analogously for the maximum. ∎

This characterization leads us to the open problem:

###### Conjecture 1.

$\mathbf{A}$ is an inverse M-matrix if and only if $A_c \pm \operatorname{diag}(z^i)\,A_\Delta\,\operatorname{diag}(z^j)$, $i,j = 1,\dots,n$, are inverse M-matrices.

It is also an open question whether interval systems of linear equations $\mathbf{A}x = \mathbf{b}$ can be solved efficiently provided $\mathbf{A}$ is an inverse M-matrix. Anyway, we can state a partial result concerning the interval hull of the solution set.

###### Theorem 25.

If $\mathbf{A}$ is an inverse M-matrix, then $\min_{x\in\Sigma} x_i$ is attained for $b = b_c + \operatorname{diag}(z^i)\,b_\Delta$, and $\max_{x\in\Sigma} x_i$ is attained for $b = b_c - \operatorname{diag}(z^i)\,b_\Delta$.

###### Proof.

Let $A\in\mathbf{A}$, $b\in\mathbf{b}$ and $x = A^{-1}b$. Then

$$x_i = A^{-1}_{i*}b = \sum_{j=1}^n (A^{-1})_{ij}b_j \geq (A^{-1})_{ii}\underline{b}_i + \sum_{j\neq i}(A^{-1})_{ij}\overline{b}_j = A^{-1}_{i*}\bigl(b_c + \operatorname{diag}(z^i)\,b_\Delta\bigr).$$

Similarly for the upper bound. ∎
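The sign pattern argument in the proof can be illustrated for a point inverse M-matrix $A$ and an interval right-hand side: row $i$ of $A^{-1}$ has a positive diagonal entry and nonpositive off-diagonal entries, so $x_i$ is minimized at $b_i = \underline{b}_i$, $b_j = \overline{b}_j$ ($j\neq i$), and maximized at the opposite choice. A sketch under these assumptions (the helper name is ours):

```python
import numpy as np

def solution_hull(A, b_lo, b_hi):
    """Hull of {A^{-1} b : b in [b_lo, b_hi]} for a point inverse
    M-matrix A, exploiting the sign pattern of the rows of A^{-1}."""
    Ainv = np.linalg.inv(np.asarray(A, float))
    b_lo, b_hi = np.asarray(b_lo, float), np.asarray(b_hi, float)
    n = Ainv.shape[0]
    x_lo, x_hi = np.empty(n), np.empty(n)
    for i in range(n):
        b_min, b_max = b_hi.copy(), b_lo.copy()
        b_min[i], b_max[i] = b_lo[i], b_hi[i]
        x_lo[i], x_hi[i] = Ainv[i] @ b_min, Ainv[i] @ b_max
    return x_lo, x_hi
```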

###### Theorem 26.

If $\mathbf{A}$ is an inverse M-matrix, then $\det(\mathbf{A}) = [\det(A^1),\ \det(A^2)]$, where

$$A^1_{ij} = \begin{cases}\underline{a}_{ij} & \text{if } i = j,\\ \overline{a}_{ij} & \text{if } i\neq j,\end{cases}\qquad A^2_{ij} = \begin{cases}\overline{a}_{ij} & \text{if } i = j,\\ \underline{a}_{ij} & \text{if } i\neq j.\end{cases}$$
###### Proof.

Similar to the proof of Theorem 5. The derivative of the determinant is $\det(A)\,(A^{-1})_{ji}$. The determinant itself is positive, the diagonal of $A^{-1}$ is positive, and its off-diagonal entries are nonpositive. ∎

## 10 Parametric matrices

A parametric matrix extends the notion of an interval matrix to a broader class of matrices. A linear parametric matrix is a set of matrices

$$A(p) = \sum_{k=1}^K A^{(k)}p_k,$$

where $A^{(1)},\dots,A^{(K)}$ are fixed matrices and $p_1,\dots,p_K$ are parameters varying respectively in intervals $\mathbf{p}_1,\dots,\mathbf{p}_K$. In short, we will denote it as $A(\mathbf{p})$.

Since many problems are intractable already for standard interval matrices, handling parametric matrices is a much more difficult task. On the other hand, there are several tractable cases, which we are concerned with now.

###### Theorem 27.

is positive definite if and only if is positive definite for each such that .

This reduces the problem to checking positive definiteness of $2^K$ real matrices. Provided $K$ is fixed, we arrive at a polynomial method for checking positive definiteness of $A(\mathbf{p})$.

Consider now a parametric system of linear equations

$$A(p)x = b(p),\qquad p\in\mathbf{p},$$

where $b(p) = \sum_{k=1}^K b^{(k)}p_k$ is a linear parametric right-hand side vector. The corresponding solution set is defined as

$$\Sigma_p := \{x\in\mathbb{R}^n;\ \exists p\in\mathbf{p}: A(p)x = b(p)\}.$$

In contrast to ordinary interval linear systems, characterizing this solution set is a tough problem [2, 3, 25] even for some particular linear systems. Nevertheless, there are some easy-to-handle situations. By Mohsenizadeh et al. [26], under a rank-one assumption, we have a reduction to real systems, which is tractable for a fixed number of parameters.

###### Theorem 28.

If $\operatorname{rank}(A^{(k)})\leq 1$ for every $k = 1,\dots,K$, and there are no cross dependencies between the constraint matrix and the right-hand side (i.e., for each $k$, either $A^{(k)} = 0$ or $b^{(k)} = 0$), then the extremal values of the solutions over $\Sigma_p$ are attained for $p_k\in\{\underline{p}_k, \overline{p}_k\}$, $k = 1,\dots,K$.

Another reduction to real linear systems can be performed based on a result by Popova [32].

###### Theorem 29.

If each parameter is involved in one equation only, then $\Sigma_p$ is described by

$$|A(p_c)x - b(p_c)| \leq \sum_{k=1}^K p^\Delta_k\,|A^{(k)}x - b^{(k)}|.$$

Let $z\in\{\pm 1\}^K$ and consider the restriction of $\Sigma_p$ to the set described by $z_k(A^{(k)}x - b^{(k)})\geq 0$, $k = 1,\dots,K$. This restricted set has the simplified description

$$A(p_c)x - b(p_c) \leq \sum_{k=1}^K p^\Delta_k z_k\,(A^{(k)}x - b^{(k)}),\qquad -A(p_c)x + b(p_c) \leq \sum_{k=1}^K p^\Delta_k z_k\,(A^{(k)}x - b^{(k)}),\qquad z_k(A^{(k)}x - b^{(k)}) \geq 0,\ \ k = 1,\dots,K.$$

This is a system of linear inequalities, which is efficiently processed via linear programming. Again, we get a reduction to $2^K$ subproblems, which is a polynomial case provided $K$ is fixed.

## Conclusion

In this paper, we briefly surveyed interval versions of selected special types of matrices and their useful properties. In particular, we highlighted the properties and characteristics that are efficiently computable even in the interval context. We were motivated by the fact that matrices appearing in applications are not general, but usually have some special structure. Utilizing this special form may in turn radically reduce the computational complexity of problems involving these matrices.

## References

•  Alefeld, G.: Über die Durchführbarkeit des Gaußschen Algorithmus bei Gleichungen mit Intervallen als Koeffizienten. Comput. Suppl. 1, 15–19 (1977)
•  Alefeld, G., Kreinovich, V., Mayer, G.: On the shape of the symmetric, persymmetric, and skew-symmetric solution set. SIAM J. Matrix Anal. Appl. 18(3), 693–705 (1997)
•  Alefeld, G., Kreinovich, V., Mayer, G.: On the solution sets of particular classes of linear interval systems. J. Comput. Appl. Math. 152(1-2), 1–15 (2003)
•  Bar-On, I., Codenotti, B., Leoncini, M.: Checking robust nonsingularity of tridiagonal matrices in linear time. BIT 36(2), 206–220 (1996)
•  Barth, W., Nuding, E.: Optimale Lösung von Intervallgleichungssystemen. Comput. 12, 117–125 (1974)
•  Beeck, H.: Zur scharfen Aussenabschätzung der Lösungsmenge bei linearen Intervallgleichungssystemen. ZAMM, Z. Angew. Math. Mech. 54, T208–T209 (1974)
•  Białas, S., Garloff, J.: Intervals of P-matrices and related matrices. Linear Algebra Appl. 58, 33–41 (1984)
•  Coxson, G.E.: The P-matrix problem is co-NP-complete. Math. Program. 64, 173–178 (1994)
•  Cvetković, L., Kostić, V., Rauški, S.: A new subclass of H-matrices. Appl. Math. Comput. 208(1), 206–210 (2009)
•  Fallat, S.M., Johnson, C.R.: Totally Nonnegative Matrices. Princeton University Press, Princeton, NJ (2011)
•  Fiedler, M., Nedoma, J., Ramík, J., Rohn, J., Zimmermann, K.: Linear Optimization Problems with Inexact Data. Springer, New York (2006)
•  Garloff, J.: Criteria for sign regularity of sets of matrices. Linear Algebra Appl. 44, 153–160 (1982)
•  Garloff, J., Adm, M., Titi, J.: A survey of classes of matrices possessing the interval property and related properties. Reliab. Comput. 22, 1–10 (2016)
•  Hartman, D., Hladík, M.: Tight bounds on the radius of nonsingularity. In: M. Nehmeier et al. (ed.) Scientific Computing, Computer Arithmetic, and Validated Numerics: 16th International Symposium, SCAN 2014, Würzburg, Germany, September 21-26, LNCS, vol. 9553, pp. 109–115. Springer (2016)
•  Hladík, M.: Complexity issues for the symmetric interval eigenvalue problem. Open Math. 13(1), 157–164 (2015)
•  Hladík, M.: On relation between P-matrices and regularity of interval matrices. In: N. Bebiano (ed.) Applied and Computational Matrix Analysis, Springer Proceedings in Mathematics & Statistics, vol. 192, pp. 27–35. Springer (2017)
•  Hladík, M.: Positive semidefiniteness and positive definiteness of a linear parametric interval matrix (2018)
•  Horáček, J., Hladík, M., Černý, M.: Interval linear algebra and computational complexity. In: N. Bebiano (ed.) Applied and Computational Matrix Analysis, Springer Proceedings in Mathematics & Statistics, vol. 192, pp. 37–66. Springer (2017)
•  Horn, R.A., Johnson, C.R.: Topics in matrix analysis. Cambridge University Press (1991)
•  Johnson, C.R., Smith, R.L.: Intervals of inverse M-matrices. Reliab. Comput. 8(3), 239–243 (2002)
•  Johnson, C.R., Smith, R.L.: Inverse M-matrices, II. Linear Algebra Appl. 435(5), 953–983 (2011)
•  Kosheleva, O., Kreinovich, V., Mayer, G., Nguyen, H.: Computing the cube of an interval matrix is NP-hard. In: Proceedings of the ACM Symposium on Applied Computing, vol. 2, pp. 1449–1453 (2005)
•  Kreinovich, V., Lakeyev, A., Rohn, J., Kahl, P.: Computational Complexity and Feasibility of Data Processing and Interval Computations. Kluwer, Dordrecht (1998)
•  Kuttler, J.: A fourth-order finite-difference approximation for the fixed membrane eigenproblem. Math. Comput. 25(114), 237–256 (1971)
•  Mayer, G.: Three short descriptions of the symmetric and of the skew-symmetric solution set. Linear Algebra Appl. 475, 73–79 (2015)
•  Mohsenizadeh, D.N., Keel, L.H., Bhattacharyya, S.P.: An extremal result for unknown interval linear systems. IFAC Proceedings Volumes 47(3), 6502–6507 (2014)
•  Neumaier, A.: Interval Methods for Systems of Equations. Cambridge University Press, Cambridge (1990)
•  Neumaier, A.: A simple derivation of the Hansen-Bliek-Rohn-Ning-Kearfott enclosure for linear interval equations. Reliab. Comput. 5(2), 131–136 (1999)
•  Ning, S., Kearfott, R.B.: A comparison of some methods for solving linear interval equations. SIAM J. Numer. Anal. 34(4), 1289–1305 (1997)
•  Peña, J.M.: A class of P-matrices with applications to the localization of the eigenvalues of a real matrix. SIAM J. Matrix Anal. Appl. 22(4), 1027–1037 (2001)
•  Poljak, S., Rohn, J.: Checking robust nonsingularity is NP-hard. Math. Control Signals Syst. 6(1), 1–9 (1993)
•  Popova, E.D.: Explicit characterization of a class of parametric solution sets. Comptes Rendus de L’Academie Bulgare des Sciences 62(10), 1207–1216 (2009)
•  Rohn, J.: Checking positive definiteness or stability of symmetric interval matrices is NP-hard. Commentat. Math. Univ. Carol. 35(4), 795–797 (1994)
•  Rohn, J.: Positive definiteness and stability of interval matrices. SIAM J. Matrix Anal. Appl. 15(1), 175–184 (1994)
•  Rohn, J.: Computing the norm $\|A\|_{\infty,1}$ is NP-hard. Linear Multilinear Algebra 47(3), 195–204 (2000)
•  Rohn, J., Farhadsefat, R.: Inverse interval matrix: A survey. Electron. J. Linear Algebra 22, 704–719 (2011)
•  Rump, S.M.: On P-matrices. Linear Algebra Appl. 363, 237–250 (2003)