Nonsingularity of a matrix is well known to be an important property in many applications of linear algebra. Let us mention the important area of systems stability, studied for example for linear time-invariant dynamical systems with parameters having uncertain values. Within these systems, questions like the distance to instability or the minimum stability degree are solved. This means that, more than in the actual regularity of a matrix, we are interested in the distance to the nearest singular matrix. To express this problem in more realistic terms, let A be a matrix containing results of measurements and subsequent computations. Even putting numerical problems aside, there can be uncertainty in each element of the matrix A. To be able to efficiently account for the variation of the original matrix when determining a distance to a singular one, it is more suitable to introduce fewer parameters. Let us start with a simple case, introducing just one parameter ε ≥ 0 and considering any matrices having their values componentwise between A − εE and A + εE, where E = ee^T and e
is the vector of ones. In the language of interval linear algebra, we can consider the interval matrix [A − εE, A + εE]. An interval matrix is called regular if it consists merely of nonsingular matrices; otherwise, it is called singular (or irregular). The search for the distance to a singular matrix can be rephrased as a search for the minimal ε such that the interval matrix [A − εE, A + εE] becomes singular.
Considerations in the above paragraph give rise to the following definition. Let A be a square real-valued matrix. Then the regularity radius, sometimes called the radius of nonsingularity, can be defined as

d(A) = min{ε ≥ 0 : [A − εE, A + εE] is singular},

where E = ee^T. This definition allows us to produce matrices via elementwise perturbation of the original elements. The interval of such perturbation is the same for all elements. This approach can be too restrictive considering possible applications of the regularity radius. For these reasons, a generalized version of the above definition is often considered. Let Δ be a square real-valued nonnegative matrix. Then the generalized version of the regularity radius is

d(A, Δ) = min{ε ≥ 0 : [A − εΔ, A + εΔ] is singular}.
There exist several useful properties of the regularity radius in the literature that are helpful when the considered matrix can be constructed using an algebraic expression of, in some sense, simpler matrices. Let us mention these properties for the sake of completeness. Let A and B be square real-valued matrices and let Δ be a square real-valued nonnegative matrix. The following assertions are true.
The radius of a sum of matrices cannot exceed the sum of the individual radii, i.e., d(A + B, Δ) ≤ d(A, Δ) + d(B, Δ).
The larger the radius matrix, the smaller the regularity radius, i.e., Δ1 ≤ Δ2 implies d(A, Δ2) ≤ d(A, Δ1).
Multiplication by a constant does not significantly affect the complexity of the computation, i.e., d(αA, βΔ) = (|α|/β) d(A, Δ) for α ≠ 0 and β > 0.
Determining the regularity radius directly from one of its definitions is complicated, even considering the simpler version. For a direct computation, Poljak and Rohn have shown an analytical formula which reads as

d(A, Δ) = 1 / max_{y,z ∈ {±1}^n} ρ0(T_y A^{-1} T_z Δ),
where T_y = diag(y) is the diagonal matrix having y as its diagonal, i.e., (T_y)_ii = y_i and (T_y)_ij = 0 for i ≠ j, and ρ0
is the real spectral radius, providing the maximum of the absolute values of the real eigenvalues of a matrix and equal to 0 if no real eigenvalue exists. This equation has been proven by Poljak and Rohn using one of the equivalent conditions for regularity of an interval matrix. Considering Δ to be the matrix E = ee^T, i.e., a matrix consisting of all ones, and substituting this into the above defined formula results in the following simpler form:

d(A) = 1 / ‖A^{-1}‖_{∞,1},
where ‖·‖_{∞,1} is the matrix norm defined as

‖B‖_{∞,1} = max{‖Bx‖_1 : ‖x‖_∞ = 1} = max_{z ∈ {±1}^n} ‖Bz‖_1.
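For concreteness, this norm and the resulting radius can be evaluated by brute force over all sign vectors; the following sketch (in Python with NumPy, practical only for small dimensions due to the exponential enumeration) illustrates the computation:

```python
import itertools
import numpy as np

def norm_inf1(B):
    """||B||_{inf,1} = max over sign vectors z of ||B z||_1 (brute force)."""
    n = B.shape[1]
    return max(np.abs(B @ np.array(z)).sum()
               for z in itertools.product([-1.0, 1.0], repeat=n))

def regularity_radius(A):
    """d(A) = 1 / ||A^{-1}||_{inf,1} for the unit radius matrix E = ee^T."""
    return 1.0 / norm_inf1(np.linalg.inv(A))

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
print(regularity_radius(A))  # 0.8: ||A^{-1}||_{inf,1} = 1.25, attained at z = (1, -1)
```

The enumeration makes the hardness of the problem tangible: the number of candidate sign vectors doubles with every additional dimension.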
For a general matrix, the computation of the above defined norm has been shown to be an NP-hard problem, and consequently the problem of computing the regularity radius of A is also NP-hard. This result has motivated two commonly studied approaches to handling the computation of the regularity radius. The first approach is to develop various bounds on its value, while the second approach utilizes approximation algorithms.
Considering the first approach, the corresponding bounds are mostly based on the utilization of different norms of the original matrix or variations of its spectral radius. One of the first results providing simple bounds has been proven by Demmel using the Perron–Frobenius theorem and block Gaussian elimination; see these bounds in the following equation
Following this work, Rump has shown that for an interval matrix with a central matrix A
having its singular values ordered as σ1(A) ≥ ⋯ ≥ σn(A), the condition σn(A) > σ1(Δ) implies regularity of the original interval matrix. He has also mentioned another criterion for regularity of an interval matrix that reads ρ(|A^{-1}|Δ) < 1, where ρ stands for the standard spectral radius of a matrix. Following these criteria, several bounds can be produced. Already Rump in the mentioned work has provided two lower bounds based on the above given conditions along with their mutual comparison; see the resulting bounds below:

d(A, Δ) ≥ σn(A)/σ1(Δ) and d(A, Δ) ≥ 1/ρ(|A^{-1}|Δ).
Conditions for regularity of an interval matrix have been later extended by Rohn who, besides the above defined lower bound, provided a new upper bound defined as follows:
Let us note that this upper bound has also been reproven by Rump. In the same work, Rump has shown that for inverse nonnegative matrices, e.g., for M-matrices, the regularity radius is equal to the mentioned lower bound:

d(A, Δ) = 1/ρ(A^{-1}Δ).
Motivated by this result, he has also utilized this lower bound as a tool to tighten the interval for the regularity radius. The tightening is realized via the following equation, determining parametrically the upper and the lower bounds
He has shown a result which proves and extends the conjecture given by Demmel. He has also stated a further conjecture, supported by the property that no upper bound can be better than this function. Finally, it has been shown that this upper bound is sharp up to a constant factor as n → ∞.
As mentioned above, we can use a different approach and utilize approximation algorithms to compute the value of the regularity radius. One such method is due to Kolev. He makes use of interval analysis, namely for the search for the real maximum-magnitude eigenvalue of an associated generalized eigenvalue problem. His solution is an iterative process requiring several conditions on the whole system. Considering computational complexity, this method is not a priori exponential, but it requires some sign invariancy of the corresponding eigenvalue problem.
Another method is represented by the recent result of Hartman and Hladík, which provides a randomized algorithm for computing d(A) based on a semidefinite approximation of the corresponding matrix norm (5). Since such a semidefinite relaxation can be solved with arbitrary a priori given precision, this result provides the following bounds for the matrix norm of an arbitrary matrix B:
where the former is an optimal solution of the semidefinite relaxation of the norm and the latter is the Goemans–Williamson constant characterizing the approximation ratio of their approximation algorithm for MaxCut.
There are some extensions of the basic notion of the regularity radius. One example is represented by an extension of the regularity radius to structured perturbations, pioneered by Kolev and further studied in subsequent works. Another extension, to a radius of (un)solvability of linear systems, was presented by Hladík and Rohn.
1.1 Structure of the paper
The results presented in this paper follow the usual process of analyzing the regularity radius of a given matrix and thus provide suggestions for its effective processing. There are two definitions of the radius of regularity: the simpler one, see equation (1), and the more general one, see equation (2). This work contributes to bounding or computing both types. While the paper is organized according to the expected process of analyzing the regularity radius, results for both types are sometimes mixed. We always indicate which type of generality is considered at the appropriate places of the paper.
One of the first questions when analyzing the regularity radius should concern the utilization of a possibly special type of the corresponding matrix. A special type of matrix might be expected due to a frequent determination by a particular structure of the problem under study; consider the common occurrence of tridiagonal matrices in real-world problems. This area is surprisingly little explored, although some of the results are easy to achieve and strong at the same time. Section 2 presents some observations concerning special types of matrices. The first subsection 2.1 presents a utilization of the algorithm developed by Bar-On, Codenotti and Leoncini that represents a polynomial-time algorithm to compute the regularity radius for tridiagonal matrices. What follows are results providing exact formulas for various special types of matrices, such as totally positive matrices in Section 2.2 and inverse nonnegative matrices in Section 2.3. For another special class, represented by a rank one radius matrix, the regularity radius computation in the general form, see equation (2), can be reduced to the computation in the simpler form, see equation (1), as described in Section 2.4. Moreover, this class also inspired us to design an approximation algorithm, which is described in Section 2.5. This closes Section 2, which concerns special types of matrices.
The following sections assume that we have a general type of matrix, or that we are not aware how exactly our system's structure can be utilized. In these cases we need to apply steps considering general settings. One of the first questions that might be relevant in an application is checking the finiteness of the regularity radius. Section 3 provides arguments showing that testing unboundedness of the regularity radius is a polynomial problem. This might be helpful in situations where we need to exclude extreme cases. Having finiteness decided, one is usually willing to apply one of the simple bounds that is sufficient in the considered application. There are several bounds known in the literature; the commonly used bounds are reviewed above. Providing more freedom of choice, other bounds based on the relationship between the maximum (Chebyshev) norm and the spectral norm are presented in Section 4. Often, there are situations where no bounds are applicable and we need to compute an exact or more accurate value of the regularity radius. As mentioned above, the common methods might involve the application of various approximation algorithms. For various reasons, either numerical or processing ones, we might be forced to adopt different approaches; see the discussion and argumentation along with the definition of a not a priori exponential algorithm by Kolev. For these reasons we also suggest a new method that is not a priori exponential; see Section 5.
2 Special classes of matrices
2.1 Tridiagonal matrices
Considering the class of tridiagonal matrices, we can define the regularity radius as a perturbation of only the non-zero elements of a tridiagonal matrix. Following the notation from the literature, let A be a tridiagonal matrix defined as
We consider only unreduced tridiagonal matrices, i.e., those having all sub- and superdiagonal entries nonzero. Let A be a real tridiagonal matrix, and let Δ be a matrix having the same zero structure as A. The regularity radius is defined in the same way as in the introduction; this time only the nonzero elements are perturbed. Bar-On, Codenotti and Leoncini have shown that regularity of the corresponding interval matrix can be determined in linear time. We can make use of this property to compute also the regularity radius.
Let A be a non-degenerate tridiagonal matrix and Δ a matrix having the same zero structure. The regularity radius d(A, Δ) is computable in polynomial time with arbitrary precision.
Let ℓ and u be lower and upper bounds on d(A, Δ). We can take them from known bounds, for instance. Both of them have polynomial size. Now, we apply standard binary search on the interval given by these bounds. In each of the steps, the actual approximation of the radius in fact determines a tridiagonal interval matrix for which we can use the Bar-On, Codenotti and Leoncini algorithm and test its regularity in linear time. Considering the complexity of the corresponding steps in the binary search, we have the claim. ∎
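The binary search from the proof can be sketched as follows. The linear-time Bar-On, Codenotti and Leoncini regularity test is not reproduced here; instead, a brute-force placeholder based on the classical vertex characterization of regularity (all vertex matrices A − ε T_y Δ T_z have determinants of the same nonzero sign) is used, so this sketch is exponential in the dimension even though the scheme itself is polynomial once a linear-time test is plugged in:

```python
import itertools
import numpy as np

def is_regular(A, Delta, eps):
    """Exponential placeholder test: [A - eps*Delta, A + eps*Delta] is regular
    iff det(A - eps * diag(y) @ Delta @ diag(z)) has the same nonzero sign
    for all sign vectors y, z. A linear-time tridiagonal test would replace this."""
    n = A.shape[0]
    signs = set()
    for y in itertools.product([-1, 1], repeat=n):
        for z in itertools.product([-1, 1], repeat=n):
            d = np.linalg.det(A - eps * np.diag(y) @ Delta @ np.diag(z))
            if d == 0:
                return False
            signs.add(d > 0)
    return len(signs) == 1

def radius_by_bisection(A, Delta, lo, hi, tol=1e-10):
    """Binary search for d(A, Delta) between valid bounds lo <= d <= hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_regular(A, Delta, mid):
            lo = mid  # still regular: the radius is larger
        else:
            hi = mid
    return (lo + hi) / 2

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(radius_by_bisection(A, np.ones((2, 2)), 0.0, 1.0))  # approaches d(A) = 0.5
```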
This proposition provides a polynomial-time algorithm to compute the regularity radius. More precisely, the polynomial algorithm suggested in the above proof runs in time polynomial in the bit size of the input and in the required precision. We can see that its time complexity depends on the size of the matrix entries. Therefore, it is still an open question whether a strongly polynomial algorithm exists. For a number, define an operator
giving the size of its representation, depending on the chosen model of computation, e.g., the number of bits for a Turing machine.
Let A be a tridiagonal matrix. Let r be a rational number and s the size of its corresponding representation. Is the decision problem whether d(A, Δ) ≤ r solvable in time polynomial in the dimension and in s?
Notice that d(A, Δ) cannot be computed exactly in general because it can be an irrational number even for a rational input. Consider for example the matrix and the associated weights defined as
Then it is not hard to determine analytically that d(A, Δ) is irrational.
On the other hand, provided d(A, Δ) has polynomial size, the algorithm described in Proposition 2.1 finds it exactly in polynomial time. Of course, this computational complexity is rather of theoretical nature. For real computation, an algorithm providing an approximation whose accuracy can be traded off against the algorithm's complexity should be designed. This leads us to another open question.
How can one effectively design an approximation algorithm estimating the regularity radius for tridiagonal matrices, based on the Bar-On, Codenotti and Leoncini regularity testing, acquiring a good approximation while maintaining feasible complexity?
2.2 P-matrices and totally positive matrices
A matrix is a P-matrix if all its principal minors are positive. The task of checking the P-matrix property is itself an NP-hard problem. For some subclasses of P-matrices, computation of the regularity radius can be handled in an easier way.
One of the interesting subclasses of P-matrices from the viewpoint of regularity are totally positive matrices. A matrix is totally positive if all its minors are positive. Despite the definition, checking this property is a polynomial problem; see Fallat and Johnson.
If A is totally positive, then sgn((A^{-1})_ij) = (−1)^{i+j}. That is, the signs of the entries of the inverse matrix correspond to the checkerboard structure. Let us denote s = (1, −1, 1, −1, …)^T. Then we have |A^{-1}| = T_s A^{-1} T_s, and therefore the regularity radius of A reads

d(A) = 1 / (e^T |A^{-1}| e).
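The checkerboard structure and the resulting formula can be checked numerically; the sketch below uses the Hilbert matrix, a classical example of a totally positive matrix, and compares the closed form with the brute-force value of the norm:

```python
import itertools
import numpy as np

n = 4
# Hilbert matrix: a classical totally positive matrix.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
Ainv = np.linalg.inv(A)

# Checkerboard sign pattern of the inverse: sign((A^{-1})_{ij}) = (-1)^{i+j}.
checker = np.array([[(-1.0) ** (i + j) for j in range(n)] for i in range(n)])
assert np.all(np.sign(Ainv) == checker)

# Closed form d(A) = 1 / (e^T |A^{-1}| e) versus brute force over sign vectors.
d_closed = 1.0 / np.abs(Ainv).sum()
d_brute = 1.0 / max(np.array(y) @ Ainv @ np.array(z)
                    for y in itertools.product([-1.0, 1.0], repeat=n)
                    for z in itertools.product([-1.0, 1.0], repeat=n))
print(d_closed, d_brute)  # the two values coincide
```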
This is indeed a substantial simplification, reducing the original computation to a simple formula. Conditions determining classes of P-matrices are of great interest considering the regularity radius. We can think about another prominent subclass: nonsingular M-matrices. This set of matrices is a subset of P-matrices as well as of inverse nonnegative matrices. The latter class is again of great interest, see below.
2.3 Inverse nonnegative matrices
A matrix A is inverse nonnegative if A^{-1} ≥ 0. As mentioned above, inverse nonnegative matrices form a superclass of nonsingular M-matrices. For these we know that the lower bounds (7) collapse to equality. We can, however, use the inverse nonnegativity of matrices and compute the radius directly as

d(A, Δ) = 1 / ρ(A^{-1}Δ).
The lower bound (7) is attained exactly as long as

ρ(|A^{-1}|Δ) = max_{y,z ∈ {±1}^n} ρ0(T_y A^{-1} T_z Δ).
This is the case if |A^{-1}| = T_y A^{-1} T_z for some sign vectors y and z; see, for example, the cited literature. There are several classes of matrices satisfying this property. If A is totally positive, then

d(A, Δ) = 1/ρ(|A^{-1}|Δ).
If A is inverse nonnegative (e.g., an M-matrix), then

d(A, Δ) = 1/ρ(A^{-1}Δ).
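A small numerical check of the inverse nonnegative case; the test matrix is an illustrative M-matrix, and the brute-force comparison assumes the all-ones radius matrix:

```python
import itertools
import numpy as np

# An M-matrix (hence inverse nonnegative): diagonally dominant with
# nonpositive off-diagonal entries.
A = np.array([[3.0, -1.0, 0.0],
              [-1.0, 3.0, -1.0],
              [0.0, -1.0, 3.0]])
Delta = np.ones((3, 3))
Ainv = np.linalg.inv(A)
assert np.all(Ainv >= 0)  # inverse nonnegative

# Closed form: d(A, Delta) = 1 / rho(A^{-1} Delta).
d_closed = 1.0 / max(abs(np.linalg.eigvals(Ainv @ Delta)))

# For Delta = ee^T this reduces to 1 / (e^T A^{-1} e), which also equals
# the brute-force value 1 / max_{y,z} y^T A^{-1} z.
d_brute = 1.0 / max(np.array(y) @ Ainv @ np.array(z)
                    for y in itertools.product([-1.0, 1.0], repeat=3)
                    for z in itertools.product([-1.0, 1.0], repeat=3))
print(d_closed, d_brute)
```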
This is again a substantial simplification, motivating also a further search within the class of P-matrices. Along with the mentioned simple cases, this class also contains some hard instances. According to Poljak and Rohn [18, 21], computing the regularity radius is NP-hard even for a nonnegative symmetric positive definite rational matrix. This suggests that the class of P-matrices exhibits a nice dichotomy from the viewpoint of the complexity of regularity radius determination. Considering these results, we can ask the following question.
Find a subclass of P-matrices for which the determination of the regularity radius is as hard as its determination for a general matrix.
2.4 Rank one radius matrix
Suppose that Δ = xy^T for some positive vectors x and y. Then the interval matrix [A − εxy^T, A + εxy^T] is regular if and only if the scaled interval matrix [D_x^{-1} A D_y^{-1} − εE, D_x^{-1} A D_y^{-1} + εE] is regular, where D_x = diag(x) and D_y = diag(y). Hence we can reduce the computation of the general regularity radius, given by equation (2), to the computation of the simpler one, given by equation (1), as follows:

d(A, xy^T) = d(D_x^{-1} A D_y^{-1}).
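The reduction can be verified numerically; in the sketch below, the matrix and the positive vectors x and y are arbitrary illustrative choices, and the general radius is computed by brute force from the Poljak–Rohn formula:

```python
import itertools
import numpy as np

def norm_inf1(B):
    n = B.shape[1]
    return max(np.abs(B @ np.array(z)).sum()
               for z in itertools.product([-1.0, 1.0], repeat=n))

def d_simple(A):
    """d(A) for the all-ones radius matrix: 1 / ||A^{-1}||_{inf,1}."""
    return 1.0 / norm_inf1(np.linalg.inv(A))

def d_general_bruteforce(A, Delta):
    """d(A, Delta) = 1 / max_{y,z} rho0(T_y A^{-1} T_z Delta), brute force."""
    n = A.shape[0]
    Ainv = np.linalg.inv(A)
    best = 0.0
    for y in itertools.product([-1.0, 1.0], repeat=n):
        for z in itertools.product([-1.0, 1.0], repeat=n):
            eigs = np.linalg.eigvals(np.diag(y) @ Ainv @ np.diag(z) @ Delta)
            real = eigs[np.abs(eigs.imag) < 1e-9].real
            if real.size:
                best = max(best, np.abs(real).max())
    return 1.0 / best

A = np.array([[3.0, 1.0], [1.0, 2.0]])
x = np.array([1.0, 2.0])
y = np.array([0.5, 1.0])
Delta = np.outer(x, y)                       # rank one radius matrix
A_scaled = np.diag(1 / x) @ A @ np.diag(1 / y)
print(d_general_bruteforce(A, Delta), d_simple(A_scaled))  # equal values
```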
For a similar result (without a proof) see Rohn.
2.5 Approximation algorithm by rank one radius matrices
In this section we provide the design of an approximation algorithm for the regularity radius d(A, Δ) when Δ is a general radius matrix. Let Δ = UΣV^T be the SVD of Δ with singular values σ1 ≥ σ2 ≥ ⋯ ≥ σn. Define the rank one matrix Δ1 = σ1 u1 v1^T, where u1 and v1 are the corresponding left and right singular vectors. By the construction of the SVD we have ‖Δ − Δ1‖2 = σ2. In some sense, Δ1 is the best rank one approximation of Δ, so d(A, Δ1) approximates d(A, Δ).
To evaluate the quality of this approximation we can use the following procedure. Find the maximal λ1 and the minimal λ2 such that λ1Δ1 ≤ Δ ≤ λ2Δ1. Then a simple application of some basic properties of the regularity radius, such as the property in equation (3), provides

d(A, Δ1)/λ2 ≤ d(A, Δ) ≤ d(A, Δ1)/λ1.
So the quality of the approximation is given by the ratio λ2/λ1.
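A sketch of this procedure; the particular random positive radius matrix is only illustrative, and for a positive Δ the leading singular vectors can be taken nonnegative by Perron–Frobenius theory:

```python
import numpy as np

rng = np.random.default_rng(0)
Delta = rng.uniform(0.5, 1.5, size=(4, 4))   # a positive radius matrix

# Best rank one approximation of Delta from the SVD.
U, s, Vt = np.linalg.svd(Delta)
u, v = np.abs(U[:, 0]), np.abs(Vt[0, :])     # Perron: can be taken nonnegative
Delta1 = s[0] * np.outer(u, v)

# Largest lam_lo and smallest lam_hi with lam_lo*Delta1 <= Delta <= lam_hi*Delta1.
ratio = Delta / Delta1
lam_lo, lam_hi = ratio.min(), ratio.max()
assert np.all(lam_lo * Delta1 <= Delta + 1e-12)
assert np.all(Delta <= lam_hi * Delta1 + 1e-12)

# By monotonicity and scaling, d(A, Delta1)/lam_hi <= d(A, Delta) <= d(A, Delta1)/lam_lo,
# so lam_hi/lam_lo measures the quality of the rank one approximation.
print(lam_hi / lam_lo)
```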
Evaluate the quality of this approximation and compare it to alternative solutions.
Is it possible to extend this approach using a higher-rank approximation of Δ instead of Δ1? Which ranks are the best?
3 Finiteness of the radius
For the regularity radius we have discussed several bounds as well as approximation algorithms. In many cases, especially for bounds, we have assumed that its value is finite. Unfortunately, its value is not necessarily finite, as shown by the following simple example.
Then any matrix in the corresponding interval matrix is nonsingular for every ε ≥ 0 since its determinant is constant. Therefore d(A, Δ) = ∞.
This situation is of course a very special case. We can, however, check unboundedness of the radius in polynomial time; see the following theorem.
Checking whether d(A, Δ) = ∞ is a polynomial problem.
Without loss of generality assume that A is nonsingular, that is, d(A, Δ) > 0. Consider the interval matrix [A − εΔ, A + εΔ]. It is regular iff the system

|Ax| ≤ εΔ|x|

has only the trivial solution x = 0. Let I be the index set of those i for which (Δ|x|)_i = 0. For such an i the ith inequality in the above system reads |Ax|_i ≤ 0, from which (Ax)_i = 0. Denote by V the vector space defined by (Ax)_i = 0, i ∈ I.
If there are such that and for every , then put . If , then put and update accordingly. Repeat this process until no change happens.
We claim that iff .
“” This is obvious since nonsingularity of implies , so has only trivial solution.
“” For every choose such that . We want to show solvability of the system
because its solution also solves for a sufficiently large multiple of . First, we remove redundant constraints in such that there are no satisfying and . Denote the corresponding index set by . Thus, we want to find , , such that the system
is solvable. Infeasibility of this system occurs only if there are some linearly dependent rows in the constraint matrix. Since the first rows indexed by are linearly independent, the only possibility is that some of the remaining rows are linearly dependent on the rows above them. Thus, to achieve solvability of the system, consider s as unknowns, and the linear dependencies give rise to the system
Notice that the solution set of this system is nontrivial and does not lie in any hyperplane of the form : otherwise it would be a consequence of , which is a contradiction. In this case, however, the system has a solution with no zero entry, which completes the proof. ∎
In Example 3.1 we saw a matrix with an infinite regularity radius and a radius matrix with only one nonzero entry. So the natural question arises: How do the number and positioning of the nonzero entries of Δ relate to the finiteness of d(A, Δ)?
Obviously, d(A, Δ) < ∞ provided Δ > 0. Rohn [23, Thm. 83(ii)] proposed a stronger result (without a proof). For the sake of completeness, we present it here with a proof.
Proposition 3.3 (Rohn, 2012).
If some column or some row of Δ is positive, then d(A, Δ) < ∞.
If the ith column of Δ is positive, then
|Ae_i| ≤ εΔ|e_i| for a sufficiently large ε. Thus x = e_i is a solution of the system, meaning that there is a singular matrix in [A − εΔ, A + εΔ].
The case of a positive row is trivial by matrix transposition. ∎
On the other hand, it may happen that d(A, Δ) = ∞ even if Δ has many nonzero entries. Consider the example where A
is the identity matrix, and the radius matrix Δ has ones on certain positions and zeros otherwise. Then d(A, Δ) = ∞ even though Δ has many nonzero entries. We show that this is the maximum possible number of nonzero entries.
The maximal number of nonzero entries of Δ such that d(A, Δ) = ∞ is .
Let A and Δ be such that d(A, Δ) = ∞. We have to show that the number of positive entries in Δ is at most the stated value. Consider the system . At the beginning of the procedure described in the proof of Theorem 3.2, we have at least one index such that , so that . If , there must be an index such that and for every . Since , there can be at most such indices. This holds during the iterations. However, if we already put some index into the set in a previous iteration, the subspace lies in some , and thus the number of novel indices is decreased accordingly. Therefore, the worst case is that at each iteration we add just one index, and for every . The total number of positive entries of Δ then reads . ∎
4 Bounding the regularity radius using spectral norm
Due to the equivalence of matrix norms, we have for the maximum (Chebyshev) and spectral norms of a matrix B

‖B‖_max ≤ ‖B‖_2 ≤ n‖B‖_max.

Since the distance of A to the nearest singular matrix in the spectral norm is equal to the minimum singular value σn(A), we obtain the bounds

σn(A)/n ≤ d(A) ≤ σn(A).
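These two bounds can be computed directly; in the following sketch the matrix is an illustrative example and the exact radius is obtained by the brute-force norm evaluation:

```python
import itertools
import numpy as np

def d_exact(A):
    """d(A) = 1 / ||A^{-1}||_{inf,1}, evaluated by brute force."""
    n = A.shape[1]
    Ainv = np.linalg.inv(A)
    return 1.0 / max(np.abs(Ainv @ np.array(z)).sum()
                     for z in itertools.product([-1.0, 1.0], repeat=n))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
n = A.shape[0]

# sigma_n(A)/n <= d(A) <= sigma_n(A)
lower, upper = sigma_min / n, sigma_min
print(lower, d_exact(A), upper)
```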
Even more, based on the properties of singular values, we propose another upper bound for d(A), which is sometimes attained as the exact value.
A nearest singular matrix to A in the maximum norm has the form A − d(A) zy^T for some sign vectors y, z ∈ {±1}^n.
Let be the maximizers of the formula . Then is singular, so there is such that
From this we derive
whence is singular. Moreover, since , we have . ∎
Let u and v be the left and right singular vectors, respectively, corresponding to σn(A). Then A − σn(A)uv^T is a singular matrix which is nearest to A in the spectral norm. By Lemma 4.1, a nearest matrix to A in the maximum norm has the form A − d(A)zy^T for some sign vectors y, z. Therefore, it is natural to set y = sgn(v) and z = sgn(u), and derive the upper bound, based on the proof of Lemma 4.1,

d(A) ≤ 1/(y^T A^{-1} z), provided y^T A^{-1} z > 0.
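A sketch of this construction; the test matrix is illustrative, and the derived quantity is a valid upper bound whenever the computed inner product is positive:

```python
import itertools
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Ainv = np.linalg.inv(A)
U, s, Vt = np.linalg.svd(A)
u, v = U[:, -1], Vt[-1, :]          # singular vectors for sigma_min(A)

# Sign-vector upper bound: with y = sgn(v) and z = sgn(u), the matrix
# A - (1 / (y^T A^{-1} z)) * z y^T is singular, so d(A) <= 1 / (y^T A^{-1} z).
y, z = np.sign(v), np.sign(u)
upper = 1.0 / (y @ Ainv @ z)

# Exact value for comparison (brute force over sign vectors).
d = 1.0 / max(np.array(p) @ Ainv @ np.array(q)
              for p in itertools.product([-1.0, 1.0], repeat=3)
              for q in itertools.product([-1.0, 1.0], repeat=3))
print(d, upper)  # here the bound is attained exactly
```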
5 Not a priori exponential method
In this section, we describe a not a priori exponential method for computing the regularity radius d(A, Δ). It is based on the Jansson–Rohn algorithm for testing regularity of interval matrices. The advantage is that it is not a priori exponential, meaning that it may take an exponential number of steps, but often it terminates earlier.
Put and consider the interval linear system of equations . The solution set is defined as
Since the solution set is non-empty, it is either a bounded connected polyhedron (in the case of a regular interval matrix), or each topologically connected component of the solution set is unbounded (in the case of a singular one); see Jansson. This is the underlying idea of the Jansson–Rohn algorithm for checking regularity: start within an orthant containing some solution and check whether the solution set in this orthant is bounded. If it is, explore recursively the neighboring orthants intersecting the solution set. Continue until the whole connected component is processed.
Our situation is harder since we have to find the minimal ε such that the interval matrix is singular. We suggest the following method. Notice that an orthant is characterized by a sign vector s ∈ {±1}^n and described by T_s x ≥ 0.
Start in the non-negative orthant, since it contains a solution.
Find the minimal ε such that the corresponding solution set in this orthant is unbounded.
For each of the neighboring orthants, find the minimal ε such that the corresponding solution set intersects that orthant.
If the value found for every neighboring orthant is not smaller than the current ε, then we are done and the current ε equals d(A, Δ). Otherwise choose the minimal such value, move to the corresponding neighboring orthant and repeat the process.
The set is unbounded if and only if the recession cone has a non-trivial solution , that is, the system
is feasible. This leads us to the optimization problem
Similarly, we compute the minimal ε from step 3 by solving the optimization problem
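The per-orthant subproblem can be sketched as linear programming feasibility combined with bisection over ε. This is only a simple stand-in for the fractional programming formulations above; the upper limit hi = 10 and the tolerances are arbitrary illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

def orthant_feasible(A, Delta, s, eps):
    """Is there x in orthant s (diag(s) x >= 0, normalized) with |A x| <= eps*Delta*|x|?
    For fixed eps this is linear feasibility, checked here with an LP."""
    n = A.shape[0]
    Ts = np.diag(s)
    # In the orthant, |x| = Ts x. Inequalities:
    # A x - eps*Delta Ts x <= 0,  -A x - eps*Delta Ts x <= 0,  -Ts x <= 0.
    A_ub = np.vstack([A - eps * Delta @ Ts, -A - eps * Delta @ Ts, -Ts])
    b_ub = np.zeros(3 * n)
    # Normalization e^T Ts x = 1 rules out the trivial solution x = 0.
    res = linprog(np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=(np.ones(n) @ Ts).reshape(1, -1), b_eq=[1.0],
                  bounds=[(None, None)] * n)
    return res.status == 0

def eps_star(A, Delta, s, hi=10.0, tol=1e-8):
    """Bisection for the minimal eps making the orthant system feasible."""
    lo = 0.0
    if not orthant_feasible(A, Delta, s, hi):
        return np.inf
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if orthant_feasible(A, Delta, s, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Minimizing eps_star over the visited orthants recovers the regularity radius; for the all-ones radius matrix this agrees with the norm formula above.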
6 Numerical experiments
We have tested this approach numerically using the following settings. We have fixed various matrix sizes and set the number of instances for each size to be . For these settings we have generated collections of matrices , where and . Each matrix
is a random matrix of a chosen type. For our purposes we have used three types of matrices:
zero centered random matrix,
random orthogonal matrix,
(almost) inverse nonnegative matrix.
These matrices have been generated as follows. To generate a zero centered random matrix, we first generate a matrix with values drawn uniformly from an interval symmetric around zero. The second type, denoted random orthogonal,
is generated using the previous process of generating zero centered matrices and subsequently taking the matrix Q from the QR decomposition. The last type, called here (almost) inverse nonnegative, is a parametric type of random matrices. Let us first mention how to generate an inverse nonnegative matrix. We start with a nonnegative matrix B having its values generated uniformly from an interval. The inverse nonnegative matrix is then obtained simply as the inverse of B, i.e., A = B^{-1}. To acquire an almost inverse nonnegative matrix, we randomly choose a given percentage of elements of the inverse nonnegative matrix and negate each of them. The elements of the radius matrix Δ are generated uniformly at random from an interval.
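The three generation procedures can be sketched as follows; the concrete intervals [-1, 1] and [0, 1], the seed, and the interpretation of the percentage as a fraction p are illustrative assumptions, as the exact values are not fixed in the text:

```python
import numpy as np

rng = np.random.default_rng(42)

def zero_centered(n):
    """Entries drawn uniformly from a symmetric interval around zero."""
    return rng.uniform(-1.0, 1.0, size=(n, n))

def random_orthogonal(n):
    """Q factor of the QR decomposition of a zero centered random matrix."""
    Q, _ = np.linalg.qr(zero_centered(n))
    return Q

def almost_inverse_nonnegative(n, p):
    """Invert a random nonnegative matrix (so A^{-1} >= 0),
    then negate a randomly chosen fraction p of the entries of A."""
    B = rng.uniform(0.0, 1.0, size=(n, n))
    A = np.linalg.inv(B)
    mask = rng.random((n, n)) < p
    A[mask] *= -1.0
    return A
```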
All computations were performed in Matlab 2016b using the function fminimax from the Optimization Toolbox. Only one thread has been assigned to each computation to avoid effects of automatic parallelism. For each pair of generated matrices A and Δ, we have estimated the value of the regularity radius using either the full search analysis of equation (4), producing a numerical estimation of the regularity radius, or our suggested method of orthant search using the above described exploration of orthants, producing another estimation. While the full search method might be expected to be precise, we are interested in normalizing the difference of these two methods. For these reasons we define the following difference.
This could be computed for any combination of size and instance. Since the instance index is introduced for statistical reasons, we are more interested in the average value across this axis, denoting by angle brackets the average over instances.
As mentioned above, we have also recorded time consumption. Although the corresponding conditions were preset in order to maintain comparable situations, we have also decided to include an auxiliary characteristic enabling us to evaluate time complexity. This characteristic is represented by the number of visited orthants for which the optimization of problem (14) took place. For the orthant search method we consider matrices of extended sizes to better show the growing number of visited orthants. This is possible due to reasonable time complexities for larger matrices.
Zero centered matrices
At first, let us explore the results of the computation for random zero centered matrices, as shown in Figure 1. For the full interval of matrix sizes, the average differences between the radii are numerically negligible. The results suggest that our orthant search provides the same estimation as the full search method. At the same time, its complexity grows more slowly.
Figure 2 presents the results for the second group of matrices, namely random orthogonal matrices. These have shown roughly the same behavior as the above mentioned results for zero centered matrices. This holds also for the number of visited orthants.
(Almost) inverse non-negative matrices
We have also tested this approach for inverse non-negative matrices.
For this class we have already derived an exact formula, see equation (9), and so computing the corresponding radius in this way is of limited practical use; the reason for testing this class is the validation of the method. As expected, the results of testing these matrices have shown that in this case only one orthant is visited within each computation, and thus these computations are very easy and their complexity is low. This, however, suggests that there exist matrices for which the number of visited orthants is relatively low. To obtain results for a less trivial class, we have utilized the class of almost inverse nonnegative matrices. The results of testing these matrices are shown in Figure 3.
We can see that most of the properties of the orthant search method are again exhibited. Moreover, due to the relative proximity to the structure of plain inverse nonnegative matrices, the number of visited orthants is significantly lower compared to the previous cases. This indicates that a nice structure of the input matrix, although it might not be theoretically tractable, can represent a significant improvement in time complexity when the regularity radius is handled via the orthant search method suggested above. Moreover, due to the search character of the method, its computation seems to be simple to parallelize.
While the problem of computing the regularity radius is NP-hard in general, a common approach to estimating its value might include the adoption of theoretical bounds or the utilization of approximation algorithms. This usually means that we try to make use of a specific structure of the corresponding matrix to provide better results, or to apply an approximation algorithm if necessary. This work presents results for the regularity radius along such a process of its analysis. The first series of results is concerned with possible special types of matrices. For a tridiagonal matrix we employ the published algorithm for regularity testing to construct an algorithm determining the corresponding regularity radius in polynomial time. The problem is that this complexity depends on the values of the matrix elements, and thus it remains open whether there exists a strongly polynomial algorithm. We also provide several exact formulas for the regularity radius when the matrix is of a specific type, namely for totally positive matrices and inverse nonnegative matrices (including M-matrices). Moreover, for rank one radius matrices we are able to transform the computation of the general radius of regularity, see equation (2), to its simpler form, see equation (1). This enables us to use the approximation algorithm by Hartman and Hladík to acquire any predefined accuracy. Based on the last mentioned result, we also provide a construction of a new approximation algorithm using a rank one approximation of a general radius matrix. Moving to general matrices, we start with a proof that checking whether the regularity radius is infinite is a polynomial problem. Moreover, we provide sharp bounds on the number of nonzero entries of the radius matrix that still permit an infinite regularity radius. For a general matrix we also provide new bounds based on the relationship between the maximum (Chebyshev) norm and the spectral norm.
Finally, a new not a priori exponential approximation algorithm, based on an iterative search through orthants using the Oettli–Prager theorem, is presented along with numerical results. The presented results suggest that the method typically has significantly lower complexity compared to the full search computation based on equation (4). Moreover, for some special types of matrices, here represented by almost inverse nonnegative matrices, the number of visited orthants can be greatly reduced.
-  Ilan Bar-On, Bruno Codenotti, and Mauro Leoncini. Checking robust nonsingularity of tridiagonal matrices in linear time. BIT Numerical Mathematics, 36(2):206–220, 1996.
-  J. Demmel. The componentwise distance to the nearest singular matrix. SIAM J. Matrix Anal. Appl., 13(1):10–19, 1992.
-  Shaun M. Fallat and Charles R. Johnson. Totally Nonnegative Matrices. Princeton University Press, Princeton, NJ, 2011.
-  R. W. Freund and F. Jarre. An interior-point method for multifractional programs with convex constraints. J. Optim. Theory Appl., 85(1):125–161, 1995.
-  Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42(6):1115–1145, 1995.
-  M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169–197, 1981.
-  David Hartman and Milan Hladík. Tight bounds on the radius of nonsingularity. In Marco Nehmeier et al., editor, Scientific Computing, Computer Arithmetic, and Validated Numerics: 16th International Symposium, SCAN 2014, volume 9553 of LNCS, pages 109–115. Springer, 2016.
-  Milan Hladík and Jiří Rohn. Radii of solvability and unsolvability of linear systems. Linear Algebra Appl., 503:120–134, 2016.
-  Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
-  Christian Jansson. Calculation of exact bounds for the solution set of linear interval systems. Linear Algebra Appl., 251:321–340, 1997.
-  Christian Jansson and Jiří Rohn. An algorithm for checking regularity of interval matrices. SIAM J. Matrix Anal. Appl., 20(3):756–776, 1999.
-  Lubomir V. Kolev. A method for determining the regularity radius of interval matrices. Reliab. Comput., 16:1–26, 2011.
-  Lubomir V. Kolev. Regularity radius and real eigenvalue range. Appl. Math. Comput., 233:404–412, 2014.
-  Lubomir V. Kolev, Iwona Skalna, and Milan Hladík. New method for determining radius of regularity of parametric interval matrices. in preparation, 2017.
-  R. E. Moore, R. B. Kearfott, and M. J. Cloud. Introduction to Interval Analysis. SIAM, 2009.
-  Yu. E. Nesterov and A. S. Nemirovskii. An interior-point method for generalized linear-fractional programming. Math. Program., 69(1B):177–204, 1995.
-  W. Oettli and W. Prager. Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides. Numer. Math., 6:405–409, 1964.
-  Svatopluk Poljak and Jiří Rohn. Checking robust nonsingularity is NP-hard. Math. Control Signals Syst., 6(1):1–9, 1993.
-  Laleh Ravanbod, Dominikus Noll, and Pierre Apkarian. Branch and bound algorithm with applications to robust stability. Journal of Global Optimization, 67(3):553–579, 2017.
-  Jiří Rohn. System of linear interval equations. Linear Algebra and its Applications, 126:29–78, 1989.
-  Jiří Rohn. Checking properties of interval matrices. Technical Report 686, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 1996. http://uivtx.cs.cas.cz/~rohn/surveys/92.pdf.
-  Jiří Rohn. A handbook of results on interval linear problems. Technical Report 1163, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 2012. http://www.nsc.ru/interval/Library/InteBooks/!handbook.pdf.
-  Jiří Rohn. A manual of results on interval linear problems. Technical Report 1164, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 2012. http://www.cs.cas.cz/web_ics/index.php?page=lib_reports_1101-1200&lang=cs.
-  Siegfried M. Rump. Verification methods for dense and sparse systems of equations. In Jürgen Herzberger, editor, Topics in Validated Computations, Studies in Computational Mathematics, pages 63–136, Amsterdam, 1994. Elsevier. Proceedings of the IMACS-GAMM International Workshop on Validated Computations, University of Oldenburg.
-  Siegfried M. Rump. Bounds for the componentwise distance to the nearest singular matrix. SIAM J. Matrix Anal. Appl., 18(1):83–103, 1997.
-  Siegfried M. Rump. On P-matrices. Linear Algebra and its Applications, 363(Supplement C):237 – 250, 2003.
-  Alexander Schrijver. Theory of Linear and Integer Programming. Repr. Wiley, Chichester, 1998.