# Stability of the linear complementarity problem properties under interval uncertainty

We consider the linear complementarity problem with uncertain data modeled by intervals representing the ranges of possible values. Many properties of the linear complementarity problem (such as solvability, uniqueness, convexity, or a finite number of solutions) are reflected by properties of the constraint matrix. For the problem to possess the desired properties even in an uncertain environment, we must be able to verify them for all possible realizations of the interval data. This leads us to robust properties of interval matrices. In particular, we discuss the S-matrix, Z-matrix, copositivity, semimonotonicity, column sufficiency, principal nondegeneracy, $R_0$-matrix and R-matrix properties. We characterize the robust versions of these properties and also suggest efficiently recognizable subclasses.


## 1 Introduction

#### Linear complementarity problem.

The linear complementarity problem (LCP) appears in many optimization and operations research models such as quadratic programming, bimatrix games, or equilibria in specific economies. Its mathematical formulation reads

$$y = Az + q, \quad y, z \ge 0, \qquad (1)$$

$$y^T z = 0, \qquad (2)$$

where $A \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$. Condition (1) is linear, but the (nonlinear) complementarity condition (2) makes the problem hard. The LCP is called feasible if (1) is feasible, and solvable if (1)–(2) is feasible. Basic properties and algorithms for the LCP are described, e.g., in the books [4, 20].
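As a quick illustration (not part of the paper; the instance is invented), conditions (1)–(2) can be checked numerically with NumPy:

```python
import numpy as np

def is_lcp_solution(A, q, z, tol=1e-9):
    """Check conditions (1)-(2): y = Az + q, y >= 0, z >= 0, y^T z = 0."""
    z = np.asarray(z, dtype=float)
    y = A @ z + q
    return bool(np.all(y >= -tol) and np.all(z >= -tol) and abs(y @ z) <= tol)

# Toy instance: with A = I and q = (-1, -1), the unique solution is z = (1, 1).
A, q = np.eye(2), np.array([-1.0, -1.0])
print(is_lcp_solution(A, q, [1.0, 1.0]))  # True
print(is_lcp_solution(A, q, [0.0, 0.0]))  # False (then y = q < 0)
```

The tolerance parameter is a practical concession to floating-point arithmetic; the complementarity condition itself is exact.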

#### Interval uncertainty.

Properties of the solution set of the LCP are reflected in properties of the matrix $A$. In this paper, we study properties of $A$ when its entries are not precisely known, but we have interval ranges covering the exact values. Formally, an interval matrix is a set

$$\mathbf{A} := \{A \in \mathbb{R}^{m \times n} : \underline{A} \le A \le \overline{A}\},$$

where $\underline{A}, \overline{A} \in \mathbb{R}^{m \times n}$ are given matrices and the inequality is understood entrywise. The corresponding midpoint and radius matrices are defined as

$$A_c := \tfrac{1}{2}(\underline{A} + \overline{A}), \quad A_\Delta := \tfrac{1}{2}(\overline{A} - \underline{A}).$$
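As a small illustration (the bound values are invented), the bounds, midpoint and radius relate as follows; a matrix belongs to the interval matrix exactly when its entrywise distance from the midpoint does not exceed the radius:

```python
import numpy as np

# An interval matrix is given by its entrywise bounds A_lo <= A <= A_up.
A_lo = np.array([[0.0, -1.5], [1.0, 2.0]])
A_up = np.array([[1.0, -0.5], [3.0, 2.0]])

Ac = (A_lo + A_up) / 2   # midpoint matrix
Ad = (A_up - A_lo) / 2   # radius matrix (entrywise nonnegative)

# Membership test: A is in the interval matrix iff |A - Ac| <= Ad entrywise.
A = np.array([[0.5, -1.0], [2.0, 2.0]])
print(np.all(np.abs(A - Ac) <= Ad))  # True
```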

The LCP with interval uncertainties was addressed in [1, 18], among others. These works investigated the problem of enclosing the solution set over all possible realizations of the interval data. Our goal is different: we focus on the interval matrix properties related to the LCP.

#### Problem statement.

Throughout this paper we consider the class of LCP problems with $A \in \mathbf{A}$, where $\mathbf{A}$ is a given interval matrix. Let $\mathcal{P}$ be a matrix property. We say that $\mathcal{P}$ holds strongly for $\mathbf{A}$ if it holds for each $A \in \mathbf{A}$.

Our aim is to characterize strong versions of several fundamental matrix classes appearing in the context of the LCP. If property $\mathcal{P}$ holds strongly for an interval matrix $\mathbf{A}$, then we are sure that $\mathcal{P}$ is provably valid whatever the true values of the uncertain entries are. Therefore, the property holds in a robust sense for the LCP.

#### Notation.

Given a matrix $A$ and index sets $I, J$, the symbol $A_{I,J}$ denotes the restriction of $A$ to the rows indexed by $I$ and the columns indexed by $J$. Similarly, $x_I$ denotes the restriction of a vector $x$ to the entries indexed by $I$. The identity matrix of size $n$ is denoted by $I_n$, and the spectral radius of a matrix $A$ by $\rho(A)$. The symbol $D_x$ stands for the diagonal matrix with the entries of the vector $x$ on its diagonal, and $e$ for the vector of ones. The relation $x \lneq y$ between vectors is defined as $x \le y$ and $x \ne y$.

## 2 Particular matrix classes

In the following sections, we consider important classes of matrices appearing in the context of the linear complementarity problem. We characterize their strong counterparts when entries are interval valued. Other matrix properties were discussed, e.g., in [7, 10, 12, 14, 17].

In particular, we leave aside several kinds of interval matrices that were already studied: M-matrices, H-matrices and positive (semi-)definite matrices. We review the basic definitions and properties that we will need later on.

A matrix $A \in \mathbb{R}^{n \times n}$ is an M-matrix if $A = sI_n - N$ for some $N \ge 0$ and $s > \rho(N)$. There are many other equivalent conditions known [23, 16], among which we will use the one stating that $A$ is an M-matrix if and only if it is a Z-matrix and $A^{-1} \ge 0$. By [2], an interval matrix $\mathbf{A}$ is strongly an M-matrix if and only if $\underline{A}$ is an M-matrix and $\overline{A}_{ij} \le 0$ for all $i \ne j$.
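As an illustration (not from the paper; the test matrices are invented), the stated equivalence, Z-matrix with nonnegative inverse, yields a simple numerical M-matrix test:

```python
import numpy as np

def is_Z_matrix(A, tol=1e-9):
    """Z-matrix: nonpositive off-diagonal entries."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(off <= tol))

def is_M_matrix(A, tol=1e-9):
    """M-matrix test via the stated equivalence: Z-matrix with A^{-1} >= 0."""
    if not is_Z_matrix(A, tol):
        return False
    try:
        Ainv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False   # singular matrices are not (nonsingular) M-matrices
    return bool(np.all(Ainv >= -tol))

print(is_M_matrix(np.array([[2.0, -1.0], [-1.0, 2.0]])))   # True
print(is_M_matrix(np.array([[1.0, -2.0], [-2.0, 1.0]])))   # False
```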

Analogously, a matrix $A$ is an $M_0$-matrix if $A = sI_n - N$ for some $N \ge 0$ and $s \ge \rho(N)$. Equivalently, $A$ is an $M_0$-matrix if and only if it has nonpositive off-diagonal entries and nonnegative real eigenvalues [6, 13].

H-matrices closely relate to M-matrices. A matrix $A$ is an H-matrix if the so-called comparison matrix $\langle A \rangle$ is an M-matrix, where $\langle A \rangle_{ii} = |a_{ii}|$ and $\langle A \rangle_{ij} = -|a_{ij}|$ for $i \ne j$. By [21, 22], an interval matrix $\mathbf{A}$ is strongly an H-matrix if and only if its comparison matrix $\langle \mathbf{A} \rangle$ is an M-matrix, where $\langle \mathbf{A} \rangle_{ii} = \min\{|a| : a \in [\underline{a}_{ii}, \overline{a}_{ii}]\}$ and $\langle \mathbf{A} \rangle_{ij} = -\max\{|a| : a \in [\underline{a}_{ij}, \overline{a}_{ij}]\}$ for $i \ne j$.

Positive definite and positive semidefinite interval matrices were studied, e.g., in [11, 25, 27]. A symmetric interval matrix $\mathbf{A}$ is strongly positive semidefinite if and only if the matrix $A_c - D_z A_\Delta D_z$ is positive semidefinite for each $z \in \{\pm 1\}^n$. There are some sufficient conditions known, but the problem of checking strong positive semidefiniteness is co-NP-hard in general [17]. Similar results hold for positive definiteness.

A matrix $A$ is a P-matrix if all its principal minors are positive. The P-matrix property of interval matrices was addressed, e.g., in [3, 9].

We start with two classes that are simple to characterize both in the real and interval case, and then we discuss the computationally harder classes.

### 2.1 S-matrix

A matrix $A$ is called an S-matrix if there is $x \ge 0$ such that $Ax > 0$. The significance of this class is that the LCP is feasible for each $q$ if and only if $A$ is an S-matrix.

The strong S-matrix property of an interval matrix is easy to characterize.

###### Proposition 1.

$\mathbf{A}$ is strongly an S-matrix if and only if the system $\underline{A}x > 0$, $x \ge 0$ is feasible.

###### Proof.

If $\underline{A}x > 0$, $x \ge 0$ has a solution $x^*$, then $Ax^* \ge \underline{A}x^* > 0$ for each $A \in \mathbf{A}$. Therefore, every $A \in \mathbf{A}$ is an S-matrix. The converse is obvious since $\underline{A} \in \mathbf{A}$. ∎
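Assuming the characterization via the lower bound matrix $\underline{A}$ (the homogeneous strict system $\underline{A}x > 0$, $x \ge 0$ can be rescaled to $\underline{A}x \ge e$, $x \ge 0$), the strong S-matrix test reduces to a single linear feasibility problem; a sketch using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def is_strongly_S(A_lo):
    """Strong S-matrix test: feasibility of A_lo x >= e, x >= 0, a rescaled
    form of the homogeneous strict system A_lo x > 0, x >= 0."""
    n = A_lo.shape[0]
    res = linprog(np.zeros(n),                  # pure feasibility problem
                  A_ub=-A_lo, b_ub=-np.ones(n),
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0                      # 0 = optimal, i.e. feasible

print(is_strongly_S(np.eye(2)))                            # True
print(is_strongly_S(np.array([[-1.0, 0.0], [0.0, 1.0]])))  # False
```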

### 2.2 Z-matrix

A matrix $A$ is called a Z-matrix if $a_{ij} \le 0$ for each $i \ne j$. Z-matrices emerge in the context of Lemke's complementary pivot algorithm, because it processes any LCP with a Z-matrix.

It is easy to see that the strong Z-matrix property reduces to the Z-matrix property of the upper bound matrix $\overline{A}$.

###### Proposition 2.

$\mathbf{A}$ is strongly a Z-matrix if and only if $\overline{A}$ is a Z-matrix.
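Proposition 2 makes the strong Z-property trivial to test; a short sketch (example matrices invented):

```python
import numpy as np

def is_strongly_Z(A_up, tol=1e-9):
    """Strong Z-matrix property: it suffices that the upper bound matrix
    has nonpositive off-diagonal entries."""
    off = A_up - np.diag(np.diag(A_up))
    return bool(np.all(off <= tol))

print(is_strongly_Z(np.array([[1.0, -0.5], [0.0, 2.0]])))  # True
print(is_strongly_Z(np.array([[1.0, 0.5], [0.0, 2.0]])))   # False
```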

### 2.3 Copositive matrix

A matrix $A$ is called copositive if $x^T A x \ge 0$ for each $x \ge 0$, and strictly copositive if $x^T A x > 0$ for each $0 \ne x \ge 0$. A copositive matrix ensures that the complementary pivot algorithm for solving the LCP works. A strictly copositive matrix in addition implies that the LCP has a solution for each $q$. Checking whether $A$ is copositive is a co-NP-hard problem [19].

From the definition, we immediately obtain:

###### Proposition 3.

$\mathbf{A}$ is strongly (strictly) copositive if and only if $\underline{A}$ is (strictly) copositive.

A matrix $A$ is (strictly) copositive if and only if its symmetric part $\tfrac{1}{2}(A + A^T)$ is (strictly) copositive. That is why we can focus on symmetric matrices without loss of generality. In particular, we assume that $A_c$ and $A_\Delta$ are symmetric.

Since checking copositivity is co-NP-hard, it is desirable to inspect some polynomially solvable classes of problems.

###### Proposition 4.

Let $\underline{A}$ be a Z-matrix. Then

1. $\mathbf{A}$ is strongly copositive if and only if $\underline{A}$ is an $M_0$-matrix;

2. $\mathbf{A}$ is strongly strictly copositive if and only if $\underline{A}$ is an M-matrix.

###### Proof.

(1) "If." If $\underline{A}$ is an $M_0$-matrix, then it is positive semidefinite [4] and so it is copositive. By Proposition 3, strong copositivity of $\mathbf{A}$ follows.

"Only if." Suppose to the contrary that $\underline{A}$ is not an $M_0$-matrix. Then we can write $\underline{A} = sI_n - N$, where $N \ge 0$ and $s < \rho(N)$. For the corresponding Perron vector $x \gneq 0$ we have $Nx = \rho(N)x$, from which $x^T \underline{A} x = (s - \rho(N))\,x^T x < 0$. Hence $\underline{A}$ is not copositive; a contradiction.

(2) For strict copositivity we proceed analogously. ∎

###### Corollary 1.

Let $A_c = I_n$. Then

1. $\mathbf{A}$ is strongly copositive if and only if $\rho(A_\Delta) \le 1$;

2. $\mathbf{A}$ is strongly strictly copositive if and only if $\rho(A_\Delta) < 1$.

###### Proof.

Obviously, $\underline{A} = I_n - A_\Delta$ is a Z-matrix. Further, it is an $M_0$-matrix if and only if $\rho(A_\Delta) \le 1$, and an M-matrix if and only if $\rho(A_\Delta) < 1$. Similarly for strict copositivity. ∎

### 2.4 Semimonotone matrix

A matrix $A$ is called semimonotone (an $E_0$-matrix) if the LCP has a unique solution for each $q > 0$. Equivalently, for each index set $I$ the system

$$A_{I,I}x < 0, \quad x \ge 0 \qquad (3)$$

is infeasible. By [28], checking whether $A$ is semimonotone is a co-NP-hard problem.

From the definition we simply derive:

###### Proposition 5.

$\mathbf{A}$ is strongly semimonotone if and only if $\underline{A}$ is semimonotone.

The next result exhibits a class of interval matrices for which strong semimonotonicity can be checked in polynomial time.

###### Proposition 6.

Let $\underline{A}$ be a Z-matrix. Then $\mathbf{A}$ is strongly semimonotone if and only if $\underline{A}$ is an $M_0$-matrix.

###### Proof.

"If." Suppose to the contrary that there are $A \in \mathbf{A}$ and $I$ such that (3) has a solution $x$. Without loss of generality assume that $x > 0$; otherwise we restrict to $I' := \{i \in I : x_i > 0\}$. Since $\underline{A}$ is an $M_0$-matrix, also $\underline{A}_{I,I}$ is an $M_0$-matrix. That is, we can write it as $\underline{A}_{I,I} = sI_{|I|} - N$, where $N \ge 0$ and $s \ge \rho(N)$. However, from (3) we have $(sI_{|I|} - N)x \le A_{I,I}x < 0$, from which $Nx > sx$ and thus $\rho(N) > s$; a contradiction.

"Only if." Suppose to the contrary that $\underline{A}$ is not an $M_0$-matrix. That is, $\underline{A} = sI_n - N$, where $N \ge 0$ and $s < \rho(N)$. Let $x$ be the Perron vector corresponding to $\rho(N)$, so that $Nx = \rho(N)x$. Then $\underline{A}x = (s - \rho(N))x \le 0$. Define $I := \{i : x_i > 0\}$. Since $(\underline{A}x)_i = (s - \rho(N))x_i < 0$ for each $i \in I$, we get $\underline{A}_{I,I}x_I < 0$ with $x_I > 0$; a contradiction. ∎

###### Corollary 2.

Let $A_c = I_n$. Then $\mathbf{A}$ is strongly semimonotone if and only if $\rho(A_\Delta) \le 1$.

### 2.5 Principally nondegenerate matrix

A matrix $A$ is called principally nondegenerate if all its principal minors are nonzero. In the context of the LCP, such a matrix guarantees that the problem has finitely many solutions (possibly none) for every $q$.

From the definition, an interval matrix $\mathbf{A}$ is strongly principally nondegenerate if and only if its principal submatrices are strongly nonsingular (i.e., contain nonsingular matrices only). Below, we state a finite reduction. By the same reasoning as in [9], the condition comprises exponentially many instances. This high number is, however, justified by two facts: First, checking principal nondegeneracy of a real matrix is co-NP-hard [28]. Second, checking whether an interval matrix is strongly nonsingular is co-NP-hard, too [24].

###### Proposition 7.

$\mathbf{A}$ is strongly principally nondegenerate if and only if

$$\det(D_{e-|y|} + D_{|y|} A_c D_{|z|}) \det(D_{e-|y|} + D_{|y|} A_c D_{|z|} - D_y A_\Delta D_z) > 0 \qquad (4)$$

for each $y, z \in \{0, \pm 1\}^n$ such that $|y| = |z|$.

###### Proof.

By [26], $\mathbf{A}$ is strongly nonsingular if and only if

$$\det(A_c)\det(A_c - D_y A_\Delta D_z) > 0, \quad \forall y, z \in \{\pm 1\}^n.$$

We need to check this condition for each principal submatrix of $\mathbf{A}$. We claim that this is exactly what (4) states. Consider the permutation of rows and columns, represented by a permutation matrix $P$, that moves the zero entries of $y$ to the last positions. Then $P(D_{e-|y|} + D_{|y|} A_c D_{|z|})P^T$ is a block diagonal matrix: the bottom right block is the identity matrix, and the top left block is the principal submatrix of $A_c$ indexed by the support of $y$.

Similarly for $D_{e-|y|} + D_{|y|} A_c D_{|z|} - D_y A_\Delta D_z$. ∎
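For small dimensions, condition (4) can be tested by brute-force enumeration of the sign vectors; a sketch (not from the paper, exponential in $n$, function name hypothetical):

```python
import numpy as np
from itertools import product

def is_strongly_principally_nondegenerate(Ac, Ad):
    """Brute-force test of condition (4): enumerate y, z in {0, +1, -1}^n
    with matching supports (|y| = |z|) and check that both determinants
    have the same (nonzero) sign."""
    n = Ac.shape[0]
    e = np.ones(n)
    for y_t in product((-1, 0, 1), repeat=n):
        y = np.array(y_t, dtype=float)
        supp = np.abs(y)              # |y| = |z| fixes the support of z
        if not supp.any():
            continue                  # empty index set: nothing to check
        for z_t in product((-1, 1), repeat=int(supp.sum())):
            z = np.zeros(n)
            z[supp > 0] = z_t
            M = np.diag(e - supp) + np.diag(supp) @ Ac @ np.diag(np.abs(z))
            N = M - np.diag(y) @ Ad @ np.diag(z)
            if np.linalg.det(M) * np.linalg.det(N) <= 0:
                return False
    return True

# A point interval matrix (Ad = 0) reduces to the real-matrix property.
Ac = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(is_strongly_principally_nondegenerate(Ac, np.zeros((2, 2))))       # True
print(is_strongly_principally_nondegenerate(Ac, 0.1 * np.ones((2, 2))))  # True
```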

An efficient test can be performed only for specific classes of matrices. Since principally nondegenerate matrices are closed under nonzero row or column scaling, the following result directly extends to any interval matrix $\mathbf{A}$ such that $D_1 A_c D_2$ is an M-matrix for some nonsingular diagonal matrices $D_1, D_2$.

###### Proposition 8.

Let $A_c$ be an M-matrix. Then $\mathbf{A}$ is strongly principally nondegenerate if and only if it is strongly an H-matrix.

###### Proof.

By Neumaier [22, Prop. 4.1.7], our assumption implies that $\mathbf{A}$ is strongly nonsingular if and only if it is strongly an H-matrix. A principal submatrix of an M-matrix is again an M-matrix, and the same property holds for H-matrices. Therefore the above result applies to principal submatrices, too, from which the statement follows. ∎

Under the assumption that $A_c$ is an M-matrix, we have that $\mathbf{A}$ is strongly an H-matrix if and only if $\underline{A}$ is an M-matrix. Therefore we can equivalently test whether $\underline{A}$ is an M-matrix in the above proposition.

###### Corollary 3.

Let $A_c = I_n$. Then $\mathbf{A}$ is strongly principally nondegenerate if and only if $\rho(A_\Delta) < 1$.

###### Proposition 9.

Let $A_c$ be positive definite. Then $\mathbf{A}$ is strongly principally nondegenerate if and only if it is strongly positive definite.

###### Proof.

Since $A_c$ is positive definite, we have by [25] that $\mathbf{A}$ is strongly nonsingular if and only if it is strongly positive definite. Since positive definiteness is inherited by principal submatrices, the statement follows. ∎

Checking strong positive definiteness of $\mathbf{A}$ is known to be co-NP-hard, but various sufficient conditions are known; see [25].

### 2.6 Column sufficient matrix

A matrix $A$ is column sufficient if for each pair of disjoint index sets $I, J \subseteq \{1, \dots, n\}$, the system

$$\begin{pmatrix} A_{I,I} & -A_{I,J} \\ -A_{J,I} & A_{J,J} \end{pmatrix} x \lneq 0, \quad x > 0 \qquad (5)$$

is infeasible. Checking this condition is co-NP-hard [28], which justifies the necessity of inspecting all index sets $I, J$. Among other properties, column sufficiency implies that for any $q$ the solution set of the LCP is a convex set (possibly empty).

###### Proposition 10.

$\mathbf{A}$ is strongly column sufficient if and only if the system

$$\begin{pmatrix} \underline{A}_{I,I} & -\overline{A}_{I,J} \\ -\overline{A}_{J,I} & \underline{A}_{J,J} \end{pmatrix} x \lneq 0, \quad x > 0 \qquad (6)$$

is infeasible for each admissible $I, J$.

###### Proof.

If $\mathbf{A}$ is strongly column sufficient, then (6) must be infeasible, because its constraint matrix is built from a matrix in $\mathbf{A}$. Conversely, if some $A \in \mathbf{A}$ is not column sufficient, then (5) has a solution $x^*$ for certain $I, J$. Since

$$\begin{pmatrix} \underline{A}_{I,I} & -\overline{A}_{I,J} \\ -\overline{A}_{J,I} & \underline{A}_{J,J} \end{pmatrix} x^* \le \begin{pmatrix} A_{I,I} & -A_{I,J} \\ -A_{J,I} & A_{J,J} \end{pmatrix} x^*,$$

we have that $x^*$ is a solution to (6); a contradiction. ∎

The above result also suggests a reduction to finitely many (namely, $2^{n-1}$) instances.

###### Proposition 11.

$\mathbf{A}$ is strongly column sufficient if and only if the matrices of the form $A^{ss} := A_c - D_s A_\Delta D_s$ are column sufficient for each $s \in \{\pm 1\}^n$.

###### Proof.

If $\mathbf{A}$ is strongly column sufficient, then each $A^{ss}$ is column sufficient since $A^{ss} \in \mathbf{A}$. Conversely, if $\mathbf{A}$ is not strongly column sufficient, then (6) has a solution for certain $I, J$. However, feasibility of (6) implies that $A^{ss}$ is not column sufficient for $s$ defined as $s_i := 1$ if $i \in I$ and $s_i := -1$ otherwise, because

$$(A^{ss})_{I,I} = \underline{A}_{I,I}, \quad (A^{ss})_{J,J} = \underline{A}_{J,J}, \quad (A^{ss})_{I,J} = \overline{A}_{I,J}. \qquad \square$$

Below, we state a polynomially recognizable class. To this end, recall that a nonnegative matrix $A \in \mathbb{R}^{n \times n}$ is called irreducible if $(I_n + A)^{n-1} > 0$, or equivalently, if $P^T A P$ is block triangular (with at least two diagonal blocks) for no permutation matrix $P$.
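The characterization $(I_n + A)^{n-1} > 0$ gives a direct numerical irreducibility test; a sketch (example matrices invented):

```python
import numpy as np

def is_irreducible(A):
    """Irreducibility test for a nonnegative matrix: (I + A)^(n-1) > 0."""
    n = A.shape[0]
    P = np.linalg.matrix_power(np.eye(n) + A, n - 1)
    return bool(np.all(P > 0))

print(is_irreducible(np.array([[0.0, 1.0], [1.0, 0.0]])))  # True
print(is_irreducible(np.array([[1.0, 1.0], [0.0, 1.0]])))  # False (triangular)
```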

###### Lemma 1.

Let $A \ge 0$ be irreducible and let $x \ge 0$ be such that $Ax \gneq x$. Then $\rho(A) > 1$.

###### Proof.

Define $y := (I_n + A)^{n-1}x$. Since $Ax - x \gneq 0$ and $(I_n + A)^{n-1} > 0$, we have

$$(I_n + A)^{n-1}x < (I_n + A)^{n-1}Ax = A(I_n + A)^{n-1}x,$$

from which $y < Ay$ with $y > 0$. By [15], $\rho(A) > 1$. ∎

###### Proposition 12.

Let $\underline{A}$ be a Z-matrix and let $A_\Delta$ be irreducible. Then $\mathbf{A}$ is strongly column sufficient if and only if $\underline{A}$ is an $M_0$-matrix.

###### Proof.

"If." Suppose to the contrary that (5) has a solution $x$ for some $A \in \mathbf{A}$ and index sets $I, J$. Then the same solution solves the system $\tilde{A}x \lneq 0$, $x > 0$, where $\tilde{A}$ is the constraint matrix of (6). Thus we can assume without loss of generality that $J = \emptyset$; otherwise we pass to $D_s \mathbf{A} D_s$ with $s_i := 1$ for $i \in I$ and $s_i := -1$ for $i \in J$. In addition, consider the instance for which $I$ has maximal cardinality. Since $\underline{A}$ is an $M_0$-matrix, so is $\underline{A}_{I,I}$, and we can write $\underline{A}_{I,I} = sI_{|I|} - N$, where $N \ge 0$ and $s \ge \rho(N)$. Then

$$(sI_{|I|} - N)x \lneq 0, \quad x > 0,$$

from which $Nx \gneq sx$.

Consider two cases: (1) $N$ is irreducible. Then $\tfrac{1}{s}N$ is irreducible, and by Lemma 1 we have $\rho(N) > s$; a contradiction.

(2) $N$ is reducible. Define $J' := \{1, \dots, n\} \setminus I$ and the vector $\tilde{x}$ such that $\tilde{x}_I = x$ and $\tilde{x}_{J'} = \varepsilon e$, where $\varepsilon > 0$ is sufficiently small. Consider the system

$$\begin{pmatrix} \underline{A}_{I,I} & -\overline{A}_{I,J'} \\ -\overline{A}_{J',I} & \underline{A}_{J',J'} \end{pmatrix} \tilde{x} \lneq 0, \quad \tilde{x} > 0.$$

Since $\varepsilon$ is small, the vector $\tilde{x}$ solves the first block of inequalities. If the matrix $\overline{A}_{J',I}$ contains a nonzero row, say the $i$th one, then $\tilde{x}$ solves also the $i$th inequality. Nevertheless, this cannot happen, since we could put $I := I \cup \{i\}$, which contradicts the maximal cardinality of $I$. Thus $\overline{A}_{J',I} = 0$, which means that $(A_\Delta)_{J',I} = 0$, and so $A_\Delta$ is reducible; a contradiction.

"Only if." Suppose to the contrary that $\underline{A}$ is not an $M_0$-matrix and write $\underline{A} = sI_n - N$, where $N \ge 0$ and $s < \rho(N)$. Thus there is a Perron vector $x \gneq 0$ such that $Nx = \rho(N)x$. Define $I := \{i : x_i > 0\}$ and $J := \emptyset$. Then $\underline{A}_{I,I}x_I = (s - \rho(N))x_I < 0$ and $x_I > 0$, from which (5) is feasible for $\underline{A} \in \mathbf{A}$; a contradiction. ∎

Notice that the assumption that $A_\Delta$ is irreducible is necessary. As a counterexample, consider

$$A_c = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_\Delta = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$

Then $\underline{A} = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}$, which is an $M_0$-matrix, but not column sufficient.

In Proposition 12, the assumption that $\underline{A}$ is a Z-matrix can be directly extended to the situation where $D\underline{A}D$ is a Z-matrix for some positive diagonal matrix $D$. This is because the class of column sufficient matrices is closed under the transformation $A \mapsto DAD$.

###### Corollary 4.

Let $A_c = I_n$ and let $A_\Delta$ be irreducible. Then $\mathbf{A}$ is strongly column sufficient if and only if $\rho(A_\Delta) \le 1$.

###### Proposition 13.

Let $A_c$ be positive semidefinite. Then $\mathbf{A}$ is strongly column sufficient if and only if it is strongly positive semidefinite.

###### Proof.

"If." This follows from the fact that a positive semidefinite matrix is column sufficient. Indeed, $A$ is positive semidefinite if and only if $D_s A D_s$ is positive semidefinite, where $s \in \{\pm 1\}^n$ is arbitrary. We use this for $s$ defined by $s_i := 1$ for $i \in I$ and $s_i := -1$ for $i \in J$. Thus

$$\tilde{A} := \begin{pmatrix} A_{I,I} & -A_{I,J} \\ -A_{J,I} & A_{J,J} \end{pmatrix}$$

is positive semidefinite, too, being a principal submatrix of $D_s A D_s$. If $x$ is a solution to (5), then $x^T \tilde{A} x < 0$; a contradiction.

"Only if." Suppose to the contrary that $\mathbf{A}$ is not strongly positive semidefinite. Thus there is $A \in \mathbf{A}$ that is not positive semidefinite. Let $\alpha^*$ be maximal such that the matrix $A_c - \alpha D_z A_\Delta D_z$ is positive semidefinite for each $z \in \{\pm 1\}^n$ and each $\alpha \in [0, \alpha^*]$. Obviously, $\alpha^* < 1$, and $A_c - \alpha^* D_z A_\Delta D_z$ is singular for a certain $z \in \{\pm 1\}^n$. Then $D_z A_c D_z - \alpha^* A_\Delta$ is singular, too. Let $x^* \ne 0$ be such that $(D_z A_c D_z - \alpha^* A_\Delta)x^* = 0$ and let $s := \operatorname{sgn}(x^*)$. If $x^* > 0$, then $(D_z A_c D_z - A_\Delta)x^* = (\alpha^* - 1)A_\Delta x^* \le 0$, so $x^*$ solves (5) with $A := A_c - D_z A_\Delta D_z$, $I := \{i : z_i = 1\}$ and $J := \{i : z_i = -1\}$, provided at least one inequality is strict. If $x^* < 0$, then we substitute $x^* := -x^*$ and proceed as before. Consider now the remaining case. Without loss of generality write $x^* = D_s \tilde{x}^*$, where $\tilde{x}^* > 0$; we can ignore the zero entries of $x^*$, since in (5) we restrict to the indices corresponding to the nonzero entries only. The equation $(D_z A_c D_z - \alpha^* A_\Delta)x^* = 0$ then reads

$$(D_z D_s A_c D_s D_z - \alpha^* D_s A_\Delta D_s)\tilde{x}^* = 0. \qquad (7)$$

Denote $w := z \circ s$ (the entrywise product), so that $D_z D_s A_c D_s D_z = D_w A_c D_w$. Since $D_s A_\Delta D_s \le A_\Delta$ and $\tilde{x}^* > 0$, we have

$$(D_w A_c D_w - \alpha^* A_\Delta)\tilde{x}^* \le 0,$$

whence

$$(D_w A_c D_w - A_\Delta)\tilde{x}^* \le 0.$$

If at least one inequality holds strictly in this system, then $\tilde{x}^*$ is a solution to (5) with $A := A_c - D_w A_\Delta D_w$, $I := \{i : w_i = 1\}$ and $J := \{i : w_i = -1\}$; a contradiction. If it is not the case, then necessarily $A_\Delta \tilde{x}^* = 0$, whence from (7)

$$(D_z D_s A_c D_s D_z)\tilde{x}^* = 0,$$

which, together with $\tilde{x}^* > 0$, means that $A_c - \alpha D_z A_\Delta D_z$ is singular for every $\alpha \ge 0$; this contradicts the maximality of $\alpha^*$. ∎

### 2.7 R0-matrix

A matrix $A$ is an $R_0$-matrix if the LCP with $q = 0$ has the only solution $z = 0$. Equivalently, for each index set $I \ne \emptyset$, the system

$$A_{I,I}x = 0, \quad A_{J,I}x \ge 0, \quad x > 0 \qquad (8)$$

is infeasible, where $J := \{1, \dots, n\} \setminus I$. Checking the $R_0$-matrix property is co-NP-hard [28]. If $A$ is an $R_0$-matrix, then for any $q$ the LCP has a bounded solution set.

###### Proposition 14.

$\mathbf{A}$ is strongly an $R_0$-matrix if and only if the system

$$\underline{A}_{I,I}x \le 0, \quad \overline{A}_{I,I}x \ge 0, \quad \underline{A}_{J,I}x \ge 0, \quad x > 0 \qquad (9)$$

is infeasible for each admissible $I$.

###### Proof.

$\mathbf{A}$ is not strongly an $R_0$-matrix if and only if there are $A \in \mathbf{A}$ and $I$ such that (8) is feasible. It is known [5, 8] that (8) is feasible for some $A \in \mathbf{A}$ if and only if (9) is feasible, from which the statement follows. ∎
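Proposition 14 suggests a finite (though exponential) test: for each index set $I$, check feasibility of (9) by linear programming. A sketch (not from the paper; the homogeneous constraint $x > 0$ is rescaled to $x \ge e$ without loss of generality):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def is_strongly_R0(A_lo, A_up):
    """Strong R0 test: for every nonempty index set I, system (9) must be
    infeasible.  The system is homogeneous in x, so x > 0 becomes x >= 1."""
    n = A_lo.shape[0]
    idx = list(range(n))
    for k in range(1, n + 1):
        for I in combinations(idx, k):
            I = list(I)
            J = [i for i in idx if i not in I]
            # All constraints written as A_ub x <= 0.
            A_ub = np.vstack([A_lo[np.ix_(I, I)],    # A_lo[I,I] x <= 0
                              -A_up[np.ix_(I, I)],   # A_up[I,I] x >= 0
                              -A_lo[np.ix_(J, I)]])  # A_lo[J,I] x >= 0
            res = linprog(np.zeros(k), A_ub=A_ub,
                          b_ub=np.zeros(A_ub.shape[0]),
                          bounds=[(1, None)] * k, method="highs")
            if res.status == 0:   # (9) feasible: some realization is not R0
                return False
    return True

print(is_strongly_R0(np.eye(2), np.eye(2)))                # True
print(is_strongly_R0(np.zeros((2, 2)), np.zeros((2, 2))))  # False
```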

Despite intractability in the general case, we can formulate a polynomially recognizable subclass.

###### Proposition 15.

Let $A_c$ be an M-matrix. Then $\mathbf{A}$ is strongly an $R_0$-matrix if and only if it is strongly an H-matrix.

###### Proof.

"If." This follows from the nonsingularity of H-matrices and the fact that a principal submatrix of an H-matrix is an H-matrix.

"Only if." Suppose to the contrary that $\mathbf{A}$ is not strongly an H-matrix. Thus there is $A \in \mathbf{A}$ that is not an H-matrix, so its comparison matrix $\langle A \rangle$ is not an M-matrix. Since $A_c$ is an M-matrix and from continuity reasons, $\mathbf{A}$ contains a matrix $B$ whose comparison matrix $\langle B \rangle$ is an $M_0$-matrix but not an M-matrix. Thus we can write $\langle B \rangle = sI_n - N$, where $N \ge 0$ and $s = \rho(N)$. Let $x$ be the Perron vector corresponding to $\rho(N)$, that is, $Nx = \rho(N)x$, whence $\langle B \rangle x = 0$. Put $I := \{i : x_i > 0\}$, so $\langle B \rangle_{I,I}x_I = 0$ and $x_I > 0$. Therefore $x_I$ solves (8) for a suitably signed matrix from $\mathbf{A}$. ∎

###### Corollary 5.

Let $A_c = I_n$. Then $\mathbf{A}$ is strongly an $R_0$-matrix if and only if $\rho(A_\Delta) < 1$.

### 2.8 R-matrix

A matrix $A$ is an R-matrix (regular) if for each index set $I$, the system

$$A_{I,I}x + et = 0, \quad A_{J,I}x + et \ge 0, \quad x > 0, \; t \ge 0 \qquad (10)$$

is infeasible with respect to the variables $x$ and $t$, where $J := \{1, \dots, n\} \setminus I$. Regularity of $A$ ensures that the LCP has a solution for any $q$.

###### Proposition 16.

$\mathbf{A}$ is strongly an R-matrix if and only if the system

$$\underline{A}_{I,I}x + et \le 0, \quad \overline{A}_{I,I}x + et \ge 0, \quad \underline{A}_{J,I}x + et \ge 0, \quad x > 0, \; t \ge 0$$

is infeasible for each admissible $I$.

###### Proof.

Similar to the proof of Proposition 14. ∎

###### Proposition 17.

Let $A_c$ be an M-matrix. Then $\mathbf{A}$ is strongly an R-matrix if and only if it is strongly an $R_0$-matrix.

###### Proof.

"If." Suppose to the contrary that (10) has a solution $(x, t)$ for some $A \in \mathbf{A}$ and $I$. If $t = 0$, then (10) reduces to (8), contradicting the strong $R_0$-property. So let $t > 0$, and without loss of generality assume $t = 1$. Thus $A_{I,I}x = -e$, whence $\underline{A}_{I,I}x \le -e < 0$ in view of $x > 0$. Since $A_c$ is an M-matrix and $\mathbf{A}$ is strongly an $R_0$-matrix, Proposition 15 implies that $\mathbf{A}$ is strongly an H-matrix, and by the remark above, $\underline{A}$ is an M-matrix. Then $\underline{A}_{I,I}$ is an M-matrix, too, and so it has a nonnegative inverse, implying $x \le -\underline{A}_{I,I}^{-1}e \le 0$; a contradiction.

"Only if." We proceed similarly to the proof of Proposition 15, where a solution $x_I$ to (8) was found. Now, we put $t := 0$, and $(x_I, t)$ solves (10). ∎

###### Corollary 6.

Let $A_c = I_n$. Then $\mathbf{A}$ is strongly an R-matrix if and only if $\rho(A_\Delta) < 1$.

## 3 Examples

###### Example 1.

Let

$$A_c = \begin{pmatrix} 0 & -1 & 2 \\ 2 & 0 & -2 \\ -1 & 1 & 0 \end{pmatrix}.$$

This matrix is semimonotone, column sufficient, an $R_0$-matrix and an R-matrix, but not principally nondegenerate. Suppose that the entries of the matrix are subject to uncertainties and the displayed values are known with 10% accuracy. Thus we arrive at an interval matrix $\mathbf{A}$, whose midpoint is the displayed matrix and whose radius is

$$A_\Delta = \tfrac{1}{10}|A_c| = \begin{pmatrix} 0 & 0.1 & 0.2 \\ 0.2 & 0 & 0.2 \\ 0.1 & 0.1 & 0 \end{pmatrix}.$$

Based on calculations performed in MATLAB R2017b, we checked that $\mathbf{A}$ is strongly semimonotone, strongly column sufficient, strongly an $R_0$-matrix and strongly an R-matrix. That is, the properties valid for $A_c$ remain valid for every matrix in the uncertainty set, whatever the true values are. This means, among others, that the solution set is nonempty, bounded and convex for any $q$.

In contrast, if we increase the uncertainty level, then none of the above properties holds strongly. Column sufficiency and the $R_0$- and R-matrix properties already fail for certain realizations of the interval entries.

###### Example 2.

Consider a quadratic programming problem

$$\min\; x^T C x + d^T x \quad \text{subject to} \quad Bx \le b, \; x \ge 0.$$

Optimality conditions for this problem have the form of a linear complementarity problem

$$y = Az + q, \quad y^T z = 0, \quad y, z \ge 0,$$

where

$$A := \begin{pmatrix} 0 & -B \\ B^T & 2C \end{pmatrix}, \quad q := \begin{pmatrix} b \\ d \end{pmatrix}, \quad z := \begin{pmatrix} u \\ x \end{pmatrix}.$$

For concreteness, consider the problem

$$\min\; 10x_1^2 + 8x_1x_2 + 5x_2^2 + x_1 + x_2 \quad \text{subject to} \quad 2x_1 - x_2 \le 10, \; -3x_1 + x_2 \le 9, \; x \ge 0,$$

so we have

$$A = \begin{pmatrix} 0 & 0 & -2 & 1 \\ 0 & 0 & 3 & -1 \\ 2 & -3 & 20 & 8 \\ -1 & 1 & 8 & 10 \end{pmatrix}.$$
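The block structure of $A$ can be reproduced numerically; the following sketch rebuilds the displayed matrix from the data $C$ and $B$ of the quadratic program:

```python
import numpy as np

# QP data: min x^T C x + d^T x  s.t.  Bx <= b, x >= 0, where
# x^T C x = 10*x1^2 + 8*x1*x2 + 5*x2^2 is encoded by a symmetric C.
C = np.array([[10.0, 4.0], [4.0, 5.0]])
B = np.array([[2.0, -1.0], [-3.0, 1.0]])

# LCP matrix of the optimality conditions: A = [[0, -B], [B^T, 2C]].
A = np.block([[np.zeros((2, 2)), -B], [B.T, 2 * C]])
print(A.astype(int))
```

The printed matrix coincides with the one displayed above.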

Calculations showed that $A$ is semimonotone, column sufficient, an $R_0$-matrix and an R-matrix, but not principally nondegenerate. Since $q > 0$, the LCP has a unique solution. However, even if $q$ were not positive, the other properties would imply nonemptiness, boundedness and convexity of the solution set.

As in the previous example, we consider uncertainty in terms of maximal percentage variations of the matrix entries. Table 1 shows the results for various degrees of uncertainty, represented by the radius of the interval matrix $\mathbf{A}$, or by the particular radius matrices for $B$ and $C$. The first four settings correspond to the case where uncertainty affects the cost matrix only, while the technological matrix $B$ remains fixed. Naturally, the higher the degree of uncertainty, the fewer properties hold. However, even independent and simultaneous variations of the costs do not influence the properties we discussed.

## 4 Conclusion

We analysed important classes of matrices which guarantee that the linear complementarity problem has convenient properties related to the structure of its solution set. We characterized these matrix properties in the situation where the input coefficients have the form of compact intervals. As a consequence, we obtained robust properties for the linear complementarity problem: whatever the true values within the interval data are, we are sure that the corresponding property is satisfied.

Since many of these problems are intractable even in the real case, it is desirable to investigate easy-to-recognize subclasses. We proposed several such classes, but exploring new ones remains a challenging problem.

## 5 Acknowledgements

The author was supported by the Czech Science Foundation Grant P403-18-04735S.

## References

• [1] G. Alefeld and U. Schäfer. Iterative methods for linear complementarity problems with interval data. Comput., 70(3):235–259, Jun 2003.
• [2] W. Barth and E. Nuding. Optimale Lösung von Intervallgleichungssystemen. Comput., 12:117–125, 1974.
• [3] S. Białas and J. Garloff. Intervals of P-matrices and related matrices. Linear Algebra Appl., 58:33–41, 1984.
• [4] R. W. Cottle, J.-S. Pang, and R. E. Stone. The Linear Complementarity Problem. SIAM, 2009.
• [5] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann. Linear Optimization Problems with Inexact Data. Springer, New York, 2006.
• [6] M. Fiedler and V. Pták. On matrices with non-positive off-diagonal elements and positive principal minors. Czech. Math. J., 12(3):382–400, 1962.
• [7] J. Garloff, M. Adm, and J. Titi. A survey of classes of matrices possessing the interval property and related properties. Reliab. Comput., 22:1–10, 2016.
• [8] M. Hladík. Weak and strong solvability of interval linear systems of equations and inequalities. Linear Algebra Appl., 438(11):4156–4165, 2013.
• [9] M. Hladík. On relation between P-matrices and regularity of interval matrices. In N. Bebiano, editor, Applied and Computational Matrix Analysis, volume 192 of Springer Proceedings in Mathematics & Statistics, pages 27–35. Springer, 2017.
• [10] M. Hladík. An overview of polynomially computable characteristics of special interval matrices. preprint arXiv: 1711.08732, http://arxiv.org/abs/1711.08732, 2017.
• [11] M. Hladík. Positive semidefiniteness and positive definiteness of a linear parametric interval matrix. In M. Ceberio and V. Kreinovich, editors, Constraint Programming and Decision Making: Theory and Applications, volume 100 of Studies in Systems, Decision and Control, pages 77–88. Springer, Cham, 2018.
• [12] M. Hladík. Tolerances, robustness and parametrization of matrix properties related to optimization problems. Optim., 68(2-3):667–690, 2019.
• [13] L. Hogben, editor. Handbook of Linear Algebra. Chapman & Hall/CRC, 2007.
• [14] J. Horáček, M. Hladík, and M. Černý. Interval linear algebra and computational complexity. In N. Bebiano, editor, Applied and Computational Matrix Analysis, volume 192 of Springer Proceedings in Mathematics & Statistics, pages 37–66. Springer, 2017.
• [15] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
• [16] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991.
• [17] V. Kreinovich, A. Lakeyev, J. Rohn, and P. Kahl. Computational Complexity and Feasibility of Data Processing and Interval Computations. Kluwer, Dordrecht, 1998.
• [18] H.-q. Ma, J.-p. Xu, and N.-j. Huang. An iterative method for a system of linear complementarity problems with perturbations and interval data. Appl. Math. Comput., 215(1):175–184, 2009.
• [19] K. G. Murty and S. N. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Math. Program., 39(2):117–129, 1987.
• [20] K. G. Murty and F.-T. Yu. Linear Complementarity, Linear and Nonlinear Programming. Internet edition, 1997.
• [21] A. Neumaier. New techniques for the analysis of linear interval equations. Linear Algebra Appl., 58:273–325, 1984.
• [22] A. Neumaier. Interval Methods for Systems of Equations. Cambridge University Press, Cambridge, 1990.
• [23] R. Plemmons. M-matrix characterizations. I. Nonsingular M-matrices. Linear Algebra Appl., 18(2):175–188, 1977.
• [24] S. Poljak and J. Rohn. Checking robust nonsingularity is NP-hard. Math. Control Signals Syst., 6(1):1–9, 1993.
• [25] J. Rohn. Positive definiteness and stability of interval matrices. SIAM J. Matrix Anal. Appl., 15(1):175–184, 1994.
• [26] J. Rohn. Forty necessary and sufficient conditions for regularity of interval matrices: A survey. Electron. J. Linear Algebra, 18:500–512, 2009.
• [27] J. Rohn. A manual of results on interval linear problems. Technical Report 1164, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 2012.
• [28] P. Tseng. Co-NP-completeness of some matrix classification problems. Math. Program., 88(1):183–192, June 2000.