Consider the following two types of absolute value equations (AVEs)
where and ,
denotes the componentwise absolute value of the vector. The AVEs (1) and (2) were introduced by Rohn in  and by Wu in , respectively. Clearly, when in (1) and (2), where
denotes the identity matrix, the AVEs (1) and (2) reduce to the standard absolute value equations
which were considered by Mangasarian and Meyer in .
The AVEs have attracted much interest since they often occur in many significant mathematical programming problems, including linear programs, quadratic programs, bimatrix games, and the linear complementarity problem (LCP); see [3, 5, 4, 6] and the references therein. For instance, the AVEs (1) is equivalent to the LCP of determining a vector such that
Although the AVEs in  are NP-hard, a large number of theoretical results, numerical methods, and applications have so far been developed. For instance, among the theoretical results, even though determining the existence of a solution to the AVEs in [5, 20] is NP-hard, and checking whether the AVEs in  has a unique solution or multiple solutions is also NP-complete, some very important conclusions still exist; in particular, some necessary and sufficient conditions ensuring the existence and uniqueness of the solutions of the AVEs (1), (2) and (3) were established for any , see [7, 8, 2, 9].
Likewise, solving the AVEs in  is NP-hard. This may be due to the fact that the AVEs contain a non-linear and non-differentiable absolute value operator. Even so, some efficient numerical methods have been developed, such as the generalized Newton method , the Newton-based matrix splitting method , the exact and inexact Douglas-Rachford splitting method , the Picard-HSS method , the sign accord method , the concave minimization method , the Levenberg-Marquardt method , and so on.
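For illustration, the following Python sketch implements a generalized-Newton-type iteration for the standard AVEs, written here in the common form A x - |x| = b; each step solves the linearized system obtained by replacing |x| with D(x_k) x, where D(x) = diag(sign(x)). The function name and stopping rule are our own choices, and the sketch assumes sigma_min(A) > 1 so that every linearized system is nonsingular; it is not the exact scheme of the cited works.

```python
import numpy as np

def generalized_newton_ave(A, b, tol=1e-10, max_iter=100):
    """Generalized-Newton-type iteration for the standard AVEs  A x - |x| = b.

    Each step solves the linearized system  (A - D(x_k)) x_{k+1} = b,
    where D(x) = diag(sign(x)).  The sketch assumes sigma_min(A) > 1,
    so every matrix A - D(x_k) is nonsingular.
    """
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x_new = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

In practice the iteration often terminates after only a few steps, since once the sign pattern of the iterate matches that of the solution, the next linear solve is exact.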
As an important application of the AVEs, to the best of our knowledge, the AVEs were first viewed as a very effective tool for obtaining the numerical solution of the LCP in , called the modulus method. At present, this numerical method has developed rapidly and many variants of it have been proposed, see [18, 19] and the references therein. Since the modulus method has the advantages of simple construction and fast convergence, it is often regarded as a method of choice for solving large-scale and sparse complementarity problems (CPs).
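As a minimal sketch of this idea (assuming the LCP written as: find z >= 0 with w = M z + q >= 0 and z^T w = 0), the substitution z = |x| + x, w = |x| - x turns the LCP into the fixed-point equation x = (I + M)^{-1}((I - M)|x| - q), which can be iterated directly. The code below is an illustration only, with convergence guaranteed, e.g., when the iteration matrix (I + M)^{-1}(I - M) has 2-norm less than one (as for symmetric positive definite M); it is not the exact modulus scheme of the cited references.

```python
import numpy as np

def modulus_method_lcp(M, q, tol=1e-10, max_iter=500):
    """Basic modulus iteration for the LCP: find z >= 0 with
    w = M z + q >= 0 and z^T w = 0.

    The substitution z = |x| + x, w = |x| - x yields the fixed-point
    equation x = (I + M)^{-1}((I - M)|x| - q), which is iterated here.
    """
    n = M.shape[0]
    I = np.eye(n)
    T = np.linalg.solve(I + M, I - M)   # iteration matrix (I + M)^{-1}(I - M)
    c = np.linalg.solve(I + M, q)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = T @ np.abs(x) - c
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x = x_new
    # Recover the complementary pair; note z_i * w_i = |x_i|^2 - x_i^2 = 0.
    return np.abs(x) + x, np.abs(x) - x
```

Note that the complementarity condition holds exactly by construction, which is one reason this transformation is attractive for large sparse problems.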
In addition to the above aspects of the AVEs, another very important problem is the sensitivity and stability analysis of the AVEs, i.e., how the solution varies when the data are perturbed. More specifically, when are the perturbation terms of in (1) and (2), respectively, how do we characterize the change in the solution of the perturbed AVEs? In this regard, to our knowledge, the perturbation analysis of the AVEs (1) and (2) has not been discussed. In addition, for the error bound of the AVEs, under the assumption of the strongly monotone property, a global projection-type error bound was provided in . Obviously, this kind of conditional projection-type error bound is often of limited use. Therefore, based on these considerations, in this paper we discuss in depth the error bounds and the perturbation bounds of the AVEs. Firstly, by introducing a class of absolute value functions, the framework of error bounds for the AVEs is presented without any constraints. Without limiting the matrix type, some computable estimates for their upper bounds are given. These bounds are sharper than the existing bounds in  under certain conditions. Secondly, we establish the framework of perturbation bounds for the AVEs and present some computable upper bounds. It is pointed out that when the nonlinear term in (1) vanishes, the presented perturbation bounds reduce to the classical perturbation bounds for linear systems , including Theorem 1.3 in numerical linear algebra textbooks  and Theorem 2.1 in . Thirdly, as another aspect of applications, by making use of the absolute value equations, we convert the HLCP into equivalent absolute value equations, obtain the frameworks of error bounds and perturbation bounds for the HLCP, and gain some computable upper bounds without limiting the matrix type.
In particular, two new equivalent error bounds for the LCP are exploited; concomitantly, three new computable upper bounds are obtained, which, under proper conditions, are sharper than those in  when the system matrix is an -matrix. Further, without restrictive conditions, we display a new framework of perturbation bounds for the LCP and obtain three new computable upper bounds that have advantages over those in  when the system matrix is a symmetric positive definite matrix or an -matrix. Fourthly, a new approach to some existing perturbation bounds in  for the LCP is provided as well. Finally, to show the efficiency of some of the proposed bounds, some numerical examples for the AVEs arising from the LCP are investigated.
The rest of the article is organized as follows. Section 2 provides the framework of error bounds for the AVEs by introducing a class of absolute value functions. In Section 3, some perturbation bounds for the AVEs are provided. In Section 4, the frameworks of error bounds and perturbation bounds for the HLCP are presented by using the AVEs. In Section 5, some numerical examples for the AVEs arising from the LCP are given to show the feasibility of some of the perturbation bounds. Finally, Section 6 concludes the paper.
Let and . Then we denote . is called an -matrix if and () for ; an -matrix if its comparison matrix (i.e., for ) is an -matrix; an -matrix if is an -matrix with for ; a -matrix if all principal minors of are positive. Let , and
denote the spectral radius, the smallest singular value and the largest singular value of matrix, respectively. For two vectors, by and we denote , and . The norm means -norm, i.e., with .
2 Error bound
In this section, without further illustration, we always assume that the matrix or is nonsingular for any diagonal matrix with , so that the AVEs (1) or (2) has a unique solution, respectively. Under this premise, we can give the framework of error bounds on the distance between an approximate solution and the exact solution of the AVEs (1) and (2).
2.1 Framework of error bounds for AVEs
In this subsection, the framework of error bounds for the AVEs is obtained. To achieve our goal, the following absolute value function is introduced, see Lemma 2.1. Let be any two vectors in . Then there exist such that
Its proof is straightforward and is omitted.
where with , which immediately yields the error bounds for the AVEs (1), see Theorem 2.2.
Let be the unique solution of AVEs (1). Then for any and with ,
By using the same technique for the AVEs (2), we have
Let be the unique solution of AVEs (2). Then for any and with ,
When in Theorem 2.2 or Theorem 2.3, the error bounds for the AVEs (3) can be obtained, see Corollary 2.1.
Let be the unique solution of AVEs (3). Then for any and with ,
Now, we show that the error bounds in Corollary 2.1 are sharper than those in Theorem 2.4. Here, we consider for Corollary 2.1 and Theorem 2.4. Let the assumptions of Corollary 2.1 and Theorem 2.4 be satisfied. Then
Firstly, we prove (10). Since is a convex polyhedron, the maximum value of is attained at a vertex of , i.e.,
from which we prove (10).
Next, we prove (11). From , we know . Further,
By the Banach perturbation lemma in , clearly,
By a simple computation, we have
it suffices to show that
In fact, the inequality (12) holds because
This proves (11).
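To make the role of the absolute value function of Lemma 2.1 concrete, consider the standard AVEs (3) written as A x - |x| = b and assume sigma_min(A) > 1. Writing |x~| - |x*| = D (x~ - x*) for some diagonal D with entries in [-1, 1], and using sigma_min(A - D) >= sigma_min(A) - 1, yields the computable residual bound ||x~ - x*|| <= ||A x~ - |x~| - b|| / (sigma_min(A) - 1). The following Python sketch (our own illustration, not a result quoted from the cited references) evaluates this bound.

```python
import numpy as np

def ave_error_bound(A, b, x_approx):
    """Computable residual-based error bound for  A x - |x| = b.

    Assumes sigma_min(A) > 1, which guarantees a unique solution x*.
    Since |x_approx| - |x*| = D (x_approx - x*) for a diagonal D with
    entries in [-1, 1], and sigma_min(A - D) >= sigma_min(A) - 1,
        ||x_approx - x*|| <= ||A x_approx - |x_approx| - b|| / (sigma_min(A) - 1).
    """
    s_min = np.linalg.svd(A, compute_uv=False)[-1]
    if s_min <= 1.0:
        raise ValueError("this bound requires sigma_min(A) > 1")
    residual = np.linalg.norm(A @ x_approx - np.abs(x_approx) - b)
    return residual / (s_min - 1.0)
```

The bound requires only one singular value decomposition and one residual evaluation, which is what makes it computable in the sense of this section.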
From the proof of Theorem 2.5, we obtain some interesting results for the matrix norm, see Proposition 2.1.
The following statements hold:
For and in ,
Let and in . Then for ,
2.2 Estimations of , and
In Section 2.1, we gave some error bounds for the AVEs (1), (2) and (3) in Theorem 2.2, Theorem 2.3, and Corollary 2.1, respectively. From the proof of Theorem 2.5, it is not difficult to find that
However, in general, the quantities , and are difficult to compute because they involve an arbitrary with . To overcome this disadvantage, in this subsection we explore some computable estimates for , and .
In the following, we focus on estimating the value of . For , the process is completely analogous, with being just a special case. To present reasonable estimates for , we consider three cases: (1) ; (2) ; (3) . In each of these three cases, the AVEs (1) has a unique solution for any , see Theorem 2 in , Theorem 2.1 in , and Corollary 3.2 in . Similarly, for , we consider three cases: (1) ; (2) ; (3) . In each of these three cases, the AVEs (2) also has a unique solution for any , see Corollary 3.3 in , Lemma 2.2 in , and Corollary 3.2 in .
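The singular-value-type conditions above can be checked directly. As a small illustration (assuming the general form A x + B |x| = b, so that B = -I recovers the standard AVEs), one well-known sufficient condition for unique solvability for every right-hand side is sigma_min(A) > sigma_max(B); the helper below is hypothetical and tests only this one condition.

```python
import numpy as np

def unique_solvability_check(A, B):
    """Sufficient condition for  A x + B|x| = b  to have a unique
    solution for every b: the smallest singular value of A exceeds
    the largest singular value of B.  (For B = -I this reduces to
    sigma_min(A) > 1.)  Only a sufficient test, not a characterization.
    """
    sigma_min_A = np.linalg.svd(A, compute_uv=False)[-1]
    sigma_max_B = np.linalg.svd(B, compute_uv=False)[0]
    return bool(sigma_min_A > sigma_max_B)
```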
2.2.1 Case I
Assume that matrices and in (1) satisfy
We can present a reasonable estimate for , see Theorem 2.6.
Let in (1). Then
Since , by Theorem 8.1.18 of , we get
from which the desired bound (13) follows.
Similarly to the proof of Theorem 2.6, for , we have the following result. Let in (2). Then
Naturally, for , we have
2.2.2 Case II
This completes the proof of Theorem 2.8.
For , we have an analogous result, see Theorem 2.9.
Let in (2). Then
It is easy to see that the upper bound in Corollary 2.3 is still sharper than that in Theorem 2.4, i.e.,
which is equal to
In fact, by using Proposition 2.1, we have
The conditions in Theorems 2.6 and 2.8 do not contain each other; e.g., take
By a simple computation,
This shows that the matrices and satisfy the condition in Theorem 2.8 but not the condition in Theorem 2.6. Now we take
By a simple computation,
This shows that the matrices and satisfy the condition in Theorem 2.6 but not the condition in Theorem 2.8.
2.2.3 Case III
Here, we consider the case where is nonsingular and in (1). Based on this, for we have Theorem 2.11.
Let be nonsingular and in (1). Then
From Corollary 3.2 in , is nonsingular under the assumptions. So, we have
Noting that is equal to and
Making use of the Banach perturbation lemma in  leads to
This completes the proof of Theorem 2.11.
For , we have
Let be nonsingular and in (2). Then
Comparing Theorem 2.8 with Theorem 2.11, it is easy to find that
However, this does not mean that Theorem 2.11 is weaker than Theorem 2.8, because Theorem 2.11 requires to be nonsingular. Besides, we point out that Theorem 2.11 sometimes performs better than Theorem 2.8, and vice versa. To illustrate this, we take
Obviously, is nonsingular,
This shows that the conditions in both Theorem 2.8 and Theorem 2.11 are satisfied. From Theorem 2.8 and Theorem 2.11, we have
which shows that the upper bound in Theorem 2.11 is sharper than that in Theorem 2.8. Now, we take
Likewise, is nonsingular,
This implies that the conditions in both Theorem 2.8 and Theorem 2.11 are satisfied as well. From Theorem 2.8 and Theorem 2.11, we have
which implies that the upper bound in Theorem 2.8 is sharper than that in Theorem 2.11.
3 Perturbation bound
In this section, we focus on the perturbation analysis of the AVEs when and are perturbed. For instance, for the AVEs (1), when , and are the perturbation terms of , and , respectively, how do we characterize the change in the solution of the following perturbed AVEs (16)?
For the AVEs (1), firstly, we consider the following special case
which is equivalent to
where with . Taking norms on both sides of (18) and noting that
for any with , we have
Moreover, from the AVEs (1) with , it is easy to check that
from which we immediately get
Based on the assumptions, the AVEs (1) is equal to
Noting that for from Theorem 2.6, together with Theorem 3.4, Theorem 3.6 is obtained; its advantage is that it successfully avoids any diagonal matrix with .
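The flavor of such perturbation bounds can be checked numerically for the standard AVEs A x - |x| = b with sigma_min(A) > 1. Applying the Lemma-2.1-type argument to the perturbed equation (A + dA) x~ - |x~| = b + db gives the computable bound ||x~ - x|| <= (||db|| + ||dA|| ||x~||) / (sigma_min(A) - 1). The script below (an illustration on data of our own choosing, not one of the numerical examples of Section 5) verifies this bound on a small instance; the solver is a generalized-Newton-type sketch.

```python
import numpy as np

def solve_ave(A, b, tol=1e-12, max_iter=100):
    """Generalized-Newton-type sketch for A x - |x| = b; assumes
    sigma_min(A) > 1 so each linearized system is nonsingular."""
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x_new = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Small perturbation experiment: the solution change is covered by the
# computable bound (||db|| + ||dA||_2 ||x_pert||) / (sigma_min(A) - 1).
A = np.array([[5.0, 1.0], [0.0, 5.0]])
b = np.array([2.0, -3.0])
dA = 1e-3 * np.array([[1.0, -1.0], [2.0, 0.5]])
db = 1e-3 * np.array([-1.0, 1.0])

x = solve_ave(A, b)
x_pert = solve_ave(A + dA, b + db)

s_min = np.linalg.svd(A, compute_uv=False)[-1]
bound = (np.linalg.norm(db) + np.linalg.norm(dA, 2) * np.linalg.norm(x_pert)) / (s_min - 1.0)
```

Subtracting the unperturbed from the perturbed equation gives (A - D)(x~ - x) = db - dA x~ for a diagonal D with entries in [-1, 1], from which the bound follows by sigma_min(A - D) >= sigma_min(A) - 1.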