
Perturbation bounds for the matrix equation X + A^* 𝒳^-1 A = Q

Consider the matrix equation X + A^* 𝒳^-1 A = Q, where Q is an n × n Hermitian positive definite matrix, A is an mn × n matrix, and 𝒳 = diag(X, …, X) is the block diagonal matrix with m copies of X on its diagonal. In this paper, a perturbation bound for the maximal positive definite solution X_L is obtained. Moreover, in the case ||𝒳_L^-1 A|| > 1 a modification of the main result is derived. The theoretical results are illustrated by numerical examples.


1 Introduction

In this paper we study perturbation bounds for the matrix equation

X + A^* 𝒳^-1 A = Q,     (1)

where Q is an n × n Hermitian positive definite matrix, A is an mn × n matrix, 𝒳 = Diag(X) is the block diagonal matrix defined by 𝒳 = diag(X, …, X) (m copies of X on the diagonal), in which X is an n × n matrix, and A^* is the conjugate transpose of the matrix A.

Eq. (1) can be written as

X + A_1^* X^-1 A_1 + ⋯ + A_m^* X^-1 A_m = Q,     (2)

where A_i (i = 1, …, m) are n × n matrices, and A = (A_1^*, A_2^*, …, A_m^*)^*.
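
As a quick numerical sanity check of the block identity behind this rewriting (the data below are random and chosen only for illustration), one can verify that A^* Diag(X)^-1 A coincides with the sum of the block contributions A_i^* X^-1 A_i:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 4
    # random Hermitian positive definite X and random mn x n matrix A
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X = X @ X.conj().T + n * np.eye(n)
    A = rng.standard_normal((m * n, n)) + 1j * rng.standard_normal((m * n, n))

    Diag_X = np.kron(np.eye(m), X)                 # Diag(X) = I_m (x) X
    lhs = A.conj().T @ np.linalg.solve(Diag_X, A)  # A^* Diag(X)^{-1} A
    rhs = sum(A[i*n:(i+1)*n].conj().T @ np.linalg.solve(X, A[i*n:(i+1)*n])
              for i in range(m))
    print(np.linalg.norm(lhs - rhs))               # agreement up to rounding error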

Moreover, Eq. (1) can be reduced to

(3)

by multiplying both sides of (1) by the matrix , where  is the identity matrix. Thus, Eq. (1) is solvable if and only if Eq. (3) is solvable.

The maximal positive definite solution of Eq. (1) with m = 1 has many applications in ladder networks, control theory, dynamic programming, stochastic filtering, etc.; see for instance [1, 2, 3] and the references therein. Since 1990, Eq. (1) with m = 1 has been extensively studied, and the research results have mainly concentrated on the following: sufficient and necessary conditions for the existence of a positive definite solution [1, 2, 4]; numerical methods for computing the positive definite solution [3, 5, 6, 7]; properties of the positive definite solution [3, 4]; and perturbation bounds for the positive definite solution [8, 9, 10, 11, 12].

Eq. (3) was introduced by Long et al. [13] for m = 2 and by He and Long [14] for the general case. Later, Eqs. (1) and (3) were investigated by many authors [15, 16, 17, 18, 19, 20, 21, 22]. Bini et al. [23] have considered the equation arising in tree-like stochastic processes.

Long et al. [13] have given some necessary and sufficient conditions for the existence of a positive definite solution of Eq. (3) in the case m = 2, and proposed the basic fixed point iteration and its inversion-free variant for finding the largest positive definite solution of that equation. Vaezzadeh et al. [18] have also considered inversion-free iterative methods for (1) when . Hasanov and Ali [19] improved the results of Vaezzadeh et al. in [18] and gave the convergence rate of the considered methods. Popchev et al. [16, 17] have made a perturbation analysis of (3) for .

He and Long [14] have proposed a basic fixed point iteration and its inversion-free variant for finding the maximal positive definite solution of Eq. (3). Hasanov and Hakkaev in [20] considered Newton's method for Eq. (1), and in [21] gave the convergence rate of the basic fixed point iteration and its two inversion-free variants, and considered a modification of Newton's method with a linear rate of convergence. Duan et al. [15] have derived a perturbation bound for the maximal positive definite solution of Eq. (3) based on matrix differentiation. Hasanov and Borisova [22] obtained two perturbation bounds, which do not require the maximal solution of the perturbed or the unperturbed equation. In addition, many authors have investigated similar or more general nonlinear matrix equations [24, 25], [26, 27], [28, 29], [30], [31], and [32, 33].

Motivated by the work in the above papers, we continue to study Eq. (1). Here, we derive new perturbation bounds for the maximal solution of Eq. (1) by generalizing the results in [11, 12]. Our bounds are much cheaper to compute because they use very simple formulas.

The rest of the paper is organized as follows. In Section 2 we give some preliminaries for the perturbation analysis. The main result and some known perturbation bounds are presented in Section 3. Three illustrative examples are provided in Section 4. The paper closes with concluding remarks in Section 5.

Throughout this paper, we denote by H(n) the set of all n × n Hermitian matrices. The notation X > 0 (X ≥ 0) means that X is positive definite (semidefinite). If X − Y > 0 (or X − Y ≥ 0) we write X > Y (or X ≥ Y). I (or I_n) stands for the identity matrix of order n. A Hermitian solution X_L we call maximal if X_L ≥ X for an arbitrary Hermitian solution X. The symbols ρ(·), ||·||_2, ||·||_F, and ||·|| stand for the spectral radius, the spectral norm, the Frobenius norm, and any unitarily invariant matrix norm, respectively. For a complex matrix A and a matrix B, A ⊗ B is the Kronecker product. Finally, for a matrix Y, we denote by Diag(Y) the block diagonal matrix with m copies of Y on its diagonal, i.e., Diag(Y) = diag(Y, …, Y) = I_m ⊗ Y; in particular, 𝒳 = Diag(X).

2 Statement of the problem and preliminaries

It is proved in [14] that if Eq. (3) has a positive definite solution, then it has a maximal Hermitian solution X_L. Moreover, if , then Eq. (3) with  has a maximal positive definite solution X_L, , and it is the unique solution with these properties. These results are valid for Eq. (1) as well, i.e., if Eq. (1) has a positive definite solution, then it has a maximal solution X_L. If , then Eq. (1) has a maximal positive definite solution X_L, , and it is the unique solution with these properties. Moreover, these results have been generalized to the equation  by Yin et al. in [31].
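
For readers who wish to experiment numerically, the following is a minimal sketch of a basic fixed point iteration of the kind mentioned above; the concrete form X_{k+1} = Q - A^* Diag(X_k)^-1 A with X_0 = Q is assumed here only for illustration, and [13, 14, 20, 21] should be consulted for the precise iterations and their convergence conditions.

    import numpy as np

    def maximal_solution(A, Q, m, tol=1e-12, max_iter=1000):
        """Approximate the maximal positive definite solution of
        X + A^* Diag(X)^{-1} A = Q, where A is (m*n) x n and Q is n x n."""
        X = Q.copy()
        for _ in range(max_iter):
            Diag_X = np.kron(np.eye(m), X)
            X_new = Q - A.conj().T @ np.linalg.solve(Diag_X, A)
            if np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(Q, 'fro'):
                return X_new
            X = X_new
        return X

When a positive definite solution exists, the iterates of this sketch decrease monotonically from Q, so the returned matrix approximates the maximal solution X_L from above.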

Now, we show that the condition  for the existence of a maximal positive definite solution X_L, for which , can be replaced with .

Lemma 2.1.

If , then Eq. (3) has a maximal solution and . Moreover, for any other solution .

Proof. For , we define a set of matrices as follows

We consider a map . Thus, all the solutions of Eq. (3) are fixed points of . The map is continuous on . We prove that .

Let , then

Therefore, , and according to Schauder's fixed point theorem [35] there exists a matrix  such that , i.e.,  is a solution of Eq. (3). It is obvious that the maximal solution . Now, we prove that  is the unique solution in .

Let  and  be two solutions of Eq. (3) in . We have

Since , it follows that .

Remark 2.2.

By Lemma 2.1 we have that if , then Eq. (1) has a maximal solution X_L and . Moreover, X_L is the unique solution in .

Hasanov and Hakkaev in [20] have obtained

(4)

Moreover, we have (see [25])

(5)
Lemma 2.3.

[34] Let , , be matrices, and . Then

  1. if , then the equation has a unique solution , and , when ;

  2. if there is some such that is positive definite, then .

Lemma 2.4.

Let be a positive definite solution of Eq. (1) with . If

(6)

then , i.e., the maximal solution X_L is the unique positive definite solution which satisfies condition (6).

Proof. Let  be a positive definite solution of Eq. (1) which satisfies condition (6) and let  be the maximal solution. Since  and , we have

Thus,

which implies that is a solution of the equation , where

By Lemma 2.3 (1) we have that . But  is the maximal solution, i.e., . Hence, .

Remark 2.5.

We have the following hypothesis: the maximal solution X_L is the unique positive definite solution of Eq. (1) which satisfies condition (4).

Lemma 2.6.

Let  be a positive definite solution of Eq. (1). If there is a positive definite matrix  such that , then  is the maximal solution, i.e., .

Proof. Let  be a positive definite solution of Eq. (1) and let  be a positive definite matrix such that . Then

Therefore, is a positive definite solution of the equation

(7)

with and .

Since

by Lemma 2.4 and (5), it follows that  is the maximal solution of Eq. (7). Let  be the maximal solution of Eq. (1), i.e., . Then  is a positive definite solution of Eq. (7) and , i.e., . Hence, .

Consider the perturbed equation

X̃ + Ã^* Diag(X̃)^-1 Ã = Q̃,     (8)

where à = A + ΔA and Q̃ = Q + ΔQ. The matrices ΔA and ΔQ are small perturbations of the coefficient matrices A and Q in Eq. (1), such that Q̃ remains Hermitian positive definite.

We suppose that Eq. (1) has a maximal positive definite solution X_L. The main question is: how large may the perturbations ΔA and ΔQ in the coefficient matrices A and Q be so that Eq. (8) still has a maximal positive definite solution X̃_L? The second question is: how large is the perturbation X̃_L − X_L when the perturbations ΔA and ΔQ in A and Q are small?
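
Purely as an illustration of the second question (the data and the helper iteration below are chosen ad hoc and are not part of the paper's analysis), one can compare the maximal solutions of (1) and (8) for a small random perturbation and measure the relative change ||X̃_L − X_L|| / ||X_L||, which is the quantity estimated by the bounds in the next section:

    import numpy as np

    def solve_max(A, Q, m, iters=500):
        # basic fixed point sketch, as above: X_{k+1} = Q - A^* Diag(X_k)^{-1} A
        X = Q.copy()
        for _ in range(iters):
            X = Q - A.conj().T @ np.linalg.solve(np.kron(np.eye(m), X), A)
        return X

    rng = np.random.default_rng(1)
    m, n, eps = 2, 3, 1e-6
    A = rng.standard_normal((m * n, n))
    Q = 20.0 * np.eye(n)                              # a well conditioned example
    dA = eps * rng.standard_normal(A.shape)
    dQ = eps * rng.standard_normal((n, n))
    dQ = (dQ + dQ.T) / 2                              # keep Q + dQ Hermitian

    X_L  = solve_max(A, Q, m)                         # maximal solution of (1)
    X_Lt = solve_max(A + dA, Q + dQ, m)               # maximal solution of (8)
    print(np.linalg.norm(X_Lt - X_L) / np.linalg.norm(X_L))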

3 Perturbation bounds

The questions in the previous section for Eq. (1) in the case m = 1 have been investigated by several authors [8, 9, 10, 11, 12]. Hasanov and Ivanov in [11] have obtained the following result.

Theorem 3.1.

[11, Theorem 2.1] Let be coefficient matrices for equations and . Let

where is the maximal positive definite solution of the equation . If

then , the perturbed matrix equation has the maximal positive definite solution , and

Moreover, a similar result has been obtained in [11] for the equation , which was generalized to the equation  by Yin and Fang [24].

Now, we derive new perturbation bounds for the maximal solution of Eq. (1) by generalizing Theorem 3.1 and its modification in [12]. First, we define  for an  matrix  and a unitarily invariant norm . Note that the values of  in the cases of the spectral norm  and the Frobenius norm  are  and , respectively.
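
For orientation, recall the elementary identities for the block diagonal matrix Diag(X) = I_m ⊗ X in the two norms just mentioned (these are standard facts, stated here for context and not as the definition of the quantity above):

    ||Diag(X)||_2 = ||I_m ⊗ X||_2 = ||X||_2,
    ||Diag(X)||_F = ||I_m ⊗ X||_F = √m ||X||_F.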

Theorem 3.2.

Let be coefficient matrices for Eqs. (1) and (8). Let

where and is the maximal positive definite solution of Eq. (1). If

(9)

then , the perturbed equation (8) has a maximal positive definite solution , and

(10)

Proof. Let be an arbitrary positive definite solution of Eq. (8). Subtracting (1) from (8) gives

where . Using the equalities

we obtain

(11)

Consider a map  defined in the following way:

Using the inequalities in (9), we have

(12)

which implies that

The quadratic equation

(13)

has two positive real roots, the smaller of which is

We define

For each we have

(14)

Thus is a nonsingular matrix and

(15)

According to the definition of , for each  we obtain

where the last inequality is due to the fact that  is a solution of the quadratic equation (13).

Thus for every , which means that . Moreover, is a continuous mapping on . According to Schauder’s fixed point theorem [35] there exists a such that . Hence there exists a solution of Eq. (11) for which

Let

(16)

Since  is a solution of Eq. (1) and  is a solution of Eq. (11),  is a Hermitian solution of the perturbed equation (8).

First, we prove that  is a positive definite solution, and second we prove that , i.e.,  is the maximal positive definite solution of Eq. (8).

Since  is a positive definite matrix, there exists a positive definite square root  of . From (16) we obtain

Since

then . Thus, is a positive definite solution of Eq. (8). We have to prove that .

Consider . By (12), (14), and (15), we have

Thus, from (5) and Lemma 2.4 (or Lemma 2.6 with ) it follows that is the maximal positive definite solution of Eq. (8), i.e., and .

Note that , and in some cases of Eq. (1) the coefficients  and  do not satisfy the condition .

Example 3.3.

Consider Eq. (1) with

where

is the maximal solution.

For Example 3.3 we have . Hence, the bound in Theorem 3.2 is not applicable. But , where .

Remark 3.4.

According to Remark 2.2, from it follows

In case of Example 3.3, .

Applying the technique developed in [12, 25], we obtain the following result.

Theorem 3.5.

Let be coefficient matrices for Eqs. (1) and (8). Let

where , is the maximal positive definite solution of Eq. (1), and is a positive definite matrix. If   and

then and

(17)

Proof. The proof is similar to the proof of Theorem 3.2, using the technique in [12, Theorem 2.4] and [25, Theorem 2]. Moreover, we use Lemma 2.6 to prove that  is the maximal solution of the perturbed equation (8).

Now, we describe some known perturbation bounds.

Xu in [8] has obtained an elegant bound in the case m = 1, which does not require the solution of the perturbed or the unperturbed equation. This bound has been generalized to the case  in [22] and to  in [15].

Theorem 3.6.

[22, Theorem 3] Let