1 Introduction
Let $\mathcal{S}^n$ denote the linear space of all $n \times n$ real symmetric matrices, and let $\mathcal{S}^n_+$ and $\mathcal{S}^n_{++}$ be the cones of symmetric positive semidefinite and symmetric positive definite matrices, respectively.
The semidefinite linear complementarity problem (SDLCP) consists in finding a pair of matrices $(X, Y) \in \mathcal{S}^n \times \mathcal{S}^n$ that satisfies the following conditions
(1) $Y = \mathcal{L}(X) + Q, \quad X \succeq 0, \quad Y \succeq 0, \quad XY = 0,$
where $\mathcal{L} : \mathcal{S}^n \to \mathcal{S}^n$ is a given linear transformation and $Q \in \mathcal{S}^n$.
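The defining conditions of an SDLCP solution can be checked numerically. The following sketch verifies them for a tiny assumed instance; $\mathcal{L}(X) = X$ (the identity map, which is linear and monotone) and $Q = \operatorname{diag}(1, -1)$ are illustrative choices, not data from this paper.

```python
import numpy as np

# Illustrative toy data (NOT from the paper): L(X) = X and Q = diag(1, -1).
L = lambda X: X
Q = np.diag([1.0, -1.0])

# Candidate solution pair.
X = np.diag([0.0, 1.0])
Y = np.diag([1.0, 0.0])

assert np.allclose(Y, L(X) + Q)            # Y = L(X) + Q
assert np.all(np.linalg.eigvalsh(X) >= 0)  # X positive semidefinite
assert np.all(np.linalg.eigvalsh(Y) >= 0)  # Y positive semidefinite
assert np.isclose(np.trace(X @ Y), 0.0)    # complementarity Tr(XY) = 0
```

For positive semidefinite $X$ and $Y$, $\operatorname{Tr}(XY) = 0$ is equivalent to $XY = 0$, which is why the trace test suffices.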
Interior point methods (IPMs) have been known for several decades, since their invention by Karmarkar in 1984 [7]; an important contribution was later made by Nesterov and Todd [17]. These methods are powerful tools for solving linear optimization (LO) problems and can be extended to more general settings such as the complementarity problem (CP), semidefinite optimization (SDO), and the semidefinite linear complementarity problem (SDLCP).
The SDLCP can also be viewed as a generalization of the standard linear complementarity problem (LCP), and it includes the geometric monotone semidefinite linear complementarity problem introduced in [9]. It has therefore been the object of many studies in recent years and has important applications in mathematical programming and in various areas of engineering and science (see [11], [16]).
Because of their polynomial complexity and their practical efficiency, primal-dual path-following methods are the most attractive interior point methods for solving a wide range of optimization problems [12], [14], [18], [19]. These methods use kernel functions to determine new search directions and new proximity functions to analyze the complexity of the resulting algorithms; the kernel function thus plays a central role in the design of new primal-dual interior point algorithms.
Such methods were introduced by Bai et al. [2] and El Ghami [4] for (LO) and (SDO), and have been extended by many authors to other problems in mathematical programming [1], [3], [5], [8], [10].
With this approach, the polynomial complexity of large-update primal-dual algorithms improves on the classical complexity obtained with logarithmic barrier functions.
A kernel function $\psi$ is a univariate strictly convex function defined for all $t > 0$ that attains its minimum at $t = 1$, with minimal value $\psi(1) = 0$. In other words, $\psi$ is a kernel function when it is twice differentiable and satisfies the following conditions: $\psi(1) = \psi'(1) = 0$, $\psi''(t) > 0$ for all $t > 0$, and $\lim_{t \to 0^+} \psi(t) = \lim_{t \to \infty} \psi(t) = +\infty$.
We can describe $\psi$ by its second derivative, as follows:
(2) $\psi(t) = \int_1^t \int_1^x \psi''(y) \, dy \, dx.$
This function may be extended to a scaled barrier function $\Psi$ defined from $\mathcal{S}^n_{++}$ to $\mathbb{R}_+$ by $\Psi(V) = \sum_{i=1}^n \psi(\lambda_i(V))$, where $V$ is a symmetric positive definite matrix.
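As an illustration of these conditions, the classical logarithmic kernel $\psi(t) = \frac{t^2 - 1}{2} - \log t$ (a standard example from the literature, used here only because the paper's new kernel (3) is paper-specific) satisfies all of them:

```python
import numpy as np

# Classical logarithmic kernel (a stand-in for the paper's kernel (3)).
psi   = lambda t: (t**2 - 1) / 2 - np.log(t)
dpsi  = lambda t: t - 1 / t        # psi'(t): vanishes only at t = 1
ddpsi = lambda t: 1 + 1 / t**2     # psi''(t) > 0: strict convexity

assert np.isclose(psi(1.0), 0.0) and np.isclose(dpsi(1.0), 0.0)
assert np.all(ddpsi(np.linspace(0.01, 100, 1000)) > 0)
assert psi(1e-3) > psi(1e-2) and psi(1e3) > psi(1e2)  # grows toward 0 and infinity
```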
In this paper, we establish the polynomial complexity for (SDLCP) by introducing the following new kernel function
(3) 
The goal of this paper is to investigate this new kernel function and the corresponding barrier function, and to show that our large-update primal-dual algorithm has a favourable complexity bound thanks to the elegant analytic properties of this kernel function. Since the (SDLCP) is a generalization of (SDO), we lose the orthogonality of the search direction matrices. Therefore, the analysis of the search direction and of the step size differs slightly from the (SDO) case; this will be studied in detail later.
The paper is organized as follows. In Sect. 2, we present the generic primal-dual algorithm based on the Nesterov-Todd direction. The new kernel function and its growth properties for (SDLCP) are presented in Sect. 3. In Sect. 4, we derive the complexity results for the algorithm (an estimation of the step size and its default value, and the worst-case iteration complexity). In Sect. 5, some numerical results are provided. Finally, a conclusion is given in Sect. 6.
Throughout the paper we use the following notation and review some known facts about matrices and matrix functions that will be used in the analysis of the algorithm.
The expression $A \succeq 0$ ($A \succ 0$) means that $A$ is positive semidefinite (positive definite). The trace of a matrix $A$ is denoted by $\operatorname{Tr}(A)$. The Frobenius norm of a matrix $A$ is defined by $\|A\| = \sqrt{\operatorname{Tr}(A^T A)}$. For any $A \in \mathcal{S}^n$, $\lambda_1(A), \lambda_2(A), \dots, \lambda_n(A)$ denote its eigenvalues. $A^{1/2}$ denotes the symmetric square root, for any $A \in \mathcal{S}^n_+$. The identity matrix of order $n$ is denoted by $I$. The diagonal matrix with the vector $x$ on its diagonal is given by $\operatorname{diag}(x)$. We denote by $\lambda(A)$ the vector of eigenvalues of $A$, arranged in nonincreasing order, that is, $\lambda_1(A) \ge \lambda_2(A) \ge \dots \ge \lambda_n(A)$. For two real-valued functions $f, g : \mathbb{R}_{++} \to \mathbb{R}_{++}$, $f(t) = O(g(t))$ if $f(t) \le c\, g(t)$ for some positive constant $c$, and $f(t) = \Theta(g(t))$ if $c_1 g(t) \le f(t) \le c_2 g(t)$ for some positive constants $c_1$ and $c_2$.
Theorem 1.
(Spectral theorem for symmetric matrices [2]) The real matrix $A$ is symmetric if and only if there exists a matrix $Q$ such that $Q^T Q = I$ and $Q^T A Q = \Lambda$, where $I$ is the identity matrix and $\Lambda$ is a diagonal matrix.
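Numerically, the decomposition of Theorem 1 is exactly what `numpy.linalg.eigh` returns for a symmetric matrix; a quick check on an arbitrary example:

```python
import numpy as np

# Spectral theorem in practice: eigh returns an orthogonal Q with
# A = Q diag(lam) Q^T, hence Q^T A Q is diagonal.
A = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric
lam, Q = np.linalg.eigh(A)

assert np.allclose(Q.T @ Q, np.eye(2))          # Q is orthogonal
assert np.allclose(Q.T @ A @ Q, np.diag(lam))   # diagonalization
```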
Definition 1.
([4], Definition 3.2.1) Let $V \in \mathcal{S}^n$ be a symmetric matrix with spectral decomposition
(4) $V = Q^T \operatorname{diag}\left(\lambda_1(V), \lambda_2(V), \dots, \lambda_n(V)\right) Q,$
where $Q$ is any orthogonal matrix that diagonalizes $V$, and let $\psi$ be defined as in Eq. (3). The matrix-valued function $\psi(V)$ is defined by
(5) $\psi(V) = Q^T \operatorname{diag}\left(\psi(\lambda_1(V)), \psi(\lambda_2(V)), \dots, \psi(\lambda_n(V))\right) Q.$
Note that $\psi(V)$ depends only on the restriction of $\psi$ to the set of eigenvalues of $V$. Since $\psi$ is differentiable, the derivative $\psi'$ is well defined for $t > 0$. Hence, replacing $\psi$ in Eq. (5) by $\psi'$, we obtain that the matrix function $\psi'(V)$ is defined as well. Using $\psi(V)$, we define the barrier function (or proximity function) $\Psi(V)$ as follows:
(6) $\Psi(V) = \operatorname{Tr}(\psi(V)) = \sum_{i=1}^n \psi(\lambda_i(V)).$
When we use the function $\psi$ and its first three derivatives $\psi'$, $\psi''$ and $\psi'''$ without any specification, $\psi$ denotes a matrix function if the argument is a matrix and a univariate function (from $\mathbb{R}$ to $\mathbb{R}$) if the argument is in $\mathbb{R}$.
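Definition 1 translates directly into code: apply $\psi$ to the eigenvalues and recombine with the orthogonal factors. The sketch below again uses the classical kernel $\psi(t) = \frac{t^2-1}{2} - \log t$ as a stand-in for the paper's kernel (3) and checks the identity in Eq. (6):

```python
import numpy as np

# Matrix function psi(V) built from the spectral decomposition (4)-(5);
# the classical kernel stands in for the paper's kernel (3).
def mat_fun(f, V):
    lam, Q = np.linalg.eigh(V)             # V = Q diag(lam) Q^T
    return Q @ np.diag(f(lam)) @ Q.T

psi = lambda t: (t**2 - 1) / 2 - np.log(t)

V = np.array([[2.0, 0.5], [0.5, 1.0]])     # symmetric positive definite
Psi = np.trace(mat_fun(psi, V))            # barrier value, Eq. (6)
assert np.isclose(Psi, psi(np.linalg.eigvalsh(V)).sum())
```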
Some concepts related to matrix functions can be found in [5], [6].
2 Presentation of Problem
The feasible set, the strictly feasible set and the solution set of (1) are subsets of $\mathcal{S}^n \times \mathcal{S}^n$ denoted respectively by
$\mathcal{F} := \{(X, Y) \in \mathcal{S}^n_+ \times \mathcal{S}^n_+ : Y = \mathcal{L}(X) + Q\},$
$\mathcal{F}^0 := \mathcal{F} \cap \left(\mathcal{S}^n_{++} \times \mathcal{S}^n_{++}\right),$ and
$\mathcal{F}^* := \{(X, Y) \in \mathcal{F} : \operatorname{Tr}(XY) = 0\}.$
The solution set is nonempty and compact if the strictly feasible set is nonempty and $\mathcal{L}$ is monotone. As is well known, the basic idea of primal-dual IPMs is to relax the complementarity condition $XY = 0$ in problem (1), which gives the following parameterized system:
(7) $Y = \mathcal{L}(X) + Q, \quad XY = \mu I, \quad X \succ 0, \; Y \succ 0,$
where $\mu > 0$ and $I$ is the identity matrix.
Since $\mathcal{L}$ is a linear monotone transformation and (SDLCP) is strictly feasible (i.e., there exists a pair $(X^0, Y^0)$ with $X^0 \succ 0$ and $Y^0 \succ 0$), System (7) has a unique solution $(X_\mu, Y_\mu)$ for any $\mu > 0$. As $\mu \to 0$, the sequence $(X_\mu, Y_\mu)$ approaches the solution of problem (SDLCP).
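For an assumed toy instance $\mathcal{L}(X) = X$, $Q = \operatorname{diag}(1, -1)$ (an illustrative choice, not from the paper), the central path equation $XY = \mu I$ decouples into scalar quadratics, so the unique solution of (7) and its limit as $\mu \to 0$ can be verified directly:

```python
import numpy as np

# Toy central path: with L(X) = X and diagonal Q = diag(q), system (7)
# reduces to x_i (x_i + q_i) = mu; the positive root gives X_mu > 0.
def central_path_point(mu, q=np.array([1.0, -1.0])):
    x = (-q + np.sqrt(q**2 + 4 * mu)) / 2   # positive root of x(x+q) = mu
    return np.diag(x), np.diag(x + q)       # (X_mu, Y_mu), both PD

X_mu, Y_mu = central_path_point(1e-8)
assert np.allclose(X_mu @ Y_mu, 1e-8 * np.eye(2))   # X Y = mu I holds
# As mu -> 0 the path approaches the solution X = diag(0,1), Y = diag(1,0).
assert np.allclose(X_mu, np.diag([0.0, 1.0]), atol=1e-4)
```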
Suppose that the point $(X, Y)$ is strictly feasible. The natural way to define a search direction is to follow the Newton approach: linearize the equation $XY = \mu I$ in System (7) by replacing $X$ and $Y$ with $X + \Delta X$ and $Y + \Delta Y$, respectively. This leads to the following system:
(8) $\Delta Y = \mathcal{L}(\Delta X), \quad (X + \Delta X)(Y + \Delta Y) = \mu I,$
or equivalently, neglecting the second-order term $\Delta X \Delta Y$,
(9) $\Delta Y = \mathcal{L}(\Delta X), \quad X \Delta Y + \Delta X\, Y = \mu I - XY.$
In the general case, the Newton system (9) has a unique solution $(\Delta X, \Delta Y)$ that is not necessarily symmetric, because the right-hand side $\mu I - XY$ is not symmetric due to the product $XY$. Many researchers have proposed methods for symmetrizing the second equation in System (9) by using an invertible matrix $P$, replacing the term $XY$ by its symmetrization $H_P(XY) := \frac{1}{2}\left[P XY P^{-1} + \left(P XY P^{-1}\right)^T\right]$. Thus, we obtain
(10) $\Delta Y = \mathcal{L}(\Delta X), \quad H_P\left(X \Delta Y + \Delta X\, Y\right) = \mu I - H_P(XY).$
In [17], Todd studied several symmetrization schemes. Among them, we consider the Nesterov-Todd (NT) symmetrization scheme, where $P$ is defined as $P = W^{-1/2}$ with
$W = X^{1/2}\left(X^{1/2} Y X^{1/2}\right)^{-1/2} X^{1/2}.$
Let $D = W^{1/2}$, where $W^{1/2}$ denotes the symmetric square root of $W$. The matrix $D$ can be used to scale $X$ and $Y$ to the same matrix $V$ defined by
(11) $V := \frac{1}{\sqrt{\mu}}\, D^{-1} X D^{-1} = \frac{1}{\sqrt{\mu}}\, D Y D;$
thus we have
(12) $V^2 = \frac{1}{\mu}\, D^{-1} X Y D.$
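The NT scaling can be checked numerically: with $W = X^{1/2}(X^{1/2} Y X^{1/2})^{-1/2} X^{1/2}$ and $D = W^{1/2}$, the two expressions for $V$ in Eq. (11) coincide. A sketch with arbitrary positive definite $X$ and $Y$ (illustrative data, not from the paper):

```python
import numpy as np

# Symmetric square root via the spectral decomposition.
def msqrt(A):
    lam, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

X = np.array([[2.0, 0.3], [0.3, 1.0]])      # positive definite
Y = np.array([[1.5, -0.2], [-0.2, 0.8]])    # positive definite

# NT scaling point W satisfies W Y W = X, hence D^{-1} X D^{-1} = D Y D.
Xh = msqrt(X)
W = Xh @ np.linalg.inv(msqrt(Xh @ Y @ Xh)) @ Xh
D = msqrt(W)
Dinv = np.linalg.inv(D)

assert np.allclose(Dinv @ X @ Dinv, D @ Y @ D)   # both equal sqrt(mu) V
```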
Note that the matrices $D$ and $V$ are symmetric and positive definite. Let us further define
(13) $D_X := \frac{1}{\sqrt{\mu}}\, D^{-1} \Delta X D^{-1}, \quad D_Y := \frac{1}{\sqrt{\mu}}\, D \Delta Y D.$
So, by using Eqs. (11) and (13), System (10) becomes
(14) $D_Y = \widetilde{\mathcal{L}}(D_X), \quad D_X + D_Y = V^{-1} - V,$
where $\widetilde{\mathcal{L}}(D_X) := D\, \mathcal{L}(D D_X D)\, D$.
The linear transformation $\widetilde{\mathcal{L}}$ is also monotone on $\mathcal{S}^n$. Under our hypothesis, the new linear System (14) has a unique symmetric solution $(D_X, D_Y)$. These directions are not orthogonal, because in general $\operatorname{Tr}(D_X D_Y) = \operatorname{Tr}\left(D_X \widetilde{\mathcal{L}}(D_X)\right) \neq 0$; this is the only difference between the (SDO) and (SDLCP) problems.
So far, we have described the scheme that defines the classical NT direction. Now, following [4, 11, 12] and [18], we replace the right-hand side $V^{-1} - V$ of the second equation in System (14) by $-\psi'(V)$. Thus we will use the following system to define the new search directions:
(15) $D_Y = \widetilde{\mathcal{L}}(D_X), \quad D_X + D_Y = -\psi'(V).$
The new search directions $D_X$ and $D_Y$ are obtained by solving System (15), and then $\Delta X$ and $\Delta Y$ are computed via Eq. (13). By taking a step along these search directions with a step size $\alpha$ defined by some line search rule, we construct a new pair $(X_+, Y_+)$ according to $X_+ = X + \alpha \Delta X$ and $Y_+ = Y + \alpha \Delta Y$.
3 Properties of the New Kernel Function
In this section, we present the new kernel function and give those of its properties that are crucial in our complexity analysis.
Let
(16) 
We list the first three derivatives of the kernel function below:
(17) 
(18) 
(19) 
Lemma 1.
Proof.
Now let us define the proximity measure as follows:
(25) $\delta(V) := \frac{1}{2}\, \|\psi'(V)\| = \frac{1}{2} \sqrt{\sum_{i=1}^n \left(\psi'(\lambda_i(V))\right)^2}.$
Note that $\delta(V) = 0$ if and only if $V = I$, that is, if and only if $\Psi(V) = 0$.
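With the standard norm-based proximity $\delta(V) = \frac{1}{2}\|\psi'(V)\|$ (the usual choice in kernel-function IPMs; the classical kernel again stands in for the paper's kernel (3)), the property $\delta(V) = 0 \Leftrightarrow V = I$ can be checked numerically:

```python
import numpy as np

# Proximity measure delta(V) = (1/2) ||psi'(V)||_F computed on eigenvalues;
# psi'(t) = t - 1/t is the derivative of the classical stand-in kernel.
dpsi = lambda t: t - 1 / t

def delta(V):
    lam = np.linalg.eigvalsh(V)
    return 0.5 * np.sqrt(np.sum(dpsi(lam) ** 2))

assert np.isclose(delta(np.eye(3)), 0.0)          # vanishes exactly at V = I
assert delta(np.diag([2.0, 1.0, 0.5])) > 1.0      # positive away from I
```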
Theorem 2.
[14] Suppose that and are symmetric positive definite and is the real valued matrix function induced by the matrix function Then,
Lemma 2.
For , we have the following
 1.

is exponentially convex, for all ,
 2.

, ,
 3.

,
Now, let be the inverse function of , for all then we have the following lemma
Lemma 3.
For , we have
Theorem 3.
[4] Let be the inverse function of , . Then we have
Theorem 4.
Let and . If , then we have
4 Complexity Analysis
4.1 An estimation of the step size
The aim of this paper is to define a new kernel function and to obtain new complexity results for the (SDLCP) problem. During an inner iteration, we compute a default step size and the decrease of the proximity function.
After an inner iteration, new iterates $X_+ = X + \alpha \Delta X$ and $Y_+ = Y + \alpha \Delta Y$ are generated, where $\alpha$ is the step size and $\Delta X$, $\Delta Y$ are defined by (13).
On the other hand, from (11) and (13), we have $X_+ = \sqrt{\mu}\, D (V + \alpha D_X) D$ and $Y_+ = \sqrt{\mu}\, D^{-1} (V + \alpha D_Y) D^{-1}$, and it is clear that the matrix $V_+^2 = \frac{1}{\mu} D^{-1} X_+ Y_+ D$ is similar to the matrix $(V + \alpha D_X)^{1/2} (V + \alpha D_Y) (V + \alpha D_X)^{1/2}$. Assuming that $V + \alpha D_X \succ 0$ and $V + \alpha D_Y \succ 0$ for such a feasible step size $\alpha$, we deduce that they have the same eigenvalues.
Since the proximity after one step is defined by :
By Theorem (2), we have .
Define, for $\alpha > 0$, $f(\alpha) := \Psi(V_+) - \Psi(V)$ and $f_1(\alpha) := \frac{1}{2}\left(\Psi(V + \alpha D_X) + \Psi(V + \alpha D_Y)\right) - \Psi(V)$. Then $f(\alpha)$ is the difference of proximity between a new iterate and the current iterate for a fixed $\mu$, and $f(\alpha) \le f_1(\alpha)$.
It is easily seen that $f(0) = 0$ and $f_1(0) = 0$. Furthermore, $f_1$ is a convex function, since $\Psi$ is convex.
Now, to estimate the decrease of the proximity during one step, we need the first two derivatives of $f_1$ with respect to $\alpha$. By using the rules of differentiation for matrix functions [6], [14], we obtain
$f_1'(\alpha) = \frac{1}{2} \operatorname{Tr}\left(\psi'(V + \alpha D_X)\, D_X + \psi'(V + \alpha D_Y)\, D_Y\right)$
and
$f_1''(\alpha) = \frac{1}{2} \operatorname{Tr}\left(\psi''(V + \alpha D_X)\, D_X^2 + \psi''(V + \alpha D_Y)\, D_Y^2\right).$
Hence, by using (15) and (25), we obtain
$f_1'(0) = \frac{1}{2} \operatorname{Tr}\left(\psi'(V)(D_X + D_Y)\right) = -\frac{1}{2} \operatorname{Tr}\left(\psi'(V)^2\right) = -2\,\delta(V)^2.$
In what follows, we use the short notation $\delta := \delta(V)$.
Lemma 4.
[4, Lemma 3.4.4] One has
Lemma 5.
Lemma 6.
[4, Lemma 3.4.6] Let denote the inverse function of the restriction of to the interval (0,1]; then the largest possible value of the step size satisfying is given by
Lemma 7.
Let and be defined as in Lemma 6. Then
We need to compute , where is the inverse of for all . This implies
Using the definition of and (*), if , we have and
(26) 
(27) 
The next lemma shows that the proximity function with the default step size is decreasing.
Lemma 8.
[5, Lemma 3.4] Let $h$ be a twice differentiable convex function with $h(0) = 0$, $h'(0) < 0$, and let $h$ attain its (global) minimum at $t^* > 0$. If $h''$ is increasing for $t \in [0, t^*]$, then
(28) $h(t) \le \frac{t\, h'(0)}{2}, \quad 0 \le t \le t^*.$
The following lemmas provide an upper bound for the decrease of the proximity in an inner iteration.
Lemma 9.
For any satisfying , we have :
Proof.
.
4.2 Iteration bound
To return to the situation where $\Psi(V) \le \tau$ after a $\mu$-update, we have to count the number of inner iterations. Let the value of $\Psi(V)$ after the $\mu$-update be denoted by $\Psi_0$, and the subsequent values in the same outer iteration by $\Psi_k$, $k = 1, \dots, K$, where $K$ is the total number of inner iterations per outer iteration. Then we have
(30) 
Lemma 11.
Letting , ,
Lemma 12.
Let be the total number of inner iterations in the outer iteration. Then we have
Proof.
Using Lemma (11), we get the result. ∎
Now, we estimate the total number of iterations of our algorithm.
Theorem 5.
If , the total number of iterations is not more than
Proof.
In the algorithm, and . By simple computation, we have
By multiplying the number of outer iterations and the number of inner iterations, we get an upper bound for the total number of iterations, namely
This completes the proof. ∎
We assume that , and . Then the algorithm obtains a solution of the problem in at most iterations.
5 Numerical results
The main purpose of this section is to present three monotone SDLCPs for testing the effectiveness of the algorithm. The implementation was carried out in Matlab. Here, "inn", "out" and "T" denote the number of inner iterations, the number of outer iterations, and the running time of Algorithm 1, respectively. The choice of different values of the parameters shows their effect on reducing the number of iterations.
In all experiments, we use , , , and , the theoretical barrier parameter . We provide a feasible initial point such that IPC and are satisfied.
The first example is a monotone SDLCP defined by a two-sided multiplicative linear transformation [1]. The second is a monotone SDLCP which is equivalent to the symmetric semidefinite least squares (SDLS) problem, and the third is reformulated from the nonsymmetric semidefinite least squares (NSSDLS) problem [10]. In the second and third examples, $\mathcal{L}$ is the Lyapunov linear transformation.
Example 1.
The data of the monotone SDLCP is given by
where
and
The strictly feasible initial starting point is given by
The unique solution is given by
The numbers of inner and outer iterations and the running times for several choices of the parameters, obtained by Algorithm 1, are presented in the following tables.



inn / out / T  inn / out / T  inn /out / T  inn / out / T  
0.3  74 / 22 / 0.17  71 / 19 /0.17  65 / 15 / 0.15  61 / 12 / 0.15 
0.5  43 / 22 / 0.12  41 /19 /0.12  37 / 15 / 0.12  35 / 12 /0.10 
0.7  23 / 22 / 0.09  22 / 19 / 0.17  20 / 15 / 0.07  19 / 12 / 0.07 
0.9  22/ 22 / 0.10  21 / 19 / 0.15  18 / 15 / 0.07  16 / 12 / 0.09 
1  22 / 22 / 0.10  19 / 18 / 0.09  17 / 15 /0.07  15 / 12 / 0.07 
1/log(4)  7 / 22 / 0.07  16 / 19 / 0.09  12 / 15 / 0.09  25 / 12 /0.10 
1/log(4+1)  20 / 22 / 0.09  21 / 19 / 0.06  24 / 15 / 0.09  30 / 12 /0.10 



inn / out / T  inn / out / T  inn /out / T  inn / out / T  
0.3  58 / 7 / 0.14  57 / 6 /0.14  55 / 5 / 0.12  53 / 4 / 0.14 
0.5  31 / 7 / 0.10  31 /6 /0.10  30 / 5 / 0.09  28 / 4 /0.09 
0.7  21 / 7 / 0.09  20 / 6 / 0.07  19 / 5 / 0.09  18 / 4 / 0.09 
0.9  14/ 7 / 0.07  14 / 6 / 0.07  13 / 5 / 0.06  12 / 4 / 0.06 
1  9 / 7 / 0.06  8 / 6 / 0.06  8 / 5 /0.06  7 / 4 / 0.06 
1/log(4)  14 / 7 / 0.07  20 / 6 / 0.07  22 / 5 / 0.09  36 / 4 /0.10 
1/log(4+1)  22 / 7 / 0.07  24 / 6 / 0.09  30 / 5 / 0.09  36 / 4 /0.10 