# Index Reduction for Second Order Singular Systems of Difference Equations

This paper is devoted to the analysis of linear second order discrete-time descriptor systems (or singular difference equations (SiDEs) with control). Following the algebraic approach proposed by Kunkel and Mehrmann for pencils of matrix valued functions, we first present a theoretical framework based on a reduction procedure to analyze the solvability of initial value problems for SiDEs, followed by the analysis of descriptor systems. We also describe methods to analyze structural properties related to the solvability of these systems. Namely, two numerical algorithms for reduction to the so-called strangeness-free forms are presented. Two associated index notions are also introduced and discussed. This work extends and complements some recent results for high order continuous-time descriptor systems and first order discrete-time descriptor systems.


## 1 Introduction

In this paper we study second order discrete-time descriptor systems of the form

\[ A_n x(n+2) + B_n x(n+1) + C_n x(n) + D_n u(n) = f(n) \quad \text{for all } n \ge n_0. \tag{1} \]

We will also discuss the initial value problem of the associated singular difference equation (SiDE)

\[ A_n x(n+2) + B_n x(n+1) + C_n x(n) = f(n) \quad \text{for all } n \ge n_0, \tag{2} \]

together with some given initial conditions

\[ x(n_0+1) = x_1, \qquad x(n_0) = x_0. \tag{3} \]

Here the solution/state \(x(n) \in \mathbb{R}^m\), the inhomogeneity \(f(n) \in \mathbb{R}^m\), and the input \(u(n) \in \mathbb{R}^k\), where \(m, k \in \mathbb{N}\), for each \(n \ge n_0\). The three matrix sequences \(\{A_n\}_{n\ge n_0}\), \(\{B_n\}_{n\ge n_0}\), \(\{C_n\}_{n\ge n_0}\) take values in \(\mathbb{R}^{m\times m}\), and \(\{D_n\}_{n\ge n_0}\) takes values in \(\mathbb{R}^{m\times k}\). We notice that all the results in this paper can also be carried over to the complex case, and they can also easily be extended to systems of higher order. However, for the sake of simplicity, and because this is the most important case in practice, we restrict ourselves to real, second order systems.

The SiDE (2), on one hand, can be considered as the equation resulting from finite differences or discretization of some continuous-time DAEs or constrained PDEs. On the other hand, there are also many models/applications in real life which lead to SiDEs, for example Leontief economic models, the backward Leslie model in biology, etc.; see, e.g., Aga00 ; Ela13 ; Kel01 ; Lue79 .

While both DAEs and SiDEs of first order have been well studied from both theoretical and numerical points of view, the same maturity has not been reached for higher order systems. In the classical literature on regular difference equations, e.g. Aga00 ; Ela13 ; Kel01 , new variables are usually introduced to represent some chosen shifts of the state variable, so that a high order system can be reformulated as a first order one. Unfortunately, for singular systems this approach may induce some substantial disadvantages. As has been fully discussed in LosM08 ; MehS06 for continuous-time systems, these disadvantages include: (1st) an increase in the index of the singular system, and therefore in the complexity of a numerical method to solve it; (2nd) an increase in the computational effort due to the bigger size of the new system; (3rd) a possible loss of controllability/observability of the corresponding descriptor system, since there exist situations where the new system is uncontrollable while the original one is controllable. Therefore, the algebraic approach, which treats the system directly without reformulating it, has been presented in LosM08 ; MehS06 ; Wun06 ; Wun08 in order to overcome the disadvantages mentioned above. Nevertheless, even for second order SiDEs, this method has not yet been considered.

Another motivation of this work comes from recent research on the stability analysis of high order discrete-time systems with time-dependent coefficients LinNT16 ; MehT15 . In these works, systems are supposed to be given in either strangeness-free form or linear state-space form. This, however, is not always the case in applications, and hence, a reformulation procedure would be required.

Therefore, the main aim of this article is to set up a comparable framework for second order SiDEs and for discrete-time descriptor systems as well. It is worth remarking that the algebraic method proposed in LosM08 ; MehS06 is applicable theoretically but not numerically, for two reasons: (1st) the condensed forms of the matrix coefficients are large and complicated; (2nd) the system transformations are not orthogonal and hence not numerically stable. In this work, we modify this method to make it more concise and also computable in a stable way.

The outline of this paper is as follows. After giving some auxiliary results in Section 2, in Sections 3 and 4 we consecutively introduce index reduction procedures for SiDEs and for descriptor systems. A desired strangeness-free form and a constructive algorithm to get it will be presented in Theorem 3.8 and Algorithm 1 (Section 3). A resulting system from this algorithm allows us to fully analyze structural properties such as existence and uniqueness of a solution, consistency and hidden constraints, etc. For descriptor systems, where feedback also takes part in the regularization/solution procedure, besides the strangeness-free form presented in Theorem 4.7, regularization via first order feedback is discussed in Theorem 4.4. In order to get stable numerical solutions of these systems, in Section 5 we study the difference array approach in Algorithm 2 and Theorem 5.6 aiming at bringing out the strangeness-free form of a given system. Finally, we finish with some conclusions.

## 2 Preliminaries

In the following example we demonstrate some difficulties that may arise in the analysis of second order SiDEs.

###### Example 2.1.

Consider the following second order descriptor system, motivated by Example 2 of MehS06 .

\[ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} x(n+2) + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} x(n+1) + \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} x(n) - \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(n) = \begin{bmatrix} f_1(n) \\ f_2(n) \end{bmatrix}, \quad n \ge n_0. \tag{4} \]

Clearly, from the second equation \(x_1(n) - u(n) = f_2(n)\), we can shift the time forward to obtain \(x_1(n+1) = f_2(n+1) + u(n+1)\) and \(x_1(n+2) = f_2(n+2) + u(n+2)\).

Inserting these into the first equation of (4), we find out the hidden constraint

\[ f_2(n+2) + u(n+2) + f_2(n+1) + u(n+1) + \begin{bmatrix} 0 & 1 \end{bmatrix} x(n) = f_1(n). \]

Consequently, we deduce the following system, which possesses a unique solution

\[ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} x(n) = \begin{bmatrix} f_1(n) - f_2(n+2) - f_2(n+1) - u(n+2) - u(n+1) \\ u(n) + f_2(n) \end{bmatrix}, \quad n \ge n_0. \]

Setting \(n = n_0\) in this new system, we obtain a constraint that the initial value \(x(n_0) = x_0\) must obey. This example shows some important facts. Firstly, one can use shift operators and row manipulations (Gaussian eliminations) to derive hidden constraints. Secondly, a solution only exists if the initial conditions and the input fulfill certain consistency conditions. Finally, in this example the solution depends on future values of the input. This property is called non-causality, and it cannot happen in the case of regular difference equations.
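The computation in this example can be checked numerically. The sketch below (a NumPy illustration; the coefficient matrices are read off from the derivation above, with \(D = (0,1)^T\), which is our reading of the garbled display) builds the solution from the reduced system and verifies that it satisfies the original second order system, exhibiting the non-causal dependence on \(u(n+1)\), \(u(n+2)\):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
f = rng.standard_normal((N + 3, 2))   # f(n) = (f1(n), f2(n))
u = rng.standard_normal(N + 3)

# x1(n) = f2(n) + u(n) from the second equation,
# x2(n) = f1(n) - f2(n+2) - f2(n+1) - u(n+2) - u(n+1) from the hidden constraint
x = np.zeros((N + 1, 2))
for n in range(N + 1):
    x[n, 0] = f[n, 1] + u[n]
    x[n, 1] = f[n, 0] - f[n + 2, 1] - f[n + 1, 1] - u[n + 2] - u[n + 1]

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([0.0, 1.0])              # assumption: D = (0, 1)^T
for n in range(N - 1):
    residual = A @ x[n + 2] + B @ x[n + 1] + C @ x[n] - D * u[n] - f[n]
    assert np.allclose(residual, 0)
print("x solves (4); note that x(n) uses u(n+1) and u(n+2): non-causality")
```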

For matrices \(Q \in \mathbb{R}^{q\times m}\), \(P \in \mathbb{R}^{p\times m}\), the pair \((Q, P)\) is said to have no hidden redundancy if

\[ \operatorname{rank}\begin{bmatrix} Q \\ P \end{bmatrix} = \operatorname{rank}(Q) + \operatorname{rank}(P). \]

Otherwise, \((Q, P)\) is said to have hidden redundancy. The geometrical meaning of this concept is that the intersection space \(\operatorname{span}(Q) \cap \operatorname{span}(P)\) contains only the zero vector. Here, for any given matrix \(M\), by \(M^T\) we denote its transpose, and we denote by \(\operatorname{span}(Q)\) (resp., \(\operatorname{span}(P)\)) the real vector space spanned by the rows of \(Q\) (resp., of \(P\)).
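The rank condition above translates directly into a numerical test; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def has_hidden_redundancy(Q, P, tol=1e-10):
    """(Q, P) has hidden redundancy iff the row spaces of Q and P
    intersect nontrivially, i.e. rank([Q; P]) < rank(Q) + rank(P)."""
    r_stack = np.linalg.matrix_rank(np.vstack([Q, P]), tol=tol)
    return r_stack < (np.linalg.matrix_rank(Q, tol=tol)
                      + np.linalg.matrix_rank(P, tol=tol))

Q = np.array([[1.0, 0.0, 0.0]])
P1 = np.array([[0.0, 1.0, 0.0]])     # row spaces intersect only in {0}
P2 = np.array([[2.0, 0.0, 0.0]])     # shares span{e1} with Q
print(has_hidden_redundancy(Q, P1))  # False
print(has_hidden_redundancy(Q, P2))  # True
```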

###### Lemma 2.2.

(HaM12 ) Consider full row rank matrices \(Q_1, \dots, Q_k\), and assume that none of the matrix pairs \(\bigl(Q_i, [Q_{i+1}^T \,\cdots\, Q_k^T]^T\bigr)\), \(1 \le i \le k-1\), has a hidden redundancy. Then \([Q_1^T \,\cdots\, Q_k^T]^T\) has full row rank.

Lemma 2.3 below will be very useful later for our analysis, in order to remove hidden redundancy in the coefficients of (2).

###### Lemma 2.3.

Consider two matrix sequences \(\{P_n\}_{n\ge n_0}\), \(\{Q_n\}_{n\ge n_0}\), which take values in \(\mathbb{R}^{p\times m}\) and \(\mathbb{R}^{q\times m}\), respectively. Furthermore, assume that they satisfy the constant rank assumptions

\[ \operatorname{rank}(Q_n) = r_Q, \quad\text{and}\quad \operatorname{rank}\begin{bmatrix} P_n \\ Q_n \end{bmatrix} = r_{[P;Q]} \quad \text{for all } n \ge n_0. \]

Then there exists a matrix sequence \(\{V_n\}_{n\ge n_0}\) in \(\mathbb{R}^{p\times p}\) such that the following conditions hold.

1. \(V_n = [S_n^T \;\, Z_n^T]^T\), with \(S_n \in \mathbb{R}^{s\times p}\), \(Z_n \in \mathbb{R}^{(p-s)\times p}\), \(s := r_{[P;Q]} - r_Q\),

2. \(V_n\) is orthogonal, and \(\operatorname{span}(Z_n P_n) \subseteq \operatorname{span}(Q_n)\),

3. \(S_n P_n\) has full row rank, and the pair \((S_n P_n, Q_n)\) has no hidden redundancy.

###### Proof.

Since the proof is essentially the same as in the continuous-time case, we refer the interested readers to the proof of Lemma 2.7, HaMS14 . ∎

###### Remark 2.4.

i) In the special case where \(P_n\) has full row rank and the pair \((P_n, Q_n)\) has no hidden redundancy, we adopt the notation of an empty matrix and take \(Z_n = [\ ]\), \(S_n = I_p\), \(V_n = I_p\).

ii) Furthermore, we notice that whenever the smallest and the largest nonzero singular values of \([P_n^T \;\, Q_n^T]^T\) do not differ very much in size, we can stably compute the matrix \(V_n\). Both matrices \(S_n\) and \(Z_n\) will play a key role in the index reduction procedure presented in the next section.

For any given matrix \(M\), by \(T_0(M)\) we denote an orthogonal matrix whose columns span the left null space \(\ker(M^T)\) of \(M\). By \(T_\perp(M)\) we denote an orthogonal matrix whose columns span the vector space \(\operatorname{range}(M)\). From basic linear algebra, we have the following lemma.

###### Lemma 2.5.

The matrix \([T_\perp(M) \;\, T_0(M)]\) is nonsingular, the matrix \(T_\perp(M)^T M\) has full row rank, and the following identity holds

\[ \begin{bmatrix} T_\perp(M)^T \\ T_0(M)^T \end{bmatrix} M = \begin{bmatrix} T_\perp(M)^T M \\ 0 \end{bmatrix}. \]
###### Proof.

A simple proof can be found, for example, in GolV96 . ∎
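In floating point arithmetic, the factors \(T_\perp(M)\) and \(T_0(M)\) can be obtained from a singular value decomposition. A minimal sketch (the function name is ours) that also verifies the identity of Lemma 2.5 numerically:

```python
import numpy as np

def T_factors(M, tol=1e-10):
    """Columns of Tp span range(M); columns of T0 span kernel(M^T).
    Both are read off the left singular vectors of M."""
    U, s, _ = np.linalg.svd(M)
    r = int((s > tol).sum())     # numerical rank
    return U[:, :r], U[:, r:]

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank 3
Tp, T0 = T_factors(M)
V = np.hstack([Tp, T0])                  # [T_perp(M)  T_0(M)]
assert np.allclose(V.T @ V, np.eye(5))   # orthogonal
assert np.linalg.matrix_rank(Tp.T @ M) == Tp.shape[1]  # full row rank
assert np.allclose(T0.T @ M, 0)          # the zero block of Lemma 2.5
print("Lemma 2.5 identity verified numerically")
```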

## 3 Strangeness-index of second order SiDEs

In this section, we study the solvability analysis of the second order SiDE (2) and that of its corresponding IVP (2)–(3). Many regularization procedures and their associated index notions have been proposed for first order systems, see the survey Meh13 and the references therein. Nevertheless, for high order systems, only the strangeness-index has been proposed in the continuous-time case in MehS06 ; Wun08 . Thus, it is our purpose to construct a comparable regularization and index concept for discrete-time system (2).

Let

\[ M_n := [A_n \;\; B_n \;\; C_n], \qquad X(n) := \begin{bmatrix} x(n+2) \\ x(n+1) \\ x(n) \end{bmatrix}; \]

we call \(\{M_n\}_{n\ge n_0}\) the behavior matrix sequence of system (2). Thus, (2) can be rewritten as

\[ M_n X(n) = f(n) \quad \text{for all } n \ge n_0. \tag{5} \]

Clearly, by scaling (2) with a pointwise nonsingular matrix sequence \(\{P_n\}_{n\ge n_0}\) in \(\mathbb{R}^{m\times m}\), we obtain a new system

\[ [P_n A_n \;\; P_n B_n \;\; P_n C_n]\, X(n) = P_n f(n) \quad \text{for all } n \ge n_0, \tag{6} \]

without changing the solution space. This motivates the following definition.

###### Definition 3.1.

Two behavior matrix sequences \(\{M_n\}_{n\ge n_0}\) and \(\{\tilde M_n\}_{n\ge n_0}\) are called (strongly) left equivalent if there exists a pointwise nonsingular matrix sequence \(\{P_n\}_{n\ge n_0}\) such that \(\tilde M_n = P_n M_n\) for all \(n \ge n_0\). We denote this equivalence by \(\{M_n\} \overset{\ell}{\sim} \{\tilde M_n\}\). If this is the case, we also say that the two SiDEs (2), (6) are left equivalent.

###### Lemma 3.2.

Consider the behavior matrix sequence \(\{M_n\}_{n\ge n_0}\) of system (2). Then for all \(n \ge n_0\), we have that

\[ \{M_n\}_{n\ge n_0} \;\overset{\ell}{\sim}\; \left\{ \begin{bmatrix} A_{n,1} & B_{n,1} & C_{n,1} \\ 0 & B_{n,2} & C_{n,2} \\ 0 & 0 & C_{n,3} \\ 0 & 0 & 0 \end{bmatrix} \right\}_{n\ge n_0} \begin{matrix} r_{2,n} \\ r_{1,n} \\ r_{0,n} \\ v_n \end{matrix} \tag{7} \]

where the matrices \(A_{n,1}\), \(B_{n,2}\), \(C_{n,3}\) have full row rank. Here, the numbers \(r_{2,n}\), \(r_{1,n}\), \(r_{0,n}\), \(v_n\) are the row-sizes of the block rows of the right-hand side of (7). Furthermore, these numbers are invariant under left equivalence transformations. Thus, we can call them the local characteristic invariants of the SiDE (2).

###### Proof.

The block diagonal form (7) is obtained directly by consecutively compressing the block columns \(A_n\), \(B_n\), \(C_n\) of \(M_n\) via Lemma 2.5. In detail, we have that

\[
\begin{aligned}
&\text{the rows of } A_{n,1} \text{ form a basis of } \operatorname{range}(A_n^T),\\
&\text{the rows of } B_{n,2} \text{ form a basis of } \operatorname{range}\bigl((T_0(A_n)^T B_n)^T\bigr),\\
&\text{the rows of } C_{n,3} \text{ form a basis of } \operatorname{range}\bigl((T_0([A_n \;\, B_n])^T C_n)^T\bigr).
\end{aligned}
\]

Moreover, from (7), we obtain the following identities

\[
r_{2,n} = \operatorname{rank}(A_n), \quad
r_{1,n} = \operatorname{rank}\bigl(T_0(A_n)^T B_n\bigr), \quad
r_{0,n} = \operatorname{rank}\bigl(T_0([A_n \;\, B_n])^T C_n\bigr), \quad
v_n = m - r_{2,n} - r_{1,n} - r_{0,n},
\]

which proves the second claim. ∎
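The column-by-column compression in this proof is easy to carry out numerically. The sketch below (function names are ours; the test matrices are an assumption, loosely modeled on Example 3.9 with \(n = 1\), \(\alpha = 1\)) computes the local characteristic invariants \((r_2, r_1, r_0, v)\):

```python
import numpy as np

def left_null(M, tol=1e-10):
    """Return (range basis, left null space basis) of M via the SVD."""
    U, s, _ = np.linalg.svd(M)
    r = int((s > tol).sum())
    return U[:, :r], U[:, r:]

def invariants(A, B, C, tol=1e-10):
    """Invariants (r2, r1, r0, v) of the block form (7): compress A,
    then B on the rows annihilating A, then C on the rows annihilating
    both A and B (a sketch of the proof of Lemma 3.2)."""
    m = A.shape[0]
    _, N1 = left_null(A, tol)            # rows annihilating A
    r2 = m - N1.shape[1]
    _, N2 = left_null(N1.T @ B, tol)
    r1 = N1.shape[1] - N2.shape[1]
    Z = N1 @ N2                          # rows annihilating A and B
    _, N3 = left_null(Z.T @ C, tol)
    r0 = Z.shape[1] - N3.shape[1]
    return r2, r1, r0, m - r2 - r1 - r0

# hypothetical test data, loosely modeled on Example 3.9 at n = 1, alpha = 1
A = np.array([[1.0, 2.0, 5.0], [0, 0, 0], [0, 0, 0]])
B = np.array([[0.0, 1.0, 5.0], [1, 1, 1], [0, 0, 0]])
C = np.array([[0.0, 2.0, 0.0], [0, 0, 1], [0, 0, 2]])
print(invariants(A, B, C))   # (1, 1, 1, 0)
```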

Analogous to the continuous-time case, we will apply an algebraic approach (see Bru09 ; MehS06 ), which aims to reformulate (2) into a so-called strangeness-free form, as stated in the following definition.

###### Definition 3.3.

(LinNT16 ) System (2) is called strangeness-free if there exists a pointwise nonsingular matrix sequence \(\{P_n\}_{n\ge n_0}\) such that scaling the SiDE (2) at each point \(n\) with \(P_n\) yields a new system of the form

\[ \begin{bmatrix} \hat A_{n,1} \\ 0 \\ 0 \\ 0 \end{bmatrix} x(n+2) + \begin{bmatrix} \hat B_{n,1} \\ \hat B_{n,2} \\ 0 \\ 0 \end{bmatrix} x(n+1) + \begin{bmatrix} \hat C_{n,1} \\ \hat C_{n,2} \\ \hat C_{n,3} \\ 0 \end{bmatrix} x(n) = \begin{bmatrix} \hat f_1(n) \\ \hat f_2(n) \\ \hat f_3(n) \\ \hat f_4(n) \end{bmatrix} \begin{matrix} \hat r_2 \\ \hat r_1 \\ \hat r_0 \\ \hat v \end{matrix}, \tag{8} \]

for all \(n \ge n_0\), where the matrix \([\hat A_{n,1}^T \;\, \hat B_{n,2}^T \;\, \hat C_{n,3}^T]^T\) always has full row rank.

In order to perform an algebraic approach, an additional assumption below is usually needed.

###### Assumption 3.4.

Assume that the local characteristic invariants \(r_{2,n}\), \(r_{1,n}\), \(r_{0,n}\) become global, i.e., they are constant for all \(n \ge n_0\). Furthermore, assume that the two matrix sequences \(\bigl\{[B_{n,2}^T \;\, C_{n+1,3}^T]^T\bigr\}\) and \(\bigl\{[A_{n,1}^T \;\, B_{n+1,2}^T \;\, C_{n+2,3}^T]^T\bigr\}\) have constant rank for all \(n \ge n_0\).

###### Remark 3.5.

Following directly from the proof of Lemma 3.2, we see that Assumption 3.4 is satisfied if and only if the five following constant rank conditions hold:

\[
\operatorname{rank}(A_n) \equiv \mathrm{const.}, \quad
\operatorname{rank}([A_n \;\, B_n]) \equiv \mathrm{const.}, \quad
\operatorname{rank}([A_n \;\, B_n \;\, C_n]) \equiv \mathrm{const.}, \quad
\operatorname{rank}\begin{bmatrix} B_{n,2} \\ C_{n+1,3} \end{bmatrix} \equiv \mathrm{const.}, \quad
\operatorname{rank}\begin{bmatrix} A_{n,1} \\ B_{n+1,2} \\ C_{n+2,3} \end{bmatrix} \equiv \mathrm{const.} \tag{9}
\]
###### Remark 3.6.

In (8), the quantities \(\hat r_2\), \(\hat r_1\), and \(\hat r_0\) are the dimensions of the second order dynamics part, the first order dynamics part, and the algebraic (zero order) part, respectively. Furthermore, \(m - \hat r_2 - \hat r_1 - \hat r_0\) is exactly the number of degrees of freedom.

Let us call the number \(r_u := 3r_2 + 2r_1 + r_0\) the upper rank of system (2). Clearly, \(r_u\) is invariant under left equivalence transformations. Rewriting (5) block row-wise, we obtain the following system for all \(n \ge n_0\).

\[
\begin{aligned}
A_{n,1}x(n+2) + B_{n,1}x(n+1) + C_{n,1}x(n) &= f_1(n), && r_2 \text{ equations}, \quad &&\text{(10a)}\\
B_{n,2}x(n+1) + C_{n,2}x(n) &= f_2(n), && r_1 \text{ equations}, &&\text{(10b)}\\
C_{n,3}x(n) &= f_3(n), && r_0 \text{ equations}, &&\text{(10c)}\\
0 &= f_4(n), && v \text{ equations}. &&\text{(10d)}
\end{aligned}
\]

Since the matrices \(A_{n,1}\), \(B_{n,2}\), \(C_{n,3}\) have full row rank, the number of scalar difference equations of order 2 (resp., 1 and 0) in (2) is exactly \(r_2\) (resp., \(r_1\) and \(r_0\)), while \(v\) is the number of redundant equations. Now we are able to define the shift-forward operator \(\Delta\), which acts on some or all equations of system (10). This operator maps each equation of system (10) at the time instant \(n\) to the same equation at the time \(n+1\), for example

\[ \Delta : \; C_{n,3}x(n) = f_3(n) \;\mapsto\; C_{n+1,3}x(n+1) = f_3(n+1). \tag{11} \]

Clearly, under Assumption 3.4, this shift operator can be applied to the equations of system (10). In order to reveal all hidden constraints of (10), we propose the idea that, for each \(i \in \{1, 2\}\), we use equations of order less than \(i\) to reduce the number of scalar equations of order \(i\). This task will be performed as follows. Firstly, by applying Lemma 2.3 to the two matrix pairs \((B_{n,2}, C_{n+1,3})\) and \(\bigl(A_{n,1}, [B_{n+1,2}^T \;\, C_{n+2,3}^T]^T\bigr)\), we obtain matrix sequences \(S_n^{(1)}, Z_n^{(1)}, Z_n^{(3)}\) and \(S_n^{(2)}, Z_n^{(2)}, Z_n^{(4)}, Z_n^{(5)}\) of appropriate sizes such that, for all \(n \ge n_0\), the following conditions hold true.

1. For \(i = 1, 2\), the matrices \(\bigl[(S_n^{(i)})^T \;\, (Z_n^{(i)})^T\bigr]^T\) are orthogonal.

2. The following identities hold true.

\[
\begin{aligned}
Z_n^{(1)}B_{n,2} + Z_n^{(3)}C_{n+1,3} &= 0, \quad &&\text{(12a)}\\
Z_n^{(2)}A_{n,1} + Z_n^{(4)}B_{n+1,2} + Z_n^{(5)}C_{n+2,3} &= 0. &&\text{(12b)}
\end{aligned}
\]
3. Both matrix pairs \(\bigl(S_n^{(1)}B_{n,2}, C_{n+1,3}\bigr)\) and \(\bigl(S_n^{(2)}A_{n,1}, [B_{n+1,2}^T \;\, C_{n+2,3}^T]^T\bigr)\) have no hidden redundancy.

Now we will transform the SiDE (2) as in Lemma 3.7 below.

###### Lemma 3.7.

Assume that Assumption 3.4 is satisfied. Let the matrix sequences \(S_n^{(1)}, Z_n^{(1)}, Z_n^{(3)}\) and \(S_n^{(2)}, Z_n^{(2)}, Z_n^{(4)}, Z_n^{(5)}\) be defined as above. Then the SiDE (2) has exactly the same solution set as the transformed system

\[
\begin{bmatrix}
S_n^{(2)}A_{n,1} & S_n^{(2)}B_{n,1} & S_n^{(2)}C_{n,1}\\
0 & Z_n^{(2)}B_{n,1} + Z_n^{(4)}C_{n+1,2} & Z_n^{(2)}C_{n,1}\\
0 & S_n^{(1)}B_{n,2} & S_n^{(1)}C_{n,2}\\
0 & 0 & Z_n^{(1)}C_{n,2}\\
0 & 0 & C_{n,3}\\
0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} x(n+2) \\ x(n+1) \\ x(n) \end{bmatrix}
=
\begin{bmatrix}
S_n^{(2)}f_1(n)\\
Z_n^{(2)}f_1(n) + Z_n^{(4)}f_2(n+1) + Z_n^{(5)}f_3(n+2)\\
S_n^{(1)}f_2(n)\\
Z_n^{(1)}f_2(n) + Z_n^{(3)}f_3(n+1)\\
f_3(n)\\
f_4(n)
\end{bmatrix}
\begin{matrix} d_2 \\ s_2 \\ d_1 \\ s_1 \\ r_0 \\ v \end{matrix}
\quad \text{for all } n \ge n_0. \tag{13}
\]

Furthermore, both matrix pairs , have no hidden redundancy.

###### Proof.

The proof is not too difficult but rather lengthy and technical, so we leave it to A. ∎

Considering system (13), we see that the upper rank of its behavior matrix is

\[ r_u^{\mathrm{new}} \;\le\; 3d_2 + 2(s_2 + d_1) + (s_1 + r_0) \;=\; 3(r_2 - s_2) + 2(s_2 + r_1 - s_1) + (s_1 + r_0) \;=\; r_u - (s_2 + s_1) \;\le\; r_u. \]

In conclusion, after performing a so-called index reduction step, which passes from (10) to (13), we have reduced the upper rank at least by \(s_2 + s_1\). Continuing in this fashion until \(s_2 + s_1 = 0\), we obtain the following algorithm.

After each index reduction step the upper rank decreases by at least one as long as the system is not strangeness-free, so Algorithm 1 terminates after a finite number \(\mu\) of iterations; this number \(\mu\) will be called the strangeness-index of the SiDE (2).
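The termination argument can be sketched as a driver loop. In the snippet below, the callable `steps` is a hypothetical stand-in for one application of Lemma 3.7 (it returns the numbers \(s_2, s_1\) of strangeness blocks removed); since the upper rank drops by \(s_2 + s_1 \ge 1\) on every non-trivial step, the loop terminates:

```python
def strangeness_index(steps):
    """Skeleton of the iteration of Algorithm 1 (a sketch; `steps` is a
    hypothetical stand-in for one index reduction step of Lemma 3.7)."""
    mu = 0
    while True:
        s2, s1 = steps(mu)
        if s2 + s1 == 0:        # strangeness-free: the loop stops
            return mu
        mu += 1                 # upper rank dropped by s2 + s1 >= 1

# toy trace mimicking Example 3.9 with alpha = 0: two reduction steps
trace = [(1, 0), (0, 1), (0, 0)]
print(strangeness_index(lambda k: trace[k]))   # 2
```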

###### Theorem 3.8.

Consider the SiDE (2) and assume that Assumption 3.4 is satisfied for every system considered within the loop of Algorithm 1, so that the strangeness-index \(\mu\) is well-defined. Then the SiDE (2) has the same solution set as the strangeness-free SiDE

\[ \begin{bmatrix} \hat A_{n,1} & \hat B_{n,1} & \hat C_{n,1} \\ 0 & \hat B_{n,2} & \hat C_{n,2} \\ 0 & 0 & \hat C_{n,3} \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x(n+2) \\ x(n+1) \\ x(n) \end{bmatrix} = \begin{bmatrix} \hat g_1(n) \\ \hat g_2(n) \\ \hat g_3(n) \\ \hat g_4(n) \end{bmatrix} \begin{matrix} r_2^\mu \\ r_1^\mu \\ r_0^\mu \\ v^\mu \end{matrix} \quad \text{for all } n \ge n_0, \tag{14} \]

where the matrix \([\hat A_{n,1}^T \;\, \hat B_{n,2}^T \;\, \hat C_{n,3}^T]^T\) has full row rank for all \(n \ge n_0\), and the functions \(\hat g_1(n), \dots, \hat g_4(n)\) consist of the components of \(f(n), \dots, f(n+2\mu)\) (at most).

###### Proof.

The proof is a direct consequence of Algorithm 1, where the matrix \([\hat A_{n,1}^T \;\, \hat B_{n,2}^T \;\, \hat C_{n,3}^T]^T\) has full row rank due to Lemma 2.2. ∎

To illustrate Algorithm 1, we consider the following example.

###### Example 3.9.

Given a parameter \(\alpha \in \mathbb{R}\), we consider the second order SiDE

\[ \begin{bmatrix} 1 & n+1 & n+4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} x(n+2) + \begin{bmatrix} 0 & \alpha & 2n+3 \\ 1 & n & 1 \\ 0 & 0 & 0 \end{bmatrix} x(n+1) + \begin{bmatrix} 0 & n+1 & 0 \\ 0 & 0 & n \\ 0 & 0 & n+1 \end{bmatrix} x(n) = \begin{bmatrix} f_1(n) \\ f_2(n) \\ f_3(n) \end{bmatrix}, \tag{15} \]

for all \(n \ge n_0\). Fortunately, the behavior matrix \(M_n = [A_n \;\, B_n \;\, C_n]\) is already in the block diagonal form (7), so we do not need to perform Step 2 in Algorithm 1. Furthermore, all constant rank conditions required in Assumption 3.4 are satisfied. We observe that

\[ B_{n+1,2} = [1 \;\; n+1 \;\; 1], \qquad C_{n+1,3} = [0 \;\; 0 \;\; n+2], \qquad C_{n+2,3} = [0 \;\; 0 \;\; n+3]. \]

By direct verification, we see that the matrix pair \(\bigl(A_{n,1}, [B_{n+1,2}^T \;\, C_{n+2,3}^T]^T\bigr)\) has hidden redundancy, since \(A_{n,1} = B_{n+1,2} + C_{n+2,3}\), while the pair \((B_{n,2}, C_{n+1,3})\) does not. Now we choose \(Z_n^{(2)} = [1]\), \(Z_n^{(4)} = [-1]\), \(Z_n^{(5)} = [-1]\), and \(S_n^{(2)}\) the empty matrix. Notice that the fact that \(Z_n^{(5)}\) is non-empty leads to the appearance of \(f_3(n+2)\). Furthermore, the resulting system (13) reads

\[ \begin{bmatrix} 0 & \alpha & n+2 \\ 1 & n & 1 \\ 0 & 0 & 0 \end{bmatrix} x(n+1) + \begin{bmatrix} 0 & n+1 & 0 \\ 0 & 0 & n \\ 0 & 0 & n+1 \end{bmatrix} x(n) = \begin{bmatrix} f_1(n) - f_2(n+1) - f_3(n+2) \\ f_2(n) \\ f_3(n) \end{bmatrix}. \tag{16} \]

Since the leading coefficient matrix associated with \(x(n+2)\) becomes zero, for notational convenience we do not write this term. Going back to Step 3, we see that the following two cases may happen.
i) If \(\alpha \neq 0\), then Algorithm 1 terminates here, and the strangeness-index is \(\mu = 1\). The number of time-shifts appearing in the inhomogeneity of the strangeness-free formulation (16) is 2.
ii) If \(\alpha = 0\), then the matrix pair \((B_{n,2}, C_{n+1,3})\) of the new system has hidden redundancy. Now we choose \(Z_n^{(1)} = [1 \;\; 0]\), \(Z_n^{(3)} = [-1]\), and \(S_n^{(1)} = [0 \;\; 1]\). The resulting system (13) now reads

\[ \begin{bmatrix} 1 & n & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} x(n+1) + \begin{bmatrix} 0 & 0 & n \\ 0 & n+1 & 0 \\ 0 & 0 & n+1 \end{bmatrix} x(n) = \begin{bmatrix} f_2(n) \\ f_1(n) - f_2(n+1) - f_3(n+2) - f_3(n+1) \\ f_3(n) \end{bmatrix}. \tag{17} \]

Algorithm 1 terminates here, and the strangeness-index is \(\mu = 2\). However, the number of time-shifts appearing in the inhomogeneity of the strangeness-free formulation (17) remains 2.
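The redundancy pattern that drives this example can be confirmed numerically. The sketch below assumes the coefficient blocks \(A_{n,1} = [1,\, n{+}1,\, n{+}4]\), \(B_{n,2} = [1,\, n,\, 1]\), \(C_{n,3} = [0,\, 0,\, n{+}1]\) (our reading of (15); the block layout is an assumption), and checks the two pairs tested by Algorithm 1:

```python
import numpy as np

def hidden_redundancy(Q, P, tol=1e-10):
    rk = lambda M: np.linalg.matrix_rank(M, tol=tol)
    return rk(np.vstack([Q, P])) < rk(Q) + rk(P)

def pairs(n):
    # blocks read off (15): A_{n,1}, B_{n,2}, C_{k,3} (our reading)
    A1 = np.array([[1.0, n + 1, n + 4]])
    B2 = np.array([[1.0, n, 1.0]])
    C3 = lambda k: np.array([[0.0, 0.0, k + 1.0]])
    B2_shift = np.array([[1.0, n + 1, 1.0]])          # B_{n+1,2}
    return (A1, np.vstack([B2_shift, C3(n + 2)])), (B2, C3(n + 1))

for n in range(1, 6):
    (A1, BC), (B2, C3s) = pairs(n)
    assert hidden_redundancy(A1, BC)        # A_{n,1} = B_{n+1,2} + C_{n+2,3}
    assert not hidden_redundancy(B2, C3s)   # no redundancy in the other pair
print("redundancy pattern of Example 3.9 confirmed for n = 1,...,5")
```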

As a direct consequence of Theorem 3.8, we obtain the solvability for (2) as follows.

###### Corollary 3.10.

Under the assumption of Theorem 3.8, the following statements hold true.

1. The corresponding IVP for the SiDE (2) is solvable if and only if either \(v^\mu = 0\) or \(\hat g_4(n) = 0\) for all \(n \ge n_0\). Furthermore, it is uniquely solvable if, in addition, we have \(r_2^\mu + r_1^\mu + r_0^\mu = m\).

2. The initial condition (3) is consistent if and only if the following equalities hold.

\[
\begin{aligned}
\hat B_{n_0,2}\,x_1 + \hat C_{n_0,2}\,x_0 &= \hat g_2(n_0),\\
\hat C_{n_0,3}\,x_0 &= \hat g_3(n_0).
\end{aligned}
\]

Another direct consequence of Theorem 3.8 is that we can obtain an inherent regular difference equation as follows.

###### Corollary 3.11.

Assume that the IVP (2)-(3) is uniquely solvable for any consistent initial condition. Under the assumption of Theorem 3.8, the solution to this IVP is also a solution to the (implicit) inherent regular difference equation

\[ \begin{bmatrix} \hat A_{n,1} \\ \hat B_{n+1,2} \\ \hat C_{n+2,3} \end{bmatrix} x(n+2) + \begin{bmatrix} \hat B_{n,1} \\ \hat C_{n+1,2} \\ 0 \end{bmatrix} x(n+1) + \begin{bmatrix} \hat C_{n,1} \\ 0 \\ 0 \end{bmatrix} x(n) = \begin{bmatrix} \hat g_1(n) \\ \hat g_2(n+1) \\ \hat g_3(n+2) \end{bmatrix}, \tag{18} \]

where the matrix \([\hat A_{n,1}^T \;\, \hat B_{n+1,2}^T \;\, \hat C_{n+2,3}^T]^T\) is invertible for all \(n \ge n_0\).
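Once the stacked leading matrix is invertible, the inherent equation (18) can be used as a forward recursion. A minimal sketch (all coefficient blocks and right-hand sides below are hypothetical random data standing in for the hat-quantities; for generic random data the stacked matrix is invertible):

```python
import numpy as np

m, r2, r1 = 4, 2, 1
r0 = m - r2 - r1          # square, uniquely solvable case

def blocks(n):
    # hypothetical blocks A_{n,1}, B_{n,1}, B_{n,2}, C_{n,1}, C_{n,2}, C_{n,3}
    r = np.random.default_rng(n)
    return (r.standard_normal((r2, m)), r.standard_normal((r2, m)),
            r.standard_normal((r1, m)), r.standard_normal((r2, m)),
            r.standard_normal((r1, m)), r.standard_normal((r0, m)))

def g(n):                 # hypothetical inhomogeneity split as (g1, g2, g3)
    r = np.random.default_rng(10_000 + n)
    return r.standard_normal(r2), r.standard_normal(r1), r.standard_normal(r0)

def step(n, x0, x1):
    """Advance via the inherent regular equation (18)."""
    A1, B1, _, C1, _, _ = blocks(n)
    _, _, B2s, _, C2s, _ = blocks(n + 1)          # B_{n+1,2}, C_{n+1,2}
    _, _, _, _, _, C3ss = blocks(n + 2)           # C_{n+2,3}
    E = np.vstack([A1, B2s, C3ss])                # invertible leading matrix
    g1, _, _ = g(n); _, g2s, _ = g(n + 1); _, _, g3ss = g(n + 2)
    rhs = np.concatenate([g1 - B1 @ x1 - C1 @ x0, g2s - C2s @ x1, g3ss])
    return np.linalg.solve(E, rhs)

x0, x1 = np.zeros(m), np.zeros(m)
for n in range(5):
    x0, x1 = x1, step(n, x0, x1)
print("advanced 5 steps via (18)")
```

Note that, exactly as in Corollary 3.11, the recursion needs \(\hat g_2\) and \(\hat g_3\) one and two steps ahead: the shifted equations are what make the stacked matrix invertible.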

###### Remark 3.12.

Unlike the procedures in Bru09 ; LosM08 ; MehS06 , we do not change the variable \(x\). This approach permits us to significantly simplify the condensed forms in these references. We emphasize that, as in (9), we only require five constant rank conditions within one step of index reduction, instead of seven as in MehS06 . Therefore, in this way the domain of application for SiDEs (and also for DAEs in the continuous-time case) is enlarged. This approach is also useful for the control analysis of the descriptor system (1), as will be seen in the next section.

###### Remark 3.13.

i) Within one loop of Algorithm 1, for each \(n\), we have used four Singular Value Decompositions (SVDs) to remove the hidden redundancies in two matrix pairs. The total cost depends on the problem itself, i.e., on the sizes of the matrix pairs to which the SVDs are applied; it is bounded by the cost of these four SVDs of matrices of size at most \(m \times 3m\).
ii) Unfortunately, since \(Z_n^{(3)}\), \(Z_n^{(4)}\), \(Z_n^{(5)}\) are not orthogonal, in general Algorithm 1 cannot be implemented in a numerically stable way. For the numerical solution of the IVP (2)-(3), we will consider a suitable numerical scheme in Section 5.

## 4 Regularization of second order descriptor systems

Based on the index reduction procedure for SiDEs in Section 3, in this section we construct the strangeness-index concept for the descriptor system (1). The solvability analysis for first order descriptor systems with variable coefficients has been carefully discussed in ByeKM97 ; KunMR01 ; Rat97 . Nevertheless, for second order descriptor systems, this problem has rarely been considered. We refer the interested readers to LosM08 ; Wun08 for continuous-time systems.

It is well known that in regularization procedures for continuous-time systems, one should avoid differentiating equations that involve an input function, since the input may not be differentiable. We will keep this spirit and hence will not shift any equation that involves an input function, since doing so may destroy the causality of the considered system, as in Example 2.1. Instead, we will incorporate proportional state and first order feedback within each index reduction step of the regularization procedure, as will be seen later. Now let us present two auxiliary lemmas, which will be very useful later.

###### Lemma 4.1.

Given four matrices \(\check A\), \(\check B\), \(\check C\) in \(\mathbb{R}^{p\times m}\) and \(\check D\) in \(\mathbb{R}^{p\times k}\), let us consider the following matrices, whose columns form orthonormal bases of the associated vector spaces:

\[
\begin{aligned}
T_1 &: \text{basis of } \ker(\check A^T), & T_{1,\perp} &: \text{basis of } \operatorname{range}(\check A),\\
W_1 &: \text{basis of } \ker\bigl((T_1^T \check D)^T\bigr), & W_{1,\perp} &: \text{basis of } \operatorname{range}(T_1^T \check D),\\
J_D &:= W_{1,\perp}^T T_1^T \check D, & &\\
J_{B_1} &:= W_1^T T_1^T \check B, & J_{B_2} &:= W_{1,\perp}^T T_{1,\perp}^T \check B,\\
J_{C_1} &:= W_1^T T_1^T \check C, & J_{C_2} &:= W_{1,\perp}^T T_1^T \check C,\\
T_2 &: \text{basis of } \ker(J_{B_1}^T), & T_{2,\perp} &: \text{basis of } \operatorname{range}(J_{B_1}),\\
T_3 &: \text{basis of } \ker(J_{B_2}^T), & T_{3,\perp} &: \text{basis of } \operatorname{range}(J_{B_2}),\\
T_4 &: \text{basis of } \ker\bigl((T_2^T J_{C_1})^T\bigr), & T_{4,\perp} &: \text{basis of } \operatorname{range}(T_2^T J_{C_1}).
\end{aligned}
\]

Then the following assertions hold true.

1. The matrices , , and are orthogonal.

2. The matrices , , , , and have full row rank.

3. Moreover, there exists an orthogonal matrix such that

\[
\check U \,[\check A \;\; \check B \;\; \check C \mid \check D] =
\left[\begin{array}{ccc|c}
\check A_1 & \check B_1 & \check C_1 & \check D_1\\
0 & \check B_2 & \check C_2 & 0\\
0 & 0 & \check C_3 & 0\\
0 & 0 & 0 & 0\\
0 & \check B_4 & \check C_4 & \check D_4\\
0 & 0 & \check C_5 & \check D_5
\end{array}\right], \tag{19}
\]

where the matrices \(\check A_1\), \(\check B_2\), \(\check C_3\), \(\check D_4\), \(\check D_5\) have full row rank.

###### Proof.

The first two claims follow directly from Lemma 2.5. To prove the third claim, we construct the desired matrix \(\check U\) as follows

 \widecheckU:=⎡⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢⎣I\vline\vline\vlineI\vline