# On the Distribution of GSVD

In this paper, some new results on the distribution of the generalized singular value decomposition (GSVD) are presented.


## I Some new results on GSVD

In this section, the GSVD of two Gaussian matrices is defined first. Then, the distribution of the squared generalized singular values is presented.

### I-A Definition of GSVD

Given two matrices $A \in \mathbb{C}^{m \times n}$ and $C \in \mathbb{C}^{q \times n}$, whose entries are i.i.d. complex Gaussian random variables with zero mean and unit variance, let us define $k = \min\{m+q, n\}$, $r = k - \min\{q, n\}$, and $s = \min\{m, n\} + \min\{q, n\} - k$. Then, the GSVD of $A$ and $C$ can be expressed as follows [1]:

$$U A Q = (\Sigma_A, O) \quad \text{and} \quad V C Q = (\Sigma_C, O), \qquad (1)$$

where $\Sigma_A \in \mathbb{R}^{m \times k}$ and $\Sigma_C \in \mathbb{R}^{q \times k}$ are two nonnegative diagonal matrices, $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{q \times q}$ are two unitary matrices, and $Q \in \mathbb{C}^{n \times n}$ can be expressed as in (9).

Moreover, $\Sigma_A$ and $\Sigma_C$ have the following form:

$$\Sigma_A = \begin{pmatrix} I_r & & \\ & S_A & \\ & & O_A \end{pmatrix} \quad \text{and} \quad \Sigma_C = \begin{pmatrix} O_C & & \\ & S_C & \\ & & I_{k-r-s} \end{pmatrix}, \qquad (2)$$

where $S_A$ and $S_C$ are two $s \times s$ nonnegative diagonal matrices, satisfying $S_A^H S_A + S_C^H S_C = I_s$. Then, the squared generalized singular values can be defined as $\mu_i = s_{A,i}^2 / s_{C,i}^2$, $i = 1, \cdots, s$, where $s_{A,i}$ and $s_{C,i}$ denote the $i$-th diagonal entries of $S_A$ and $S_C$, respectively.
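As a numerical illustration of the definitions above (a sketch, not part of the original derivation; the helper names `complex_gaussian` and `squared_gsv` are ours), the squared generalized singular values for the case $q \ge n$ can be computed as the nonzero eigenvalues of $(C^H C)^{-1} A^H A$, an identity established in Appendix A:

```python
import numpy as np

def complex_gaussian(rows, cols, rng):
    # i.i.d. CN(0, 1) entries: real and imaginary parts each N(0, 1/2)
    return (rng.standard_normal((rows, cols))
            + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2)

def squared_gsv(A, C):
    # Squared generalized singular values mu_i = s_{A,i}^2 / s_{C,i}^2 for the
    # case q >= n, obtained as the nonzero eigenvalues of (C^H C)^{-1} A^H A
    # (see Appendix A); C^H C is invertible with probability one when q >= n.
    m, n = A.shape
    M = np.linalg.solve(C.conj().T @ C, A.conj().T @ A)
    eig = np.sort(np.real(np.linalg.eigvals(M)))[::-1]
    return eig[:min(m, n)]      # keep the s = min{m, n} nonzero eigenvalues

rng = np.random.default_rng(0)
A = complex_gaussian(3, 4, rng)   # A is m x n with (m, n) = (3, 4)
C = complex_gaussian(5, 4, rng)   # C is q x n with q = 5 >= n
mu = squared_gsv(A, C)
```

For the degenerate choice $A = C$ with square $A$, the matrix $(C^H C)^{-1} A^H A$ reduces to the identity, so every $\mu_i$ equals one, which gives a quick sanity check.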

### I-B Distribution of the squared generalized singular values

To characterize the distribution of the squared generalized singular values $\mu_i$, a relationship between $\mu_i$ and the eigenvalues of a common matrix model is established first, as in the following theorem.

###### Theorem 1

Suppose that $A$ and $C$ are two Gaussian matrices whose elements are i.i.d. complex Gaussian random variables with zero mean and unit variance and their GSVD is defined as in (1). Without loss of generality, it is assumed that $n < m+q$. Then, the distribution of their squared generalized singular values, $\mu_i$, $i = 1, \cdots, s$, is identical to that of the nonzero eigenvalues of $L$, where

$$L = X^H (Y Y^H)^{-1} X, \qquad (3)$$

where $X \in \mathbb{C}^{m' \times p}$ and $Y \in \mathbb{C}^{m' \times n'}$ are two independent Gaussian matrices whose elements are i.i.d. complex Gaussian random variables with zero mean and unit variance. Moreover, $m'$, $p$, and $n'$ can be expressed as follows:

$$(m', p, n') = \begin{cases} (n, m, q) & q \ge n \\ (q, s, n) & q < n < m+q. \end{cases} \qquad (4)$$

When $n \ge m+q$, $s = 0$, and $\Sigma_A$ and $\Sigma_C$ are deterministic.

###### Proof:

See Appendix A.
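The algebraic core of this step can be checked for a single draw: for $q \ge n$, the matrices $A (C^H C)^{-1} A^H$ and $(C^H C)^{-1} A^H A$ share their nonzero eigenvalues, so setting $X = A^H$ and $Y = C^H$ reproduces the squared generalized singular values exactly. The sketch below (our code, with the dimensions shown as an assumed example) verifies this identity; the distributional statement of Theorem 1, with independently drawn $X$ and $Y$, is not tested here:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, q = 3, 4, 6                 # q >= n, so (m', p, n') = (n, m, q) in (4)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
C = (rng.standard_normal((q, n)) + 1j * rng.standard_normal((q, n))) / np.sqrt(2)

# X = A^H is n x m and Y = C^H is n x q, i.e. (m', p, n') = (n, m, q)
X, Y = A.conj().T, C.conj().T
L = X.conj().T @ np.linalg.solve(Y @ Y.conj().T, X)   # L = X^H (Y Y^H)^{-1} X

# Squared generalized singular values: nonzero eigenvalues of (C^H C)^{-1} A^H A
mu = np.sort(np.real(np.linalg.eigvals(
    np.linalg.solve(C.conj().T @ C, A.conj().T @ A))))[-m:]
ev = np.sort(np.real(np.linalg.eigvals(L)))           # L is m x m and full rank
```

Because the check is exact matrix algebra rather than a statistical test, the two sorted spectra agree to machine precision.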

The distribution of the nonzero eigenvalues of $L$ can be characterized as in the following corollary.

###### Corollary 1

Suppose that $L$ is defined as in (3), and $X \in \mathbb{C}^{m' \times p}$ and $Y \in \mathbb{C}^{m' \times n'}$ are two independent Gaussian matrices whose elements are i.i.d. complex Gaussian random variables with zero mean and unit variance. And, it is assumed that $n' \ge m'$. Then, the joint probability density function (p.d.f.) of the nonzero eigenvalues of $L$ can be characterized as follows:

$$f_{m',p,n'}(w_1, \cdots, w_l) = M_{m',p,n'} \frac{\prod_{i=1}^{l} w_i^{t_1}}{\prod_{i=1}^{l} (1+w_i)^{t_2}} \prod_{i<j}^{l} (w_i - w_j)^2, \qquad (5)$$

where $l = \min\{m', p\}$, $t_1 = |m' - p|$, $t_2 = p + n'$, and $M_{m',p,n'}$ can be expressed as follows:

$$M_{m',p,n'} = \begin{cases} \dfrac{\pi^{m'(m'-1)} \tilde{\Gamma}_{m'}(p+n')}{m'! \, \tilde{\Gamma}_{m'}(p) \tilde{\Gamma}_{m'}(n') \tilde{\Gamma}_{m'}(m')} & p \ge m' \\[2ex] \dfrac{\pi^{p(p-1)} \tilde{\Gamma}_{p}(p+n')}{p! \, \tilde{\Gamma}_{p}(m') \tilde{\Gamma}_{p}(p+n'-m') \tilde{\Gamma}_{p}(p)} & p < m', \end{cases} \qquad (6)$$

where $\tilde{\Gamma}_m(a) = \pi^{m(m-1)/2} \prod_{i=1}^{m} \Gamma(a-i+1)$ is the complex multivariate gamma function [2].

###### Proof:

Following steps similar to those in [3, Appendix A], the distribution of the nonzero eigenvalues of $L$ can be obtained.
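For small dimensions, the normalization in (5)-(6) can be checked numerically. The sketch below (function names are ours) implements $\tilde{\Gamma}_m(a)$ and $M_{m',p,n'}$ and evaluates (5) with $t_1 = |m' - p|$ and $t_2 = p + n'$; for $l = 1$ the Vandermonde factor is empty and the density should integrate to one over $(0, \infty)$:

```python
import numpy as np
from math import gamma, factorial, pi

def cmv_gamma(m, a):
    # Complex multivariate gamma: pi^{m(m-1)/2} * prod_{i=1}^m Gamma(a - i + 1)
    return pi ** (m * (m - 1) / 2) * np.prod([gamma(a - i + 1)
                                              for i in range(1, m + 1)])

def M_const(mp, p, npr):
    # Normalization constant M_{m',p,n'} of (6)
    if p >= mp:
        return (pi ** (mp * (mp - 1)) * cmv_gamma(mp, p + npr)
                / (factorial(mp) * cmv_gamma(mp, p) * cmv_gamma(mp, npr)
                   * cmv_gamma(mp, mp)))
    return (pi ** (p * (p - 1)) * cmv_gamma(p, p + npr)
            / (factorial(p) * cmv_gamma(p, mp) * cmv_gamma(p, p + npr - mp)
               * cmv_gamma(p, p)))

def joint_pdf(w, mp, p, npr):
    # Joint p.d.f. (5) at a vector w of the l = min{m', p} nonzero eigenvalues
    w = np.asarray(w, dtype=float)
    t1, t2 = abs(mp - p), p + npr
    vand = np.prod([(w[i] - w[j]) ** 2
                    for i in range(len(w)) for j in range(i + 1, len(w))])
    return M_const(mp, p, npr) * np.prod(w ** t1 / (1 + w) ** t2) * vand
```

The substitution $u = w/(1+w)$ maps the half line to $(0, 1)$ and makes the numerical integration of the density well behaved.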

Moreover, the marginal p.d.f. of $w_l$, $g_{m',p,n'}(w_l)$, can be characterized as in the following lemma.

###### Lemma 1

The marginal p.d.f. derived from (5) can be expressed as follows:

$$g_{m',p,n'}(w_l) = M_{m',p,n'} \, g'_{l,t_1,t_2}(w_l) \quad \text{or} \quad \frac{1}{w_l^2} M_{m',p,n'} \, g'_{l,t'_1,t_2}(1/w_l), \qquad (7)$$

where $t'_1 = t_2 - t_1 - 2l$, and $g'_{l,t_1,t_2}(w_l)$ can be expressed as follows:

$$\begin{aligned} g'_{l,t_1,t_2}(w_l) = {} & \sum_{\sigma_1, \sigma_2 \in S_l} \mathrm{sign}(\sigma_1) \mathrm{sign}(\sigma_2) \frac{w_l^{t_1 + 2l - \sigma_1(l) - \sigma_2(l)}}{(1+w_l)^{t_2}} \\ & \times \prod_{i=1}^{l-1} B\big(t_1 + 2l - \sigma_1(i) - \sigma_2(i) + 1, \; t_2 - t_1 - 2l - 1 + \sigma_1(i) + \sigma_2(i)\big), \end{aligned} \qquad (8)$$

where $\sigma_1$, $\sigma_2$ are permutations of length $l$, $S_l$ is the set of all permutations of $\{1, \cdots, l\}$, $\mathrm{sign}(\sigma)$ is $1$ if the permutation is even and $-1$ if it is odd, and $B(x, y)$ is the Beta function [4].

###### Proof:

See Appendix B.
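Expression (8) can be evaluated directly for small $l$ by enumerating the permutation pairs. The following sketch (helper names are ours) does exactly that:

```python
import numpy as np
from itertools import permutations
from math import gamma

def beta_fn(x, y):
    # B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return gamma(x) * gamma(y) / gamma(x + y)

def perm_sign(sigma):
    # Sign of a permutation given as a tuple of 1-based images,
    # computed from its cycle decomposition
    sign, seen = 1, [False] * len(sigma)
    for i in range(len(sigma)):
        if not seen[i]:
            j, clen = i, 0
            while not seen[j]:
                seen[j] = True
                j = sigma[j] - 1
                clen += 1
            sign *= (-1) ** (clen - 1)
    return sign

def g_marginal(wl, l, t1, t2):
    # Unnormalized marginal g'_{l,t1,t2}(w_l) of (8), as a sum over all
    # pairs of permutations (sigma_1, sigma_2) of {1, ..., l}
    total = 0.0
    for s1 in permutations(range(1, l + 1)):
        for s2 in permutations(range(1, l + 1)):
            term = (perm_sign(s1) * perm_sign(s2)
                    * wl ** (t1 + 2 * l - s1[-1] - s2[-1]) / (1 + wl) ** t2)
            for i in range(l - 1):
                term *= beta_fn(t1 + 2 * l - s1[i] - s2[i] + 1,
                                t2 - t1 - 2 * l - 1 + s1[i] + s2[i])
            total += term
    return total
```

As an assumed worked example: for $(m', p, n') = (2, 2, 2)$, i.e. $l = 2$, $t_1 = 0$, $t_2 = 4$, and $M_{m',p,n'} = 6$, carrying out the four permutation-pair terms by hand gives the normalized marginal $2(1 - w + w^2)/(1+w)^4$, which the code reproduces.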

### I-C Some properties about Q

As shown in [1], the GSVD decomposition matrix $Q$ is often used to construct the precoding matrix at the transmitting end. In this section, some properties of $Q$ are discussed.

First, define $B = (A^H, C^H)^H$. As shown in [5, eq. (2.2)], $Q$ can be expressed as

$$Q = Q' \begin{pmatrix} (W^H R)^{-1} & \\ & O_{n-k} \end{pmatrix}, \qquad (9)$$

where $Q'$ and $W$ are two unitary matrices, and $R$ is a $k \times k$ nonsingular matrix that has the same singular values as the nonzero singular values of $B$. Thus, the power of $Q$ can be expressed as

$$\begin{aligned} \mathrm{trace}\{Q Q^H\} &= \mathrm{trace}\left\{ Q' \begin{pmatrix} R^{-1} W & \\ & O_{n-k} \end{pmatrix} \begin{pmatrix} W^H R^{-H} & \\ & O_{n-k} \end{pmatrix} Q'^H \right\} \\ &= \mathrm{trace}\left\{ Q'^H Q' \begin{pmatrix} R^{-1} R^{-H} & \\ & O_{n-k} \end{pmatrix} \right\} = \mathrm{trace}\{(R^H R)^{-1}\} = \sum_{i=1}^{k} \lambda_{B,i}^{-1}, \end{aligned} \qquad (10)$$

where $\lambda_{B,i}$, $i = 1, \cdots, k$, are the nonzero eigenvalues of $B^H B$.

Note that $A$ and $C$ are two independent Gaussian matrices. Then, it is easy to know that $B^H B$ is a Wishart matrix. Finally, directly from [6, Lemma 2.10], the following corollary can be derived.

###### Corollary 2

Suppose that $A$ and $C$ are two Gaussian matrices whose elements are i.i.d. complex Gaussian random variables with zero mean and unit variance and their GSVD is defined as in (1). The average power of the GSVD decomposition matrix $Q$ can be expressed as follows:

$$E\{\mathrm{trace}\{Q Q^H\}\} = \frac{\min\{m+q, n\}}{|m+q-n|}. \qquad (11)$$
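Corollary 2 lends itself to a quick Monte Carlo sanity check. The sketch below (the helper name `avg_power` is ours) averages $\sum_i \lambda_{B,i}^{-1}$ over independent draws of $B = (A^H, C^H)^H$ and compares against (11); for $m = q = 2$ and $n = 8$ the prediction is $\min\{4, 8\}/|4-8| = 1$:

```python
import numpy as np

def avg_power(m, q, n, trials, rng):
    # Monte Carlo estimate of E{trace(Q Q^H)} = E{sum_i 1/lambda_{B,i}}, where
    # lambda_{B,i} are the nonzero eigenvalues of B^H B and B = (A^H, C^H)^H
    acc = 0.0
    for _ in range(trials):
        B = (rng.standard_normal((m + q, n))
             + 1j * rng.standard_normal((m + q, n))) / np.sqrt(2)
        # nonzero eigenvalues of B^H B = eigenvalues of the smaller Gram matrix
        G = B @ B.conj().T if m + q <= n else B.conj().T @ B
        acc += np.sum(1.0 / np.linalg.eigvalsh(G))
    return acc / trials

rng = np.random.default_rng(2)
est = avg_power(2, 2, 8, 2000, rng)   # (11) predicts min{4, 8}/|4 - 8| = 1
```

The estimate converges at the usual $O(1/\sqrt{\text{trials}})$ Monte Carlo rate, so only loose agreement should be expected.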

## Appendix A: Proof of Theorem 1

### 1. The case when $q \ge n$

When $q \ge n$, $k = n$, $r = 0$, and $s = \min\{m, n\}$. Thus, from (1), the GSVD of $A$ and $C$ can be further expressed as follows:

$$U A Q = \Sigma_A \quad \text{and} \quad V C Q = \Sigma_C. \qquad (12)$$

Moreover, when $q \ge n$, as shown in (9), $Q$ is nonsingular and it can be shown that

$$A^H A = Q^{-H} \Sigma_A^H \Sigma_A Q^{-1} \quad \text{and} \quad C^H C = Q^{-H} \Sigma_C^H \Sigma_C Q^{-1}. \qquad (13)$$

Furthermore, from (2), it can be shown that

$$\Sigma_A^H \Sigma_A = \begin{pmatrix} S_A^H S_A & \\ & O_{n-s} \end{pmatrix} \quad \text{and} \quad \Sigma_C^H \Sigma_C = \begin{pmatrix} S_C^H S_C & \\ & I_{n-s} \end{pmatrix}. \qquad (14)$$

Thus, $(C^H C)^{-1} A^H A$ can be expressed as

$$(C^H C)^{-1} A^H A = Q \begin{pmatrix} S_A^H S_A [S_C^H S_C]^{-1} & \\ & O_{n-s} \end{pmatrix} Q^{-1}. \qquad (15)$$

Recall that $\mu_i = s_{A,i}^2 / s_{C,i}^2$ and $S_A^H S_A [S_C^H S_C]^{-1} = \mathrm{diag}(\mu_1, \cdots, \mu_s)$. Then, it can be shown that the nonzero eigenvalues of $(C^H C)^{-1} A^H A$ are $\mu_i$, $i = 1, \cdots, s$. Finally, it is easy to see that the distribution of $\mu_i$, $i = 1, \cdots, s$, is identical to that of the nonzero eigenvalues of $L$, where

$$L = A (C^H C)^{-1} A^H = X^H (Y Y^H)^{-1} X, \qquad (16)$$

and $X = A^H$ and $Y = C^H$ are two independent Gaussian matrices whose elements are i.i.d. complex Gaussian random variables with zero mean and unit variance. Moreover, $(m', p, n') = (n, m, q)$.

### 2. The case when $q < n < m+q$

When $q < n < m+q$, $k = n$, $r = n - q$, and $s = m+q-n$. As shown in (2), $\mu_i = s_{A,i}^2 / s_{C,i}^2$, $i = 1, \cdots, s$, are the squared generalized singular values of $A$ and $C$, and $S_A^H S_A + S_C^H S_C = I_s$.

On the other hand, define $B = (A^H, C^H)^H$ and the SVD of $B$ as $B = P \Lambda_B V_B^H$. Note that $P$ is a Haar matrix. Divide $P$ into the following four blocks:

$$P = \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix}, \qquad (17)$$

where $P_{11} \in \mathbb{C}^{m \times n}$ and $P_{22} \in \mathbb{C}^{q \times (m+q-n)}$. From [5, eq. (2.7)], it is easy to see that $s_{A,i}^2$, $i = 1, \cdots, s$, equal the non-one eigenvalues of $P_{11} P_{11}^H$. Moreover, from the fact that $P$ is a Haar matrix, it can be shown that

$$P P^H = P^H P = I_{m+q}. \qquad (18)$$

Thus, the following equations can be derived:

$$P_{11} P_{11}^H + P_{12} P_{12}^H = I_m \quad \text{and} \quad P_{22}^H P_{22} + P_{12}^H P_{12} = I_{m+q-n}. \qquad (19)$$

Note that $P_{12} P_{12}^H$ and $P_{12}^H P_{12}$ have the same non-zero eigenvalues. Thus, $P_{11} P_{11}^H$ and $P_{22}^H P_{22}$ have the same non-one eigenvalues. Therefore, $s_{A,i}^2$, $i = 1, \cdots, s$, equal the non-one eigenvalues of $P_{22}^H P_{22}$. Define $P'$ as

$$P' = \begin{pmatrix} P_{22}^H & P_{12}^H \\ P_{21}^H & P_{11}^H \end{pmatrix}. \qquad (20)$$

It is easy to see that $P' P'^H = I_{m+q}$ and $P'$ is also a Haar matrix.

Then, from the above discussions, it can be concluded that the distribution of $s_{A,i}^2$, $i = 1, \cdots, s$, is identical to the distribution of the non-one squared singular values of the $m \times n$ or $(m+q-n) \times q$ truncated sub-matrix of a Haar matrix. Thus, define $A' \in \mathbb{C}^{(m+q-n) \times q}$ and $C' \in \mathbb{C}^{n \times q}$, whose entries are i.i.d. complex Gaussian random variables with zero mean and unit variance. The distribution of the squared generalized singular values of $A$ and $C$, $\mu_i$, $i = 1, \cdots, s$, is identical to that of the squared generalized singular values of $A'$ and $C'$. Since $n \ge q$, from Appendix A-1, it can be known that the distribution of the squared generalized singular values of $A'$ and $C'$ is identical to that of the nonzero eigenvalues of $L = X^H (Y Y^H)^{-1} X$, where $X \in \mathbb{C}^{q \times (m+q-n)}$ and $Y \in \mathbb{C}^{q \times n}$ are two independent Gaussian matrices whose elements are i.i.d. complex Gaussian random variables with zero mean and unit variance.

This completes the proof of the theorem.
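The unitarity identities used in the case above are deterministic and easy to verify numerically. The sketch below (the helper `haar_unitary` is ours, using the standard QR-with-phase-fix construction) checks (19) and the matching of the non-one eigenvalues of $P_{11} P_{11}^H$ and $P_{22}^H P_{22}$ for one draw:

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-distributed unitary matrix via QR of a complex Gaussian matrix,
    # with the standard phase correction taken from the diagonal of R
    Z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    Qm, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Qm * (d / np.abs(d))

rng = np.random.default_rng(3)
m, q, n = 3, 4, 5                     # q < n < m + q, so s = m + q - n = 2
P = haar_unitary(m + q, rng)
P11, P12 = P[:m, :n], P[:m, n:]
P21, P22 = P[m:, :n], P[m:, n:]

# (19): complementary blocks of a unitary matrix
eye_m = P11 @ P11.conj().T + P12 @ P12.conj().T
eye_s = P22.conj().T @ P22 + P12.conj().T @ P12

# P11 P11^H and P22^H P22 share their non-one eigenvalues
e1 = np.sort(np.linalg.eigvalsh(P11 @ P11.conj().T))
e2 = np.sort(np.linalg.eigvalsh(P22.conj().T @ P22))
```

With these assumed dimensions, `e1` carries the $s = 2$ non-one eigenvalues plus $r = n - q = 1$ unit eigenvalue, while `e2` carries exactly the two non-one eigenvalues.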

## Appendix B: Proof of Lemma 1

First, the marginal p.d.f. derived from (5) can be expressed as follows:

$$g_{m',p,n'}(w_l) = \int_0^\infty \cdots \int_0^\infty M_{m',p,n'} \frac{\prod_{i=1}^{l} w_i^{t_1}}{\prod_{i=1}^{l} (1+w_i)^{t_2}} \prod_{i<j}^{l} (w_i - w_j)^2 \, dw_1 \cdots dw_{l-1} = M_{m',p,n'} \, g'_{l,t_1,t_2}(w_l),$$

where $f'_{l,t_1,t_2}(w_1, \cdots, w_l) = \frac{\prod_{i=1}^{l} w_i^{t_1}}{\prod_{i=1}^{l} (1+w_i)^{t_2}} \prod_{i<j}^{l} (w_i - w_j)^2$, and

$$g'_{l,t_1,t_2}(w_l) = \int_0^\infty \cdots \int_0^\infty f'_{l,t_1,t_2}(w_1, \cdots, w_l) \, dw_1 \cdots dw_{l-1} = \int_0^\infty \cdots \int_0^\infty \frac{\prod_{i=1}^{l} w_i^{t_1}}{\prod_{i=1}^{l} (1+w_i)^{t_2}} \prod_{i<j}^{l} (w_i - w_j)^2 \, dw_1 \cdots dw_{l-1}.$$

Note that $\prod_{i<j}^{l} (w_i - w_j)^2$ is the square of a Vandermonde determinant and can be expanded as

$$\prod_{i<j}^{l} (w_i - w_j)^2 = \sum_{\sigma_1, \sigma_2 \in S_l} \mathrm{sign}(\sigma_1) \mathrm{sign}(\sigma_2) \prod_{i=1}^{l} w_i^{2l - \sigma_1(i) - \sigma_2(i)}.$$

Thus, $f'_{l,t_1,t_2}(w_1, \cdots, w_l)$ can be further expressed as

$$f'_{l,t_1,t_2}(w_1, \cdots, w_l) = \sum_{\sigma_1, \sigma_2 \in S_l} \mathrm{sign}(\sigma_1) \mathrm{sign}(\sigma_2) \prod_{i=1}^{l} \frac{w_i^{t_1 + 2l - \sigma_1(i) - \sigma_2(i)}}{(1+w_i)^{t_2}}.$$

Therefore, $g'_{l,t_1,t_2}(w_l)$ can be expressed as

$$\begin{aligned} g'_{l,t_1,t_2}(w_l) &= \sum_{\sigma_1, \sigma_2 \in S_l} \mathrm{sign}(\sigma_1) \mathrm{sign}(\sigma_2) \int_0^\infty \cdots \int_0^\infty \prod_{i=1}^{l} \frac{w_i^{t_1 + 2l - \sigma_1(i) - \sigma_2(i)}}{(1+w_i)^{t_2}} \, dw_1 \cdots dw_{l-1} \\ &= \sum_{\sigma_1, \sigma_2 \in S_l} \mathrm{sign}(\sigma_1) \mathrm{sign}(\sigma_2) \frac{w_l^{t_1 + 2l - \sigma_1(l) - \sigma_2(l)}}{(1+w_l)^{t_2}} \prod_{i=1}^{l-1} \int_0^\infty \frac{w_i^{t_1 + 2l - \sigma_1(i) - \sigma_2(i)}}{(1+w_i)^{t_2}} \, dw_i. \end{aligned}$$

Moreover, from Eq. (3.194.3) of [4], it can be shown that

$$\int_0^\infty \frac{w^{x-1}}{(1+w)^{x+y}} \, dw = B(x, y).$$

Then, $g'_{l,t_1,t_2}(w_l)$ can be expressed as

$$\begin{aligned} g'_{l,t_1,t_2}(w_l) = {} & \sum_{\sigma_1, \sigma_2 \in S_l} \mathrm{sign}(\sigma_1) \mathrm{sign}(\sigma_2) \frac{w_l^{t_1 + 2l - \sigma_1(l) - \sigma_2(l)}}{(1+w_l)^{t_2}} \\ & \times \prod_{i=1}^{l-1} B\big(t_1 + 2l - \sigma_1(i) - \sigma_2(i) + 1, \; t_2 - t_1 - 2l - 1 + \sigma_1(i) + \sigma_2(i)\big). \end{aligned}$$
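The Beta-integral identity invoked here can itself be checked numerically. The sketch below (function names are ours) evaluates $\int_0^\infty w^{x-1}(1+w)^{-(x+y)}\,dw$ after the substitution $u = w/(1+w)$, which turns it into the standard Beta integral $\int_0^1 u^{x-1}(1-u)^{y-1}\,du$:

```python
import numpy as np
from math import gamma

def beta_fn(x, y):
    # B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return gamma(x) * gamma(y) / gamma(x + y)

def beta_prime_integral(x, y, points=20001):
    # \int_0^inf w^{x-1} (1+w)^{-(x+y)} dw via the substitution u = w/(1+w);
    # the transformed integrand on (0, 1) is u^{x-1} (1-u)^{y-1}
    u = np.linspace(0.0, 1.0, points)
    f = u ** (x - 1) * (1 - u) ** (y - 1)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u))   # trapezoidal rule

val = beta_prime_integral(3, 2)
```

The check assumes $x, y \ge 1$, so that the transformed integrand is bounded at the endpoints; this holds for the Beta arguments appearing in (8) under the assumption $n' \ge m'$.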

On the other hand, following the same steps, $g'_{l,t_1,t_2}(1/w_l)$ can be expressed as

$$\begin{aligned} g'_{l,t_1,t_2}(1/w_l) &= \frac{w_l^{t_2 - t_1}}{(1+w_l)^{t_2}} \int_0^\infty \cdots \int_0^\infty \frac{\prod_{i=1}^{l-1} w_i^{t_1}}{\prod_{i=1}^{l-1} (1+w_i)^{t_2}} \prod_{i=1}^{l-1} \Big(w_i - \frac{1}{w_l}\Big)^2 \\ &\quad \times \prod_{i<j}^{l-1} (w_i - w_j)^2 \, dw_1 \cdots dw_{l-1}. \end{aligned}$$

Moreover, define $w_i' = 1/w_i$, $i = 1, \cdots, l-1$. Then, $g'_{l,t_1,t_2}(1/w_l)$ can be further expressed as

$$\begin{aligned} g'_{l,t_1,t_2}(1/w_l) &= \frac{w_l^{t_2 - t_1}}{(1+w_l)^{t_2}} \int_0^\infty \cdots \int_0^\infty w_l^{-2(l-1)} \frac{\prod_{i=1}^{l-1} w_i'^{\,t_2 - t_1 - 2l}}{\prod_{i=1}^{l-1} (1+w_i')^{t_2}} \prod_{i=1}^{l-1} (w_i' - w_l)^2 \\ &\quad \times \prod_{i<j}^{l-1} (w_i' - w_j')^2 \, dw_1' \cdots dw_{l-1}' = w_l^2 \, g'_{l,t_1',t_2}(w_l), \end{aligned}$$

where $t_1' = t_2 - t_1 - 2l$. Since $t_2 - t_1' - 2l = t_1$, interchanging the roles of $t_1$ and $t_1'$ in this identity also gives $g'_{l,t_1,t_2}(w_l) = \frac{1}{w_l^2} g'_{l,t_1',t_2}(1/w_l)$. Thus, it can be known that $g_{m',p,n'}(w_l) = \frac{1}{w_l^2} M_{m',p,n'} \, g'_{l,t_1',t_2}(1/w_l)$.

This completes the proof of the lemma.