# On the Distribution of an Arbitrary Subset of the Eigenvalues for some Finite Dimensional Random Matrices

We present some new results on the joint distribution of an arbitrary subset of the ordered eigenvalues of complex Wishart, double Wishart, and Gaussian hermitian random matrices of finite dimensions, using a tensor pseudo-determinant operator. Specifically, we derive compact expressions for the joint probability distribution function of the eigenvalues and the expectation of functions of the eigenvalues, including joint moments, for the case of both ordered and unordered eigenvalues.

10/30/2019


## 1 Introduction

The distribution of the eigenvalues of random matrices appears in multivariate statistics, including principal component analysis and the analysis of large data sets; in physics, including nuclear spectra, quantum theory, and atomic physics; in communication theory, especially in relation to multiple-input multiple-output systems; and in signal processing [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. For example, the probability that the eigenvalues of a random symmetric matrix are within an interval finds application in the analysis of stability in physics, complex networks, and complex ecosystems [17, 18, 19, 20, 21], in the analysis of the restricted isometry constant in compressed sensing [14, 22, 23, 24], and it is also related to the expected number of minima of random polynomials [25]. The distribution of the eigenvalues also appears in statistical ranking and selection theory for radar signal processing [26, 27, 28], in cognitive radio systems [29, 30, 31, 32, 33, 34], and in adaptive filter design [35].

Owing to the difficulties in computing the exact marginal distributions of the eigenvalues, asymptotic formulas for matrices with large dimensions are often used as approximations. These approaches, however, allow the investigation of only specific subclasses of matrices. For example, the asymptotic distribution of the largest eigenvalue of Wishart matrices is known only for the uncorrelated case [36]. In the presence of correlation, the analysis is much more involved and Gaussian approximations are generally applied [37].

For random matrices with finite dimensions (non-asymptotic analysis), the derivation of the distribution of the eigenvalues is generally difficult. In particular, for complex matrices, which are the focus of this paper, only few results are available. Expressions for the cumulative distribution function (c.d.f.) of the largest and smallest eigenvalues of a complex Wishart matrix have been obtained in previous works (see for instance [38, 39]); however, the direct computation of the corresponding probability distribution functions (p.d.f.s) from the c.d.f. is not straightforward. A polynomial expression for the p.d.f. of the largest eigenvalue in the uncorrelated central Wishart case was proposed in [40, 41]. The p.d.f. of the largest eigenvalue for the case of uncorrelated noncentral Wishart matrices was studied in [42]. Expressions for the c.d.f. and a first-order expansion for the p.d.f. of the largest eigenvalue in the uncorrelated noncentral case were given in [43]. The p.d.f. of the largest eigenvalue for the uncorrelated central, correlated central, and uncorrelated noncentral Wishart cases was also studied in [44, 45, 46, 47]. The distribution of the largest eigenvalue and the probability that all eigenvalues are within an interval, as well as efficient recursive methods for their numerical computation, have been found for real and complex Wishart, multivariate beta (also known as double Wishart or MANOVA), the Gaussian orthogonal ensemble (GOE), and the Gaussian unitary ensemble (GUE) [48, 49, 21].¹ Expressions for the joint p.d.f. of subsets of unordered eigenvalues of uncorrelated noncentral Wishart matrices were given in [50]. Closed-form expressions for the marginal c.d.f.s and p.d.f.s of some Hermitian random matrices, which also include Wishart matrices, were given in [51]. The moment generating function (m.g.f.) of the largest eigenvalue for both the uncorrelated and correlated central Wishart cases was given in [52]. Besides the finite case, approximations and asymptotics for uncorrelated Wishart and for spiked Wishart matrices have been studied in recent literature (see e.g. [10, 53, 54, 55]).

¹ These matrices are also denominated, using the names of the associated weight polynomials, as Laguerre (Wishart), Jacobi (double Wishart), and Hermite (Gaussian) ensembles.

The goal of the paper is to provide a unified framework for the derivation of marginal distributions, joint distributions, and moments of subsets of eigenvalues, for a general class of random matrices with finite size, including the GUE, correlated central Wishart matrices (with spiked Wishart as a particular case), uncorrelated noncentral Wishart matrices, and double Wishart (multivariate beta) matrices. In particular, we generalize the results in [46, 45] and derive simple expressions for the joint p.d.f. of an arbitrary subset of the eigenvalues.

Denoting by $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_M$ the ordered nonzero eigenvalues of the mentioned random matrices, the contributions of the paper can be summarized as follows:

• We derive simple and concise expressions for the p.d.f. of the largest eigenvalue $\lambda_1$.

• We obtain the joint distribution of $L$ arbitrary eigenvalues, either ordered or unordered. The joint distribution of two arbitrary ordered eigenvalues is a special case of this more general distribution.

• We provide a compact expression for the expectation of statistics of the type $\prod_{\ell} \varphi_\ell(\lambda_\ell)$, where the $\varphi_\ell(\cdot)$ are arbitrary functions and the $\lambda_\ell$ are the unordered eigenvalues. The joint moments of subsets of eigenvalues can be computed as a particular case.

Throughout the paper, we will use $f_X(x)$ to denote the p.d.f. of the random variable (r.v.) $X$ and $\mathrm{E}\{\cdot\}$ to denote the expectation operator. We will use bold for vectors and matrices, so that, for example, $\mathbf{x} = (x_1, \ldots, x_M)$ denotes a vector and $\mathbf{A} = \{a_{i,j}\}$ denotes a matrix with complex elements $a_{i,j}$, with $\mathbf{a}_j$ denoting the $j$th column vector of $\mathbf{A}$. We will use $\det(\mathbf{A})$ or $|\mathbf{A}|$ to denote the determinant of $\mathbf{A}$, and the superscript $(\cdot)^\dagger$ for conjugation and transposition. With $\mathbf{V}(\mathbf{x})$ we indicate the Vandermonde matrix with elements $v_{i,j} = x_j^{\,i-1}$ and determinant $|\mathbf{V}(\mathbf{x})| = \prod_{i<j}(x_j - x_i)$. We denote by $r(x; a, b)$ the indicator function

$$ r(x; a, b) \triangleq \begin{cases} 1 & a \le x \le b \\ 0 & \text{elsewhere,} \end{cases} $$

and with $\delta(x)$ the Dirac delta function.
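As a quick numerical sanity check of the Vandermonde determinant identity above, the following short Python sketch (ours, for illustration only; it is not part of the original development) verifies $|\mathbf V(\mathbf x)| = \prod_{i<j}(x_j - x_i)$ with exact rational arithmetic.

```python
from fractions import Fraction

def det(m):
    """Determinant by Laplace (cofactor) expansion; fine for small matrices."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def vandermonde(xs):
    """V(x) with elements v_{i,j} = x_j^(i-1)."""
    return [[x ** i for x in xs] for i in range(len(xs))]

xs = [Fraction(1), Fraction(2), Fraction(5), Fraction(7)]
lhs = det(vandermonde(xs))
rhs = Fraction(1)
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        rhs *= xs[j] - xs[i]
assert lhs == rhs == 720  # (2-1)(5-1)(7-1)(5-2)(7-2)(7-5) = 720
```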

The paper is organized as follows. The main theorems on the eigenvalue distributions of some classes of random matrices are provided in Section 2. The proof of the main result is presented in Section 3. Section 4 describes some applications of the results presented in Section 2. The results of Section 2 are also specialized in Section 5 to the case of correlated Wishart matrices. Conclusions are given in Section 6.

Throughout the paper we will generally refer to complex matrices, unless otherwise stated.

## 2 Main results

The goal of the paper is to provide a unified framework for the derivation of marginal distributions, joint distributions of subsets of eigenvalues, and moments for a general class of random matrices with arbitrary size. To this aim, we consider $M$ real ordered random variables $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_M$ contained in the interval $[\alpha, \beta]$, whose ordered joint p.d.f. is of the form

$$ f_{\boldsymbol{\lambda}}(\mathbf{x}) = K\,|\boldsymbol{\Phi}(\mathbf{x})| \cdot |\boldsymbol{\Psi}(\mathbf{x})| \prod_{i=1}^{M} \xi(x_i). \tag{1} $$

In the previous equation $\mathbf{x} = (x_1, \ldots, x_M)$, $K$ is a normalizing constant, $\xi(\cdot)$ is an arbitrary function, $\boldsymbol{\Phi}(\mathbf{x})$ is an $M \times M$ matrix with elements $\Phi_{i,j} = \Phi_i(x_j)$, and $\boldsymbol{\Psi}(\mathbf{x})$, with $N \ge M$, is an $N \times N$ matrix having elements

$$ \Psi_{i,j} = \begin{cases} \psi_i(x_j) & j = 1, \ldots, M \\ \bar{\Psi}_{i,j} & j = M+1, \ldots, N \end{cases} \tag{2} $$

with $\psi_i(\cdot)$ arbitrary scalar functions and $\bar{\Psi}_{i,j}$ arbitrary constants.

Expression (1) is of particular importance in multivariate statistical analysis, as it represents the joint p.d.f. of the eigenvalues of central Wishart or pseudo-Wishart matrices having a covariance matrix with eigenvalues of arbitrary multiplicity, of noncentral Wishart matrices with covariance matrix equal to the identity matrix, of multivariate beta (double Wishart) matrices, as well as of the GUE [11, 4, 3, 56, 36]. More precisely, some cases where the distribution of the eigenvalues is in the form (1) are the following.

1. Complex central uncorrelated Wishart matrices: assume a Gaussian complex matrix $\mathbf{X}$ with independent, identically distributed (i.i.d.) columns, each circularly symmetric with identity covariance matrix, with $n \ge M$. The joint p.d.f. of the (real) ordered eigenvalues $x_1 \ge \cdots \ge x_M$ of the complex Wishart matrix is [7, 10, 11]

$$ f(x_1, \ldots, x_M) = K\,|\mathbf{V}(\mathbf{x})|^2 \prod_{i=1}^{M} e^{-x_i}\, x_i^{\,n-M} \tag{3} $$

where $\mathbf{V}(\mathbf{x})$ is the Vandermonde matrix defined in the Introduction and $K$ is a normalizing constant.

2. Complex noncentral uncorrelated Wishart matrices: under the same hypotheses of case 1, but with a nonzero mean matrix, the joint p.d.f. of the (real) ordered eigenvalues of the complex noncentral uncorrelated Wishart matrix is given by [57, 46]

$$ f(x_1, \ldots, x_M) = K\,|\mathbf{W}(\mathbf{x})| \cdot |\boldsymbol{\Upsilon}(\mathbf{x})| \prod_{i=1}^{M} x_i^{\,n-M} e^{-x_i} \tag{4} $$

where $K$ is a normalizing constant [57, 46], $\mathbf{W}(\mathbf{x})$ is a Vandermonde-type matrix, and the $(i,j)$th element of $\boldsymbol{\Upsilon}(\mathbf{x})$ is

$$ \upsilon_{i,j} = \begin{cases} \dfrac{{}_0F_1(n-M+1;\, \mu_j x_i)}{(n-M)!} & j = 1, \ldots, \nu \\[4pt] x_i^{\,M-j} & j = \nu+1, \ldots, M \end{cases} \tag{5} $$

where $\nu$ is the rank of the noncentrality matrix, $\mu_1, \ldots, \mu_\nu$ are its ordered nonzero eigenvalues, and ${}_0F_1(\cdot\,;\cdot)$ is the confluent hypergeometric limit function.

3. Hermitian Gaussian matrices (GUE): the GUE is composed of complex Hermitian random matrices with i.i.d. complex Gaussian entries on the upper triangle and i.i.d. real Gaussian entries on the main diagonal. The joint distribution of the ordered eigenvalues can be written as [36]

$$ f(x_1, \ldots, x_M) = K\,|\mathbf{V}(\mathbf{x})|^2 \prod_{i=1}^{M} e^{-x_i^2} \tag{6} $$

where $K$ is a normalizing constant.

4. Multivariate beta (double Wishart) matrices: let $\mathbf{A}$ and $\mathbf{B}$ denote two independent complex Gaussian matrices, each constituted by zero-mean i.i.d. columns with common covariance. Multivariate analysis of variance (MANOVA) is based on statistics of the eigenvalues of $\mathbf{W}_1(\mathbf{W}_1 + \mathbf{W}_2)^{-1}$ (beta matrix), where $\mathbf{W}_1$ and $\mathbf{W}_2$ are the independent Wishart matrices associated with $\mathbf{A}$ and $\mathbf{B}$. These eigenvalues are clearly related to the eigenvalues of $\mathbf{W}_1 \mathbf{W}_2^{-1}$ (double Wishart or multivariate beta). The joint distribution of the non-null eigenvalues of a multivariate complex beta matrix in the null case can be written in the form [38, 21]

$$ f(x_1, \ldots, x_M) = K\,|\mathbf{V}(\mathbf{x})|^2 \prod_{i=1}^{M} x_i^{\,m} (1-x_i)^{\,n} \tag{7} $$

where $m$ and $n$ are related to the dimensions of the two Gaussian matrices, the eigenvalues are in the interval $[0, 1]$, so that $\alpha = 0$ and $\beta = 1$, and $K$ is a normalizing constant [38, 21].

5. Complex correlated Wishart matrices: in Section  5 we will describe in detail the Wishart case with arbitrary correlation (including the spiked model), for which the joint distribution of the eigenvalues (see (45) and (48)) has the form (1).

In Theorem 2.1 we first generalize the result of [11, Th. 2] to cover the case of matrices having different sizes. The main result of the paper is then Theorem 2.2, which gives the marginal joint distribution of $L$ arbitrary ordered r.v.s.

For a rank-3 tensor $\mathbf{A} = \{a_{i,j,k}\}$, $i, j, k = 1, \ldots, N$, we define the pseudo-determinant operator $T(\mathbf{A})$ as

$$ T(\mathbf{A}) \triangleq \sum_{\boldsymbol{\mu}} \mathrm{sgn}(\boldsymbol{\mu}) \sum_{\boldsymbol{\alpha}} \mathrm{sgn}(\boldsymbol{\alpha}) \prod_{k=1}^{N} a_{\mu_k, \alpha_k, k} \tag{8} $$

where the sums are over all possible permutations $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_N)$ and $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_N)$ of the integers $1, \ldots, N$, and $\mathrm{sgn}(\cdot)$ denotes the sign of the permutation. It is worth noting that $T(\mathbf{A})$ can be simplified as

$$ T(\mathbf{A}) = \sum_{\boldsymbol{\mu}} \mathrm{sgn}(\boldsymbol{\mu}) \det \mathbf{A}^{(\boldsymbol{\mu})} \tag{9} $$

where the $(j,k)$th element of the matrix $\mathbf{A}^{(\boldsymbol{\mu})}$ is $a_{\mu_k, j, k}$. Therefore, the computational complexity of the pseudo-determinant operator is equivalent to that of conventional determinant operators. In particular, if the matrix $\mathbf{A}^{(\boldsymbol{\mu})}$ remains the same for some permutations $\boldsymbol{\mu}$, the computational complexity of the operator can be strongly reduced. As a special case, when the elements $a_{i,j,k}$ are independent of $k$, i.e., $a_{i,j,k} = a_{i,j,1}$ for all $k$, we have

$$ T(\mathbf{A}) = N!\, \det\left(\{a_{i,j,1}\}_{i,j=1,\ldots,N}\right) \tag{10} $$

i.e., the pseudo-determinant of the tensor degenerates into $N!$ times the determinant of the matrix $\{a_{i,j,1}\}$.
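As an illustration, the pseudo-determinant in (8)-(9) can be sketched in a few lines of Python (our illustrative code, not from the paper); the final assertion checks the degenerate case (10) on a small tensor whose elements do not depend on $k$.

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation p given as a tuple of 0-based images."""
    s, seen = 1, set()
    for start in range(len(p)):
        if start in seen:
            continue
        j, length = start, 0
        while j not in seen:
            seen.add(j)
            j, length = p[j], length + 1
        if length % 2 == 0:  # even-length cycles flip the sign
            s = -s
    return s

def det(m):
    """Determinant by cofactor expansion (small matrices only)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def pseudo_det(a):
    """T(A) via (9): sum over mu of sgn(mu) * det(A^(mu)),
    where the (j,k)th element of A^(mu) is a[mu[k]][j][k]."""
    n = len(a)
    return sum(sign(mu) * det([[a[mu[k]][j][k] for k in range(n)]
                               for j in range(n)])
               for mu in permutations(range(n)))

# Degenerate case (10): elements independent of k give T(A) = N! * det.
M = [[1, 2, 0], [3, 1, 4], [2, 0, 1]]
A = [[[M[i][j]] * 3 for j in range(3)] for i in range(3)]
assert pseudo_det(A) == 6 * det(M)  # N! = 3! = 6
```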

Using the above definition, we have the following theorem, which represents the generalization of [11, Th. 2] to the case where the integrand function is composed of the product of the determinants of two matrices having different sizes.

Theorem 2.1. Given arbitrary functions $\xi_k(\cdot)$, $k = 1, \ldots, M$, and two arbitrary matrices $\boldsymbol{\Phi}(\mathbf{x})$, with elements $\Phi_{i,j} = \Phi_i(x_j)$, and $\boldsymbol{\Psi}(\mathbf{x})$, with elements as in (2), the following identity holds:

$$ \int \cdots \int_{\mathcal{D}} |\boldsymbol{\Phi}(\mathbf{x})| \cdot |\boldsymbol{\Psi}(\mathbf{x})| \prod_{k=1}^{M} \xi_k(x_k)\, d\mathbf{x} = T(\mathbf{C}) \tag{11} $$

where the multiple integral is over the hypercube

$$ \mathcal{D} = \{a \le x_1 \le b,\; a \le x_2 \le b,\; \ldots,\; a \le x_M \le b\} $$

and the elements of the tensor $\mathbf{C}$ are

$$ C_{i,j,k} = \begin{cases} \int_a^b \Phi_i(x)\, \Psi_j(x)\, \xi_k(x)\, dx & i \le M,\; k \le M \\ \int_a^b \Psi_j(x)\, \xi_k(x)\, dx & i > M,\; k \le M \\ 0 & i < k,\; k > M \\ \bar{\Psi}_{j,k} & i \ge k,\; k > M. \end{cases} \tag{12} $$

Since the integrand function in (11) does not depend on the specific elements of the matrices but only on their determinants, $\boldsymbol{\Phi}(\mathbf{x})$ in (11) can be replaced by an arbitrary $N \times N$ matrix, say $\hat{\boldsymbol{\Phi}}(\mathbf{x})$, having the same determinant. A possible choice for the elements of $\hat{\boldsymbol{\Phi}}(\mathbf{x})$ is the following:

$$ \hat{\Phi}_{i,j} = \begin{cases} \hat{\Phi}_i(x_j) = \Phi_i(x_j) & i \le M,\; j \le M \\ \hat{\Phi}_i(x_j) = 1 & i > M,\; j \le M \\ 1 & i \ge j,\; j > M \\ 0 & i < j,\; j > M. \end{cases} \tag{13} $$

Applying the definition (13), the integral in the left-hand side (LHS) of (11) becomes

$$ \begin{aligned} \int \cdots \int_{\mathcal{D}} |\hat{\boldsymbol{\Phi}}(\mathbf{x})| \cdot |\boldsymbol{\Psi}(\mathbf{x})| \prod_{k=1}^{M} \xi_k(x_k)\, d\mathbf{x} &= \int \cdots \int_{\mathcal{D}} \left[\sum_{\boldsymbol{\sigma}} \mathrm{sgn}(\boldsymbol{\sigma}) \prod_{l=1}^{N} \hat{\Phi}_{\sigma_l, l}\right] \cdot \left[\sum_{\boldsymbol{\mu}} \mathrm{sgn}(\boldsymbol{\mu}) \prod_{k=1}^{N} \Psi_{\mu_k, k}\right] \prod_{k=1}^{M} \xi_k(x_k)\, d\mathbf{x} \\ &= \sum_{\boldsymbol{\mu}} \mathrm{sgn}(\boldsymbol{\mu}) \sum_{\boldsymbol{\sigma}} \mathrm{sgn}(\boldsymbol{\sigma}) \prod_{k=M+1}^{N} \hat{\Phi}_{\sigma_k, k} \bar{\Psi}_{\mu_k, k} \int \cdots \int_{\mathcal{D}} \prod_{k=1}^{M} \hat{\Phi}_{\sigma_k}(x_k)\, \Psi_{\mu_k}(x_k)\, \xi_k(x_k)\, d\mathbf{x} \\ &= \sum_{\boldsymbol{\mu}} \mathrm{sgn}(\boldsymbol{\mu}) \sum_{\boldsymbol{\sigma}} \mathrm{sgn}(\boldsymbol{\sigma}) \prod_{k=M+1}^{N} \hat{\Phi}_{\sigma_k, k} \bar{\Psi}_{\mu_k, k} \prod_{k=1}^{M} \int_a^b \hat{\Phi}_{\sigma_k}(x)\, \Psi_{\mu_k}(x)\, \xi_k(x)\, dx \\ &= \sum_{\boldsymbol{\mu}} \mathrm{sgn}(\boldsymbol{\mu}) \sum_{\boldsymbol{\sigma}} \mathrm{sgn}(\boldsymbol{\sigma}) \prod_{k=1}^{N} C_{\sigma_k, \mu_k, k} = T(\mathbf{C}) \end{aligned} \tag{14} $$

where the elements of $\mathbf{C}$ are defined in (12).

The following theorem, which is the main result of the paper, gives the joint distribution of an arbitrary subset of the eigenvalues.

Theorem 2.2. The joint p.d.f. of $L$ arbitrary ordered eigenvalues $\lambda_{i_1}, \lambda_{i_2}, \ldots, \lambda_{i_L}$, with $i_1 < i_2 < \cdots < i_L$, with joint distribution as in (1), is given by

$$ f_{\lambda_{i_1}, \lambda_{i_2}, \ldots, \lambda_{i_L}}(x_{i_1}, x_{i_2}, \ldots, x_{i_L}) = c(i_1, i_2, \ldots, i_L)\, K\, T(\mathbf{A}) \tag{15} $$

where $c(i_1, \ldots, i_L) \triangleq 1/\left[(i_1-1)!\,(i_2-i_1-1)! \cdots (i_L-i_{L-1}-1)!\,(M-i_L)!\right]$ and the tensor $\mathbf{A}$ has elements

$$ a_{i,j,k} \triangleq \begin{cases} \int_{\alpha}^{\beta} \varsigma_i(x)\, \Psi_j(x)\, \eta_k(x)\, dx & k \le M \\ 0 & k > M,\; i < k \\ \bar{\Psi}_{j,k} & k > M,\; i \ge k. \end{cases} \tag{16} $$

The function $\eta_k(\cdot)$ in the previous equation is

$$ \eta_k(x) \triangleq \begin{cases} \delta(x - x_k) & k \in \{i_1, i_2, \ldots, i_L\} \\ r(x;\, x_{i_{\varepsilon(k)}},\, x_{i_{\varepsilon(k)-1}}) & \text{elsewhere} \end{cases} \tag{17} $$

and the segment indicator $\varepsilon(k)$ is defined as the unique integer such that $i_{\varepsilon(k)-1} < k < i_{\varepsilon(k)}$, with the conventions $x_{i_0} = \beta$ and $x_{i_{L+1}} = \alpha$. For the proof, see Section 3.
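As a minimal numerical sanity check of Theorem 2.1 (our illustrative Python with exact rationals, not from the paper), take $M = N = 2$, $\Phi_i(x) = \psi_i(x) = x^{i-1}$, and $\xi_k(x) = 1$ on $[a, b] = [0, 1]$. The elements $C_{i,j,k} = \int_0^1 x^{i+j-2}\,dx = 1/(i+j-1)$ do not depend on $k$, so by (10) $T(\mathbf{C}) = 2!\,\det(\mathbf{C})$, which must equal the double integral of $(x_2 - x_1)^2$ over the unit square.

```python
from fractions import Fraction

# Right-hand side of (11): T(C) = 2! * det(C) by (10),
# with C_{i,j} = 1/(i+j-1).
C = [[Fraction(1, i + j - 1) for j in (1, 2)] for i in (1, 2)]
rhs = 2 * (C[0][0] * C[1][1] - C[0][1] * C[1][0])

# Left-hand side of (11): |Phi(x)| * |Psi(x)| = (x2 - x1)^2.  Expanding,
# int_0^1 int_0^1 (x1^2 - 2 x1 x2 + x2^2) dx1 dx2 = 1/3 - 1/2 + 1/3.
lhs = Fraction(1, 3) - Fraction(1, 2) + Fraction(1, 3)

assert lhs == rhs == Fraction(1, 6)
```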

## 3 Proof of Theorem 2.2

In this section we will prove Theorem 2.2, by first deriving the p.d.f. of one eigenvalue, then of two arbitrary eigenvalues, and finally treating the general case of $L$ arbitrary eigenvalues.

The marginal distribution of one ordered eigenvalue is obtained in the following lemma. The p.d.f. of the $\ell$th ordered eigenvalue is given by

$$ f_{\lambda_\ell}(x_\ell) = c(\ell)\, K\, T(\mathbf{A}) \tag{18} $$

where

$$ c(\ell) \triangleq \frac{1}{(\ell-1)!\,(M-\ell)!} \tag{19} $$

and the tensor $\mathbf{A} = \mathbf{A}(x_\ell)$ has elements

$$ a_{i,j,k} \triangleq \begin{cases} \int_{x_\ell}^{\beta} \varsigma_i(x)\, \Psi_j(x)\, dx & k < \ell \le M \\ \varsigma_i(x_\ell)\, \Psi_j(x_\ell) & k = \ell \le M \\ \int_{\alpha}^{x_\ell} \varsigma_i(x)\, \Psi_j(x)\, dx & \ell < k \le M \\ 0 & k > M,\; i < k \\ \bar{\Psi}_{j,k} & k > M,\; i \ge k \end{cases} \tag{20} $$

where

$$ \varsigma_i(x) \triangleq \begin{cases} \Phi_i(x)\, \xi(x) & i \le M \\ \xi(x) & i > M. \end{cases} \tag{21} $$

For the marginal distribution of the $\ell$th ordered eigenvalue we have to evaluate

$$ f_{\lambda_\ell}(x_\ell) = \int \cdots \int_{\mathcal{D}(x_\ell)} d\boldsymbol{\lambda}^{(\ell)}\, f(\boldsymbol{\lambda}^{(\ell)}, x_\ell) \tag{22} $$

where $\boldsymbol{\lambda}^{(\ell)}$ is the vector $\boldsymbol{\lambda}$ excluding $\lambda_\ell$, and

$$ \mathcal{D}(x_\ell) = \{\boldsymbol{\lambda}^{(\ell)} : \lambda_1 \ge \cdots \ge \lambda_{\ell-1} \ge x_\ell \ge \lambda_{\ell+1} \ge \cdots \ge \lambda_M\}. $$

The previous expression can be rewritten as

$$ \begin{aligned} f_{\lambda_\ell}(x_\ell) = {}& \underbrace{\int_{x_\ell}^{\beta} d\lambda_{\ell-1} \cdots \int_{\lambda_3}^{\beta} d\lambda_2 \int_{\lambda_2}^{\beta} d\lambda_1}_{\beta > \lambda_1 \ge \lambda_2 \ge \cdots \ge x_\ell} \\ & \times \underbrace{\int_{\alpha}^{x_\ell} d\lambda_{\ell+1} \cdots \int_{\alpha}^{\lambda_{M-2}} d\lambda_{M-1} \int_{\alpha}^{\lambda_{M-1}} d\lambda_M}_{x_\ell \ge \lambda_{\ell+1} \ge \cdots \ge \lambda_M > \alpha} f(\boldsymbol{\lambda}^{(\ell)}, x_\ell). \end{aligned} \tag{23} $$

Now, due to the symmetry of the function $f$ in (1), we can also write

$$ f_{\lambda_\ell}(x_\ell) = c(\ell) \int_{x_\ell}^{\beta} \cdots \int_{x_\ell}^{\beta} d\lambda_1 \cdots d\lambda_{\ell-1} \int_{\alpha}^{x_\ell} \cdots \int_{\alpha}^{x_\ell} d\lambda_{\ell+1} \cdots d\lambda_M\, f(\boldsymbol{\lambda}^{(\ell)}, x_\ell) $$

where $c(\ell)$ is defined in (19). To be able to use the operator $T(\cdot)$ we must integrate with respect to all variables over a hypercubical integration domain. To this aim, we use the indicator function $r(\cdot\,;\cdot,\cdot)$ defined in the Introduction, which, together with the Dirac delta function $\delta(\cdot)$, allows us to write

$$ \begin{aligned} f_{\lambda_\ell}(x_\ell) = {}& c(\ell) \int_{\alpha}^{\beta} \cdots \int_{\alpha}^{\beta} r(\lambda_1; x_\ell, \beta) \cdots r(\lambda_{\ell-1}; x_\ell, \beta) \\ & \times \delta(\lambda_\ell - x_\ell)\, r(\lambda_{\ell+1}; \alpha, x_\ell) \cdots r(\lambda_M; \alpha, x_\ell)\, f(\boldsymbol{\lambda})\, d\boldsymbol{\lambda}. \end{aligned} \tag{24} $$

Then, by using Theorem 2.1 in (24), with $a = \alpha$, $b = \beta$, and

$$ \xi_k(x) = \begin{cases} r(x; x_\ell, \beta)\, \xi(x) & k < \ell \\ \delta(x - x_\ell)\, \xi(x) & k = \ell \\ r(x; \alpha, x_\ell)\, \xi(x) & \ell + 1 \le k \le M \end{cases} \tag{25} $$

we get (18) and (20).

The marginal joint distribution of any two ordered eigenvalues is given in the following lemma. The joint p.d.f. of the $\ell$th and $s$th ordered eigenvalues, with $\ell < s$, is given by

$$ f_{\lambda_\ell, \lambda_s}(x_\ell, x_s) = c(\ell, s)\, K\, T(\mathbf{A}) \tag{26} $$

where

$$ c(\ell, s) \triangleq \frac{1}{(\ell-1)!\,(s-\ell-1)!\,(M-s)!} \tag{27} $$

and the tensor $\mathbf{A}$ has elements

$$ a_{i,j,k} \triangleq \begin{cases} \int_{x_\ell}^{\beta} \varsigma_i(x)\, \Psi_j(x)\, dx & k < \ell \le M \\ \varsigma_i(x_\ell)\, \Psi_j(x_\ell) & k = \ell \le M \\ \int_{x_s}^{x_\ell} \varsigma_i(x)\, \Psi_j(x)\, dx & \ell < k < s \\ \varsigma_i(x_s)\, \Psi_j(x_s) & k = s \le M \\ \int_{\alpha}^{x_s} \varsigma_i(x)\, \Psi_j(x)\, dx & s < k \le M \\ 0 & k > M,\; i < k \\ \bar{\Psi}_{j,k} & k > M,\; i \ge k \end{cases} \tag{28} $$

and $\varsigma_i(\cdot)$ is defined in (21). For the proof, we proceed as for the previous lemma and obtain

$$ \begin{aligned} f_{\lambda_\ell, \lambda_s}(x_\ell, x_s) = {}& c(\ell, s) \int_{\alpha}^{\beta} \cdots \int_{\alpha}^{\beta} r(\lambda_1; x_\ell, \beta) \cdots r(\lambda_{\ell-1}; x_\ell, \beta) \\ & \times \delta(\lambda_\ell - x_\ell)\, r(\lambda_{\ell+1}; x_s, x_\ell) \cdots r(\lambda_{s-1}; x_s, x_\ell)\, \delta(\lambda_s - x_s) \\ & \times r(\lambda_{s+1}; \alpha, x_s) \cdots r(\lambda_M; \alpha, x_s)\, f(\boldsymbol{\lambda})\, d\boldsymbol{\lambda} \end{aligned} \tag{29} $$

where $c(\ell, s)$ is defined in (27). Using Theorem 2.1 with

$$ \xi_k(x) = \begin{cases} r(x; x_\ell, \beta)\, \xi(x) & k < \ell \\ \delta(x - x_\ell)\, \xi(x) & k = \ell \\ r(x; x_s, x_\ell)\, \xi(x) & \ell + 1 \le k \le s - 1 \\ \delta(x - x_s)\, \xi(x) & k = s \\ r(x; \alpha, x_s)\, \xi(x) & s + 1 \le k \le M \end{cases} \tag{30} $$

we finally obtain (26) and (28).

For the proof of the general case of Theorem 2.2, that is, the marginal joint distribution of $L$ arbitrary ordered eigenvalues, we follow the same approach used for the two previous lemmas, generalizing (24) to the case of $L$ variables kept fixed and integrating over the remaining ones. In this way we obtain (15), (16), and (17).

## 4 Some applications of Theorems 2.1 and 2.2

### 4.1 Expected value of a function of the ℓth ordered eigenvalue

The expected value of an arbitrary function $\varphi(\cdot)$ of the $\ell$th ordered eigenvalue is given by

$$ \mathrm{E}\{\varphi(\lambda_\ell)\} = c(\ell)\, K\, I \tag{31} $$

where

$$ \begin{aligned} I &\triangleq \int_{\alpha}^{\beta} \varphi(x_\ell)\, T(\mathbf{A}(x_\ell))\, dx_\ell \\ &= \sum_{\boldsymbol{\mu}} \mathrm{sgn}(\boldsymbol{\mu}) \sum_{\boldsymbol{\alpha}} \mathrm{sgn}(\boldsymbol{\alpha}) \int_{\alpha}^{\beta} \varphi(x_\ell) \prod_{k=1}^{N} a_{\mu_k, \alpha_k, k}(x_\ell)\, dx_\ell \end{aligned} \tag{32} $$

and the elements $a_{i,j,k}(x_\ell)$ are defined in (20). The proof is by direct substitution. By specializing the previous result to $\varphi(x) = x^m$ we obtain the moments of the distribution of an arbitrary ordered eigenvalue, with $\varphi(x) = r(x; \alpha, z)$ we obtain the c.d.f. evaluated at $z$, and with $\varphi(x) = e^{\nu x}$ we get the moment generating function (m.g.f.) of $\lambda_\ell$.

Note also that in many cases the evaluation of (32) does not require multidimensional numerical integration. For example, as shown by (49), for Wishart matrices the functions $a_{i,j,k}(\cdot)$ can be written in closed form.
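As a hypothetical worked example (our code, not from the paper), consider the uncorrelated central Wishart case (3) with $M = N = n = 2$, for which $K = 1$ and $c(1) = 1$. The p.d.f. of the largest eigenvalue obtained from (18)-(21) involves only exponentials and lower incomplete gamma functions, and its normalization can be checked numerically.

```python
import math

def lower_gamma(k, x):
    """gamma(k, x) = int_0^x t^(k-1) e^(-t) dt, for integer k >= 1."""
    s = sum(x ** m / math.factorial(m) for m in range(k))
    return math.factorial(k - 1) * (1.0 - math.exp(-x) * s)

def f_lmax(x):
    """f_{lambda_1}(x) = T(A(x)) for M = N = n = 2, expanded over the 2x2
    permutations; a(i,j,1) = varsigma_i(x) Psi_j(x) (the k = l = 1 branch
    of (20)) and a(i,j,2) = int_0^x ... dt (the l < k <= M branch)."""
    def a(i, j, k):
        if k == 1:
            return x ** (i + j - 2) * math.exp(-x)
        return lower_gamma(i + j - 1, x)
    return (a(1, 1, 1) * a(2, 2, 2) - a(1, 2, 1) * a(2, 1, 2)
            - a(2, 1, 1) * a(1, 2, 2) + a(2, 2, 1) * a(1, 1, 2))

# Normalization check by trapezoidal quadrature on [0, 30]:
h = 0.001
n_steps = 30000
total = h * (sum(f_lmax(m * h) for m in range(1, n_steps))
             + 0.5 * (f_lmax(0.0) + f_lmax(30.0)))
assert abs(total - 1.0) < 1e-4
```

Expanding the four permutation terms by hand gives the closed form $f_{\lambda_1}(x) = (x^2 - 2x + 2)e^{-x} - 2e^{-2x}$, which indeed integrates to one.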

### 4.2 Probability that all eigenvalues are within the interval [a,b]

The probability that all eigenvalues are within the interval $[a, b]$ is given by

$$ \Pr\{\text{all eigenvalues are} \in [a, b]\} = \frac{K}{M!}\, T(\mathbf{A}) \tag{33} $$

where the tensor $\mathbf{A}$ has elements

$$ a_{i,j,k} = \begin{cases} \int_a^b \Phi_i(x)\, \Psi_j(x)\, \xi(x)\, dx & i \le M,\; k \le M \\ \int_a^b \Psi_j(x)\, \xi(x)\, dx & i > M,\; k \le M \\ 0 & i < k,\; k > M \\ \bar{\Psi}_{j,k} & i \ge k,\; k > M. \end{cases} \tag{34} $$

For the proof we note that

$$ \int \cdots \int_{b \ge x_1 \ge \cdots \ge x_M \ge a} f_{\boldsymbol{\lambda}}(\mathbf{x})\, d\mathbf{x} = \frac{K}{M!} \int_a^b \cdots \int_a^b |\boldsymbol{\Phi}(\mathbf{x})| \cdot |\boldsymbol{\Psi}(\mathbf{x})| \prod_{k=1}^{M} \xi(x_k)\, d\mathbf{x}. \tag{35} $$

Then, by applying Theorem 2.1 we get (33). As a special case, if $N = M$, so that $\boldsymbol{\Psi}(\mathbf{x})$ has no constant columns,² the probability that all eigenvalues are within the interval $[a, b]$ becomes

² This is the case, for instance, of the joint p.d.f. of the eigenvalues of central Wishart matrices.

$$ \Pr\{\text{all eigenvalues are} \in [a, b]\} = K\, |\mathbf{B}| \tag{36} $$

where $\mathbf{B}$ has elements

$$ b_{i,j} \triangleq \int_a^b \Phi_i(x)\, \Psi_j(x)\, \xi(x)\, dx. \tag{37} $$
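For instance (an illustrative Python sketch of (36)-(37), ours and not from the paper), in the uncorrelated central Wishart case (3) with $M = n = 2$ we have $K = 1$, $\Phi_i(x) = \psi_i(x) = x^{i-1}$, and $\xi(x) = e^{-x}$, so each $b_{i,j}$ reduces to a lower incomplete gamma function and the probability is a $2 \times 2$ determinant.

```python
import math

def lower_gamma(k, x):
    """gamma(k, x) = int_0^x t^(k-1) e^(-t) dt, for integer k >= 1."""
    s = sum(x ** m / math.factorial(m) for m in range(k))
    return math.factorial(k - 1) * (1.0 - math.exp(-x) * s)

def prob_all_in(b):
    """Pr{all eigenvalues in [0, b]} = K |B|, eq. (36), with K = 1 and
    b_{i,j} = int_0^b x^(i+j-2) e^(-x) dx = gamma(i+j-1, b)."""
    B = [[lower_gamma(i + j - 1, b) for j in (1, 2)] for i in (1, 2)]
    return B[0][0] * B[1][1] - B[0][1] * B[1][0]

# Since the eigenvalues are nonnegative, this is the c.d.f. of the
# largest eigenvalue: increasing from 0 to 1 in b.
assert prob_all_in(0.0) == 0.0
assert prob_all_in(1.0) < prob_all_in(2.0) < 1.0
assert abs(prob_all_in(50.0) - 1.0) < 1e-12
```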

### 4.3 The unordered case: marginal joint distribution of L arbitrary eigenvalues

The joint p.d.f. of $L$ arbitrary unordered eigenvalues (note that, due to symmetry, we can always consider the first $L$ without loss of generality) is given by

$$ f^{(\mathrm{unord})}_{\lambda_1, \lambda_2, \ldots, \lambda_L}(x_1, x_2, \ldots, x_L) = \frac{K}{M!}\, T(\mathbf{A}) \tag{38} $$

where the tensor $\mathbf{A}$ has elements

$$ a_{i,j,k} = \begin{cases} \int_{\alpha}^{\beta} \varsigma_i(x)\, \Psi_j(x)\, \eta_k(x)\, dx & k \le M \\ 0 & k > M,\; i < k \\ \bar{\Psi}_{j,k} & k > M,\; i \ge k \end{cases} \tag{39} $$

with

$$ \eta_k(x) \triangleq \begin{cases} \delta(x - x_k) & k \le L \\ 1 & \text{elsewhere.} \end{cases} \tag{40} $$

For the proof we proceed similarly to the previous cases. Note that some results for the unordered case can also be found in [58].

### 4.4 The unordered case: expected value, moments and c.d.f. of L eigenvalues

The expected value of the product of arbitrary functions $\varphi_\ell(\cdot)$ of the unordered eigenvalues is given by

$$ \mathrm{E}\left\{\prod_{\ell=1}^{M} \varphi_\ell(\lambda_\ell)\right\} = \frac{K}{M!}\, T(\mathbf{A}) \tag{41} $$

where the tensor $\mathbf{A}$ has elements

$$ a_{i,j,k} = \begin{cases} \int_{\alpha}^{\beta} \Phi_i(x)\, \Psi_j(x)\, \xi(x)\, \varphi_k(x)\, dx & i \le M,\; k \le M \\ \int_{\alpha}^{\beta} \Psi_j(x)\, \xi(x)\, \varphi_k(x)\, dx & i > M,\; k \le M \\ 0 & i < k,\; k > M \\ \bar{\Psi}_{j,k} & i \ge k,\; k > M. \end{cases} \tag{42} $$

The proof is immediate by Theorem 2.1. Special cases include the joint moments of the unordered eigenvalues

$$ \mathrm{E}\{\lambda_1^{m_1} \cdots \lambda_M^{m_M}\} \tag{43} $$

obtained with $\varphi_\ell(x) = x^{m_\ell}$ (by setting $m_\ell = 0$ for some $\ell$ we obtain the joint moments of the marginal eigenvalues).

The joint m.g.f. can be written as

$$ \mathcal{M}_{\boldsymbol{\lambda}}(\nu_1, \ldots, \nu_M) \triangleq \mathrm{E}\left\{\prod_{\ell=1}^{M} e^{\nu_\ell \lambda_\ell}\right\} \tag{44} $$

which can be obtained from (41) with $\varphi_\ell(x) = e^{\nu_\ell x}$.

## 5 Results for Complex Wishart Matrices

As previously observed, the expression for the joint p.d.f. of the eigenvalues of complex central Wishart matrices has the same form as in (1). To apply the results of Sections 2 and 4 to the cases of Wishart and pseudo-Wishart matrices, the following Lemma can be used [56].

Denoting by $\mathbf{X}$ a complex Gaussian random matrix with zero-mean, unit-variance, i.i.d. entries, and by $\boldsymbol{\Sigma}$ a positive definite matrix, the joint p.d.f. of the $M$ (real) nonzero ordered eigenvalues $\lambda_1 \ge \cdots \ge \lambda_M$ of the associated quadratic form is

$$ f_{\boldsymbol{\lambda}}(x_1, \ldots, x_M) = K\,|\mathbf{V}(\mathbf{x})| \cdot |\mathbf{G}(\mathbf{x}, \boldsymbol{\phi})| \prod_{i=1}^{M} \xi(x_i) \tag{45} $$

where $\mathbf{V}(\mathbf{x})$ is the $M \times M$ Vandermonde matrix with elements $v_{i,j} = x_j^{\,i-1}$. The constant $K$ is given by

$$ K = \frac{(-1)^{p(n-M)}\, \Gamma_M(p)}{\prod_{i=1}^{L} \phi_i^{\,m_i p}\, \prod_{i=1}^{L} \Gamma_{m_i}(m_i)\, \prod_{i<j} (\phi_i - \phi_j)^{m_i m_j}} \tag{46} $$

where $\phi_1 > \phi_2 > \cdots > \phi_L$ are the $L$ distinct eigenvalues of $\boldsymbol{\Sigma}^{-1}$, with associated multiplicities $m_1, \ldots, m_L$ such that $\sum_{i=1}^{L} m_i = n$.

The ($n \times n$) matrix $\mathbf{G}(\mathbf{x}, \boldsymbol{\phi})$ has elements

$$ g_{i,j} = \begin{cases} g_i(x_j) = (-x_j)^{d(i)}\, e^{-\phi_{e(i)} x_j} & j = 1, \ldots, M \\ \bar{g}_{i,j} = [n-j]_{d(i)}\; \phi_{e(i)}^{\,n-j-d(i)} & j = M+1, \ldots, n \end{cases} \tag{47} $$

where $[a]_k \triangleq a(a-1)\cdots(a-k+1)$ denotes the falling factorial, and $e(i)$ denotes the unique integer such that

$$ m_1 + \cdots + m_{e(i)-1} < i \le m_1 + \cdots + m_{e(i)} $$

and

$$ d(i) = \sum_{k=1}^{e(i)} m_k - i. $$

It can be checked that the uncorrelated case (3) is the special case of (45) obtained for $\boldsymbol{\Sigma}$ proportional to the identity matrix.

Another interesting special case is when $\boldsymbol{\Sigma}$ is spiked, i.e., when all of its eigenvalues are equal except for a small number of larger ones. For this spiked correlation model we have the following result. Let $\mathbf{W}$ be a complex Wishart matrix with spiked covariance matrix, and denote by $\boldsymbol{\sigma}$ the vector of the ordered eigenvalues of the spiked covariance matrix. Then, the joint p.d.f. of the ordered eigenvalues of $\mathbf{W}$ is

 f\boldmathλ(x1,…,xM) =K|E(x,σ)|⋅