The distribution of the eigenvalues of random matrices appears in multivariate statistics (including principal component analysis and the analysis of large data sets), in physics (including nuclear spectra, quantum theory, and atomic physics), in communication theory (especially in relation to multiple-input multiple-output systems), and in signal processing [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. For example, the probability that the eigenvalues of a random symmetric matrix are within an interval finds application in the analysis of stability in physics, complex networks, and complex ecosystems [17, 18, 19, 20, 21], and in the analysis of the restricted isometry constant in compressed sensing [14, 22, 23, 24]; it is also related to the expected number of minima of random polynomials. The distribution of the eigenvalues also appears in statistical ranking and selection theory for radar signal processing [26, 27, 28], in cognitive radio systems [29, 30, 31, 32, 33, 34], and in adaptive filter design.
Owing to the difficulties in computing the exact marginal distributions of eigenvalues, asymptotic formulas for matrices with large dimensions are often used as approximations. These approaches allow one to investigate only specific subclasses of matrices. For example, the asymptotic distribution of the largest eigenvalue of Wishart matrices is known only for the uncorrelated case. In the presence of correlation, the analysis is much more involved and Gaussian approximations are generally applied.
For random matrices with finite dimensions (non-asymptotic analysis), the derivation of the distribution of the eigenvalues is generally difficult. In particular, for complex matrices, which are the focus of this paper, only few results are available. Expressions for the cumulative distribution function (c.d.f.) of the largest and smallest eigenvalues of a complex Wishart matrix have been obtained in previous works (see for instance [38, 39]); however, the direct computation of the corresponding probability density functions (p.d.f.s) from the c.d.f.s is not straightforward. A polynomial expression for the p.d.f. of the largest eigenvalue in the uncorrelated central Wishart case was proposed in [40, 41]. The p.d.f. of the largest eigenvalue for the uncorrelated noncentral Wishart case has also been studied, and expressions for the c.d.f. and a first-order expansion for the p.d.f. in the uncorrelated noncentral case have been given. The p.d.f. of the largest eigenvalue for the uncorrelated central, correlated central, and uncorrelated noncentral Wishart cases was also studied in [44, 45, 46, 47]. The distribution of the largest eigenvalue and the probability that all eigenvalues are within an interval, as well as efficient recursive methods for their numerical computation, have been found for real and complex Wishart, multivariate beta (also known as double Wishart or MANOVA), the Gaussian orthogonal ensemble (GOE), and the Gaussian unitary ensemble (GUE) [48, 49, 21] (these matrices are also denominated, using the names of the associated weight polynomials, as Laguerre (Wishart), Jacobi (double Wishart), and Hermite (Gaussian) ensembles). Expressions for the joint p.d.f. of subsets of unordered eigenvalues of uncorrelated noncentral Wishart matrices have been given, as well as closed-form expressions for the marginal c.d.f.s and p.d.f.s of some Hermitian random matrices, which also include Wishart matrices.
The moment generating function (MGF) of the largest eigenvalue for both the uncorrelated and correlated central Wishart cases has also been derived. Besides the finite case, approximations and asymptotics for uncorrelated Wishart and for spiked Wishart matrices have been studied in the recent literature (see e.g. [10, 53, 54, 55]).
The goal of the paper is to provide a unified framework for the derivation of marginal distributions, joint distributions, and moments of subsets of eigenvalues, for a general class of random matrices with finite size, including the GUE, correlated central Wishart matrices (with spiked Wishart as a particular case), uncorrelated noncentral Wishart matrices, and double Wishart (multivariate beta) matrices. In particular, we generalize the results in [46, 45] and derive simple expressions for the joint p.d.f. of an arbitrary subset of the eigenvalues.
Denoting by λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_N the ordered nonzero eigenvalues of the aforementioned random matrices, the contributions of the paper can be summarized as follows:
We derive simple and concise expressions for the p.d.f. of the largest eigenvalue λ_1.
We obtain the joint distribution of arbitrary eigenvalues, ordered or unordered. The joint distribution of two arbitrary ordered eigenvalues follows as a special case of this more general distribution.
We provide a compact expression for the expectation of statistics of the type E[∏_l f_l(λ_l)], where the f_l(·) are arbitrary functions and the λ_l are the unordered eigenvalues. The joint moments of subsets of eigenvalues can be computed as a particular case.
Throughout the paper, we will use f_X(·) to denote the p.d.f. of the random variable (r.v.) X and E{·} to denote the expectation operator. We will use bold lowercase for vectors and bold uppercase for matrices, so that, for example, x denotes a vector and A denotes a matrix with complex elements a_{i,j}, with a_j denoting the j-th column vector of A. We will use det(A) or |A| to denote the determinant of A, and the superscript (·)^H for conjugation and transposition. With V(x) we indicate the Vandermonde matrix with elements v_{i,j} = x_j^{i−1} and determinant ∏_{i<j}(x_j − x_i). We denote by 1_{[a,b]}(x) the indicator function of the interval [a, b], and by δ(·) the Dirac delta function.
The paper is organized as follows. The main theorems on the eigenvalue distributions of some classes of random matrices are provided in Section 2. The proof of the main result is presented in Section 3. Section 4 describes some applications of the results presented in Section 2. The results of Section 2 are also specialized in Section 5 to the case of correlated Wishart matrices. Conclusions are given in Section 6.
Throughout the paper we will generally refer to complex matrices, unless otherwise stated.
2 Main results
The goal of the paper is to provide a unified framework for the derivation of marginal distributions, joint distributions of subsets of eigenvalues, and moments for a general class of random matrices with arbitrary size. To this aim, we consider N real ordered random variables contained in an interval [a, b], whose ordered joint p.d.f. is of the form
In the previous equation, K is a normalizing constant, ξ(·) is an arbitrary function, Φ(x) is a matrix with elements φ_i(x_j), and Ψ(x) is a matrix having elements of the form (2), where the φ_i(·), ψ_i(·) are arbitrary scalar functions and the remaining entries are arbitrary constants.
The form (1) includes the joint p.d.f. of the eigenvalues of central Wishart or pseudo-Wishart matrices having a covariance matrix with eigenvalues of arbitrary multiplicity, of noncentral Wishart matrices with covariance matrix equal to the identity matrix, of multivariate beta (double Wishart) matrices, as well as of the GUE [11, 4, 3, 56, 36]. More precisely, some cases where the distribution of the eigenvalues is in the form (1) are the following.
Complex central uncorrelated Wishart matrices: assume a complex Gaussian matrix X with independent, identically distributed (i.i.d.) columns, each circularly symmetric with identity covariance matrix. The joint p.d.f. of the (real) ordered eigenvalues of the complex Wishart matrix XX^H is [7, 10, 11]
where K is a normalizing constant given by
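As a quick numerical illustration of this ensemble (a Monte Carlo sketch under the stated model; the dimensions, seed, and function names below are illustrative choices, not from the paper), one can sample W = XX^H directly and check basic properties of its ordered eigenvalues:

```python
import numpy as np

def wishart_eigs(p, n, rng):
    """One draw of the descending eigenvalues of W = X X^H, where X is a
    p x n complex Gaussian matrix with unit-variance circularly symmetric
    i.i.d. entries (illustrative dimensions)."""
    X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
    W = X @ X.conj().T
    return np.sort(np.linalg.eigvalsh(W))[::-1]

rng = np.random.default_rng(0)
lam = wishart_eigs(4, 8, rng)
# The eigenvalues are real, nonnegative, and ordered; moreover
# E[sum of eigenvalues] = E[tr W] = p * n for unit-variance entries.
mean_trace = np.mean([wishart_eigs(4, 8, rng).sum() for _ in range(2000)])
```

The averaged trace provides a simple consistency check against the known first moment of the ensemble.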
Complex noncentral uncorrelated Wishart matrices: under the same hypotheses as in the central case, but with a nonzero mean matrix, the joint p.d.f. of the (real) ordered eigenvalues of the complex noncentral uncorrelated Wishart matrix is given by [57, 46]
where r is the rank of the noncentrality matrix, μ_1, …, μ_r are its ordered eigenvalues, and 0F1(·; ·) is the hypergeometric function.
Multivariate beta (double Wishart) matrices: let X, Y denote two independent complex Gaussian matrices, each constituted by zero-mean i.i.d. entries, and consider the eigenvalues of (A + B)^{−1}A (beta matrix), where A = XX^H and B = YY^H are independent Wishart matrices. These eigenvalues are clearly related to the eigenvalues of B^{−1}A (double Wishart or multivariate beta). The joint distribution of the non-null eigenvalues of a multivariate complex beta matrix in the null case can be written in the form [38, 21]
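As a hedged numerical sketch of this construction (dimensions and names are illustrative assumptions), the eigenvalues of (A + B)^{−1}A indeed fall in [0, 1]:

```python
import numpy as np

def cgauss(p, n, rng):
    """p x n complex Gaussian matrix with unit-variance i.i.d. entries."""
    return (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)

def beta_eigs(p, n1, n2, rng):
    """Descending eigenvalues of the multivariate beta matrix (A + B)^{-1} A,
    with A = X X^H and B = Y Y^H independent complex Wishart matrices."""
    X, Y = cgauss(p, n1, rng), cgauss(p, n2, rng)
    A, B = X @ X.conj().T, Y @ Y.conj().T
    theta = np.linalg.eigvals(np.linalg.solve(A + B, A))
    return np.sort(theta.real)[::-1]   # eigenvalues of this pencil are real

rng = np.random.default_rng(0)
theta = beta_eigs(3, 6, 7, rng)
# All eigenvalues lie in [0, 1]; those of B^{-1} A are theta / (1 - theta).
```

The last comment reflects the relation between the beta and double Wishart eigenvalues mentioned in the text.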
For a rank-3 tensor, we define the pseudo-determinant operator as
where the sums are over all possible permutations, μ and ν, of the integers 1, …, N. It is worth noting that the pseudo-determinant can be simplified as
where the elements of the inner matrix are obtained from the corresponding entries of the tensor. Therefore, the computational complexity of the pseudo-determinant operator is equivalent to that of conventional determinant operators. In particular, if the inner matrix remains the same for several permutations, the computational complexity of the operator can be strongly reduced. As a special case, when the tensor elements are independent of the first index, we have
i.e., the pseudo-determinant of the tensor degenerates into N! times the determinant of the matrix with the corresponding elements.
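A brute-force implementation helps fix the definition (the indexing convention a[m, i, j] and all names below are assumptions for illustration; real computations would use the determinant-based simplification above):

```python
from itertools import permutations
from math import factorial
import numpy as np

def sgn(perm):
    """Sign of a permutation via inversion count."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def pseudo_det(a):
    """Sum over permutation pairs (mu, nu) of
    sgn(mu) * sgn(nu) * prod_m a[m, mu(m), nu(m)]."""
    n = a.shape[0]
    total = 0.0
    for mu in permutations(range(n)):
        for nu in permutations(range(n)):
            term = float(sgn(mu) * sgn(nu))
            for m in range(n):
                term *= a[m, mu[m], nu[m]]
            total += term
    return total

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
T = np.broadcast_to(A, (3, 3, 3)).copy()   # a[m, i, j] = A[i, j] for every m
# Degenerate case: the pseudo-determinant equals N! * det(A).
```

The degenerate-case check mirrors the statement above: when the tensor does not depend on the first index, summing over one permutation collapses to N! copies of the ordinary determinant.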
Using the above definition, we have the following theorem, which generalizes [11, Th. 2] to the case where the integrand is the product of the determinants of two matrices having different sizes.
Given an arbitrary function ξ(·) and two arbitrary matrices Φ(x), with elements φ_i(x_j), and Ψ(x), with elements as in (2), the following identity holds:
where the multiple integral is over the hypercube
and the elements of the tensor are
Since the integrand function in (11) depends on the matrices only through their determinants, the matrix in (11) can be replaced by an arbitrary matrix having the same determinant. A possible choice for its elements is the following
where the elements of the tensor are defined in (12). The following theorem gives the joint distribution of an arbitrary subset of the eigenvalues. The joint p.d.f. of L arbitrary ordered eigenvalues λ_{k_1}, …, λ_{k_L}, with k_1 < ⋯ < k_L and joint distribution as in (1), is given by
where the tensor has elements
The function in the previous equation is
and the segment indicator is defined as the unique integer k such that the argument lies in the corresponding segment. For the proof, see Section 3.
3 Proof of Theorem 2
The marginal distribution of one ordered eigenvalue is obtained in the following lemma. The p.d.f. of the k-th ordered eigenvalue λ_k is given by
and the tensor has elements
For the marginal distribution of the k-th ordered eigenvalue we have to evaluate
where the integration vector collects all variables except the k-th one, and
The previous expression can be rewritten as
Now, due to the symmetry of the function in (1) we can also write
where the function is defined in (19). To be able to use the pseudo-determinant operator we must integrate with respect to all variables over a hypercubical integration domain. To this aim, we use the indicator function defined in the introduction, which, together with the Dirac delta function, allows us to write
The marginal joint distribution of any two ordered eigenvalues is given in the following lemma. The joint p.d.f. of the k-th and l-th ordered eigenvalues, with k < l, is given by
where the tensor has elements
we finally obtain (28).
For the proof of the general case of Theorem 2, that is, the marginal joint distribution of L arbitrary ordered eigenvalues, we follow the same approach used for the two previous lemmas, generalizing (3) to the case of L variables kept fixed and integrating over the remaining ones. In this way we obtain (15), (16), and (17).
4 Some applications of Theorems 2.1 and 2.2
4.1 Expected value of a function of the ordered eigenvalue
The expected value of an arbitrary function g(·) of the k-th ordered eigenvalue is given by
where the tensors are defined in (20). The proof follows by direct substitution. By specializing the previous result to g(x) = x^m we obtain the moments of the distribution of an arbitrary ordered eigenvalue, with g(x) = 1_{[a,y]}(x) we obtain the c.d.f., and with g(x) = e^{sx} we get the moment generating function (m.g.f.) of λ_k.
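These specializations can be sanity-checked numerically; the following Monte Carlo sketch (a GUE example with an assumed normalization; sizes, seed, and names are illustrative, and this is not the paper's closed-form route) estimates the mean and the c.d.f. of the k-th largest eigenvalue:

```python
import numpy as np

def gue(n, rng):
    """n x n GUE-type matrix H = (G + G^H)/2 with real N(0,1) diagonal and
    unit-variance complex off-diagonal entries (assumed normalization)."""
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

def ordered_eig_stats(n, k, y, trials, rng):
    """Monte Carlo estimates of E[lam_k] and the c.d.f. P(lam_k <= y)
    for the k-th largest eigenvalue (k = 1 is the largest)."""
    vals = np.empty(trials)
    for t in range(trials):
        vals[t] = np.sort(np.linalg.eigvalsh(gue(n, rng)))[::-1][k - 1]
    return vals.mean(), float(np.mean(vals <= y))

rng = np.random.default_rng(0)
mean_max, cdf_max = ordered_eig_stats(3, 1, 0.0, 2000, rng)   # largest
mean_min, cdf_min = ordered_eig_stats(3, 3, 0.0, 2000, rng)   # smallest
```

By symmetry of this ensemble, the largest eigenvalue has positive mean and the smallest a negative mean of the same magnitude, which the estimates reflect.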
4.2 Probability that all eigenvalues are within the interval
The probability that all eigenvalues are within the interval is given by
where the tensor has elements
For the proof we note that
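This probability can also be estimated by direct Monte Carlo simulation, useful as a cross-check of the closed-form expression (the GUE normalization, sizes, and names below are illustrative assumptions; the recursive methods of [48, 49, 21] are far more efficient):

```python
import numpy as np

def gue(n, rng):
    """n x n GUE-type matrix H = (G + G^H)/2 (assumed normalization)."""
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

def prob_all_in(n, a, b, trials, rng):
    """Monte Carlo estimate of P(all eigenvalues of an n x n GUE-type
    matrix lie in the interval [a, b])."""
    hits = 0
    for _ in range(trials):
        lam = np.linalg.eigvalsh(gue(n, rng))   # ascending order
        hits += (lam[0] >= a) and (lam[-1] <= b)
    return hits / trials

rng = np.random.default_rng(0)
p_wide = prob_all_in(2, -4.0, 4.0, 2000, rng)
p_narrow = prob_all_in(2, -1.0, 1.0, 2000, rng)
# Widening the interval can only increase the probability.
```

The monotonicity of the estimate in the interval width is the basic qualitative behavior one expects from the exact expression.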
4.3 The unordered case: marginal joint distribution of arbitrary eigenvalues.
The joint p.d.f. of L arbitrary unordered eigenvalues (note that, due to symmetry, we can always take the first L without loss of generality) is given by
where the tensor has elements
For the proof we proceed similarly to the previous cases. Note that some results for the unordered case can also be found in the literature.
4.4 The unordered case: expected value, moments and c.d.f. of eigenvalues
The expected value of the product of arbitrary functions applied to the unordered eigenvalues is given by
where the tensor has elements:
The proof is immediate by Theorem 2. Special cases include the joint moments for unordered eigenvalues:
obtained with f_l(x) = x^{m_l} (by setting m_l = 0 for some indices l we obtain the joint moments of the marginal eigenvalues).
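For a concrete check of such joint moments (a GUE example with an assumed normalization, not taken from the paper): with unit-variance diagonal and off-diagonal entries, the joint moment of two distinct unordered eigenvalues equals E[(tr H)^2 − tr(H^2)]/(n(n − 1)) = (n − n^2)/(n(n − 1)) = −1, which a Monte Carlo estimate reproduces:

```python
import numpy as np

def gue(n, rng):
    """n x n GUE-type matrix H = (G + G^H)/2 (assumed normalization)."""
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

def unordered_pair_moment(n, trials, rng):
    """Monte Carlo estimate of E[lam_i lam_j] for two distinct unordered
    eigenvalues, via the exchangeable-pair identity
    E[lam_i lam_j] = E[(tr H)^2 - tr(H^2)] / (n (n - 1))."""
    acc = 0.0
    for _ in range(trials):
        H = gue(n, rng)
        t = np.trace(H).real
        acc += (t * t - np.trace(H @ H).real) / (n * (n - 1))
    return acc / trials

rng = np.random.default_rng(0)
m2 = unordered_pair_moment(3, 4000, rng)
# Exact value for this normalization: (n - n^2) / (n (n - 1)) = -1, for any n.
```

The identity used here follows from expanding (Σ λ_i)^2 = tr(H)^2 and Σ λ_i^2 = tr(H^2) and invoking exchangeability of the unordered eigenvalues.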
5 Results for Complex Wishart Matrices
As previously observed, the expression for the joint p.d.f. of the eigenvalues of complex central Wishart matrices has the same form as (1). To apply the results of Sections 2 and 4 to the cases of Wishart and pseudo-Wishart matrices, the following lemma can be used.
Denoting by X a complex Gaussian random matrix with zero-mean, unit-variance, i.i.d. entries and by Σ a positive definite matrix, the joint p.d.f. of the (real) nonzero ordered eigenvalues of the associated quadratic form is
where V is the Vandermonde matrix with elements as defined in the introduction. The constant K is given by
where δ_1, …, δ_D are the distinct eigenvalues of Σ, with associated multiplicities m_1, …, m_D such that m_1 + ⋯ + m_D equals the dimension of Σ.
The matrix has elements
where the index denotes the unique integer such that
Another interesting special case is when Σ is spiked, i.e., when all of its eigenvalues are equal except for a few larger ones. For this spiked correlation model we have the following result. Let W be a complex Wishart matrix with spiked covariance matrix Σ, and denote the ordered eigenvalues of Σ. Then, the joint p.d.f. of the ordered eigenvalues of W is
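A quick simulation illustrates the qualitative effect of a spike (the single-spike strength ω, the sizes, and all names below are illustrative assumptions, not parameters from the paper): the largest sample eigenvalue of the spiked-covariance Wishart matrix is pulled well above its unspiked counterpart.

```python
import numpy as np

def spiked_largest_mean(p, n, omega, trials, rng):
    """Average largest eigenvalue of W = Sigma^{1/2} X X^H Sigma^{1/2} with
    Sigma = diag(1 + omega, 1, ..., 1): a single spike of strength omega."""
    root = np.sqrt(np.r_[1.0 + omega, np.ones(p - 1)])[:, None]
    acc = 0.0
    for _ in range(trials):
        X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
        Y = root * X                     # rows scaled by sqrt of Sigma's eigenvalues
        acc += np.linalg.eigvalsh(Y @ Y.conj().T)[-1]
    return acc / trials

rng = np.random.default_rng(0)
spiked = spiked_largest_mean(4, 20, 5.0, 300, rng)
plain = spiked_largest_mean(4, 20, 0.0, 300, rng)
# The spike inflates the largest sample eigenvalue well above the unspiked case.
```

With omega = 0 the covariance reduces to the identity, recovering the uncorrelated central case discussed earlier.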