I Introduction
The detection of an unknown noisy signal emitted by a transmitting node is a fundamental task in many signal processing and wireless communication applications [1, 2, 3, 4, 5]. For instance, state-of-the-art cognitive radio, radar, and sonar systems identify the presence of primary-user activity or the existence of a target based on certain statistical properties of the observation vector [3]. Among all detection techniques, detection based on the sample eigenvalues (of the sample covariance matrix) has gained prominence recently (see [6] and references therein). In this context, the largest sample eigenvalue, also known as Roy's largest root test, has been popular among detection theorists. Under the common Gaussian setting with white noise, this amounts to determining the largest eigenvalue of a Wishart matrix having a so-called spiked covariance (see
[7, 8] and references therein). Certain practical scenarios give rise to additive correlated noise (also known as colored noise) [9, 10, 11, 5]. Based on the assumption that one has access to the signal-plus-noise sample covariance matrix and the noise-only sample covariance matrix, Nadakuditi and Silverstein [2] proposed a framework that uses the generalized eigenvalues of the whitened signal-plus-noise sample covariance matrix for detection. The assumption of having the noise-only sample covariance matrix is realistic in many practical situations, as detailed in [2]. The fundamental high-dimensional limits of generalized-sample-eigenvalue-based detection in colored noise have been thoroughly investigated in [2]. However, to the best of our knowledge, a tractable finite-dimensional analysis is not available in the literature. Thus, in this paper, we characterize the statistics of Roy's largest root in the finite-dimensional colored-noise setting. In the Gaussian setting, Roy's largest root for the generalized eigenvalue detection problem amounts to a finite-dimensional characterization of the largest eigenvalue of the deformed Jacobi ensemble. Various asymptotic expressions for Roy's largest root have been derived in [12, 13, 14, 15] for the deformed Jacobi ensemble. However, finite-dimensional expressions are available only for the Jacobi ensemble (i.e., without the deformation) [16, 17]. Although finite dimensional, these expressions are not amenable to further manipulation. Therefore, in this paper, we present a simple and tractable closed-form expression for the cumulative distribution function (c.d.f.) of the maximum eigenvalue of the deformed Jacobi ensemble. This expression further facilitates the analysis of the receiver operating characteristics (ROC) of Roy's largest root test. All these results are made possible by a novel alternative joint eigenvalue density function that we derive based on the contour integral approach of [18, 19, 20, 21, 22].
The key results developed in this paper enable us to understand the joint effect of the system dimensionality, the number of samples available from the signal-plus-noise and noise-only observations, and the signal-to-noise ratio (SNR) on the ROC. For instance, a larger disparity between the number of noise-only samples and the system dimensionality improves the ROC profile for fixed values of the other parameters. However, the general finite-dimensional ROC expressions turn out to give little analytical insight. Therefore, in view of obtaining more insight, we have particularly focused on the case for which the system dimensionality equals the number of samples available from the noise-only observations. Since this equality is the minimum requirement for the validity of the whitening operation, from the ROC perspective it corresponds to the worst possible case when the other parameters are fixed. It turns out that, under the above scenario, when the system dimensionality and the number of signal-plus-noise samples increase such that their ratio is fixed, there exists an optimal ROC profile. Therefore, the above insight can be of paramount importance in designing future wireless communication systems (i.e., 5G and beyond) with massive degrees of freedom.
The following notation is used throughout this paper. The superscript indicates the Hermitian transpose, denotes the determinant of a square matrix, represents the trace of a square matrix, and stands for . The identity matrix is represented by and the Euclidean norm of a vector is denoted by . A diagonal matrix with the diagonal entries is denoted by . We denote the unitary group by . Finally, we use the following notation to compactly represent the determinant of an block matrix:
II Problem Formulation
Consider the generic signal detection problem in colored Gaussian noise: where , , and . Here the noise covariance matrix may be known or unknown at the detector. The classical signal detection problem can be formulated as the following hypothesis testing problem
Noting that the covariance matrix of can be written as , where denotes the conjugate transpose, we have the following equivalent form
Let us now consider the matrix with the eigenvalues . As such we have
from which we can observe that in the presence of a signal, the maximum eigenvalue of (i.e., ) is strictly greater than one, whereas the other eigenvalues are equal to one (i.e., ). Capitalizing on this observation, Nadakuditi and Silverstein [2] concluded that, given the knowledge of and , the maximum eigenvalue could be used to detect the presence of a signal. It is noteworthy that this matrix is also known as the F-matrix or Fisher matrix.
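The eigenvalue structure described above is easy to verify numerically. The following sketch uses an arbitrary noise covariance and a hypothetical rank-one signal term; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, eta = 4, 2.0  # dimension and spike strength; illustrative values only

# A random Hermitian positive-definite noise covariance
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
Sigma_n = A @ A.conj().T + m * np.eye(m)

# Signal-plus-noise covariance with a rank-one (spiked) signal part
h = rng.standard_normal(m) + 1j * rng.standard_normal(m)
Sigma_s = Sigma_n + eta * np.outer(h, h.conj())

# Eigenvalues of Sigma_n^{-1} Sigma_s = I + eta * Sigma_n^{-1} h h^H:
# m - 1 of them equal one; the remaining one is 1 + eta * h^H Sigma_n^{-1} h > 1.
eigs = np.sort(np.linalg.eigvals(np.linalg.solve(Sigma_n, Sigma_s)).real)
print(eigs)
```

Regardless of the noise covariance, the whitened matrix has exactly one eigenvalue exceeding one, which is what makes the maximum eigenvalue a natural test statistic.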
In most practical settings, the matrices and are unknown. To circumvent this difficulty, it is common to replace and by their sample estimates. To this end, let us assume that we have i.i.d. sample observations from the signal-plus-noise scenario given by , and i.i.d. sample observations from the noise-only scenario given by . Thus, the sample estimates of and become (1)
where we assume that (this ensures that both and are positive definite with probability one [23]). Consequently, following Nadakuditi and Silverstein [2], we form the matrix (2)
and focus on its maximum eigenvalue as the test statistic.
This test is also known as Roy's largest root test, which is a consequence of Roy's union-intersection principle [24]. As such, we have and . Noting that the eigenvalues of do not change under the simultaneous transformations , and , without loss of generality we assume that . Therefore, in what follows we focus on the maximum eigenvalue of , where (3)
(4) 
with and being a unit vector.
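In sample form, the statistic above can be computed directly from data. The following sketch assumes white noise and a hypothetical rank-one signal direction (all names and parameter values are illustrative), and evaluates Roy's largest root via a generalized Hermitian eigenvalue solver:

```python
import numpy as np
from scipy.linalg import eigvalsh

rng = np.random.default_rng(1)
m, n, p = 4, 32, 16  # dimension, signal-plus-noise and noise-only sample counts (illustrative)

def crandn(*shape):
    """Standard complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h = crandn(m)  # hypothetical rank-one signal direction

# n signal-plus-noise snapshots and p noise-only snapshots
X = h[:, None] * crandn(n) + crandn(m, n)
N = crandn(m, p)

S_hat = X @ X.conj().T / n  # sample signal-plus-noise covariance
N_hat = N @ N.conj().T / p  # sample noise-only covariance (p >= m keeps it invertible)

# Roy's largest root: the largest generalized eigenvalue of the pencil (S_hat, N_hat)
lam_max = eigvalsh(S_hat, N_hat)[-1]
print(lam_max)
```

Solving the generalized problem directly avoids explicitly inverting the sample noise covariance, although the two computations are mathematically equivalent.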
Let us denote the maximum eigenvalue of (2) by λ_max. Now, in order to assess the performance of the maximum-eigenvalue-based detector, we need to evaluate the detection and false alarm probabilities (the former is also known as the power of the test). They may be expressed as

P_D(t; η) = Pr(λ_max > t | H_1),  (5)

P_F(t) = Pr(λ_max > t | H_0),  (6)

where t is the threshold and η denotes the SNR. The resulting P_D versus P_F curve characterizes the detector and is called the ROC profile.
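The two probabilities in (5) and (6) can also be estimated empirically. A Monte Carlo sketch of the ROC under white noise with a hypothetical fixed signal direction (all parameter values are illustrative):

```python
import numpy as np
from scipy.linalg import eigvalsh

rng = np.random.default_rng(2)
m, n, p, trials = 4, 24, 12, 2000  # illustrative parameters
eta = 1.5                          # assumed SNR of the hypothetical signal

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def roy_statistic(signal_present):
    """Largest generalized sample eigenvalue for one Monte Carlo trial."""
    X = crandn(m, n)
    if signal_present:
        h = np.zeros(m)
        h[0] = np.sqrt(eta)  # fixed hypothetical signal direction
        X = X + h[:, None] * crandn(n)
    N = crandn(m, p)
    return eigvalsh(X @ X.conj().T / n, N @ N.conj().T / p)[-1]

null = np.array([roy_statistic(False) for _ in range(trials)])
alt = np.array([roy_statistic(True) for _ in range(trials)])

# Empirical ROC points: thresholds set by quantiles of the null statistic
pd_points = []
for q in (0.5, 0.9, 0.99):
    t = np.quantile(null, q)  # threshold giving P_F ~ 1 - q
    pd_points.append(np.mean(alt > t))
    print(f"P_F ~ {1 - q:.2f}: P_D ~ {pd_points[-1]:.2f}")
```

Sweeping the threshold over the null distribution traces out the empirical ROC; such simulations are useful for validating closed-form c.d.f. expressions of the kind derived below.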
The main challenge here is to characterize the maximum eigenvalue of under the alternative . To this end, in this paper, we use orthogonal polynomial techniques due to Mehta [25] to obtain a closed-form solution to this problem. In particular, we derive an expression that contains a determinant whose dimension depends only on the relative difference between the number of samples and the system dimensionality.
III C.D.F. of the Maximum Eigenvalue
Before proceeding further, we present some fundamental results pertaining to the joint eigenvalue distribution of an matrix and Jacobi polynomials.
III-A Preliminaries
Definition 1
Let and be two independent Wishart matrices with . Then the joint eigenvalue density of the ordered eigenvalues, , of is given by [26]
(7) 
where is the generalized complex hypergeometric function of two matrix arguments, is the Vandermonde determinant, , and the complex multivariate gamma function is written in terms of the classical gamma function as
III-B Finite-Dimensional Analysis of the C.D.F.
Having defined the above preliminary quantities, we now focus on deriving a new c.d.f. for the maximum eigenvalue of when the covariance matrix takes the so-called rank-one spiked form. In this case, the covariance matrix can be decomposed as
(9) 
where
is a unitary matrix and
. Following Khatri [28], the hypergeometric function of two matrix arguments in the joint density (7) can be written as a ratio between the determinants of two square matrices. Since the eigenvalues of the matrix are such that has algebraic multiplicity one and has algebraic multiplicity , the resultant ratio takes an indeterminate form. Therefore, one has to repeatedly apply L'Hôpital's rule to obtain a determinate expression. However, that expression is not amenable to Mehta's [25] orthogonal polynomial technique. Therefore, in view of applying this powerful technique, we derive an alternative expression for the joint eigenvalue density. This alternative derivation technique has also been used earlier in [18] to derive a single contour integral representation for the joint eigenvalue density when the matrices are real (it is noteworthy that, when the matrices are real, the hypergeometric function of two matrix arguments does not admit such a determinant representation). The following corollary gives the alternative expression for the joint density.

Corollary 1
Let and be independent Wishart matrices with and . Then the joint density of the ordered eigenvalues of is given by
(10) 
where
and .
Proof:
Omitted due to space limitations.
Remark 1
It is worth noting that the function denotes the joint density of the ordered eigenvalues of corresponding to the case and .
Remark 2
Alternatively, the above expression can be used to obtain the joint density of the ordered eigenvalues of the deformed Jacobi ensemble, with and .
We may use the above joint density to obtain the c.d.f. of the maximum eigenvalue, which is given by the following theorem.
Theorem 1
Let and be independent with and . Then the c.d.f. of the maximum eigenvalue of is given by
(11) 
where

Ψ_{i,j}(t) = (m + i + β − 1)_{j−2} P_{m+i−j}^{(j−2, β+j−2)}(2t + 1),

with (·)_k denoting the Pochhammer symbol and P_n^{(a,b)}(·) the Jacobi polynomial [27],
and with and .
Proof:
Omitted due to space limitations.
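The determinant entries Ψ in Theorem 1 pair a Pochhammer factor with a Jacobi polynomial, both of which are directly available in SciPy. A minimal numerical sketch; the index values below are illustrative placeholders, not the theorem's exact parameters:

```python
import numpy as np
from scipy.special import eval_jacobi, poch

# An entry of the form (a)_k * P_d^{(alpha, beta)}(x), the building block of
# the determinant in Theorem 1. All index values here are illustrative.
def psi_entry(a, k, d, alpha, beta, x):
    return poch(a, k) * eval_jacobi(d, alpha, beta, x)

# Sanity checks against the closed forms of low-degree Jacobi polynomials:
# P_0^{(a,b)}(x) = 1 and P_1^{(a,b)}(x) = (a - b)/2 + (a + b + 2) * x / 2.
x = 0.3
assert np.isclose(eval_jacobi(0, 2.0, 1.0, x), 1.0)
assert np.isclose(eval_jacobi(1, 2.0, 1.0, x), (2.0 - 1.0) / 2 + (2.0 + 1.0 + 2) * x / 2)

# Pochhammer symbol: (5)_3 = 5 * 6 * 7 = 210.
assert np.isclose(poch(5.0, 3), 210.0)

print(psi_entry(a=5.0, k=3, d=2, alpha=1.0, beta=2.0, x=1.8))
```

Evaluating the c.d.f. in Theorem 1 then amounts to filling a small matrix with such entries and taking its determinant.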
The new exact c.d.f. expression for the maximum eigenvalue of , which contains the determinant of a square matrix whose dimension depends on the difference between the number of samples and the system dimensionality, is highly desirable when this difference is small, irrespective of the individual magnitudes of those parameters. For instance, when the difference is zero, the determinant vanishes and we obtain a scalar result, as shown below. This is one of the many advantages of the orthogonal polynomial approach. This key representation also facilitates the derivation of the limiting distribution of the maximum eigenvalue (i.e., the limit when the system dimensionality and the number of signal-plus-noise samples grow unboundedly such that their ratio is fixed).
Corollary 2
The exact c.d.f. of the maximum eigenvalue of corresponding to this case is given by

F^{(0)}_{λ_max}(t; η) = (t/(1+t))^{mp} (1 + η/(1+t))^{−p}.
Armed with the above characterization of the maximum eigenvalue of , in the following section we focus on the ROC of the maximum-eigenvalue-based detector.
IV ROC of the Maximum Eigenvalue
Let us now investigate the behavior of the detection and false alarm probabilities associated with the maximum-eigenvalue-based test. To this end, noting that the eigenvalues of and are related by , for , we can represent the c.d.f. of the maximum eigenvalue corresponding to as , where .
Now, following Theorem 1 along with (5) and (6), the detection and false alarm probabilities can be written, respectively, as

P_D(t; η) = 1 − F_{λ_max}(t; η),  (12)

P_F(t) = 1 − F_{λ_max}(t; 0).  (13)
In general, deriving a functional relationship between P_D and P_F by eliminating the parametric dependency on the threshold is an arduous task. However, in the special case corresponding to Corollary 2, we can obtain an explicit relationship between them, as shown in the following corollary.
Corollary 3
Let us suppress the parameters and represent the detection and false alarm probabilities, respectively, as P_D and P_F. Then, in the above special case, we have the following functional relationship between P_D and P_F
P_D = 1 − (1 − P_F) (1 + η[1 − (1 − P_F)^{1/(mp)}])^{−p}.  (14)
From the above relation, viewing P_D as a function of P_F, we can easily see that P_D increases monotonically with η. This confirms the common observation that the SNR is positively correlated with the detection probability for a fixed value of P_F.
The ROC curves corresponding to different parameter settings are shown in Figs. 1 and 2. The ROC of the maximum-eigenvalue-based detector is shown in Fig. 1 for different SNR values; the ROC improvement with increasing SNR is clearly visible. The next important factor that affects the ROC profile is the dimensionality of the matrices. Therefore, let us now numerically investigate the effect of the matrix dimensions on the ROC profile. To this end, Fig. 2 shows the effect of the number of noise-only samples for fixed values of the other parameters. As can be seen, an increasing disparity between the number of noise-only samples and the system dimensionality improves the ROC profile. The reason behind this observation is that the quality of the sample noise covariance matrix improves when the length of the data record increases in comparison with the dimensionality of the receiver. Since the minimum requirement for the sample noise covariance matrix to be invertible is that the number of noise-only samples be at least the system dimensionality, the worst ROC performance corresponds to the case in which they are equal.
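The role of the noise-only sample count can also be seen numerically. Under the null hypothesis, the sketch below (white noise, illustrative parameters) shows how the upper tail of the largest generalized sample eigenvalue contracts as the number of noise-only samples grows beyond the dimension:

```python
import numpy as np
from scipy.linalg import eigvalsh

rng = np.random.default_rng(3)
m, n, trials = 4, 24, 1000  # illustrative parameters

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def null_lam_max(p):
    """Largest generalized sample eigenvalue under H0 with p noise-only samples."""
    X = crandn(m, n)
    N = crandn(m, p)
    return eigvalsh(X @ X.conj().T / n, N @ N.conj().T / p)[-1]

q90 = {}
for p in (m, 4 * m, 16 * m):  # p = m is the minimum for an invertible noise estimate
    samples = [null_lam_max(p) for _ in range(trials)]
    q90[p] = np.quantile(samples, 0.9)
    print(f"p = {p:3d}: 90th percentile of lam_max ~ {q90[p]:.2f}")
```

A heavier null upper tail forces a larger threshold for a given false alarm rate, which is why the minimal case (noise-only samples equal to the dimension) yields the worst ROC profile.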
The joint effect of the system dimensionality and the number of signal-plus-noise samples is characterized with respect to the scenario where both vary such that their ratio is constant. After some algebra, we conclude that the detection probability attains its maximum at a specific parameter configuration, where
(15) 
Having obtained upper and lower bounds on this optimal value, a good approximation (in general, any convex combination of the upper and lower bounds is a candidate) can be written as
(16) 
The above analysis suggests that when the system dimensionality and the number of signal-plus-noise samples diverge such that their ratio approaches a certain limit, the maximum eigenvalue gradually loses its detection power.
To further highlight the accuracy of the proposed approximation, in Fig. 3 we compare the optimal ROC profiles evaluated based on (16) with those obtained by numerically optimizing (14). As can be seen from the figure, the disparity between the proposed approximation and the exact optimal solution is insignificant. Therefore, in this setting, we can choose the system dimensionality as per (16), for fixed values of the other parameters, in view of maximizing the detection probability.
V Conclusion
This paper investigates the signal detection problem in colored noise with an unknown covariance matrix. In particular, we focus on detecting the presence of a signal using the maximum generalized eigenvalue of the so-called whitened sample covariance matrix. The performance of this detector therefore amounts to determining the statistics of the maximum eigenvalue of the deformed Jacobi unitary ensemble (JUE). To this end, following the powerful orthogonal polynomial approach, we have developed a new expression for the c.d.f. of the maximum eigenvalue of the deformed JUE. We then use this new c.d.f. expression to determine the ROC of the detector. It turns out that, for a fixed SNR, when the dimensionality of the detector, the number of noise-only samples, and the number of signal-plus-noise samples increase over finite values such that their ratios remain constant, there exists an optimal ROC profile corresponding to specific parameter values. When these parameters instead increase asymptotically, the maximum eigenvalue gradually loses its detection power. This is not surprising, since under the above asymptotic setting the detector operates below the so-called phase transition, where the maximum eigenvalue has no detection power.
References
 [1] R. R. Nadakuditi and A. Edelman, “Sample eigenvalue based detection of highdimensional signals in white noise using relatively few samples,” IEEE Trans. Signal Process., vol. 56, no. 7, pp. 2625–2638, Jul. 2008.
 [2] R. R. Nadakuditi and J. W. Silverstein, “Fundamental limit of sample generalized eigenvalue based detection of signals in noise using relatively few signalbearing and noiseonly samples,” IEEE J. Sel. Topics Signal Process., vol. 4, no. 3, pp. 468–480, Jun. 2010.
 [3] P. Bianchi, M. Debbah, M. Maida, and J. Najim, “Performance of statistical tests for singlesource detection using random matrix theory,” IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2400–2419, Apr. 2011.
 [4] R. Couillet and W. Hachem, “Fluctuations of spiked random matrix models and failure diagnosis in sensor networks,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 509–525, Jan. 2013.
 [5] N. Asendorf and R. R. Nadakuditi, “Improved detection of correlated signals in lowrankplusnoise type data sets using informative canonical correlation analysis (ICCA),” IEEE Trans. Inf. Theory, vol. 63, no. 6, pp. 3451–3467, Jun. 2017.
 [6] P. Bianchi, M. Debbah, M. Maida, and J. Najim, “Performance of statistical tests for singlesource detection using random matrix theory,” IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2400–2419, Apr. 2011.
 [7] J. Baik, G. B. Arous, and S. Péché, “Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices,” Ann. Probab., vol. 33, no. 5, pp. 1643–1697, 2005.
 [8] J. Baik and J. W. Silverstein, “Eigenvalues of large sample covariance matrices of spiked population models,” J. Multivariate Anal., vol. 97, no. 6, pp. 1382–1408, 2006.
 [9] E. Maris, “A resampling method for estimating the signal subspace of spatiotemporal EEG/MEG data,” IEEE Trans. Biomed. Eng., vol. 50, no. 8, pp. 935–949, Aug. 2003.
 [10] J. Vinogradova, R. Couillet, and W. Hachem, “Statistical inference in large antenna arrays under unknown noise pattern,” IEEE Trans. Signal Process., vol. 61, no. 22, pp. 5633–5645, Nov. 2013.
 [11] S. Hiltunen, P. Loubaton, and P. Chevalier, “Large system analysis of a GLRT for detection with large sensor arrays in temporally white noise,” IEEE Trans. Signal Process., vol. 63, no. 20, pp. 5409–5423, Oct. 2015.
 [12] I. M. Johnstone and B. Nadler, “Roy’s largest root test under rankone alternatives,” Biometrika, vol. 104, no. 1, pp. 181–193, 2017.
 [13] P. Dharmawansa, B. Nadler, and O. Shwartz, “Roy’s largest root under rankone alternatives: The complex valued case and applications,” arXiv:1411.4226 [math.ST], Nov. 2014.
 [14] P. Dharmawansa, I. M. Johnstone, and A. Onatski, “Local asymptotic normality of the spectrum of highdimensional spiked Fratios,” arXiv:1411.3875 [math.ST], Nov. 2014.
 [15] Q. Wang and J. Yao, “Extreme eigenvalues of largedimensional spiked Fisher matrices with application,” Ann. Statist., vol. 45, no. 1, pp. 415–460, Feb. 2017.
 [16] P. Koev and I. Dumitriu, “Distribution of the extreme eigenvalues of the complex Jacobi random matrix ensemble,” SIAM. J. Matrix Anal. & Appl., 2005.
 [17] I. Dumitriu, “Smallest eigenvalue distributions for two classes of Jacobi ensembles,” J. Math. Phys., vol. 53, no. 10, p. 103301, Oct. 2012.
 [18] P. Dharmawansa and I. M. Johnstone, “Joint density of eigenvalues in spiked multivariate models,” Stat, vol. 31, pp. 240–249, Jul. 2014.
 [19] A. Onatski, “Detection of weak signals in highdimensional complexvalued data,” Random Matrices: Theory and Applications, vol. 03, no. 01, p. 1450001, 2014.
 [20] D. Passemier, M. R. McKay, and Y. Chen, “Hypergeometric functions of matrix arguments and linear statistics of multispiked Hermitian matrix models,” J. Multivar. Anal., vol. 139, pp. 124–146, 2015.
 [21] M. Y. Mo, “The rank 1 real Wishart spiked model,” Commun. Pure Appl. Math., no. 65, pp. 1528–1638, Nov. 2012.
 [22] D. Wang, “The largest eigenvalue of real symmetric, Hermitian and Hermitian selfdual random matrix models with rank one external source, part I,” J. Stat. Phys., vol. 146, no. 4, pp. 719–761, 2012.
 [23] R. J. Muirhead, Aspects of Multivariate Statistical Theory. John Wiley & Sons, 2009, vol. 197.
 [24] K. V. Mardia, J. T. Kent, and J. M. Bibby, Multivariate Analysis. Academic Press, London, 1979.
 [25] M. L. Mehta, Random Matrices. Academic Press, 2004, vol. 142.
 [26] A. T. James, “Distributions of matrix variates and latent roots derived from normal samples,” The Annals of Mathematical Statistics, pp. 475–501, 1964.
 [27] L. C. Andrews, Special Functions of Mathematics for Engineers. SPIE Press, 1998.

 [28] C. G. Khatri, “On the moments of traces of two matrices in three situations for complex multivariate normal populations,” Sankhyā, vol. 32, pp. 65–80, 1970.