# Detection of a Signal in Colored Noise: A Random Matrix Theory Based Analysis

This paper investigates the classical statistical signal processing problem of detecting a signal in the presence of colored noise with an unknown covariance matrix. In particular, we consider a scenario in which p possible signal-plus-noise samples and n noise-only samples, each of dimension m, are available at the detector. The presence of a signal can then be detected using the largest generalized eigenvalue (l.g.e.) of the so-called whitened sample covariance matrix. This amounts to statistically characterizing the maximum eigenvalue of the deformed Jacobi unitary ensemble (JUE). To this end, we employ the powerful orthogonal polynomial approach to derive a new finite-dimensional expression for the cumulative distribution function (c.d.f.) of the l.g.e. of the deformed JUE. This new c.d.f. expression facilitates further analysis of the receiver operating characteristics (ROC) of the detector. It turns out that, for m = n, when m and p increase such that m/p attains a fixed value, there exists an optimal ROC profile corresponding to each fixed signal-to-noise ratio (SNR). In this respect, we establish a tight approximation for the corresponding optimal ROC profile.


## I Introduction

The detection of an unknown noisy signal or a transmit node is a fundamental task in many signal processing and wireless communication applications [1, 2, 3, 4, 5]. For instance, state-of-the-art cognitive radio, radar, and sonar systems identify the presence of primary-user activity or the existence of a target based on certain statistical properties of the observation vector [3]. Among all detection techniques, sample eigenvalue (of the sample covariance matrix) based detection has gained prominence recently (see [6] and references therein). In this context, the largest sample eigenvalue, also known as Roy's largest root test, has been popular among detection theorists. Under the common Gaussian setting with white noise, this amounts to determining the largest eigenvalue of a Wishart matrix having a so-called spiked covariance (see [7, 8] and references therein).

Certain practical scenarios give rise to additive correlated noise (also known as colored noise) [9, 10, 11, 5]. Based on the assumption that one has access to the signal-plus-noise sample covariance matrix and the noise-only sample covariance matrix, Rao and Silverstein [2] proposed a framework that uses the generalized eigenvalues of the whitened signal-plus-noise sample covariance matrix for detection. The assumption of having the noise-only sample covariance matrix is realistic in many practical situations, as detailed in [2]. The fundamental high dimensional limits of generalized sample eigenvalue based detection in colored noise have been thoroughly investigated in [2]. However, to the best of our knowledge, a tractable finite dimensional analysis is not available in the literature. Thus, in this paper, we characterize the statistics of Roy's largest root in the finite dimensional colored noise setting. Roy's largest root for the generalized eigenvalue detection problem in the Gaussian setting amounts to a finite dimensional characterization of the largest eigenvalue of the deformed Jacobi ensemble. Various asymptotic expressions for Roy's largest root have been derived in [12, 13, 14, 15] for the deformed Jacobi ensemble. However, finite dimensional expressions are available for the Jacobi ensemble only (i.e., without the deformation) [16, 17]. Although finite dimensional, these expressions are not amenable to further manipulations. Therefore, in this paper, we present a simple and tractable closed-form solution for the cumulative distribution function (c.d.f.) of the maximum eigenvalue of the deformed Jacobi ensemble. This expression further facilitates the analysis of the receiver operating characteristics (ROC) of Roy's largest root test. All these results are made possible by a novel alternative joint eigenvalue density function that we have derived based on the contour integral approach of [18, 19, 20, 21, 22].

The key results developed in this paper enable us to understand the joint effect of the system dimensionality (m), the number of samples available from the signal-plus-noise (p) and noise-only (n) observations, and the signal-to-noise ratio (SNR) on the ROC. For instance, increasing the relative disparity between m and n improves the ROC profile for fixed values of the other parameters. However, the general finite dimensional ROC expressions turn out to give little analytical insight. Therefore, in view of obtaining more insight, we focus in particular on the case for which the system dimensionality equals the number of samples available from the noise-only observations (i.e., m = n). Since this equality is the minimum requirement for the validity of the whitening operation, from the ROC perspective it corresponds to the worst possible case with the other parameters fixed. It turns out that, under the above scenario, when m and p increase such that m/p is fixed, there exists an optimal ROC profile. This insight can be of paramount importance in designing future wireless communication systems (i.e., 5G and beyond) with massive degrees of freedom.

The following notation is used throughout this paper. The superscript (·)† indicates the Hermitian transpose, det(·) denotes the determinant of a square matrix, tr(·) represents the trace of a square matrix, and etr(·) stands for exp(tr(·)). The m × m identity matrix is represented by I_m and the Euclidean norm of a vector is denoted by ‖·‖. A diagonal matrix with diagonal entries λ_1, …, λ_m is denoted by diag(λ_1, …, λ_m). We denote the unitary group of m × m matrices by U(m). Finally, we use the following notation to compactly represent the determinant of an n × n block matrix:

$$\det\bigl[a_i\;\; b_{i,j}\bigr]_{\substack{i=1,2,\ldots,n\\ j=2,3,\ldots,n}}
= \begin{vmatrix}
a_1 & b_{1,2} & b_{1,3} & \ldots & b_{1,n}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_n & b_{n,2} & b_{n,3} & \ldots & b_{n,n}
\end{vmatrix}.$$

## II Problem Formulation

Consider the generic signal detection problem in colored Gaussian noise, in which the observation x ∈ C^m contains a possible signal component embedded in noise n ∼ CN_m(0, Σ), where h ∈ C^m is an unknown vector and ρ ≥ 0 controls the signal power. Here the noise covariance matrix Σ may be known or unknown at the detector. The classical signal detection problem can be formulated as the following hypothesis testing problem:

$$\mathcal{H}_0:\ \rho = 0 \quad \text{(signal is absent)} \qquad\qquad \mathcal{H}_1:\ \rho > 0 \quad \text{(signal is present)}.$$

Noting that the covariance matrix of x under the alternative can be written as S = ρ h h† + Σ, where (·)† denotes the conjugate transpose, we have the following equivalent form:

$$\mathcal{H}_0:\ S = \Sigma \quad \text{(signal is absent)} \qquad\qquad \mathcal{H}_1:\ S = \rho h h^\dagger + \Sigma \quad \text{(signal is present)}.$$

Let us now consider the matrix Ψ = R⁻¹S, where R = Σ, with the eigenvalues λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_m. As such, we have

$$\Psi = R^{-1}S = \rho\,\Sigma^{-1} h h^\dagger + I_m,$$

from which we can observe that, in the presence of a signal, the maximum eigenvalue of Ψ (i.e., λ_max = 1 + ρ h†Σ⁻¹h) is strictly greater than one, whereas the other eigenvalues are equal to one (i.e., λ_1 = ⋯ = λ_{m−1} = 1). Capitalizing on this observation, Rao and Silverstein [2] concluded that, given the knowledge of R and S, the maximum eigenvalue of Ψ could be used to detect the presence of a signal. It is noteworthy that the matrix Ψ is also known as the F-matrix or Fisher matrix.
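This rank-one structure is easy to verify numerically. The sketch below is our own illustration (dimensions and parameter values are arbitrary choices): it confirms that Ψ has m − 1 unit eigenvalues and a largest eigenvalue equal to 1 + ρ h†Σ⁻¹h.

```python
import numpy as np

rng = np.random.default_rng(0)
m, rho = 5, 2.0

# Arbitrary channel vector and Hermitian positive-definite noise covariance.
h = rng.standard_normal(m) + 1j * rng.standard_normal(m)
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
Sigma = A @ A.conj().T + m * np.eye(m)

# Psi = R^{-1} S = rho * Sigma^{-1} h h^dagger + I_m
Psi = rho * np.linalg.solve(Sigma, np.outer(h, h.conj())) + np.eye(m)
eigs = np.sort(np.linalg.eigvals(Psi).real)

# m - 1 eigenvalues equal one; the largest equals 1 + rho * h^dagger Sigma^{-1} h.
lam_max_theory = 1 + rho * (h.conj() @ np.linalg.solve(Sigma, h)).real
print(np.allclose(eigs[:-1], 1.0), np.isclose(eigs[-1], lam_max_theory))
```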

In most practical settings, the matrices R and S are unknown. To circumvent this difficulty, it is common to replace R and S by their sample estimates. To this end, let us assume that we have p i.i.d. sample observations from the signal-plus-noise scenario, given by x_1, …, x_p, and n i.i.d. sample observations from the noise-only scenario, given by n_1, …, n_n. Thus, the sample estimates of R and S become

$$\hat{R} = \frac{1}{n}\sum_{\ell=1}^{n} \mathbf{n}_\ell \mathbf{n}_\ell^\dagger \qquad\text{and}\qquad \hat{S} = \frac{1}{p}\sum_{k=1}^{p} x_k x_k^\dagger \tag{1}$$

where we assume that n, p ≥ m (this ensures that both R̂ and Ŝ are positive definite with probability one [23]). Consequently, following Rao and Silverstein [2], we form the matrix

$$\hat{\Psi} = \hat{R}^{-1}\hat{S} \tag{2}$$

and focus on its maximum eigenvalue as the test statistic (this is also known as Roy's largest root test, which is a consequence of Roy's union-intersection principle [24]). As such, we have nR̂ ∼ CW_m(n, Σ) and pŜ ∼ CW_m(p, S). Noting that the eigenvalues of Ψ̂ do not change under the simultaneous transformations R̂ ↦ Σ^{-1/2} R̂ Σ^{-1/2} and Ŝ ↦ Σ^{-1/2} Ŝ Σ^{-1/2}, without loss of generality we assume that Σ = I_m. Therefore, in what follows we focus on the maximum eigenvalue of Ψ̂ = R̂⁻¹Ŝ, where

$$n\hat{R} \sim \mathcal{CW}_m\left(n, I_m\right) \tag{3}$$
$$p\hat{S} \sim \mathcal{CW}_m\left(p, I_m + \gamma u u^\dagger\right) \tag{4}$$

with γ = ρ h†Σ⁻¹h and u = Σ^{-1/2}h/‖Σ^{-1/2}h‖ being a unit vector.
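The whitened model (3)–(4) is straightforward to simulate. The following Monte Carlo sketch (our own illustration; all parameter values are arbitrary) draws the two Wishart factors and forms the test statistic λ̂_max:

```python
import numpy as np

def lam_max_sample(m, n, p, gamma, rng):
    """One realization of the largest eigenvalue of hat(Psi) = hat(R)^{-1} hat(S)."""
    # Noise-only samples: n_l ~ CN(0, I_m)
    N = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    R_hat = (N @ N.conj().T) / n
    # Signal-plus-noise samples: x_k ~ CN(0, I_m + gamma * u u^dagger), taking u = e_1
    C_half = np.eye(m)
    C_half[0, 0] = np.sqrt(1 + gamma)
    X = C_half @ (rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))) / np.sqrt(2)
    S_hat = (X @ X.conj().T) / p
    return np.linalg.eigvals(np.linalg.solve(R_hat, S_hat)).real.max()

rng = np.random.default_rng(1)
draws = [lam_max_sample(4, 8, 16, 5.0, rng) for _ in range(200)]
# All generalized eigenvalues of this F-matrix pair are positive.
print(len(draws), min(draws) > 0)
```

Repeating such draws under γ > 0 and γ = 0 gives empirical estimates of the detection and false alarm probabilities introduced below.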

Let us denote the maximum eigenvalue of Ψ̂ as λ̂_max(γ). Now, in order to assess the performance of the maximum-eigenvalue based detector, we need to evaluate the detection (also known as the power of the test) and false alarm probabilities. They may be expressed as

$$P_D(\gamma,\mu_{\rm th}) = \Pr\left(\hat{\lambda}_{\max}(\gamma) > \mu_{\rm th}\mid\mathcal{H}_1\right) \tag{5}$$
$$P_F(\mu_{\rm th}) = \Pr\left(\hat{\lambda}_{\max}(0) > \mu_{\rm th}\mid\mathcal{H}_0\right) \tag{6}$$

where μ_th is the detection threshold. The P_D versus P_F curve characterizes the detector and is called the ROC profile.

The main challenge here is to characterize the maximum eigenvalue of Ψ̂ under the alternative H1. To this end, in this paper, we use orthogonal polynomial techniques due to Mehta [25] to obtain a closed-form solution to this problem. In particular, we derive an expression containing a determinant whose dimension depends on the relative difference between m and n.

## III C.D.F. of the Maximum Eigenvalue

Before proceeding further, we present some fundamental results pertaining to the joint eigenvalue distribution of an F-matrix and Jacobi polynomials.

### III-A Preliminaries

###### Definition 1

Let A ∼ CW_m(p, Σ) and B ∼ CW_m(n, I_m) be two independent Wishart matrices with n, p ≥ m. Then the joint eigenvalue density of the ordered eigenvalues, λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_m, of B⁻¹A is given by [26]

$$f(\lambda_1,\ldots,\lambda_m) = \frac{K_1(m,n,p)}{\det^{p}(\Sigma)}\prod_{j=1}^{m}\lambda_j^{p-m}\,\Delta_m^2(\lambda)\;{}_1\widetilde{F}_0\!\left(p+n;\,-\Sigma^{-1},\,\Lambda\right) \tag{7}$$

where ₁F̃₀(·; ·, ·) is the generalized complex hypergeometric function of two matrix arguments, Δ_m(λ) = ∏_{1≤i<j≤m}(λ_j − λ_i) is the Vandermonde determinant, Λ = diag(λ_1, …, λ_m), and the normalization constant K_1(m,n,p) is expressible in terms of the complex multivariate gamma function Γ̃_m(·), which is written in terms of the classical gamma function as Γ̃_m(a) = π^{m(m−1)/2} ∏_{j=1}^{m} Γ(a − j + 1).

###### Definition 2

Jacobi polynomials can be defined as follows [27, eq. 5.112]

$$P_n^{(a,b)}(x) = \sum_{k=0}^{n}\binom{n+a}{n-k}\binom{n+k+a+b}{k}\left(\frac{x-1}{2}\right)^{k} \tag{8}$$

where a, b > −1, with $\binom{\cdot}{\cdot}$ denoting the (generalized) binomial coefficient.
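The finite sum (8) can be checked against standard Jacobi-polynomial identities. The sketch below is our own sanity check (parameter values are arbitrary): it evaluates generalized binomial coefficients via the gamma function and verifies both the endpoint value P_n^{(a,b)}(1) and the reflection identity P_n^{(a,b)}(−x) = (−1)^n P_n^{(b,a)}(x).

```python
import math

def rbinom(r, k):
    """Generalized binomial coefficient C(r, k) for real r and integer k >= 0."""
    return math.gamma(r + 1) / (math.gamma(k + 1) * math.gamma(r - k + 1))

def jacobi_sum(n, a, b, x):
    """Jacobi polynomial P_n^{(a,b)}(x) via the finite sum in (8)."""
    return sum(rbinom(n + a, n - k) * rbinom(n + k + a + b, k) * ((x - 1) / 2) ** k
               for k in range(n + 1))

for n, a, b in [(2, 0.5, 1.5), (4, 2.0, 0.0), (5, 1.0, 3.0)]:
    # Endpoint value: P_n^{(a,b)}(1) = C(n + a, n).
    assert math.isclose(jacobi_sum(n, a, b, 1.0), rbinom(n + a, n))
    # Reflection identity: P_n^{(a,b)}(-x) = (-1)^n P_n^{(b,a)}(x).
    for x in (-0.7, 0.0, 0.3, 0.9):
        lhs = jacobi_sum(n, a, b, -x)
        rhs = (-1) ** n * jacobi_sum(n, b, a, x)
        assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12)
print("identities hold")
```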

### III-B Finite Dimensional Analysis of the C.D.F.

Having defined the above preliminary quantities, we now focus on deriving a new c.d.f. for the maximum eigenvalue of B⁻¹A when the covariance matrix Σ takes the so-called rank-1 spiked form. In this case, the covariance matrix can be decomposed as

$$\Sigma = I_m + \eta\, v v^\dagger = V\,\mathrm{diag}(1+\eta, 1, 1, \ldots, 1)\,V^\dagger \tag{9}$$

where V ∈ U(m) is a unitary matrix and v ∈ C^m is a unit vector. Following Khatri [28], the hypergeometric function of two matrix arguments appearing in the joint density (7) can be written as a ratio between the determinants of two square matrices. Since the eigenvalues of Σ are such that 1 + η has algebraic multiplicity one and 1 has algebraic multiplicity m − 1, the resultant ratio takes an indeterminate form. Therefore, one has to repeatedly apply L'Hôpital's rule to obtain a determinate expression. However, that expression is not amenable to Mehta's [25] orthogonal polynomial technique. Therefore, in view of applying this powerful technique, we derive an alternative expression for the joint eigenvalue density. This derivation technique has also been used in [18] to obtain a single contour integral representation for the joint eigenvalue density when the matrices are real (it is noteworthy that, in the real case, the hypergeometric function of two matrix arguments does not admit such a determinant representation). The following corollary gives the alternative expression for the joint density.

###### Corollary 1

Let A ∼ CW_m(p, I_m + η v v†) and B ∼ CW_m(n, I_m) be independent Wishart matrices with n, p ≥ m and η > 0. Then the joint density of the ordered eigenvalues of B⁻¹A is given by

$$f(\lambda_1,\ldots,\lambda_m) = f_{\rm uc}(\lambda_1,\ldots,\lambda_m)\, f_{\rm cor}(\lambda_1,\ldots,\lambda_m) \tag{10}$$

where

$$f_{\rm uc}(\lambda_1,\ldots,\lambda_m) = K_1(m,n,p)\prod_{j=1}^{m}\frac{\lambda_j^{p-m}}{(1+\lambda_j)^{p+n}}\,\Delta_m^2(\lambda),$$
$$f_{\rm cor}(\lambda_1,\ldots,\lambda_m) = \frac{K_2(m,n,p)}{\eta^{m-1}(1+\eta)^{p+1-m}}\prod_{j=1}^{m}(1+\lambda_j)\sum_{k=1}^{m}\frac{(1+\lambda_k)^{p+n-1}}{\displaystyle\prod_{\substack{j=1\\ j\neq k}}^{m}(\lambda_k-\lambda_j)\left(1+\frac{\lambda_k}{\eta+1}\right)^{p+n+1-m}},$$

and K_2(m,n,p) is a normalization constant.

###### Proof:

Omitted due to space limitations.

###### Remark 1

It is worth noting that the function f_uc denotes the joint density of the ordered eigenvalues of B⁻¹A corresponding to the uncorrelated case η = 0 (i.e., Σ = I_m).

###### Remark 2

Alternatively, the above expression can be used to obtain the joint density of the ordered eigenvalues of the deformed Jacobi ensemble.

We may use the above joint density to obtain the c.d.f. of the maximum eigenvalue, which is given by the following theorem.

###### Theorem 1

Let A ∼ CW_m(p, I_m + η v v†) and B ∼ CW_m(n, I_m) be independent with n, p ≥ m and η > 0. Then the c.d.f. of the maximum eigenvalue of B⁻¹A is given by

$$F^{(\alpha)}_{\lambda_{\max}}(t;\eta) = \frac{K(m,p,\alpha)\,(p-1)!}{(1+\eta)^{p}}\left(\frac{t}{1+t}\right)^{m(\alpha+\beta+m)}\det\Bigl[\Phi_i(t,\eta)\;\;\Psi_{i,j}(t)\Bigr]_{\substack{i=1,2,\ldots,\alpha+1\\ j=2,3,\ldots,\alpha+1}} \tag{11}$$

where $\Psi_{i,j}(t) = (m+i+\beta-1)_{j-2}\, P_{m+i-j}^{(j-2,\,\beta+j-2)}(2t+1)$,

$$\Phi_i(t,\eta) = Q_i(m,n,p)\sum_{k=0}^{\alpha-i+1}\frac{(p+i-1)_k\,(\alpha-i+2)!}{k!\,(p+m+2i-2)_k\,(\alpha-i-k+1)!}\,\frac{(\eta t)^{k+i-1}\bigl((1+\eta)(1+t)\bigr)^{p+k}}{(1+\eta+t)^{p+k+i-1}},$$

with α = n − m and β = p − m, where (·)_k denotes the Pochhammer symbol, and Q_i(m,n,p) and K(m,p,α) are constants depending only on the indicated parameters.

###### Proof:

Omitted due to space limitations.

The new exact c.d.f. expression for the maximum eigenvalue of B⁻¹A, which contains the determinant of a square matrix whose dimension depends on the difference α = n − m, is highly desirable when the difference between m and n is small, irrespective of their individual magnitudes. For instance, when n = m (i.e., α = 0) the determinant reduces to a scalar and we obtain a closed-form result, as shown below. This is one of the many advantages of the orthogonal polynomial approach. This key representation also facilitates the derivation of the limiting distribution of the maximum eigenvalue (i.e., the limit when m and p grow such that m/p is fixed).

###### Corollary 2

The exact c.d.f. of the maximum eigenvalue of B⁻¹A corresponding to α = 0 is given by
$$F^{(0)}_{\lambda_{\max}}(t;\eta) = \left(\frac{t}{1+t}\right)^{mp}\left(1+\frac{\eta}{1+t}\right)^{-p}.$$
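Corollary 2 is simple enough to sanity-check numerically. The sketch below (our own check; parameter values are arbitrary) verifies that the expression behaves as a c.d.f. in t: it is strictly increasing and tends to 0 and 1 at the endpoints.

```python
import numpy as np

def F0(t, m, p, eta):
    """C.d.f. of Corollary 2: (t/(1+t))^{mp} * (1 + eta/(1+t))^{-p}."""
    return (t / (1 + t)) ** (m * p) * (1 + eta / (1 + t)) ** (-p)

t = np.linspace(1e-3, 1e4, 100000)
vals = F0(t, m=4, p=10, eta=3.0)

# Monotone increasing, ~0 at the left endpoint, ~1 at the right endpoint.
print(bool(np.all(np.diff(vals) > 0)), bool(vals[0] < 1e-6), bool(vals[-1] > 0.99))
```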

Armed with the above characterization of the maximum eigenvalue of B⁻¹A, in the following section we focus on the ROC of the maximum eigenvalue based detector.

## IV ROC of the Maximum Eigenvalue of Ψ̂

Let us now investigate the behavior of the detection and false alarm probabilities associated with the maximum eigenvalue based test. To this end, noting that the eigenvalues of Ψ̂ = R̂⁻¹Ŝ and those of (nR̂)⁻¹(pŜ) are related by a simple scaling, for t ≥ 0 we can represent the c.d.f. of the maximum eigenvalue of Ψ̂ corresponding to H1 as F^{(α)}_{λmax}(κt; γ), where κ = p/n.

Now, following Theorem 1 along with (5) and (6), the detection and false alarm probabilities can be written, respectively, as

$$P_D(\gamma,\mu_{\rm th}) = 1 - F^{(\alpha)}_{\lambda_{\max}}(\kappa\mu_{\rm th};\gamma) \tag{12}$$
$$P_F(\mu_{\rm th}) = 1 - F^{(\alpha)}_{\lambda_{\max}}(\kappa\mu_{\rm th};0). \tag{13}$$

In general, deriving a functional relationship between P_D and P_F by eliminating the parametric dependency on μ_th is an arduous task. However, when α = 0, we can obtain an explicit relationship between them, as shown in the following corollary.

###### Corollary 3

Let us suppress the parameter μ_th and represent the detection and false alarm probabilities, respectively, as P_D and P_F. Then, when n = m, we have the following functional relationship between P_D and P_F:

$$P_D = 1 - \frac{1-P_F}{\left(1+\gamma-\gamma\left[1-P_F\right]^{1/mp}\right)^{p}}. \tag{14}$$

From the above relation, taking P_D as a function of γ, we can easily see that, for γ_1 ≥ γ_2, P_D(γ_1) ≥ P_D(γ_2). This confirms the common observation that the SNR is positively correlated with the detection probability for a fixed value of P_F.
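Corollary 3 is easy to evaluate directly. The sketch below (our own illustration; parameter values are arbitrary) computes P_D from P_F via (14) and numerically confirms the two properties just noted: P_D increases with γ, and the detector always outperforms chance (P_D ≥ P_F).

```python
import numpy as np

def roc_pd(pf, m, p, gamma):
    """Detection probability from (14); valid for the n = m case."""
    pf = np.asarray(pf, dtype=float)
    return 1 - (1 - pf) / (1 + gamma - gamma * (1 - pf) ** (1 / (m * p))) ** p

pf = np.linspace(1e-4, 0.999, 500)
pd_lo = roc_pd(pf, m=4, p=10, gamma=1.0)
pd_hi = roc_pd(pf, m=4, p=10, gamma=4.0)

# Higher SNR dominates, and both curves lie above the chance line P_D = P_F.
print(bool(np.all(pd_hi >= pd_lo)), bool(np.all(pd_lo >= pf - 1e-12)))
```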

The ROC curves corresponding to different parameter settings are shown in Figs. 1 and 2. The ROC of the maximum eigenvalue is shown in Fig. 1 for different SNR values; the ROC improvement with increasing SNR is clearly visible there. The next important factor affecting the ROC profile is the dimensionality of the matrices. Therefore, let us now numerically investigate the effect of the matrix dimensions on the ROC profile. To this end, Fig. 2 shows the effect of n for fixed values of the other parameters. As can be seen, increasing the disparity between n and m improves the ROC profile. The reason behind this observation is that the quality of the noise-only sample covariance matrix improves when the length of the data record (i.e., n) increases in comparison with the dimensionality of the receiver (i.e., m). Since the minimum requirement for R̂ to be invertible is n = m, the worst ROC performance corresponds to n = m.

The joint effect of m and p is characterized with respect to the scenario in which m and p both vary such that ν = m/p is constant. After some algebra, we conclude that P_D attains its maximum at a finite p = p*, which can be bracketed by closed-form upper and lower bounds, one of which is
$$\sqrt{\frac{-\ln(1-P_F)}{-2\nu\ln\left(\dfrac{\gamma+1}{\gamma+2}\right)}}.$$

Having obtained the upper and lower bounds on p*, a good approximation of p* can be written as follows (in general, any convex combination of the upper and lower bounds is a candidate approximation for p*):

$$p^* \approx \frac{1}{2}\left(\sqrt{\frac{-\ln(1-P_F)}{-\nu\ln\left(\dfrac{\gamma+2}{\gamma+4}\right)}} + \sqrt{\frac{-\ln(1-P_F)}{-2\nu\ln\left(\dfrac{\gamma+1}{\gamma+2}\right)}}\right). \tag{16}$$

The above analysis suggests that when m and p diverge such that their ratio approaches a fixed limit, the maximum eigenvalue gradually loses its detection power.
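The approximation (16) can be compared against a brute-force maximization of (14) along the ray m = νp. The sketch below is our own check, assuming our reconstruction of the bounds in (16); the parameter values are arbitrary, and p is restricted to integers in the numerical search.

```python
import numpy as np

def pd_at(p, nu, gamma, pf):
    """P_D from (14) with n = m and m = nu * p held on the ray m/p = nu."""
    return 1 - (1 - pf) / (1 + gamma - gamma * (1 - pf) ** (1 / (nu * p * p))) ** p

def p_star_approx(nu, gamma, pf):
    """Approximation (16): average of the two closed-form bounds."""
    t1 = np.sqrt(-np.log(1 - pf) / (-nu * np.log((gamma + 2) / (gamma + 4))))
    t2 = np.sqrt(-np.log(1 - pf) / (-2 * nu * np.log((gamma + 1) / (gamma + 2))))
    return 0.5 * (t1 + t2)

nu, gamma, pf = 0.01, 2.0, 0.1
ps = np.arange(1, 400)
p_num = ps[np.argmax(pd_at(ps, nu, gamma, pf))]  # integer maximizer of (14)
print(p_num, round(p_star_approx(nu, gamma, pf), 2))
```

For these values the integer maximizer and the closed-form approximation land within a couple of units of each other, illustrating the accuracy claimed for (16).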

To further highlight the accuracy of the proposed approximation, in Fig. 3 we compare the optimal ROC profiles evaluated based on (16) with those obtained by numerically optimizing (14). As can be seen from the figure, the disparity between the proposed approximation and the exact optimal solution is insignificant. Therefore, when n = m and ν = m/p is held fixed, we can choose p as per (16) for fixed P_F and γ, in view of maximizing the detection probability.

## V Conclusion

This paper investigates the signal detection problem in colored noise with an unknown covariance matrix. In particular, we focus on detecting the presence of a signal using the maximum generalized eigenvalue of the so-called whitened sample covariance matrix. The performance of this detector thus amounts to determining the statistics of the maximum eigenvalue of the deformed JUE. To this end, following the powerful orthogonal polynomial approach, we have developed a new expression for the c.d.f. of the maximum eigenvalue of the deformed JUE. We then use this new c.d.f. expression to determine the ROC of the detector. It turns out that, for a fixed SNR, when m (i.e., the dimensionality of the detector), n (i.e., the number of noise-only samples), and p (i.e., the number of signal-plus-noise samples) increase over finite values such that m = n and m/p is constant, we obtain an optimal ROC profile corresponding to a specific value of p. Consequently, in the above setting, when m, n, and p increase asymptotically, the maximum eigenvalue gradually loses its detection power. This is not surprising, since under this asymptotic regime the detector operates below the so-called phase transition, where the maximum eigenvalue has no detection power.

## References

• [1] R. R. Nadakuditi and A. Edelman, “Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples,” IEEE Trans. Signal Process., vol. 56, no. 7, pp. 2625–2638, Jul. 2008.
• [2] R. R. Nadakuditi and J. W. Silverstein, “Fundamental limit of sample generalized eigenvalue based detection of signals in noise using relatively few signal-bearing and noise-only samples,” IEEE J. Sel. Topics Signal Process., vol. 4, no. 3, pp. 468–480, Jun. 2010.
• [3] P. Bianchi, M. Debbah, M. Maida, and J. Najim, “Performance of statistical tests for single-source detection using random matrix theory,” IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2400–2419, Apr. 2011.
• [4] R. Couillet and W. Hachem, “Fluctuations of spiked random matrix models and failure diagnosis in sensor networks,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 509–525, Jan. 2013.
• [5] N. Asendorf and R. R. Nadakuditi, “Improved detection of correlated signals in low-rank-plus-noise type data sets using informative canonical correlation analysis (ICCA),” IEEE Trans. Inf. Theory, vol. 63, no. 6, pp. 3451–3467, Jun. 2017.
• [6] P. Bianchi, M. Debbah, M. Maida, and J. Najim, “Performance of statistical tests for single-source detection using random matrix theory,” IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2400–2419, Apr. 2011.
• [7] J. Baik, G. B. Arous, and S. Péché, “Phase transition of the largest eigenvalue for non-null complex sample covariance matrices,” Ann. Probab., vol. 33, no. 5, pp. 1643–1697, 2005.
• [8] J. Baik and J. W. Silverstein, “Eigenvalues of large sample covariance matrices of spiked population models,” J. Multivariate Anal., vol. 97, no. 6, pp. 1382–1408, 2006.
• [9] E. Maris, “A resampling method for estimating the signal subspace of spatio-temporal EEG/MEG data,” IEEE Trans. Biomed. Eng., vol. 50, no. 8, pp. 935–949, Aug 2003.
• [10] J. Vinogradova, R. Couillet, and W. Hachem, “Statistical inference in large antenna arrays under unknown noise pattern,” IEEE Trans. Signal Process., vol. 61, no. 22, pp. 5633–5645, Nov. 2013.
• [11] S. Hiltunen, P. Loubaton, and P. Chevalier, “Large system analysis of a GLRT for detection with large sensor arrays in temporally white noise,” IEEE Trans. Signal Process., vol. 63, no. 20, pp. 5409–5423, Oct. 2015.
• [12] I. M. Johnstone and B. Nadler, “Roy’s largest root test under rank-one alternatives,” Biometrika, vol. 104, no. 1, pp. 181–193, 2017.
• [13] P. Dharmawansa, B. Nadler, and O. Shwartz, “Roy’s largest root under rank-one alternatives: The complex valued case and applications,” arXiv:1411.4226 [math.ST], Nov. 2014.
• [14] P. Dharmawansa, I. M. Johnstone, and A. Onatski, “Local asymptotic normality of the spectrum of high-dimensional spiked F-ratios,” arXiv:1411.3875 [math.ST], Nov. 2014.
• [15] Q. Wang and J. Yao, “Extreme eigenvalues of large-dimensional spiked Fisher matrices with application,” Ann. Statist., vol. 45, no. 1, pp. 415–460, Feb. 2017.
• [16] P. Koev and I. Dumitriu, “Distribution of the extreme eigenvalues of the complex Jacobi random matrix ensemble,” SIAM J. Matrix Anal. Appl., 2005.
• [17] I. Dumitriu, “Smallest eigenvalue distributions for two classes of β-Jacobi ensembles,” J. Math. Phys., vol. 53, no. 10, p. 103301, Oct. 2012.
• [18] P. Dharmawansa and I. M. Johnstone, “Joint density of eigenvalues in spiked multivariate models,” Stat, vol. 31, pp. 240–249, Jul. 2014.
• [19] A. Onatski, “Detection of weak signals in high-dimensional complex-valued data,” Random Matrices: Theory and Applications, vol. 03, no. 01, p. 1450001, 2014.
• [20] D. Passemier, M. R. McKay, and Y. Chen, “Hypergeometric functions of matrix arguments and linear statistics of multi-spiked Hermitian matrix models,” J. Multivar. Anal., vol. 139, pp. 124–146, 2015.
• [21] M. Y. Mo, “The rank 1 real Wishart spiked model,” Commun. Pure Appl. Math., no. 65, pp. 1528–1638, Nov. 2012.
• [22] D. Wang, “The largest eigenvalue of real symmetric, Hermitian and Hermitian self-dual random matrix models with rank one external source, part I,” J. Stat. Phys., vol. 146, no. 4, pp. 719–761, 2012.
• [23] R. J. Muirhead, Aspects of Multivariate Statistical Theory.   John Wiley & Sons, 2009, vol. 197.
• [24] K. V. Mardia, J. T. Kent, and J. M. Bibby, Multivariate Analysis.   Academic Press, London, 1979.
• [25] M. L. Mehta, Random Matrices.   Academic Press, 2004, vol. 142.
• [26] A. T. James, “Distributions of matrix variates and latent roots derived from normal samples,” The Annals of Mathematical Statistics, pp. 475–501, 1964.
• [27] L. C. Andrews, Special Functions of Mathematics for Engineers.   SPIE Press, 1998.
• [28] C. G. Khatri, “On the moments of traces of two matrices in three situations for complex multivariate normal populations,” Sankhyā, vol. 32, pp. 65–80, 1970.