I Introduction
Random matrix ensembles have found wide applications in wireless communications and signal processing [1, 2, 3, 4]. Although the most widely studied Gaussian measurement ensembles offer tractable analyses and appealing results [5, 6, 7], they are of somewhat limited use in practice because the design of measurement matrices is usually subject to physical or other constraints imposed by a specific system architecture. It is therefore desirable to explore random matrix ensembles with hidden structure from a computational and an application-oriented point of view.
Coherence has been utilized to measure the quality of the measurement matrix [8]. Analysis of the coherence statistics of random vectors/matrices plays an important role in a series of signal processing problems, including Grassmannian line packing [9, 10], random vector quantization [11, 12], and support detection (SD) [5, 13, 14, 15]. In particular, the performance of SD varies considerably with the characteristics of the measurement matrix. There is a certain class of random matrix ensembles with hidden structure that can demonstrate improved SD performance guarantees compared to Gaussian ensembles [5]. Distinguished from the Gaussian measurement matrix, which contains no hidden constraints, the random phase-rotated (RPR) measurement matrix, whose entries are drawn from the constant-modulus uniform phase rotation distribution, brings the benefits of maintaining unit-norm columns and constant-modulus entries of the measurement matrix. This measurement ensemble has been utilized in advanced beamforming and precoding for wireless communications [16, 17].

In this paper, we calculate high-probability bounds on the coherence statistics of RPR measurement matrices and apply them to obtain SD performance guarantees for orthogonal matching pursuit (OMP), a low-complexity greedy approach to SD [18, 5]. The performance bound is in terms of the number of measurements required for any given number of supports and system dimensions. A free variable is introduced and optimized to further tighten the performance bound. The main motivation is that previous coherence-based work considered measurement ensembles without the hidden constraints that are suitable for SD via OMP. Numerical evaluations demonstrate that the analyzed SD performance guarantee of OMP is tight, especially when the signal is sparse.

II Coherence Statistics
Suppose a random measurement matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ with $\mathbf{a}_i$ being the $i$th column of $\mathbf{A}$. Each entry of $\mathbf{A}$ is constant modulus and drawn from the random phase rotation variable as

$[\mathbf{A}]_{k,i} = \frac{1}{\sqrt{m}}\, e^{j\theta_{k,i}},$  (1)

where $[\mathbf{A}]_{k,i}$ denotes the $k$th row and $i$th column entry of $\mathbf{A}$, $k = 1, \ldots, m$, $i = 1, \ldots, n$, and the phase $\theta_{k,i}$ is an independent and identically distributed (i.i.d.) uniform random variable, i.e., $\theta_{k,i} \sim \mathcal{U}[0, 2\pi)$. With the construction in (1), $\mathbf{A}$ maintains $\|\mathbf{a}_i\|_2 = 1$, $\forall i$.

The coherence of $\mathbf{A}$ is the maximum absolute correlation between two distinct columns of $\mathbf{A}$ [19], which is given by

$\mu(\mathbf{A}) = \max_{i \neq l} |\mathbf{a}_i^H \mathbf{a}_l|,$  (2)

where $(\cdot)^H$ denotes the conjugate transpose. Characterizing the distribution of $\mu(\mathbf{A})$ is of interest; however, it is challenging to directly derive the distribution of $\mu(\mathbf{A})$ when $\mathbf{A}$ follows (1). To circumvent this difficulty, we instead find a lower bound on the cumulative distribution function (CDF) of $\mu(\mathbf{A})$. We start by building a connection between a vector drawn from the distribution in (1) and a vector consisting of Bernoulli random variables.

Lemma 1.
Let $\mathbf{b} \in \mathbb{C}^m$ and $\boldsymbol{\sigma} \in \mathbb{R}^m$ be random vectors with i.i.d. entries $b_k = e^{j\theta_k}$, $\theta_k \sim \mathcal{U}[0, 2\pi)$, and $\sigma_k = \pm 1$ with equal probability for $k = 1, \ldots, m$, respectively. Then, for any unit-norm vector $\mathbf{z} \in \mathbb{C}^m$, the following inequality holds

$\mathbb{E}\big[|\mathbf{b}^H \mathbf{z}|^{2p}\big] \leq \mathbb{E}\big[(\boldsymbol{\sigma}^T \tilde{\mathbf{z}})^{2p}\big],$  (3)

where $\tilde{\mathbf{z}}$ has each entry $\tilde{z}_k = |z_k|$, $k = 1, \ldots, m$, $p$ is a nonnegative integer, and the expectations are taken over $\mathbf{b}$ and $\boldsymbol{\sigma}$, respectively.
Proof: See Appendix A.
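Lemma 1 relates moments of a vector of i.i.d. uniform phases to moments of a Bernoulli $\pm 1$ vector applied to the entrywise moduli of a unit-norm vector. The following Monte Carlo sketch compares the two even-order moments for a random unit-norm vector; the dimension, moment order, and sample size are illustrative choices, not values from the paper.

```python
import numpy as np

# Monte Carlo comparison of the two moments connected by Lemma 1:
# uniform-phase vector b vs. Bernoulli +-1 vector sigma on |z_k|.
rng = np.random.default_rng(0)
m, p, N = 8, 2, 200_000

z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
z /= np.linalg.norm(z)                 # unit-norm test vector
z_tilde = np.abs(z)                    # entrywise moduli |z_k|

theta = rng.uniform(0.0, 2.0 * np.pi, size=(N, m))
b = np.exp(1j * theta)                 # i.i.d. uniform-phase entries e^{j theta_k}
lhs = np.mean(np.abs(b.conj() @ z) ** (2 * p))   # empirical E|b^H z|^{2p}

sigma = rng.choice([-1.0, 1.0], size=(N, m))     # i.i.d. Bernoulli +-1 entries
rhs = np.mean((sigma @ z_tilde) ** (2 * p))      # empirical E(sigma^T z~)^{2p}

print(lhs, rhs)
```

For $p = 1$ the two moments coincide at $\|\mathbf{z}\|_2^2$; for higher $p$ the Bernoulli side dominates, which is what the empirical comparison above shows.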
Based on Lemma 1, we characterize a bound on the distribution of $|\mathbf{b}^H \mathbf{z}|$ below.
Lemma 2.
Suppose the vectors $\mathbf{b}$ and $\boldsymbol{\sigma}$ defined in Lemma 1. Then, for any unit-norm $\mathbf{z} \in \mathbb{C}^m$ and any $\epsilon > 0$, the following inequality holds

$\Pr\big(|\mathbf{b}^H \mathbf{z}| \geq \epsilon\big) \leq 2e^{-\epsilon^2/2}.$  (4)
Proof: See Appendix B.
Remark 1.
Based on Lemma 2, a lower bound on the CDF of the coherence $\mu(\mathbf{A})$ in (2) can be found, as stated below.
Theorem 1.
Suppose a matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ consisting of i.i.d. entries $[\mathbf{A}]_{k,i} = \frac{1}{\sqrt{m}} e^{j\theta_{k,i}}$, $\theta_{k,i} \sim \mathcal{U}[0, 2\pi)$, $k = 1, \ldots, m$, $i = 1, \ldots, n$. Then, the following holds for $\epsilon > 0$,

$\Pr\big(\mu(\mathbf{A}) < \epsilon\big) \geq 1 - n(n-1)e^{-m\epsilon^2/2}.$  (5)
Proof.
The inner product between two distinct column vectors $\mathbf{a}_i$ and $\mathbf{a}_l$, $i \neq l$, of $\mathbf{A}$ satisfies

$\mathbf{a}_i^H \mathbf{a}_l = \frac{1}{m} \sum_{k=1}^{m} e^{j(\theta_{k,l} - \theta_{k,i})} = \frac{1}{m} \sum_{k=1}^{m} e^{j\omega_k} \overset{d}{=} \frac{1}{\sqrt{m}}\, \mathbf{b}^H \mathbf{z},$  (6)

where $\mathbf{z} = \frac{1}{\sqrt{m}}[1, \ldots, 1]^T$ is unit-norm, and $\omega_k = \theta_{k,l} - \theta_{k,i}$ is the difference between two independent uniform random variables, whose probability density function is given by $f(\omega) = \frac{2\pi - |\omega|}{4\pi^2}$, $|\omega| \leq 2\pi$. In (6), $\mathbf{b}$ follows the same definition in Lemma 1, and we use the fact that $e^{j\omega_k} = e^{j(\omega_k \bmod 2\pi)}$ and $(\omega_k \bmod 2\pi) \sim \mathcal{U}[0, 2\pi)$, in which $\omega_k \bmod 2\pi$ is the modulo of $\omega_k$. This verifies that $\frac{1}{m}\sum_{k=1}^{m} e^{j\omega_k}$ has the same distribution as $\frac{1}{\sqrt{m}}\mathbf{b}^H \mathbf{z}$ in (6), where $\overset{d}{=}$ denotes the equality in distribution.
By Lemma 2, we now have $\Pr(|\mathbf{a}_i^H \mathbf{a}_l| \geq \epsilon) \leq 2e^{-m\epsilon^2/2}$. Then, the maximum order statistic of $|\mathbf{a}_i^H \mathbf{a}_l|$ is lower bounded by the union bound over all $n(n-1)/2$ distinct column pairs,

$\Pr\big(\mu(\mathbf{A}) < \epsilon\big) \geq 1 - \sum_{i < l} \Pr\big(|\mathbf{a}_i^H \mathbf{a}_l| \geq \epsilon\big) \geq 1 - n(n-1)e^{-m\epsilon^2/2}.$
This completes the proof. ∎
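As a numerical illustration, the coherence in (2) of RPR matrices drawn per (1) can be estimated by simulation. The dimensions, trial count, and threshold below are illustrative choices (the threshold is picked at the scale where a tail bound of the form in (5) becomes active), not values from the paper.

```python
import numpy as np

# Empirical coherence of random phase-rotated (RPR) matrices per (1)-(2).
rng = np.random.default_rng(1)
m, n, trials = 64, 128, 200

def rpr_matrix(m, n, rng):
    """RPR matrix: constant-modulus entries e^{j theta}/sqrt(m), unit-norm columns."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(m, n))
    return np.exp(1j * theta) / np.sqrt(m)

def coherence(A):
    """Maximum absolute inner product between distinct columns, as in (2)."""
    G = np.abs(A.conj().T @ A)   # Gram-matrix magnitudes |a_i^H a_l|
    np.fill_diagonal(G, 0.0)     # exclude the i = l terms
    return G.max()

mus = np.array([coherence(rpr_matrix(m, n, rng)) for _ in range(trials)])
# Illustrative threshold at the scale sqrt(log / m) suggested by the tail bound.
eps = np.sqrt(2.0 * np.log(n * (n - 1)) / m)
print(mus.mean(), (mus < eps).mean())
```

With these dimensions the empirical coherence concentrates well below the threshold, consistent with the exponential tail established above.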
III Support Detection Bounds for OMP
In this section, the coherence statistics of RPR measurement matrices are applied to obtain the probability bounds of SD for OMP.
III-A Measurement Model and OMP Algorithm
Suppose a measurement model

$\mathbf{y} = \mathbf{A}\mathbf{x} \in \mathbb{C}^m,$  (7)

where each entry of $\mathbf{A} \in \mathbb{C}^{m \times n}$ follows (1). Here, the assumption is that the number of measurements $m$ is smaller than the signal dimension $n$, i.e., $m < n$. The signal $\mathbf{x} \in \mathbb{C}^n$ in (7) has $K$ nonzero elements (supports) whose indexes are defined by the support set

$\mathcal{T} = \{\, i : x_i \neq 0 \,\},$  (8)

where $|\mathcal{T}| = K$. The goal is to detect the support set $\mathcal{T}$ from the measurement $\mathbf{y}$ in (7).
An iterative procedure of OMP for SD is depicted in Algorithm 1 for the measurement model in (7). To make sure that the active index determined in Step 4 is a true support, the following sufficient condition [19] should be met at each iteration,

$\dfrac{\|\mathbf{A}_{\mathcal{T}^c}^H \mathbf{r}\|_\infty}{\|\mathbf{A}_{\mathcal{T}}^H \mathbf{r}\|_\infty} < 1,$  (9)

where $\mathbf{A}_{\mathcal{T}}$ is the submatrix formed by taking the columns of $\mathbf{A}$ indexed by $\mathcal{T}$, $\mathbf{A}_{\mathcal{T}^c}$ is the complementary submatrix of $\mathbf{A}$, and $\mathbf{r}$ is the current residual. The nonzero coefficients estimated in Step 7 are formed by the least-squares solution over the columns indexed by the detected support set. It is crucial to recognize that the updated residual is orthogonal to the columns of $\mathbf{A}$ selected so far. The OMP detects one support at each iteration and runs for exactly $K$ iterations.
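To make the iteration concrete, here is a minimal OMP support-detection sketch following the steps described above (correlate the residual, pick the max-correlation index, least-squares coefficient update, orthogonal residual). The dimensions, sparsity, and signal coefficients are illustrative choices, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, K = 64, 128, 4   # illustrative dimensions with m < n

def omp_support(A, y, K):
    """Run K iterations of OMP and return the detected support set."""
    support = []
    r = y.copy()
    for _ in range(K):
        corr = np.abs(A.conj().T @ r)        # correlate residual with columns
        corr[support] = 0.0                  # ignore already-selected columns
        support.append(int(np.argmax(corr))) # pick the max-correlation index
        A_S = A[:, support]
        x_hat, *_ = np.linalg.lstsq(A_S, y, rcond=None)  # least-squares coefficients
        r = y - A_S @ x_hat                  # residual, orthogonal to span(A_S)
    return set(support)

# RPR measurement matrix per (1): constant-modulus, unit-norm columns.
A = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(m, n))) / np.sqrt(m)
true_support = set(rng.choice(n, size=K, replace=False).tolist())
x = np.zeros(n, dtype=complex)
x[list(true_support)] = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=K))
y = A @ x                                    # noiseless measurements as in (7)

print(omp_support(A, y, K) == true_support)
```

Repeating this over many random draws of the matrix and signal gives the Monte Carlo SD-error estimates of the kind reported in Section IV.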
III-B Support Detection Performance Guarantee
We provide the SD performance guarantee of the OMP in Algorithm 1 as follows.
Theorem 2.
Proof: See Appendix C.
To further tighten the lower bound in (11), we optimize the free variable by minimizing the right-hand side (r.h.s.) of (11) such that
(12) 
Theorem 3.
Proof: See Appendix D.
IV Numerical Simulations
To verify the SD performance guarantee in (11), we perform Monte Carlo simulations in Fig. 1, where the probability of SD error is evaluated across different numbers of measurements $m$. In the simulation, the signal is generated by randomly choosing the supports, with a nonzero coefficient assigned to each support, and we compare with the existing coherence-based SD performance guarantee for the Gaussian random measurement matrix [5]. In Fig. 1, the vertical lines denote the minimum $m$ required to guarantee the target SD error rate, where these values are given by the r.h.s. of (11) for the RPR measurements and by the corresponding bound for the Gaussian case [5], respectively. As seen from Fig. 1, the obtained SD performance guarantee of RPR matrices provides a tighter characterization than the Gaussian case when the signal is sparse, i.e., when the number of supports is small.
V Conclusion and Discussion
The coherence statistics of RPR matrices were analyzed and applied to obtain SD performance guarantees for OMP. The introduced free variable was optimized to further tighten the SD bound. Numerical simulations corroborated the theoretical findings and revealed that imposing the constant-modulus and unit-norm structure on random measurement ensembles is desirable for SD using OMP.
In this work, we focused on the coherence statistics of RPR matrices to show the SD performance guarantees of OMP. In particular, we proved that OMP can achieve successful SD (SSD) with high probability, given RPR measurements. It is of interest to compare our coherence-based analysis with the restricted isometry property (RIP)-based result, since these are the two main techniques for analyzing SD performance guarantees of OMP. By using the concentration inequality in [4, Theorem 2] and the method of proving the RIP for random matrices in [22, Theorem 5.2], one can obtain that $m = \mathcal{O}(\delta^{-2} K \log(n/K))$ is sufficient for the RPR matrices to satisfy the RIP with high probability, where $\delta$ is the restricted isometry constant. With $\delta_{K+1} < \frac{1}{3\sqrt{K}}$ being a strict condition of SSD for OMP [23], the RIP-based SD bound can be given by $m = \mathcal{O}(K^2 \log(n/K))$, which is on par with our coherence-based results in Theorem 2.
Finally, one limitation of this work is that the SD bound becomes loose as the number of supports grows. As seen from Fig. 1, there is still room for further improvement by investigating new structures of random measurement ensembles, which is subject to future research.
Appendix A Proof of Lemma 1
Proof.
The left-hand side (l.h.s.) and r.h.s. of (3) can be rewritten as $\mathbb{E}\big[\big|\sum_{k=1}^{m} e^{j\theta_k} \tilde{z}_k\big|^{2p}\big]$ and $\mathbb{E}\big[\big(\sum_{k=1}^{m} \sigma_k \tilde{z}_k\big)^{2p}\big]$, respectively, where $\theta_k \sim \mathcal{U}[0, 2\pi)$, $\tilde{z}_k = |z_k|$, and $\sigma_k = \pm 1$ with equal probability. Thus, showing the inequality in (3) is equivalent to showing

$\mathbb{E}\Big[\Big|\sum_{k=1}^{m} e^{j\theta_k} \tilde{z}_k\Big|^{2p}\Big] \leq \mathbb{E}\Big[\Big(\sum_{k=1}^{m} \sigma_k \tilde{z}_k\Big)^{2p}\Big].$  (14)
The l.h.s. of (14) can be simplified as
(15) 
where follows from the equality , is due to the fact that for , and holds because . Expanding the r.h.s. of (14) leads to
(16)  
where . The inequality in (16) becomes the equality only if because for . On the other hand, when , the strict inequality in (16) holds because for any positive integers , , leading to for . Combining (15) and (16) results in (14). ∎
Appendix B Proof of Lemma 2
Proof.
By using Markov’s inequality, we have for $s > 0$,

$\Pr\big(|\mathbf{b}^H \mathbf{z}| \geq \epsilon\big) = \Pr\big(e^{s|\mathbf{b}^H \mathbf{z}|} \geq e^{s\epsilon}\big) \leq e^{-s\epsilon}\, \mathbb{E}\big[e^{s|\mathbf{b}^H \mathbf{z}|}\big].$  (17)

The term $\mathbb{E}\big[e^{s|\mathbf{b}^H \mathbf{z}|}\big]$ in (17) can further be upper bounded for $s > 0$ by

(18)

where the first inequality is due to the Taylor series expansion of $e^x$ and Lemma 1 applied to the even-order moments, and $\tilde{\mathbf{z}}$ follows the same definition in Lemma 1. The last step in (18) follows from the inequality $\mathbb{E}\big[e^{s\boldsymbol{\sigma}^T \tilde{\mathbf{z}}}\big] \leq e^{s^2/2}$ for unit-norm $\tilde{\mathbf{z}}$ in [24, Lemma 5.2]. Substituting (18) into (17) and minimizing the resulting bound over $s > 0$ yields (4). ∎
Appendix C Proof of Theorem 2
Proof.
The proof is inspired by a similar theorem in [5, Theorem 6] and refines the results for the RPR measurement ensembles in conjunction with Lemma 2 and Theorem 1. We first define two events: 1) the SSD event, defined on the basis of the condition in (9) holding at every iteration; and 2) the event that the coherence $\mu(\mathbf{A})$ is bounded by a prescribed level. The second event restricts the analysis to a special class of $\mathbf{A}$ to ease the bound derivations below.
Conditioned on the event , the probability of SSD can be lower bounded by
(20) 
From Theorem 1, in (20) can be lower bounded by
(21)  
where . The conditional probability on the r.h.s. of (20) can be lower bounded by
(22)  
where is due to the inequality for , comes from where and because by applying Gershgorin disc theorem [25], holds due to the fact that the columns of are independent, and is due to Lemma 2.
Appendix D Proof of Theorem 3
Proof.
We first claim that the objective function in (12) is convex over the feasible range. To show this, we check the second-order condition, i.e., that the second-order derivative of the objective is nonnegative. After some algebraic manipulations, the first- and second-order derivatives of the objective can be written in closed form, and because the second-order derivative is nonnegative over the feasible range, the objective is convex.
The optimality condition of (12) can now be described by using the first-order optimality condition as
(23) 
By a change of variables, the equality in (23) can be rewritten in the form $w e^{w} = x$. This yields the solution via the lower branch $W_{-1}$ of the Lambert $W$ function [21]. Now, substituting back, we finally have (13). This completes the proof. ∎
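The closed form above relies on the lower branch of the Lambert $W$ function, defined by $W(x)\,e^{W(x)} = x$. A quick numerical sanity check of that definition, using SciPy's `lambertw`; the argument value is an illustrative choice.

```python
import numpy as np
from scipy.special import lambertw

# Lambert W solves w * exp(w) = x; the lower branch W_{-1} is real-valued
# for x in [-1/e, 0), which is the branch used in the closed form above.
x = -0.05                      # illustrative argument in [-1/e, 0)
w = lambertw(x, k=-1).real     # lower branch W_{-1}(x)
print(w)                       # verify: w * exp(w) recovers x
assert abs(w * np.exp(w) - x) < 1e-9
```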
References
[1] Y. Liang, G. Pan, and Z. D. Bai, “Asymptotic performance of MMSE receivers for large systems using random matrix theory,” IEEE Transactions on Information Theory, vol. 53, no. 11, pp. 4173–4190, Nov 2007.
[2] R. Menon, P. Gerstoft, and W. S. Hodgkiss, “Asymptotic eigenvalue density of noise covariance matrices,” IEEE Transactions on Signal Processing, vol. 60, no. 7, pp. 3415–3424, July 2012.
[3] K. Elkhalil, A. Kammoun, T. Y. Al-Naffouri, and M. Alouini, “Measurement selection: A random matrix theory approach,” IEEE Transactions on Wireless Communications, vol. 17, no. 7, pp. 4899–4911, July 2018.
[4] W. Zhang, T. Kim, D. J. Love, and E. Perrins, “Leveraging the restricted isometry property: Improved low-rank subspace decomposition for hybrid millimeter-wave systems,” IEEE Transactions on Communications, vol. 66, no. 11, pp. 5814–5827, Nov 2018.
 [5] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, Dec 2007.
[6] A. K. Fletcher and S. Rangan, “Orthogonal matching pursuit: A Brownian motion analysis,” IEEE Transactions on Signal Processing, vol. 60, no. 3, pp. 1010–1021, March 2012.
[7] N. Lee, “MAP support detection for greedy sparse signal recovery algorithms in compressive sensing,” IEEE Transactions on Signal Processing, vol. 64, no. 19, pp. 4987–4999, Oct 2016.
 [8] D. L. Donoho and X. Huo, “Uncertainty principles and ideal atomic decomposition,” IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 2845–2862, Nov 2001.
[9] D. J. Love, R. W. Heath, and T. Strohmer, “Grassmannian beamforming for multiple-input multiple-output wireless systems,” IEEE Transactions on Information Theory, vol. 49, no. 10, pp. 2735–2747, Oct 2003.
 [10] K. K. Mukkavilli, A. Sabharwal, E. Erkip, and B. Aazhang, “On beamforming with finite rate feedback in multipleantenna systems,” IEEE Transactions on Information Theory, vol. 49, no. 10, pp. 2562–2579, Oct 2003.
[11] N. Jindal, “MIMO broadcast channels with finite-rate feedback,” IEEE Transactions on Information Theory, vol. 52, no. 11, pp. 5045–5060, Nov 2006.
[12] C. K. Au-Yeung and D. J. Love, “On the performance of random vector quantization limited feedback beamforming in a MISO system,” IEEE Transactions on Wireless Communications, vol. 6, no. 2, pp. 458–462, Feb 2007.
[13] Z. Ben-Haim, Y. C. Eldar, and M. Elad, “Coherence-based performance guarantees for estimating a sparse vector under random noise,” IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5030–5043, Oct 2010.
[14] A. Bracher, G. Pope, and C. Studer, “Coherence-based probabilistic recovery guarantees for sparsely corrupted signals,” in 2012 IEEE Information Theory Workshop, Sep. 2012, pp. 307–311.
 [15] E. Malhotra, K. Gurumoorthy, and A. Rajwade, “Stronger recovery guarantees for sparse signals exploiting coherence structure in dictionaries,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2017, pp. 6085–6089.
 [16] S. Hur, T. Kim, D. J. Love, J. V. Krogmeier, T. A. Thomas, and A. Ghosh, “Millimeter wave beamforming for wireless backhaul and access in small cell networks,” IEEE Transactions on Communications, vol. 61, no. 10, pp. 4391–4403, Oct 2013.
 [17] T. Kim and D. J. Love, “Virtual AoA and AoD estimation for sparse millimeter wave MIMO channels,” in 2015 IEEE 16th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), June 2015, pp. 146–150.
 [18] T. T. Cai and L. Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4680–4688, July 2011.
 [19] J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, Oct 2004.

[20] J. A. Tropp, “An introduction to matrix concentration inequalities,” Foundations and Trends in Machine Learning, vol. 8, no. 1–2, pp. 1–230, 2015.
[21] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th ed. Amsterdam: Elsevier/Academic Press, 2007.
 [22] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, “A simple proof of the restricted isometry property for random matrices,” Constructive Approximation, vol. 28, no. 3, pp. 253–263, Dec 2008.
 [23] M. A. Davenport and M. B. Wakin, “Analysis of orthogonal matching pursuit using the restricted isometry property,” IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4395–4401, Sept 2010.
[24] D. Achlioptas, “Database-friendly random projections: Johnson-Lindenstrauss with binary coins,” J. Comput. Syst. Sci., vol. 66, no. 4, pp. 671–687, June 2003.
[25] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. Baltimore, MD, USA: Johns Hopkins University Press, 1996.