Suppose we observe independent and identically distributed (i.i.d.) complex-valued $p$-variate random vectors $\mathbf{z}_1, \ldots, \mathbf{z}_n$ with mean $\boldsymbol\mu = \mathbb{E}[\mathbf{z}_i]$ and positive definite covariance matrix $\boldsymbol\Sigma = \mathbb{E}[(\mathbf{z}_i - \boldsymbol\mu)(\mathbf{z}_i - \boldsymbol\mu)^{\mathsf{H}}]$. The (unbiased) estimators of $\boldsymbol\Sigma$ and $\boldsymbol\mu$ are the sample covariance matrix (SCM) and the sample mean defined by
\[
\mathbf{S} = \frac{1}{n-1}\sum_{i=1}^{n} (\mathbf{z}_i - \bar{\mathbf{z}})(\mathbf{z}_i - \bar{\mathbf{z}})^{\mathsf{H}} \quad \text{and} \quad \bar{\mathbf{z}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{z}_i. \tag{1}
\]
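As a concrete illustration, the two estimators above can be computed in a few lines of NumPy. This is our own sketch (variable names are ours); the data here are simulated circular complex Gaussian vectors with identity covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 500

# n i.i.d. circular complex Gaussian p-vectors with identity covariance:
# real and imaginary parts are independent N(0, 1/2).
Z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)

z_bar = Z.mean(axis=0)                               # sample mean
X = Z - z_bar                                        # centered observations (rows)
S = np.einsum('ik,il->kl', X, X.conj()) / (n - 1)    # unbiased SCM

assert np.allclose(S, S.conj().T)                    # Hermitian by construction
```

The `einsum` call computes $\sum_i \mathbf{x}_i \mathbf{x}_i^{\mathsf{H}}$ with the observations stored as rows.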
The SCM is an integral part of many statistical signal processing methods such as adaptive filtering (Wiener and Kalman filters), spectral estimation, and array processing (MUSIC algorithm, Capon beamformer) [1, 2], and adaptive radar detectors [3, 4, 5].
In signal processing applications, a typical assumption is that the data follow a (circular) complex multivariate normal (MVN) distribution, denoted by $\mathbf{z} \sim \mathcal{CN}_p(\boldsymbol\mu, \boldsymbol\Sigma)$. A more general assumption is that the data follow a Complex Elliptically Symmetric (CES) [7, 8] distribution, a family that includes the MVN distribution as well as heavier-tailed distributions, such as the $t$-, $K$-, and the inverse Gaussian distribution, that are commonly used in radar and array signal processing applications as special cases [9, 10, 8, 11].
In this paper, we study the complex-valued (unbiased) SCM, for which we derive the variance-covariance matrix as well as the theoretical mean squared error (MSE) when sampling from CES distributions. We also provide a general expression for the variance-covariance matrix of any affine equivariant matrix-valued statistic (of which the SCM is a particular case). The results regarding the SCM extend the results of [12] to the complex-valued case; in [12], the variance-covariance matrix and the MSE of the SCM were derived for real-valued elliptical distributions.
The structure of the paper is as follows. Section II introduces CES distributions. In Section III, we derive the variance-covariance matrix of any affine equivariant matrix-valued statistic when sampling from a CES distribution. In Section IV, we derive the variance-covariance matrix of the SCM. All proofs are deferred to the appendix.
Notation: The identity matrix is denoted by $\mathbf{I}$ (or $\mathbf{I}_p$) and the vector of ones is denoted by $\mathbf{1}$. The Euclidean basis vector whose $i$th coordinate is one and other coordinates are zero is denoted by $\mathbf{e}_i$. The notations $(\cdot)^{*}$, $(\cdot)^{\top}$, and $(\cdot)^{\mathsf{H}}$ denote the complex conjugate, the transpose, and the conjugate transpose, respectively. The notations $\mathbb{H}_p$, $\mathbb{H}_p^{+}$, and $\mathbb{H}_p^{++}$ denote the sets of Hermitian, Hermitian positive semidefinite, and Hermitian positive definite $p$-dimensional matrices, respectively. For a random matrix $\mathbf{S}$, we use the shorthand notation $\operatorname{cov}(\mathbf{S}) = \operatorname{cov}(\operatorname{vec}(\mathbf{S}))$ and $\operatorname{pcov}(\mathbf{S}) = \operatorname{pcov}(\operatorname{vec}(\mathbf{S}))$ (see Section III for the definition of $\operatorname{pcov}$), where $\operatorname{vec}(\mathbf{S})$ is a vectorization of $\mathbf{S}$. When there is a possibility for confusion, we denote by $\operatorname{cov}_{\boldsymbol\mu,\boldsymbol\Sigma}$ or $\mathbb{E}_{\boldsymbol\mu,\boldsymbol\Sigma}$ the covariance and expectation of a sample from an elliptical distribution with mean vector $\boldsymbol\mu$ and covariance matrix $\boldsymbol\Sigma$. The commutation matrix $\mathbf{K}$ is defined by $\mathbf{K} = \sum_{i,j} (\mathbf{e}_i\mathbf{e}_j^{\top}) \otimes (\mathbf{e}_j\mathbf{e}_i^{\top})$, where $\otimes$ is the Kronecker product; it verifies $\mathbf{K}\operatorname{vec}(\mathbf{A}) = \operatorname{vec}(\mathbf{A}^{\top})$. We frequently use the identities $\operatorname{vec}(\mathbf{A}\mathbf{B}\mathbf{C}) = (\mathbf{C}^{\top} \otimes \mathbf{A})\operatorname{vec}(\mathbf{B})$, $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{A}\mathbf{C} \otimes \mathbf{B}\mathbf{D}$, and $\operatorname{tr}(\mathbf{A}^{\mathsf{H}}\mathbf{B}) = \operatorname{vec}(\mathbf{A})^{\mathsf{H}}\operatorname{vec}(\mathbf{B})$, where $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ are matrices of appropriate dimensions. The notation $=_d$ reads "has the same distribution as". The notation $\mathbf{u} \sim \mathcal{U}(\mathbb{CS}^{p-1})$ denotes the uniform distribution on the complex unit sphere $\mathbb{CS}^{p-1} = \{\mathbf{u} \in \mathbb{C}^p : \|\mathbf{u}\| = 1\}$. Lastly, $\jmath = \sqrt{-1}$ denotes the imaginary unit.
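The commutation matrix and the vec-identities above are easy to sanity-check numerically. The following sketch uses our own helper `commutation_matrix`, built directly from the Kronecker-product definition:

```python
import numpy as np

def commutation_matrix(p):
    """K such that K @ vec(A) = vec(A.T), built as sum_{i,j} E_ij (x) E_ji."""
    K = np.zeros((p * p, p * p))
    for i in range(p):
        for j in range(p):
            E_ij = np.zeros((p, p))
            E_ij[i, j] = 1.0
            K += np.kron(E_ij, E_ij.T)
    return K

vec = lambda A: A.reshape(-1, order="F")   # column-stacking vectorization

rng = np.random.default_rng(1)
p = 4
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
B = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
C = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))

K = commutation_matrix(p)
assert np.allclose(K @ vec(A), vec(A.T))                              # K vec(A) = vec(A^T)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))          # vec(ABC) identity
assert np.allclose(np.trace(A.conj().T @ B), vec(A).conj() @ vec(B))  # tr(A^H B) identity
```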
II Complex elliptically symmetric distributions
A random vector $\mathbf{z} \in \mathbb{C}^p$ is said to have a circular CES distribution if and only if it admits the stochastic representation
\[
\mathbf{z} =_d \boldsymbol\mu + R\,\boldsymbol\Sigma^{1/2}\mathbf{u}, \tag{2}
\]
where $\boldsymbol\mu \in \mathbb{C}^p$ is the mean vector, $\boldsymbol\Sigma^{1/2}$ is the unique Hermitian positive definite square-root of $\boldsymbol\Sigma \in \mathbb{H}_p^{++}$, $\mathbf{u} \sim \mathcal{U}(\mathbb{CS}^{p-1})$, and $R \geq 0$ is a positive random variable called the modular variate. Furthermore, $R$ and $\mathbf{u}$ are independent. If the cumulative distribution function of $R$ is absolutely continuous, the probability density function of $\mathbf{z}$ exists and is up to a constant of the form
\[
f(\mathbf{z}) \propto g\!\left((\mathbf{z} - \boldsymbol\mu)^{\mathsf{H}}\boldsymbol\Sigma^{-1}(\mathbf{z} - \boldsymbol\mu)\right), \tag{3}
\]
where $g$ is the density generator. We denote this case by $\mathbf{z} \sim \mathcal{CE}_p(\boldsymbol\mu, \boldsymbol\Sigma, g)$. We assume that $\mathbf{z}$ has finite fourth-order moments, and thus we can assume without any loss of generality that the scatter matrix $\boldsymbol\Sigma$ is equal to the covariance matrix $\operatorname{cov}(\mathbf{z})$. This implies that the modular variate verifies $\mathbb{E}[R^2] = p$. As a consequence of circularity, for $\mathbf{z} \sim \mathcal{CE}_p(\boldsymbol\mu, \boldsymbol\Sigma, g)$, we have $\operatorname{cov}(\mathbf{x}) = \operatorname{cov}(\mathbf{y})$ with $\mathbf{x} = \operatorname{Re}(\mathbf{z})$ and $\mathbf{y} = \operatorname{Im}(\mathbf{z})$. Consequently, $\operatorname{cov}(\mathbf{x}, \mathbf{y})$ is skew-symmetric. We refer the reader to [7, 8] for a comprehensive account on CES distributions.
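The stochastic representation gives a direct recipe for simulating CES samples: draw $\mathbf{u}$ uniformly on the complex unit sphere (a normalized complex Gaussian vector) together with an independent modular variate. The sketch below, our own code, does this for the complex Gaussian special case, using the standard fact that the squared modular variate is then Gamma($p$, 1) distributed:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 3, 200_000

# Target mean and a Hermitian positive definite covariance matrix.
mu = np.array([1.0, -1.0j, 0.5 + 0.5j])
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
Sigma = A @ A.conj().T / p
L = np.linalg.cholesky(Sigma)          # any square root of Sigma works for simulation

# u ~ U(CS^{p-1}): a normalized complex Gaussian vector is uniform on the sphere.
W = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
U = W / np.linalg.norm(W, axis=1, keepdims=True)

# Gaussian case: R^2 ~ Gamma(p, 1), so E[R^2] = p as required.
R = np.sqrt(rng.gamma(shape=p, scale=1.0, size=(n, 1)))

Z = mu + (R * U) @ L.T                 # rows are i.i.d. CN(mu, Sigma) samples

X = Z - Z.mean(axis=0)
S = np.einsum('ik,il->kl', X, X.conj()) / (n - 1)
assert np.allclose(S, Sigma, atol=0.1)  # sample covariance matches Sigma
```

Replacing the Gamma draw with another distribution for $R^2$ (rescaled so that $\mathbb{E}[R^2] = p$) yields heavier-tailed CES samples with the same covariance matrix.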
The elliptical kurtosis $\kappa$ of a CES distribution is defined as
\[
\kappa = \frac{\mathbb{E}[\mathcal{Q}^2]}{p(p+1)} - 1, \tag{4}
\]
where $\mathcal{Q} = R^2$. The elliptical kurtosis shares properties similar to the kurtosis of a circular complex random variable. Specifically, if $\mathbf{z} \sim \mathcal{CN}_p(\boldsymbol\mu, \boldsymbol\Sigma)$, then $\kappa = 0$. This follows by noticing that $\mathcal{Q} \sim \operatorname{Gam}(p, 1)$, and hence $\mathbb{E}[\mathcal{Q}^2] = p(p+1)$ and consequently $\kappa = 0$ in the Gaussian case. The kurtosis of a complex circularly symmetric random variable $z$ is defined as
\[
\operatorname{kurt}(z) = \frac{\mathbb{E}[|z|^4]}{\left(\mathbb{E}[|z|^2]\right)^2} - 2. \tag{5}
\]
Lastly, we define the scale and sphericity parameters
\[
\eta = \frac{\operatorname{tr}(\boldsymbol\Sigma)}{p} \quad \text{and} \quad \gamma = \frac{p\operatorname{tr}(\boldsymbol\Sigma^2)}{[\operatorname{tr}(\boldsymbol\Sigma)]^2}. \tag{6}
\]
The scale $\eta$ is equal to the mean of the eigenvalues of $\boldsymbol\Sigma$. The sphericity $\gamma \in [1, p]$ measures how close the covariance matrix is to a scaled identity matrix. The sphericity parameter gets the value $1$ for a scaled identity matrix and the value $p$ for a rank one matrix.
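For concreteness, the two parameters in (6) and their extreme cases can be checked with a short sketch (the helper names `scale` and `sphericity` are ours):

```python
import numpy as np

def scale(Sigma):
    """Scale eta = tr(Sigma)/p: the mean of the eigenvalues of Sigma."""
    p = Sigma.shape[0]
    return np.trace(Sigma).real / p

def sphericity(Sigma):
    """Sphericity gamma = p * tr(Sigma^2) / tr(Sigma)^2, with 1 <= gamma <= p."""
    p = Sigma.shape[0]
    tr = np.trace(Sigma).real
    return p * np.trace(Sigma @ Sigma).real / tr ** 2

p = 5
I = np.eye(p)
a = np.arange(1, p + 1, dtype=float).reshape(-1, 1)
rank_one = a @ a.T

print(sphericity(3.0 * I))   # -> 1.0 (scaled identity)
print(sphericity(rank_one))  # -> 5.0 (rank one matrix, equals p)
```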
III Radial distributions and covariance matrix estimates
In this section, we derive the variance-covariance matrix of any affine equivariant matrix-valued statistic.
We begin with some definitions. The covariance and the pseudo-covariance of complex random vectors $\mathbf{x}$ and $\mathbf{y}$ are defined as
\[
\operatorname{cov}(\mathbf{x}, \mathbf{y}) = \mathbb{E}\big[(\mathbf{x} - \mathbb{E}[\mathbf{x}])(\mathbf{y} - \mathbb{E}[\mathbf{y}])^{\mathsf{H}}\big] \quad \text{and} \quad \operatorname{pcov}(\mathbf{x}, \mathbf{y}) = \mathbb{E}\big[(\mathbf{x} - \mathbb{E}[\mathbf{x}])(\mathbf{y} - \mathbb{E}[\mathbf{y}])^{\top}\big],
\]
and together they provide a complete second-order description of the associations between $\mathbf{x}$ and $\mathbf{y}$. Then $\operatorname{cov}(\mathbf{x}) = \operatorname{cov}(\mathbf{x}, \mathbf{x})$ and $\operatorname{pcov}(\mathbf{x}) = \operatorname{pcov}(\mathbf{x}, \mathbf{x})$ are called the covariance matrix and the pseudo-covariance matrix of $\mathbf{x}$.
A random Hermitian $p \times p$ matrix $\mathbf{T}$ is said to have a radial distribution if $\mathbf{T} =_d \mathbf{U}\mathbf{T}\mathbf{U}^{\mathsf{H}}$ for all unitary matrices $\mathbf{U}$ (so $\mathbf{U}\mathbf{U}^{\mathsf{H}} = \mathbf{I}$). The following result extends the corresponding real-valued result to the complex-valued case.

Theorem 1. Let a random matrix $\mathbf{T} = (t_{ij})$ have a radial distribution with finite second-order moments. Then, there exist real-valued constants $\delta$, $\sigma_1$, and $\sigma_2$ with $\sigma_1 \geq 0$ and $\sigma_1 + p\sigma_2 \geq 0$ such that $\mathbb{E}[\mathbf{T}] = \delta\mathbf{I}$ with $\delta = \mathbb{E}[t_{ii}]$ and
\[
\operatorname{cov}(\mathbf{T}) = \sigma_1\mathbf{I}_{p^2} + \sigma_2\operatorname{vec}(\mathbf{I})\operatorname{vec}(\mathbf{I})^{\top}, \qquad
\operatorname{pcov}(\mathbf{T}) = \sigma_1\mathbf{K} + \sigma_2\operatorname{vec}(\mathbf{I})\operatorname{vec}(\mathbf{I})^{\top},
\]
where $\sigma_1 = \operatorname{var}(t_{ij})$ and $\sigma_2 = \operatorname{cov}(t_{ii}, t_{jj})$ for all $i \neq j$.
A statistic $\hat{\mathbf{T}} = \hat{\mathbf{T}}(\mathbf{Z})$ based on a $p \times n$ data matrix $\mathbf{Z} = (\mathbf{z}_1 \,\cdots\, \mathbf{z}_n)$ of $n$ observations on $p$ complex-valued variables is said to be affine equivariant if
\[
\hat{\mathbf{T}}(\mathbf{A}\mathbf{Z} + \mathbf{b}\mathbf{1}^{\top}) = \mathbf{A}\,\hat{\mathbf{T}}(\mathbf{Z})\,\mathbf{A}^{\mathsf{H}}
\]
holds for all nonsingular $\mathbf{A} \in \mathbb{C}^{p \times p}$ and all $\mathbf{b} \in \mathbb{C}^p$. Suppose that $\mathbf{Z}$ is a random sample from a CES distribution $\mathcal{CE}_p(\boldsymbol\mu, \boldsymbol\Sigma, g)$ and that $\hat{\mathbf{T}}$ is an affine equivariant statistic. Then $\hat{\mathbf{T}}$ has the stochastic decomposition
\[
\hat{\mathbf{T}}(\mathbf{Z}) =_d \boldsymbol\Sigma^{1/2}\,\hat{\mathbf{T}}(\mathbf{Z}_0)\,\boldsymbol\Sigma^{1/2},
\]
where $\hat{\mathbf{T}}(\mathbf{Z}_0)$ denotes the value of $\hat{\mathbf{T}}$ based on a random sample $\mathbf{Z}_0$ from the spherical distribution $\mathcal{CE}_p(\mathbf{0}, \mathbf{I}, g)$. Affine equivariance, together with the fact that $\mathbf{Z}_0 =_d \mathbf{U}\mathbf{Z}_0$ for all unitary matrices $\mathbf{U}$, indicates that $\hat{\mathbf{T}}(\mathbf{Z}_0)$ has a radial distribution. This leads to Theorem 2 stated below.
Theorem 2. Let $\hat{\mathbf{T}}$ be an affine equivariant statistic with finite second-order moments, based on a random sample $\mathbf{Z}$ from a CES distribution $\mathcal{CE}_p(\boldsymbol\mu, \boldsymbol\Sigma, g)$. Then $\mathbb{E}[\hat{\mathbf{T}}] = \delta\boldsymbol\Sigma$, and
\[
\operatorname{cov}(\hat{\mathbf{T}}) = \sigma_1\,(\boldsymbol\Sigma^{\top} \otimes \boldsymbol\Sigma) + \sigma_2\operatorname{vec}(\boldsymbol\Sigma)\operatorname{vec}(\boldsymbol\Sigma)^{\mathsf{H}} \tag{11}
\]
\[
\operatorname{pcov}(\hat{\mathbf{T}}) = \sigma_1\,\mathbf{K}(\boldsymbol\Sigma \otimes \boldsymbol\Sigma^{\top}) + \sigma_2\operatorname{vec}(\boldsymbol\Sigma)\operatorname{vec}(\boldsymbol\Sigma)^{\top} \tag{12}
\]
where $\delta$, $\sigma_1$, and $\sigma_2$ are the constants of Theorem 1 associated with the radial statistic $\hat{\mathbf{T}}(\mathbf{Z}_0)$.
There are many statistics to which this theorem applies. Naturally, a prominent example is the SCM, which we examine in detail in the next section. Other examples are the weighted sample covariance matrices
\[
\mathbf{S}_w = \frac{1}{n}\sum_{i=1}^{n} w(d_i)\,(\mathbf{z}_i - \bar{\mathbf{z}})(\mathbf{z}_i - \bar{\mathbf{z}})^{\mathsf{H}},
\]
where $w(\cdot)$ is a nonnegative weight function and $d_i = (\mathbf{z}_i - \bar{\mathbf{z}})^{\mathsf{H}}\mathbf{S}^{-1}(\mathbf{z}_i - \bar{\mathbf{z}})$. For instance, these include the complex $M$-estimates of scatter. In the special case when $w(d) = d$, we obtain the fourth moment matrix, which is used in the FOBI (fourth-order blind identification) method for blind source separation and in invariant coordinate selection (ICS).
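A generic weighted SCM of this form can be sketched as follows. The helper below is our own; the weight $w(d) = d$ yields the FOBI-type fourth moment matrix, while a constant weight recovers the (biased) SCM:

```python
import numpy as np

def weighted_scm(Z, w):
    """Weighted SCM: (1/n) * sum_i w(d_i) (z_i - z_bar)(z_i - z_bar)^H, where d_i is
    the squared Mahalanobis distance of z_i computed with the ordinary unbiased SCM."""
    n, p = Z.shape
    X = Z - Z.mean(axis=0)
    S = X.T @ X.conj() / (n - 1)                             # ordinary unbiased SCM
    d = np.einsum('ik,kl,il->i', X.conj(), np.linalg.inv(S), X).real
    return np.einsum('i,ik,il->kl', w(d), X, X.conj()) / n

rng = np.random.default_rng(3)
n, p = 1000, 3
Z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)

S_fobi = weighted_scm(Z, w=lambda d: d)      # w(d) = d: FOBI fourth moment matrix
S_plain = weighted_scm(Z, w=np.ones_like)    # constant weights: biased SCM

assert np.allclose(S_fobi, S_fobi.conj().T)  # Hermitian by construction
```

Robust $M$-estimators instead determine the weights implicitly through a fixed-point equation; the one-step weighting above is only meant to illustrate the general form.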
IV Variance-covariance of the SCM
We now use Theorem 2 to derive the covariance matrix and the pseudo-covariance matrix as well as the MSE of the SCM when sampling from a CES distribution. This result extends [12, Theorem 2 and Lemma 1] to the complex case.
Theorem 3. Let the SCM $\mathbf{S}$ be computed from an i.i.d. random sample $\mathbf{z}_1, \ldots, \mathbf{z}_n$ from a CES distribution with finite fourth-order moments and covariance matrix $\boldsymbol\Sigma$. Then, the covariance matrix and the pseudo-covariance matrix of $\mathbf{S}$ are as stated in (11) and (12) with
\[
\sigma_1 = \frac{1}{n-1} + \frac{\kappa}{n} \quad \text{and} \quad \sigma_2 = \frac{\kappa}{n}, \tag{13}
\]
where $\kappa$ is the elliptical kurtosis in (4). The MSE is given by
\[
\operatorname{MSE}(\mathbf{S}) = \mathbb{E}\big[\|\mathbf{S} - \boldsymbol\Sigma\|_{\mathrm{F}}^2\big] = \sigma_1\,[\operatorname{tr}(\boldsymbol\Sigma)]^2 + \sigma_2\operatorname{tr}(\boldsymbol\Sigma^2),
\]
and the normalized MSE is
\[
\operatorname{NMSE}(\mathbf{S}) = \frac{\mathbb{E}\big[\|\mathbf{S} - \boldsymbol\Sigma\|_{\mathrm{F}}^2\big]}{\|\boldsymbol\Sigma\|_{\mathrm{F}}^2} = \sigma_1\,\frac{p}{\gamma} + \sigma_2,
\]
where $\gamma$ is the sphericity parameter defined in (6).
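The theorem is straightforward to verify by Monte Carlo simulation in the Gaussian case, where $\kappa = 0$ and hence $\operatorname{NMSE}(\mathbf{S}) = p/((n-1)\gamma)$. The experiment below is our own sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, trials = 3, 20, 20_000

Sigma = np.diag([1.0, 2.0, 3.0]).astype(complex)
gamma = p * np.trace(Sigma @ Sigma).real / np.trace(Sigma).real ** 2  # sphericity (6)
nmse_theory = (1.0 / (n - 1)) * p / gamma     # kappa = 0 for complex Gaussian data

# All trials at once: each row of a slice is an i.i.d. CN(0, Sigma) observation.
L = np.linalg.cholesky(Sigma)
W = (rng.standard_normal((trials, n, p)) + 1j * rng.standard_normal((trials, n, p))) / np.sqrt(2)
Z = W @ L.T                                   # rows have covariance L L^H = Sigma

X = Z - Z.mean(axis=1, keepdims=True)
S = np.einsum('tik,til->tkl', X, X.conj()) / (n - 1)   # one SCM per trial

nmse_mc = (np.abs(S - Sigma) ** 2).sum(axis=(1, 2)).mean() / (np.abs(Sigma) ** 2).sum()
print(nmse_mc, nmse_theory)                   # the two values should agree closely
```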
Consider the simple shrinkage covariance matrix estimation problem,
\[
\beta_0 = \arg\min_{\beta \geq 0}\ \mathbb{E}\big[\|\beta\mathbf{S} - \boldsymbol\Sigma\|_{\mathrm{F}}^2\big].
\]
Since the problem is convex, we can find $\beta_0$ as the solution of $\frac{\partial}{\partial\beta}\mathbb{E}[\|\beta\mathbf{S} - \boldsymbol\Sigma\|_{\mathrm{F}}^2] = 0$, which yields
\[
\beta_0 = \frac{\|\boldsymbol\Sigma\|_{\mathrm{F}}^2}{\mathbb{E}[\|\mathbf{S}\|_{\mathrm{F}}^2]} = \frac{1}{1 + \operatorname{NMSE}(\mathbf{S})}, \tag{14}
\]
where we used $\mathbb{E}[\|\mathbf{S}\|_{\mathrm{F}}^2] = \operatorname{MSE}(\mathbf{S}) + \|\boldsymbol\Sigma\|_{\mathrm{F}}^2$. As can be noted from (14), the optimal scaling term is always smaller than 1 since $\operatorname{NMSE}(\mathbf{S}) > 0$. Note that $\beta_0$ is a function of $n$, $p$, $\gamma$, and $\kappa$ via (13). Next we show that the oracle estimator $\beta_0\mathbf{S}$ is uniformly more efficient than the SCM, i.e., $\operatorname{MSE}(\beta_0\mathbf{S}) < \operatorname{MSE}(\mathbf{S})$ for any $\boldsymbol\Sigma$. First note that
\[
\operatorname{MSE}(\beta_0\mathbf{S}) = \beta_0^2\,\mathbb{E}[\|\mathbf{S}\|_{\mathrm{F}}^2] - 2\beta_0\|\boldsymbol\Sigma\|_{\mathrm{F}}^2 + \|\boldsymbol\Sigma\|_{\mathrm{F}}^2 = (1 - \beta_0)\|\boldsymbol\Sigma\|_{\mathrm{F}}^2 = \beta_0\operatorname{MSE}(\mathbf{S}),
\]
where the last identity follows from the fact that $\mathbb{E}[\|\mathbf{S}\|_{\mathrm{F}}^2] = \|\boldsymbol\Sigma\|_{\mathrm{F}}^2/\beta_0$ due to (14). Since $\beta_0 < 1$ for all $n$, it follows that $\beta_0\mathbf{S}$ is more efficient than $\mathbf{S}$. Efficiency in the case when $\gamma$ and $\kappa$ are unknown, and hence $\beta_0$ needs to be estimated, remains (to the best of our knowledge) an open problem.
Consider the univariate case ($p = 1$), so that $\boldsymbol\Sigma$ is equal to the variance $\sigma^2$ of the random variable $z$ and the SCM reduces to the sample variance defined by $s^2 = \frac{1}{n-1}\sum_{i=1}^{n} |z_i - \bar{z}|^2$. In this case, $\gamma = 1$, and the optimal scaling constant in (14) becomes
\[
\beta_0 = \frac{n(n-1)}{n^2 + 2\kappa(n-1)}.
\]
A similar result was noticed for the real-valued case. If the data is from a complex normal distribution ($z_i \sim \mathcal{CN}(\mu, \sigma^2)$), then $\kappa = 0$, and hence $\beta_0 = (n-1)/n$ and $\beta_0 s^2 = \frac{1}{n}\sum_{i=1}^{n}|z_i - \bar{z}|^2$, which equals the maximum likelihood estimate (MLE) of $\sigma^2$. In the real case, the optimal scaling constant is $(n-1)/(n+1)$ for Gaussian samples. Note that when the kurtosis $\kappa$ is large and positive and $n$ is small, $\beta_0$ can be substantially less than one and the gain of using $\beta_0 s^2$ can be significant.
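The oracle shrinkage coefficient (14) is a simple closed-form function of the sample size, dimension, sphericity, and kurtosis, and can be illustrated numerically (a sketch; the function name `beta0` is ours):

```python
import numpy as np

def beta0(n, p, gamma, kappa):
    """Oracle shrinkage coefficient beta_0 = 1 / (1 + NMSE(S)), using the
    constants sigma_1 and sigma_2 of (13)."""
    sigma1 = 1.0 / (n - 1) + kappa / n
    sigma2 = kappa / n
    nmse = sigma1 * p / gamma + sigma2
    return 1.0 / (1.0 + nmse)

n = 10

# Univariate complex Gaussian case: beta_0 = (n - 1)/n, the MLE scaling.
assert np.isclose(beta0(n, p=1, gamma=1.0, kappa=0.0), (n - 1) / n)

# Heavier tails (kappa > 0) call for more aggressive shrinkage.
print(beta0(n, p=1, gamma=1.0, kappa=2.0))   # smaller than (n - 1)/n
```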
V Conclusions

We derived the variance-covariance matrix and the theoretical mean squared error of the sample covariance matrix when sampling from complex elliptical distributions with finite fourth-order moments. We also derived the form of the variance-covariance matrix of any affine equivariant matrix-valued statistic. We presented illustrative examples of the formulas in the context of shrinkage covariance matrix estimation.
A Proof of Theorem 1
The proof follows along the same lines as the corresponding proof for the real-valued case. Since $\mathbf{T}$ has a radial distribution, $\mathbb{E}[\mathbf{T}] = \delta\mathbf{I}$ is obvious. For any unitary matrix $\mathbf{U}$, we have
\[
\mathbf{M} = (\mathbf{U}^{*} \otimes \mathbf{U})\,\mathbf{M}\,(\mathbf{U}^{*} \otimes \mathbf{U})^{\mathsf{H}}, \quad \text{where } \mathbf{M} = \operatorname{cov}(\operatorname{vec}(\mathbf{T})).
\]
Let $\{\mathbf{E}_{ij} = \mathbf{e}_i\mathbf{e}_j^{\top} : i, j = 1, \ldots, p\}$ be a basis for the set of $p \times p$ matrices. Then
\[
\mathbf{M} = \sum_{i,j,k,l} m_{(i,j),(k,l)}\,\mathbf{E}_{ij} \otimes \mathbf{E}_{kl},
\]
where $m_{(i,j),(k,l)} = \operatorname{cov}(t_{ij}, t_{kl})$. By choosing $\mathbf{U} = \mathbf{I} + (\jmath - 1)\mathbf{e}_i\mathbf{e}_i^{\top}$ (where $\jmath$ is the imaginary unit) and $\mathbf{U} = \mathbf{I} - 2\mathbf{e}_i\mathbf{e}_i^{\top}$ for some $i$, we must have $m_{(i,j),(k,l)} = 0$ unless $(i,j) = (k,l)$, $i = j$ and $k = l$, or $(i,j) = (l,k)$. Denote $\sigma_1 = \operatorname{var}(t_{12})$, $\sigma_2 = \operatorname{cov}(t_{11}, t_{22})$, and $a = \operatorname{var}(t_{11})$. Then
\[
\mathbf{M} = \sigma_1\mathbf{I} + \sigma_2\operatorname{vec}(\mathbf{I})\operatorname{vec}(\mathbf{I})^{\top} + (a - \sigma_1 - \sigma_2)\sum_{i}\mathbf{E}_{ii} \otimes \mathbf{E}_{ii}.
\]
Note that $\sigma_1 \geq 0$ and $a \geq 0$. Furthermore, invariance of $\mathbf{M}$ under unitary matrices that mix two coordinates implies
\[
a = \sigma_1 + \sigma_2.
\]
From the last identity, we must have $a - \sigma_1 - \sigma_2 = 0$, and the stated form of $\operatorname{cov}(\operatorname{vec}(\mathbf{T}))$ follows.
Regarding the pseudo-covariance, for any unitary $\mathbf{U}$,
\[
\mathbf{P} = (\mathbf{U}^{*} \otimes \mathbf{U})\,\mathbf{P}\,(\mathbf{U}^{*} \otimes \mathbf{U})^{\top}, \quad \text{where } \mathbf{P} = \operatorname{pcov}(\operatorname{vec}(\mathbf{T})).
\]
By choosing $\mathbf{U} = \mathbf{I} + (\jmath - 1)\mathbf{e}_i\mathbf{e}_i^{\top}$ and $\mathbf{U} = \mathbf{I} - 2\mathbf{e}_i\mathbf{e}_i^{\top}$ for some $i$, we must have $p_{(i,j),(k,l)} = 0$ except when $(i,j) = (l,k)$, $i = j$ and $k = l$, or $(i,j) = (k,l)$. Let $\sigma_1' = \operatorname{pcov}(t_{12}, t_{21})$ and $\sigma_2' = \operatorname{cov}(t_{11}, t_{22})$. Then,
\[
\mathbf{P} = \sigma_1'\mathbf{K} + \sigma_2'\operatorname{vec}(\mathbf{I})\operatorname{vec}(\mathbf{I})^{\top}
\]
by similar arguments as with $\mathbf{M}$. Then note that, since $\mathbf{T}$ is Hermitian, $t_{21} = t_{12}^{*}$, so that $\sigma_1' = \operatorname{var}(t_{12}) = \sigma_1$ and $\sigma_2' = \sigma_2$. Lastly, since $\mathbf{M}$ is positive semidefinite and $\operatorname{vec}(\mathbf{I})^{\top}\mathbf{M}\operatorname{vec}(\mathbf{I}) = p(\sigma_1 + p\sigma_2)$, we must have $\sigma_1 + p\sigma_2 \geq 0$.
B Proof of Theorem 2

The result follows by applying Theorem 1 to the radial statistic $\hat{\mathbf{T}}(\mathbf{Z}_0)$ in the stochastic decomposition $\hat{\mathbf{T}}(\mathbf{Z}) =_d \boldsymbol\Sigma^{1/2}\hat{\mathbf{T}}(\mathbf{Z}_0)\boldsymbol\Sigma^{1/2}$ and then mapping the resulting covariance and pseudo-covariance through the identity $\operatorname{vec}(\mathbf{A}\mathbf{B}\mathbf{C}) = (\mathbf{C}^{\top} \otimes \mathbf{A})\operatorname{vec}(\mathbf{B})$.
C Proof of Theorem 3

The proof here is similar to the proof of [12, Theorem 2] that was derived for real-valued observations. First we recall that the SCM has the representation
\[
\mathbf{S} = \frac{1}{n-1}\,\mathbf{Z}\mathbf{C}\mathbf{Z}^{\mathsf{H}},
\]
where $\mathbf{Z} = (\mathbf{z}_1 \,\cdots\, \mathbf{z}_n)$ and $\mathbf{C} = \mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^{\top}$ is the centering matrix.

Write $\mathbf{x}_i = \mathbf{z}_i - \boldsymbol\mu$ for $i = 1, \ldots, n$ and $\bar{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i$. Then note that $\mathbf{z}_i - \bar{\mathbf{z}} = \mathbf{x}_i - \bar{\mathbf{x}}$. Hence, we may assume without loss of generality that $\boldsymbol\mu = \mathbf{0}$. Then note that the computation of $\operatorname{cov}(\operatorname{vec}(\mathbf{S}))$ and $\operatorname{pcov}(\operatorname{vec}(\mathbf{S}))$ reduces to computing fourth-order moments of the terms $\mathbf{x}_i\mathbf{x}_j^{\mathsf{H}}$.

Recall that $\mathbf{x}_i$ has a stochastic representation $\mathbf{x}_i =_d R_i\,\boldsymbol\Sigma^{1/2}\mathbf{u}_i$, where $R_i$ is independent of $\mathbf{u}_i \sim \mathcal{U}(\mathbb{CS}^{p-1})$, and similarly for $\mathbf{x}_j$. The $l$th element of the $k$th block (i.e., the $(k,l)$th element) of the matrix $\mathbb{E}[\operatorname{vec}(\mathbf{u}\mathbf{u}^{\mathsf{H}})\operatorname{vec}(\mathbf{u}\mathbf{u}^{\mathsf{H}})^{\mathsf{H}}]$ is then a fourth-order moment of the elements of $\mathbf{u}$, where we used that $R$ and $\mathbf{u}$ are independent and that $\mathbb{E}[R^2] = p$. Then note that, for $k \neq l$,
\[
\mathbb{E}[|u_k|^4] = \frac{2}{p(p+1)} \quad \text{and} \quad \mathbb{E}[|u_k|^2 |u_l|^2] = \frac{1}{p(p+1)},
\]
while all other moments up to fourth-order vanish. This, and the fact that $\mathbb{E}[R^4] = (1+\kappa)\,p(p+1)$ due to (4), allows us to conclude that the only non-zero elements of $\operatorname{cov}(\operatorname{vec}(\mathbf{S}))$ and $\operatorname{pcov}(\operatorname{vec}(\mathbf{S}))$ are those stated in (11) and (12) with the constants $\sigma_1$ and $\sigma_2$ given in (13), where we also used $\mathbb{E}[|u_k|^2] = 1/p$.