1 Introduction
Principal component analysis (PCA) is an important dimension reduction tool which finds a low-dimensional subspace maximizing the explained variation in the data. In high-dimensional settings, a collection of observations can be thought of as linear combinations of a small number of source signals plus noise, and PCA can be used to approximately recover the unknown signals. That is, each observation vector $x_i \in \mathbb{R}^d$ can be modeled by

$$x_i = \mu + A s_i + \varepsilon_i, \qquad i = 1, \ldots, n, \tag{1.1}$$

where $\mu$ is a mean vector, $A$ is a $d \times m$ matrix representing the $m$ source signals in its columns, $s_i$ is an $m$-dimensional random vector, and $\varepsilon_i$ is a $d$-dimensional vector of noise. Recent work based on this model (1.1) includes Kritchman and Nadler (2008); Passemier and Yao (2012); Ma et al. (2013); Shabalin and Nobel (2013); Choi et al. (2014); Yao et al. (2015); Fan and Wang (2015). For instance, in a signal detection model, $x_i$ can be a vector of the recorded signals with noise at a certain time, where the columns of $A$ are unknown source signals and the $s_i$'s are emission levels of these signals; see e.g. Section 11.6 of Yao et al. (2015). In econometrics, $x_i$ can be the returns of $d$ stocks at a certain time, where $A$ is a matrix of latent common factors and the $s_i$'s are unobservable random factors (Onatski, 2012; Ma et al., 2013; Fan and Wang, 2015). From now on, without loss of generality, we assume that the mean vector is zero, i.e. $\mu = 0$, by subtracting the sample mean.
In many related works, the $d$-dimensional noise vector $\varepsilon_i$ in (1.1) is modeled by $\varepsilon_i = \sigma z_i$, where $\sigma > 0$ is the noise level and $z_i$ is a $d$-dimensional vector of i.i.d. white noise. Also, it is often assumed that $s_i$ and $z_i$ are independent. Then, taking $\mathrm{Cov}(s_i) = I_m$, the covariance matrix of $x_i$ becomes

$$\Sigma = A A^{T} + \sigma^2 I_d.$$

Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d$ denote the eigenvalues of $A A^{T}$. Since the rank of $A A^{T}$ is at most $m$, the eigenvalues of $\Sigma$ are

$$\lambda_j + \sigma^2 \ (1 \le j \le m), \qquad \sigma^2 \ (m + 1 \le j \le d), \tag{1.2}$$

where $\lambda_{m+1} = \cdots = \lambda_d = 0$. In this paper, we denote the $m$ large eigenvalues in (1.2) by $\alpha_1, \ldots, \alpha_m$ for convenience, i.e.

$$\alpha_j = \lambda_j + \sigma^2, \qquad j = 1, \ldots, m, \tag{1.3}$$

with $\alpha_1 \ge \cdots \ge \alpha_m > \sigma^2$. This model is called the spiked covariance model (Johnstone, 2001). The eigenvalues $\alpha_1, \ldots, \alpha_m$ correspond to spikes, and they are mostly assumed to be much larger than the nonspike eigenvalues. The directions corresponding to spikes can be considered as important underlying structure of the data set, whereas the other directions corresponding to nonspikes can be considered as noise.
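As a quick illustration of the spiked covariance model, the following sketch (a hypothetical example; the values of $d$, $n$, $m$ and the spike sizes are made up) simulates data from model (1.1) with $\mu = 0$ and checks that the first $m$ sample eigenvalues separate from the noise bulk:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m, sigma2 = 200, 500, 3, 1.0     # dimension, sample size, spikes, noise level

lam = np.array([20.0, 10.0, 5.0])      # nonzero eigenvalues of A A^T (made up)
Q, _ = np.linalg.qr(rng.standard_normal((d, m)))
A = Q * np.sqrt(lam)                   # so A A^T has eigenvalues lam

S = rng.standard_normal((m, n))        # source signals s_i
Z = rng.standard_normal((d, n))        # white noise z_i
X = A @ S + np.sqrt(sigma2) * Z        # model (1.1) with mu = 0

evals = np.linalg.eigvalsh(X @ X.T / n)[::-1]
print(evals[:5])  # the first m eigenvalues stand out above the noise bulk
```

Here the three spiked sample eigenvalues sit far above the bulk of noise eigenvalues, which cluster below the Marcenko-Pastur edge.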
One challenging problem in PCA is how to determine the number of spikes $m$. The scree plot is one of the popular ways that have been proposed. Based on the plot of ordered eigenvalues, one looks for an elbow that distinguishes the eigenvalues that are remarkably large from the relatively small eigenvalues. Although this method is very simple, looking for an elbow is often subjective and it may be unclear whether the elbow really gives a meaningful separation. Moreover, when one has a large number of datasets to analyze, it is hard to look at all the scree plots corresponding to each dataset. For example, when applying PCA to RNA-seq data for many separate genes, which is a main interest of this paper, visual inspection would entail looking at more than 20,000 scree plots and deciding on elbows. This is obviously intractable.
Methods for determining the number of spikes in PCA have been developed in different fields, including statistics (Besse and de Falguerolles, 1993; Krzanowski and Kline, 1995; Choi et al., 2014), signal processing (Wax and Kailath, 1985; Kritchman and Nadler, 2008, 2009), and econometrics (Harding et al., 2007; Passemier and Yao, 2012, 2014). For excellent background, see Chapter 6 of Jolliffe (2002). As a related problem, low-rank matrix reconstruction based on the SVD has been studied in Wongsawat et al. (2005); Shabalin and Nobel (2013); Nadakuditi (2014). Most of the previous works assumed that the population eigenvalues corresponding to nonspikes are all equal, as in (1.3). Under (1.3), the empirical distribution of the nonspike sample eigenvalues, that is, the sample eigenvalues except for the first few large ones, should be close to the classical Marcenko-Pastur (MP) distribution, which is the limiting spectral distribution (LSD) of the sample covariance matrix when $d$ and $n$ both tend to infinity such that $d/n \to \gamma$. However, as shown in Figure 1, we have observed that nonspike sample eigenvalues from RNA-seq data do not follow the classical MP distribution; rather, they show much heavier tails and right skewness. Figure 1 compares the empirical spectral distribution (ESD) from the RNA-seq data at the gene CDKN2A, after excluding a few extremely large eigenvalues, with the theoretical MP distribution, where $d$ and $n$ are the dimension and sample size of the data. The noise variance $\sigma^2$ is estimated using the remaining sample eigenvalues. For better comparison, the ESD on the same x-axis as the MP distribution is also given in a small box. Clearly, the ESD from the data is extremely right-skewed, and this remains true when we remove more large eigenvalues. As shown in Section 4, due to such extreme positive skewness in the spectral distribution, most existing methods choose hundreds of spikes in RNA-seq data or often fail to determine the number of spikes, declaring all PCs to be spikes. In Section 4, we show that the white noise model assumed in the existing methods is not appropriate for the RNA-seq data, and we propose a new noise model which successfully models the ESD of the data, providing a better fit by capturing the extreme right-skewness. Furthermore, we propose a new algorithm based on the proposed noise model, which provides a reasonable number of principal components (PCs) and thus enables us to use the chosen PCs in downstream statistical analyses.

2 Known results on the generalized spike covariance model
In this section, we review some important results of random matrix theory. Let $z_{ij}$, $1 \le i \le d$, $1 \le j \le n$, be i.i.d. random variables satisfying $\mathrm{E}(z_{ij}) = 0$ and $\mathrm{E}|z_{ij}|^2 = 1$, and let $(T_n)$ be a sequence of $d \times d$ nonnegative definite Hermitian matrices. Write $x_j = T_n^{1/2} z_j$ where $z_j = (z_{1j}, \ldots, z_{dj})^{T}$. Then, the sample covariance matrix of $x_1, \ldots, x_n$ can be expressed as $S_n = \frac{1}{n} \sum_{j=1}^{n} x_j x_j^{*}$. Let $(H_n)$ be the sequence of the empirical spectral distributions of $(T_n)$ with the dimension to sample size ratio $\gamma_n = d/n$, and assume that $H_n$ weakly converges to a nonrandom probability distribution $H$ on $[0, \infty)$ as $n$ tends to infinity, satisfying $\gamma_n \to \gamma$ where $\gamma$ is a positive constant. The limit distribution $H$ is referred to as the population spectral distribution (PSD). For the simplest case where $T_n = \sigma^2 I_d$, the ESDs of $T_n$ are $H_n = \delta_{\sigma^2}$ for all $n$ and thus the PSD can be obtained as $H = \delta_{\sigma^2}$. The classical Marcenko-Pastur law says that the ESD of $S_n$ converges to a nonrandom limiting spectral distribution (LSD) in this case.

A generalized version of the classical Marcenko-Pastur law has been developed (Silverstein, 1995) for the case where the PSD $H$ is an arbitrary probability measure. The PSD and the LSD are linked by the following inverse map of the companion Stieltjes transform $\underline{m}$ of the LSD:

$$z(\underline{m}) = -\frac{1}{\underline{m}} + \gamma \int \frac{t}{1 + t \underline{m}} \, dH(t),$$

which is called the Silverstein equation. Let $F_{\gamma, H}$ be the distribution whose Stieltjes transform satisfies this equation. Throughout the paper, we call $F_{\gamma, H}$ the Marcenko-Pastur (MP) distribution with indexes $(\gamma, H)$. A lot of theory and applications in spectral analysis have been established from this equation. One crucial result is Lemma 2.1, which gives the analytical relationship between the supports of the PSD and the MP distribution (Silverstein and Choi, 1995). Define
$$\psi(\alpha) = \psi_{\gamma, H}(\alpha) = \alpha + \gamma \alpha \int \frac{t}{\alpha - t} \, dH(t) \tag{2.1}$$

for $\alpha \notin \Gamma_H$, where $\Gamma_H$ denotes the support of $H$.
Lemma 2.1.

(Silverstein and Choi, 1995) If $x \notin \operatorname{supp}(F_{\gamma, H})$, then $\underline{m}(x) \neq 0$ and $\alpha = -1/\underline{m}(x)$ satisfies

(a) $\alpha \notin \Gamma_H$ and $\alpha \neq 0$ (so that $\psi(\alpha)$ is well-defined);

(b) $\psi'(\alpha) > 0$.

Conversely, if $\alpha$ satisfies (a)-(b), then $x = \psi(\alpha) \notin \operatorname{supp}(F_{\gamma, H})$.
Roughly speaking, the lemma says that, given the PSD, one can characterize the support of the MP distribution, as illustrated in Figure 2. In Figure 2, the curve indicates $\psi(\alpha)$ for a given $\gamma$ and PSD $H$, the regions indicated in yellow are the values of $\alpha$ satisfying (a) and (b) in Lemma 2.1 together with the corresponding $x = \psi(\alpha)$, and, according to Lemma 2.1, the support of the MP distribution is indicated in blue on the y-axis.
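For the point-mass PSD $H = \delta_{\sigma^2}$, (2.1) reduces to $\psi(\alpha) = \alpha + \gamma \alpha \sigma^2 / (\alpha - \sigma^2)$, and the characterization in Lemma 2.1 recovers the familiar bulk edge $\sigma^2 (1 + \sqrt{\gamma})^2$: the upper boundary of the support is $\psi$ evaluated at the point where $\psi'$ vanishes. A small numerical check of this special case:

```python
import numpy as np

gamma, sigma2 = 0.5, 1.0

# psi from (2.1) specialized to the point-mass PSD H = delta_{sigma^2}
def psi(a):
    return a + gamma * a * sigma2 / (a - sigma2)

# psi'(a) = 1 - gamma * sigma2^2 / (a - sigma2)^2 vanishes at a_crit,
# and psi(a_crit) is the upper edge of the MP support
a_crit = sigma2 * (1 + np.sqrt(gamma))
upper_edge = psi(a_crit)
print(upper_edge)  # equals sigma2 * (1 + sqrt(gamma))^2
```

For $\alpha > \sigma^2 (1 + \sqrt{\gamma})$ we have $\psi'(\alpha) > 0$, so by Lemma 2.1 the values $\psi(\alpha)$ lie outside the support, and the edge is attained at the critical point.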
Baik and Silverstein (2006) investigated the asymptotic behavior of the sample eigenvalues corresponding to the spikes under Johnstone's spike model. They showed that sample spike eigenvalues converge almost surely to certain functions of the corresponding population spike eigenvalues, with different functions for different types of spikes. Bai and Yao (2012) extended these results to a generalized spike model where the spectrum of a base set follows an arbitrary distribution. A generalized spike model has a population covariance matrix of the block diagonal form

$$\Sigma = \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix},$$

where $\Sigma_1$ is an $m \times m$ matrix whose eigenvalues are the population spikes $\alpha_1, \ldots, \alpha_K$ of respective multiplicities $n_1, \ldots, n_K$ with $n_1 + \cdots + n_K = m$, and $\Sigma_2$ is a $(d - m) \times (d - m)$ matrix whose eigenvalues are the nonspikes. Under certain conditions on the true eigenvalues, they showed the almost sure convergence of a sample spike eigenvalue to some function of the corresponding population eigenvalue as $d$ and $n$ both increase such that $d/n \to \gamma$. Our method is motivated by a generalized spike model, which allows flexibility in the nonspike population eigenvalues.
3 Methodology
In this section, we propose an algorithm to choose the number of spikes for a generalized spike model where the PSD is not constrained to be the point mass at $\sigma^2$, i.e. $H = \delta_{\sigma^2}$. The algorithm will first be described under the scenario where the PSD is known, which is unrealistic in most cases but makes the basic idea easy to understand. Next, we will introduce the main algorithm, which is applicable when the PSD is unknown. Also, we propose a graphical tool to assess the distributional assumption for the PSD. In this section, we denote the PSD by $H_\theta$ to emphasize the parameters $\theta$ that identify the distribution, and the eigenvalues of $S_n$ in decreasing order by $\hat{\lambda}_1 \ge \cdots \ge \hat{\lambda}_d$.
3.1 Estimation when the PSD is known
Let us first consider the case when the PSD $H_\theta$ is known. Since we know the support of the corresponding LSD according to Lemma 2.1, a straightforward way to estimate the number of underlying spikes may be counting the number of eigenvalues above the upper boundary of the support. However, this procedure may slightly overestimate the true number of spikes because it ignores the variation in the largest noise eigenvalue. For the point-mass PSD $H = \delta_{\sigma^2}$, for example, the resulting upper boundary of the LSD is $\sigma^2 (1 + \sqrt{\gamma})^2$, whereas the largest eigenvalue is known to converge in distribution to the Tracy-Widom law whose support extends above this boundary, giving a nonignorable probability of the largest eigenvalue being greater than $\sigma^2 (1 + \sqrt{\gamma})^2$ (Johnstone, 2001; Ma, 2012). To reflect such variation of the largest noise eigenvalue, in this particular example, the approximate Tracy-Widom quantile can replace the upper boundary, and this provides a more precise estimation (Kritchman and Nadler, 2008). However, except for this simplest case ($H = \delta_{\sigma^2}$), we do not know the distribution of the largest noise eigenvalue. Thus, we approximate the distribution by simulation with a sufficiently large number of independent replications, and then we take a certain quantile (e.g. the 99th percentile) as a threshold above which sample eigenvalues are considered as spikes. The simulation procedure to get a level-$\alpha$ threshold, $t_\alpha$, is described in Algorithm 1.
Algorithm 1.

1. For $b = 1, \ldots, B$ with a sufficiently large number $B$:

(a) Generate $d$ random variables from $H_\theta$ and form the diagonal matrix $T$ with these values on the diagonal;

(b) Generate a $d \times n$ matrix $Z$ whose entries are independent standard normal variables;

(c) Compute the largest eigenvalue of $\frac{1}{n} T^{1/2} Z Z^{T} T^{1/2}$.

2. Obtain the level-$\alpha$ quantile $t_\alpha$ based on the set of $B$ simulated largest eigenvalues.
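Algorithm 1 can be sketched in a few lines of Python (the function name `simulate_threshold` and the Gaussian choice for the white-noise entries are our assumptions; any i.i.d. standardized entries would do):

```python
import numpy as np

def simulate_threshold(sample_psd, d, n, B=200, level=0.99, rng=None):
    """Algorithm 1 (sketch): simulate the largest noise eigenvalue B times
    under the PSD sampler `sample_psd` and return its `level` quantile."""
    rng = rng or np.random.default_rng(0)
    top = np.empty(B)
    for b in range(B):
        t = sample_psd(d, rng)            # d population eigenvalues drawn from H_theta
        Z = rng.standard_normal((d, n))   # white-noise data matrix
        S = np.sqrt(t)[:, None] * Z       # T^{1/2} Z
        top[b] = np.linalg.eigvalsh(S @ S.T / n)[-1]
    return np.quantile(top, level)

# Point-mass PSD example (all population eigenvalues equal to 1): the
# threshold lands slightly above the bulk edge (1 + sqrt(d/n))^2.
t99 = simulate_threshold(lambda d, rng: np.ones(d), d=100, n=200, B=100)
print(t99)
```

Any parametric PSD can be plugged in through `sample_psd`, e.g. a truncated Gamma sampler for the model proposed in Section 4.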
3.2 Estimation when the PSD is unknown
In practice, the PSD is unknown and must be estimated as well. Here we consider the scenario in which the type of distribution is known but its parameters are not, e.g. the case where the PSD is $H_\theta$ with an unknown parameter $\theta$. The main algorithm is based on a sequence of nested hypothesis tests

$$H_0^{(k)}: m = k - 1 \quad \text{versus} \quad H_1^{(k)}: m \ge k,$$

where $m$ is the true number of spikes and $k = 1, 2, \ldots$. At the $k$th stage, we estimate the PSD parameters assuming $\hat{\lambda}_k, \ldots, \hat{\lambda}_d$ to be nonspikes and test whether or not $\hat{\lambda}_k$ is from a spike, based on the approximate distribution of the largest noise eigenvalue obtained by Algorithm 1 with the estimated parameters. If the null is rejected, i.e. there are at least $k$ spikes, then we proceed to the $(k+1)$th hypothesis test after excluding $\hat{\lambda}_k$. Otherwise, we stop the procedure and conclude that there are at most $k - 1$ spikes. Note that when we consider a point-mass PSD and a theoretically obtained threshold from the Tracy-Widom law instead of the simulated one, this procedure plays the same role as the method proposed by Kritchman and Nadler (2008) with a carefully estimated noise variance.
Estimation of unknown parameters of the PSD. An interesting topic in random matrix theory is the estimation of the PSD parameters (Li et al., 2013; Bai et al., 2010). Although we employ Bai's method of moments in this paper because it is intuitive and easy to implement, other PSD parameter estimation methods can be used as well.
Bai et al. (2010) proposed a method to estimate the unknown parameters based on the relationship between the moments $\alpha_k = \int t^k \, dH(t)$ of the PSD and the moments $\beta_k = \int x^k \, dF_{\gamma, H}(x)$ of the MP distribution $F_{\gamma, H}$. The following lemma (Nica and Speicher, 2006) describes the relationship.

Lemma 3.1.

(Nica and Speicher, 2006) The moments of the LSD are linked to the moments of the PSD by

$$\beta_k = \sum \gamma^{i_1 + i_2 + \cdots + i_k - 1} \, \binom{k}{i_1, i_2, \ldots, i_k} \, \alpha_1^{i_1} \alpha_2^{i_2} \cdots \alpha_k^{i_k}, \tag{3.1}$$

where the sum runs over the following partitions of $k$:

$$i_1 + 2 i_2 + \cdots + k i_k = k, \qquad i_j \ge 0,$$

and $\binom{k}{i_1, i_2, \ldots, i_k}$ is the multinomial coefficient

$$\binom{k}{i_1, i_2, \ldots, i_k} = \frac{k!}{i_1! \, i_2! \cdots i_k! \, \big(k + 1 - (i_1 + i_2 + \cdots + i_k)\big)!}.$$

We denote the estimate of $\theta$ obtained from the method of moments by $\hat{\theta}$. Note that the proposed moment estimator has been proved to be strongly consistent and asymptotically normal (Bai et al., 2010).
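The low-order cases of the moment relation are $\beta_1 = \alpha_1$ and $\beta_2 = \alpha_2 + \gamma \alpha_1^2$ (with $\beta$'s the LSD moments and $\alpha$'s the PSD moments), which can be checked numerically; the Gamma PSD and the sizes below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 400, 800
gamma = d / n

# Assumed PSD: Gamma(shape=2, scale=0.5), so alpha_1 = 1.0 and alpha_2 = 1.5
t = rng.gamma(2.0, 0.5, size=d)
a1, a2 = t.mean(), (t**2).mean()       # moments of the (empirical) PSD

Z = rng.standard_normal((d, n))
S = np.sqrt(t)[:, None] * Z            # data with population covariance diag(t)
ev = np.linalg.eigvalsh(S @ S.T / n)
b1, b2 = ev.mean(), (ev**2).mean()     # moments of the ESD

print(b1, a1)                          # beta_1 close to alpha_1
print(b2, a2 + gamma * a1**2)          # beta_2 close to alpha_2 + gamma * alpha_1^2
```

The method of moments inverts this map: compute the sample spectral moments, then solve (3.1) for the PSD parameters.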
We employ this method of moments to estimate the PSD parameters, and the main algorithm to estimate the number of spikes is described as follows:

Main algorithm.

From $k = 1$, iterate the following procedure until it stops.

1. Based on $\hat{\lambda}_k, \ldots, \hat{\lambda}_d$, obtain the PSD parameter estimates $\hat{\theta}$.

2. Applying Algorithm 1 with the estimated $H_{\hat{\theta}}$, obtain the quantile $t_\alpha$ with a predetermined level $\alpha$.

3. If $\hat{\lambda}_k > t_\alpha$, reject $H_0^{(k)}$ and go to step 1, replacing $k$ by $k + 1$. Otherwise, do not reject $H_0^{(k)}$; the estimate of the number of spikes is $\hat{m} = k - 1$, and stop.
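The loop above can be sketched end-to-end for the simplest point-mass PSD, where step 1 reduces to estimating $\sigma^2$ by the mean of the remaining eigenvalues (a crude stand-in for the method of moments; `estimate_num_spikes` and all tuning values below are our own illustrative choices):

```python
import numpy as np

def estimate_num_spikes(evals, n, B=100, level=0.99, max_k=10, rng=None):
    """Sketch of the main algorithm for the point-mass PSD H = delta_{sigma^2}:
    at stage k, estimate sigma^2 from the eigenvalues below lambda_k and test
    lambda_k against a simulated threshold as in Algorithm 1."""
    rng = rng or np.random.default_rng(0)
    d = len(evals)
    ev = np.sort(evals)[::-1]
    for k in range(max_k):
        sigma2_hat = ev[k:].mean()              # crude stand-in for step 1
        top = np.empty(B)
        for b in range(B):                      # Algorithm 1 with H = delta_{sigma2_hat}
            Z = rng.standard_normal((d, n))
            top[b] = np.linalg.eigvalsh(sigma2_hat * (Z @ Z.T) / n)[-1]
        if ev[k] <= np.quantile(top, level):    # cannot reject H_0^{(k+1)}: stop
            return k
    return max_k

# Toy data: three spiked directions on top of unit-variance white noise.
rng = np.random.default_rng(2)
d, n = 100, 200
X = rng.standard_normal((d, n))
X[0] *= 4.0; X[1] *= 3.0; X[2] *= 2.5
m_hat = estimate_num_spikes(np.linalg.eigvalsh(X @ X.T / n), n)
print(m_hat)  # true m is 3; the crude sigma^2 estimate may add a spike or two
```

In practice the naive variance estimate is biased downward by the excluded eigenvalues, which is why a careful PSD parameter estimate (the method of moments above, or the bias-corrected estimate of Kritchman and Nadler) matters.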
3.3 PSD diagnostics
The proposed method for determining the number of spikes depends on the correct specification of the underlying PSD; under a misspecified PSD, the estimated number would be unreliable. Here, we develop a graphical diagnostic tool to check whether the assumed PSD is correctly specified.
Let $\underline{m}$ be the true companion Stieltjes transform as defined in Section 2, and let the sample companion Stieltjes transform, denoted by $\underline{m}_n$, be

$$\underline{m}_n(x) = -\frac{1 - \gamma_n}{x} + \gamma_n \int \frac{1}{\lambda - x} \, dF_n(\lambda)$$

for $x \notin \operatorname{supp}(F_n)$, where $F_n$ denotes the ESD from the data and $\operatorname{supp}(F)$ denotes the support of the distribution $F$. Then, (reference weiming) shows that, under certain conditions, (a) $\underline{m}_n(x)$ converges to $\underline{m}(x)$ for any $x$ outside the support of the LSD; (b) the $\psi$ defined in (2.1) uniquely determines the PSD $H$. Combining (a) and (b) gives us an important fact: if the PSD of the data is truly $H$, then the sample curve will be close to the true one outside the support.

Based on this fact, we propose psi envelopes, which carefully assess the difference between the sample and true functions by constructing simulated envelopes. The basic idea is the same as for QQ envelopes (Hannig et al., 2001). The envelope is constructed from independent replications obtained as in Algorithm 1. Then, the psi envelope enables assessment of the PSD assumption by checking whether or not the sample curve is covered by the envelope.

Two examples of the psi envelope are shown in Figure 3, which check whether or not two different sets of eigenvalues are from the assumed PSD. For the left plot, sample eigenvalues truly from the assumed PSD are considered, and for the right plot, sample eigenvalues from a different PSD. In each plot, the true function is shown in black with a blue envelope and the estimated function in red. The estimated curve based on the eigenvalues truly from the assumed PSD is well covered by the envelope, whereas the curve from the different PSD shows a clear deviation from the envelope. As this example shows, the psi envelope provides a useful graphical tool for the PSD diagnostic.
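The sample companion Stieltjes transform is straightforward to evaluate from the eigenvalues; this sketch (our own helper, run on a noise-only example) computes it on a grid to the right of the ESD support, the region used in the psi-envelope diagnostic:

```python
import numpy as np

def companion_stieltjes(evals, gamma_n, x):
    """Sample companion Stieltjes transform at a real x outside the ESD support:
    m_n(x) = -(1 - gamma_n)/x + gamma_n * mean(1 / (lambda_i - x))."""
    return -(1.0 - gamma_n) / x + gamma_n * np.mean(1.0 / (evals - x))

# Noise-only example: evaluate on a grid to the right of the largest eigenvalue.
rng = np.random.default_rng(3)
d, n = 200, 400
Z = rng.standard_normal((d, n))
ev = np.linalg.eigvalsh(Z @ Z.T / n)
xs = np.linspace(ev[-1] + 0.1, ev[-1] + 2.0, 50)
ms = np.array([companion_stieltjes(ev, d / n, x) for x in xs])
print(ms[0], ms[-1])  # negative, increasing toward zero as x moves right
```

Comparing such a curve against replicated curves simulated under the assumed PSD yields the envelope in Figure 3.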
4 Application to RNAseq data
In this section, we apply the proposed methods to our main example, RNA-seq gene expression data. Let us first briefly describe the data structure. A collection of 522 head and neck squamous cell carcinoma RNA-seq observations was obtained from the TCGA Research Network, and the dataset was processed as described in The Cancer Genome Atlas Research Network (2012). In this paper, we study the base-position gene expression for each of several important cancer-related genes. For each gene, the RNA-seq read-depths are measured along the length of the transcript. The resulting data structure is an expression count matrix whose $(i, j)$ entry $c_{ij}$ is the read count at the $i$th position for the $j$th patient, where $d$ is the length of the transcript at the gene being studied and $n$ is the sample size. Since RNA-seq count data show unstable variation within and between observations, the shifted logarithm transformation was taken to stabilize such heterogeneity, i.e. $x_{ij} = \log(c_{ij} + 1)$. For each gene, our analysis is based on the resulting matrix, and the example of the gene CDKN2A is described in Figure 4.
Since RNA-seq data typically have 1,000 to 20,000 dimensions while only hundreds of samples are available, PCA is a very useful tool to reduce the huge dimension and visualize the underlying relationships between samples or variables. In many cases, RNA-seq data are analyzed for hundreds or thousands of genes for various purposes, such as discovery of key genes, detection of interesting genetic events, or identification of novel clusters, so an automatic choice of the number of PCs at each gene is useful. To the best of our knowledge, however, there is no existing method to select the number of PCs that is applicable to RNA-seq data. In Section 4.1, we show why the existing methods are not appropriate for RNA-seq data and suggest a new PSD model which is more suitable. We then compare our results with some existing methods for several important genes in Section 4.2.
4.1 The proposed noise model
Figure 5 shows graphics demonstrating that the point-mass PSD is clearly not an appropriate noise distribution for RNA-seq data. Here, the gene CDKN2A is considered. In each plot, the theoretical function (black) with the estimated noise level is compared with the estimated function from the data (red) after excluding the 10, 50, 100, 200, 300, and 400 largest eigenvalues sequentially. The red arrow represents the support of the empirical spectral distribution from the data, that is, from the smallest to the largest sample eigenvalue. Correspondingly, the support of the theoretical MP distribution, extended to the 99th percentile of the distribution of the largest eigenvalue, known as the Tracy-Widom law, is represented by the blue arrow (Ma, 2012). This enables more accurate comparison of the two distributions by taking into account the variation in the maximum eigenvalue. Although the theoretical and data-driven functions, as well as the corresponding supports, become more comparable as more eigenvalues are removed, it is clear that they strongly disagree even when almost all eigenvalues are eliminated. This demonstrates that the noise eigenvalues from the data do not follow the classical MP distribution, which motivates our improved PSD models.
Under the questionable white-noise assumption, we would expect too many PCs to be declared significant because of the extreme skewness of the ESD. The severe positive skewness makes the $k$th sample eigenvalue, the maximum of the remaining eigenvalues at the $k$th stage, much greater than the theoretically possible maximum noise eigenvalue, resulting in rejection of the hypothesis that the $k$th eigenvalue is from noise. In almost all cases, we observed that this was indeed true, as described in Table 1. From the perspective of dimension reduction, however, such high numbers of PCs may not be helpful, especially for downstream statistical analyses that mostly require a much smaller dimension.
Accordingly, the PSD that has a point mass at the noise variance, i.e. $H = \delta_{\sigma^2}$, is not a suitable distribution for RNA-seq data, whose eigenvalues are extremely right-skewed. To capture the extreme positive skewness, we suggest a right-skewed PSD, particularly the Gamma distribution truncated at some upper quantile. Other right-skewed distributions may be used, but the Gamma distribution fit the best in our experience. Also, we can control the degree of positive skewness by adjusting the truncation quantile. Our method is sensitive to this choice of quantile, and its precise specification is a topic for future research. Truncation at the upper 0.995 quantile gave reasonable values, so we use it for the rest of the paper. Figure 6 shows the psi envelopes introduced in Section 3.3 for assessing the Gamma assumption after removing a few of the largest sample eigenvalues. As in Figure 5, Figure 6 shows the estimated functions in red, sequentially removing the first few eigenvalues, with red vertical arrows for the supports of the ESDs. The black curves represent the theoretical functions, with black arrows indicating the approximate supports of the corresponding LSDs. As more eigenvalues are excluded, up to 11, the sample psi curve gets closer to the psi envelopes, which supports that the remaining sample eigenvalues roughly follow the LSD based on the estimated PSD. As we will see in Section 4.2, the critical value 11 is exactly the estimated number of spikes from the proposed method in Section 3.

4.2 Application to important genes
We compared our method (CM) with the methods proposed by Kritchman and Nadler (KN) and Passemier and Yao (PY) for the RNA-seq data at several important tumor-related genes: CDKN2A, TP53, FAT1, PTEN, CASP4, CHEK2, EGFR, and PIK3CA. Let us first briefly describe the two methods.
Method of Kritchman and Nadler (KN). Kritchman and Nadler (2008) developed an algorithm for rank determination based on the asymptotic distribution of the largest noise eigenvalue. Their algorithm performs sequential hypothesis tests on whether the largest eigenvalue at each step arises from a signal rather than from noise. The statistical procedure at each step involves estimating the noise variance and setting a threshold based on the Tracy-Widom distribution, which the largest noise eigenvalue asymptotically follows (Johnstone, 2001). Their algorithm has been considered a good benchmark for judging the performance of other methods for determining the number of components in many papers.

Method of Passemier and Yao (PY). Passemier and Yao (2012, 2014) proposed a method for estimating the number of spikes for the case where some spikes are possibly equal. Based on the different asymptotic behaviors of spike and nonspike sample eigenvalues, they determine a threshold for the successive spacings of the ordered sample eigenvalues. Because larger spacings are expected for spike sample eigenvalues than for noise eigenvalues, one may separate spike and nonspike eigenvalues based on an appropriately determined threshold for the spacing. This method is very intuitive in the sense that the procedure is similar in spirit to the naive procedure based on scree plots, with a more principled separation based on random matrix theory. To avoid false determination due to ties of spike eigenvalues, they also proposed a more robust estimator requiring two or more consecutive spacings to exceed the threshold at the same time in order to be counted as spikes.
Gene  d  CM  KN  PY
CDKN2A  1978  11 (0.019, 1.399)  521 ()  197 (0.00046) 
CHEK2  2595  5 (0.014, 2.029)  521 ()  208 (0.00034) 
CASP4  2688  6 (0.011, 3.665)  521 ()  219 (0.0039) 
PIK3CA  3686  8 (0.013, 1.310)  521 ()  521 () 
TP53  3876  12 (0.013, 1.505)  521 ()  521 () 
PTEN  5535  17 (0.011, 1.235)  521 ()  521 () 
EGFR  7965  13 (0.010, 2.434)  521 ()  521 () 
FAT1  15232  16 (0.008, 3.781)  521 ()  521 () 
The estimated numbers of spikes for each gene from the three methods are summarized in Table 1. As expected, both the KN and PY methods determine a huge number of spikes and, in particular, the KN results indicate that all PCs are spikes for these eight genes. When all PCs are declared as spikes, the noise variance cannot be estimated because there are no noise eigenvalues left, as indicated by NA in Table 1. On the other hand, our proposed method provides biologically reasonable and practical numbers of spikes. Although we provide the results for the eight chosen genes, we have observed that this is indeed true for almost all genes. We believe that the proposed method can make a valuable contribution to distinguishing meaningful and important signals from noise in RNA-seq data. Furthermore, the method can also be applied to other types of data sets where a point-mass PSD is not appropriate, with a carefully chosen PSD.
References
 Bai et al. (2010) Bai, Z., Chen, J., and Yao, J. (2010). On estimation of the population spectral distribution from a high-dimensional sample covariance matrix. Australian & New Zealand Journal of Statistics, 52(4):423–437.

 Bai and Yao (2012) Bai, Z. and Yao, J. (2012). On sample eigenvalues in a generalized spiked population model. Journal of Multivariate Analysis, 106:167–177.
 Baik and Silverstein (2006) Baik, J. and Silverstein, J. W. (2006). Eigenvalues of large sample covariance matrices of spiked population models. Journal of Multivariate Analysis, 97(6):1382–1408.
 Besse and de Falguerolles (1993) Besse, P. and de Falguerolles, A. (1993). Application of resampling methods to the choice of dimension in principal component analysis. In Computer intensive methods in statistics, pages 167–176. Springer.
 Choi et al. (2014) Choi, Y., Taylor, J., and Tibshirani, R. (2014). Selecting the number of principal components: Estimation of the true rank of a noisy matrix. arXiv preprint arXiv:1410.8260.
 Fan and Wang (2015) Fan, J. and Wang, W. (2015). Asymptotics of empirical eigenstructure for ultra-high dimensional spiked covariance model. arXiv preprint arXiv:1502.04733.
 Hannig et al. (2001) Hannig, J., Marron, J., and Riedi, R. (2001). Zooming statistics: Inference across scales. Journal of the Korean Statistical Society, 30(2):327–345.
 Harding et al. (2007) Harding, M. C., Jorgenson, D., King, G., Linton, O., Lorenzoni, G., Mazur, B., Panageas, S., and Patel, K. (2007). Structural estimation of highdimensional factor models.
 Johnstone (2001) Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. Annals of Statistics, pages 295–327.
 Jolliffe (2002) Jolliffe, I. (2002). Principal component analysis. Wiley Online Library.
 Kritchman and Nadler (2008) Kritchman, S. and Nadler, B. (2008). Determining the number of components in a factor model from limited noisy data. Chemometrics and Intelligent Laboratory Systems, 94(1):19–32.
 Kritchman and Nadler (2009) Kritchman, S. and Nadler, B. (2009). Nonparametric detection of the number of signals: Hypothesis testing and random matrix theory. IEEE Transactions on Signal Processing, 57(10):3930–3941.
 Krzanowski and Kline (1995) Krzanowski, W. J. and Kline, P. (1995). Cross-validation for choosing the number of important components in principal component analysis. Multivariate Behavioral Research, 30(2):149–165.
 Li et al. (2013) Li, W., Chen, J., Qin, Y., Bai, Z., and Yao, J. (2013). Estimation of the population spectral distribution from a large dimensional sample covariance matrix. Journal of Statistical Planning and Inference, 143(11):1887–1897.
 Ma (2012) Ma, Z. (2012). Accuracy of the Tracy-Widom limits for the extreme eigenvalues in white Wishart matrices. Bernoulli, 18(1):322–359.
 Ma et al. (2013) Ma, Z. et al. (2013). Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41(2):772–801.

 Nadakuditi (2014) Nadakuditi, R. R. (2014). OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage. IEEE Transactions on Information Theory, 60(5):3002–3018.
 Nica and Speicher (2006) Nica, A. and Speicher, R. (2006). Lectures on the combinatorics of free probability, volume 13. Cambridge University Press.
 Onatski (2012) Onatski, A. (2012). Asymptotics of the principal components estimator of large factor models with weakly influential factors. Journal of Econometrics, 168(2):244–258.
 Passemier and Yao (2014) Passemier, D. and Yao, J. (2014). Estimation of the number of spikes, possibly equal, in the high-dimensional case. Journal of Multivariate Analysis, 127:173–183.
 Passemier and Yao (2012) Passemier, D. and Yao, J.-F. (2012). On determining the number of spikes in a high-dimensional spiked population model. Random Matrices: Theory and Applications, 1(01):1150002.
 Shabalin and Nobel (2013) Shabalin, A. A. and Nobel, A. B. (2013). Reconstruction of a low-rank matrix in the presence of Gaussian noise. Journal of Multivariate Analysis, 118:67–76.
 Silverstein (1995) Silverstein, J. W. (1995). Strong convergence of the empirical distribution of eigenvalues of large dimensional random matrices. Journal of Multivariate Analysis, 55(2):331–339.
 Silverstein and Choi (1995) Silverstein, J. W. and Choi, S.I. (1995). Analysis of the limiting spectral distribution of large dimensional random matrices. Journal of Multivariate Analysis, 54(2):295–309.
 Wax and Kailath (1985) Wax, M. and Kailath, T. (1985). Detection of signals by information theoretic criteria. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(2):387–392.
 Wongsawat et al. (2005) Wongsawat, Y., Rao, K., and Oraintara, S. (2005). Multichannel SVD-based image denoising. In 2005 IEEE International Symposium on Circuits and Systems (ISCAS), pages 5990–5993. IEEE.

 Yao et al. (2015) Yao, J., Bai, Z., and Zheng, S. (2015). Large sample covariance matrices and high-dimensional data analysis. Number 39. Cambridge University Press.