1 Introduction
Due to advances in technology, researchers in many fields of science must analyze massive data sets to extract useful information for new scientific discoveries. Many of these data sets contain a large number of features, often exceeding the number of observations. Such data sets are common in microarray gene expression analysis [1, 2], medical image analysis [3], signal processing, astrometry, and finance [4]. The analysis of such high-dimension, low-sample-size data presents a substantial challenge, known as the “large p, small n” problem. One of the most prominent problems of interest is making inferences on the mean structure of the population. However, many well-known classical multivariate methods cannot be used to make inferences on mean structures for large p, small n data. For example, because of the singularity of the estimated pooled covariance matrix, the classical Hotelling’s T² statistic [5] breaks down for the two-sample test when the dimension of the data exceeds the combined sample size.
Over the last few years, researchers have turned their attention to developing statistical methods that can handle the large p, small n problem. A series of important works have addressed the two-sample location problem in the large p, small n setup, and several parametric and nonparametric tests have been developed. Most of these prior works are primarily motivated by avoiding the issues that Hotelling’s T² statistic faces in the large p, small n scenario, particularly the singularity of the pooled sample covariance matrix. An early important work in this direction is that of Bai and Saranadasa [6], who considered the squared Euclidean norm between the sample means of the two populations as an alternative to the Mahalanobis distance used in Hotelling’s statistic. However, one of the criticisms (see [7]
) of this method is the assumption of equal variance structures for
the two populations, which is hard to verify. Addressing this shortcoming, Chen and Qin [7] suggested an alternative method that does not assume equality of the variance-covariance matrices and removes the cross-product terms from the squared Euclidean norm of the difference of sample means between the two populations. Another method, proposed by Srivastava [8], replaces the inverse of the sample covariance matrix in Hotelling’s statistic with the inverse of the diagonal of the covariance matrix. Modifications of this work [9, 10] have relaxed some of the assumptions, resulting in relatively similar test statistics with improved performance. Recently, Gregory et al. [11] suggested a test procedure, known as the generalized component test (GCT), that bypasses full estimation of the covariance matrix by assuming that there is a logical ordering among the components, such that the dependence between any two components is related to their displacement. In a different direction, Cai et al. [4] suggest a test procedure based on a linear transformation of the data by a precision matrix, where the test statistic is the sup-norm of marginal
statistics of the transformed data. The tests developed in [6, 7, 8, 9, 11] are designed for testing dense but possibly weak signals, i.e., a large number of small nonzero mean differences. The test suggested in [4] is designed for testing sparse signals, i.e., a small number of large nonzero mean differences. The implementation of the test in [4] requires sparsity assumptions on the covariance or precision matrix, which may not be satisfied in real applications. Furthermore, it also requires an estimate of the precision matrix, which is time-consuming for large dimensions; see [12]. Driven by these two concerns about the test in [4], we reconsider the problem of testing two mean vectors under sparse alternatives. We propose a new test based on the prepivoting approach. The concept of prepivoting was introduced by Beran [12]
. Prepivoting is a mechanism that transforms a root, a function of the samples and parameters, through its estimated distribution function into a random variable known as the prepivoted root. The distribution of a prepivoted root is very similar to the uniform distribution on (0,1). Thus, the distribution of the prepivoted root is less dependent on population parameters than the distribution of the root. Consequently, approximate inference based on the prepivoted root is more accurate than approximate inference based on the original root. Prepivoting has not previously been considered in high-dimensional testing problems. A snapshot of the development of our proposed test method is as follows: we construct prepivots marginally for all the
variables individually using an asymptotic refinement, namely the Edgeworth expansion. The marginal prepivoted roots are then combined using the max-norm. The rest of the paper is organized as follows. In Section 2, we formulate the construction of the suggested test method for the hypothesis testing problem (1) based on the prepivoting technique and study the limiting null distribution of the test statistic. In this section, we also analyze the power of the suggested test. Simulation studies are presented in Section 3, and Section 4 presents applications of the proposed test to two microarray gene expression data sets. We conclude the article with some brief remarks in Section 5. Proofs and other theoretical details are relegated to the Appendix.
2 Methodology
Let and be independently and identically distributed (i.i.d.) samples drawn from two variate distributions and , respectively. Let and denote the mean vector and covariance matrix of respectively; and denote the mean vector and covariance matrix of respectively. The primary focus is on testing equality of mean vectors of the two populations,
(1) 
The proposed method for the two-sample mean vector test (1) adopts the prepivoting technique introduced by Beran [12]
. This technique is useful for drawing more accurate statistical inferences in the absence of a pivotal quantity. The prepivoting technique transforms the original root, which can be a function of both data and parameters, into another value through its estimated cumulative distribution function (cdf). The new value is known as the prepivoted root, or prepivot.
2.1 Overview of Prepivoting
For the sake of simplicity, we first introduce the concept of prepivoting. Consider a statistical model , where is unknown, and suppose we are interested in the hypothesis test vs. . Let be a random sample from . In a frequentist inference problem, one often compares the observed value of a root, , under
to the quantile of that root’s null distribution. We reject
at significance level whenever is greater than , where is the cdf of the root under . Unless is a pivotal quantity, depends on . If is not a pivotal quantity, then its null distribution can be derived only through . Thus, a hypothesis test based on a nonpivotal quantity is not exact. Furthermore, the actual level of a test based on a nonpivotal root differs from the nominal level. In such cases, prepivoting is a useful technique for reducing the errors associated with inference. The main idea behind Beran’s prepivoting technique is to transform the original root into a new root whose sampling distribution is “smoother” in the sense that its dependence on the population distribution is reduced, where is an estimate of .
If the distribution is unknown, then prepivots are constructed from the empirical distribution function . Let denote the bootstrap estimate of , and let be a random sample from . The prepivot is given by , and it has been established [12] that is closer to being pivotal than the original root; more precisely, it is approximately a U(0,1) random variable. Let denote the bootstrap estimate of , the cdf of . The statistical test based on prepivoting rejects at level if . Prepivoting is usually accomplished by Monte Carlo simulation ([12, 13]), known as the nested double-bootstrap algorithm. This algorithm consists of an outer and an inner level of nonparametric bootstrap, which give the estimated cdfs and , respectively. However, prepivoting has been criticized for the computational cost of the nested double-bootstrap algorithm ([14]; [15]).
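The single bootstrap level of this construction can be sketched as follows. This is a minimal illustration under our own assumptions: a one-sample studentized-mean root is used (the paper's roots are two-sample), and the function names are ours. In the full nested double-bootstrap algorithm, an inner bootstrap level would additionally estimate the cdf of the prepivot itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def root(sample, center):
    # Root R_n: a function of the data and the parameter (here a
    # studentized one-sample mean, used only as a simple illustration).
    n = len(sample)
    return np.sqrt(n) * (sample.mean() - center) / sample.std(ddof=1)

def prepivot(sample, center, n_boot=500, rng=rng):
    # Outer bootstrap: estimate the cdf of the root and evaluate it at
    # the observed root -- Beran's prepivoted root, approximately U(0,1)
    # under the null.
    r_obs = root(sample, center)
    m = sample.mean()  # in the bootstrap world the true mean is the sample mean
    boot_roots = np.empty(n_boot)
    for b in range(n_boot):
        star = rng.choice(sample, size=len(sample), replace=True)
        boot_roots[b] = root(star, m)
    return (boot_roots <= r_obs).mean()  # empirical cdf at r_obs
```

The returned value is the prepivot; repeating the whole procedure on each outer resample (the inner level) yields the estimated cdf used to calibrate the rejection rule, which is what makes the nested algorithm expensive.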
2.2 Proposed Test Method
To facilitate the construction of the proposed test, consider the elements of the random vectors, and , . As a procedure for the two-sample mean vector test given in (1) based on prepivoting, consider the root
(2) 
where is the component of
, which is an unbiased estimator of the
element of , viz. . The quantities and are plug-in estimators of the diagonal elements of and , respectively. Let denote the version of under . Then , . Here we are interested in detecting relatively sparse signals, so maximum-type statistics are more useful than sum-of-squares-type statistics (see [4]). Thus, at first glance, the test statistic is a potential candidate for the testing problem (1). Under normality, the roots follow a distribution with degrees of freedom under , provided . For unequal variances, we encounter the Behrens–Fisher problem, and even under normality the roots are nonpivotal. For a general distribution, deriving a pivotal root is highly infeasible, and hence the corresponding inferences are approximate. To overcome the drawback of nonpivotal roots, the roots can be prepivoted using their empirical distributions to reduce the errors involved in approximate inference.
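The marginal roots in (2) can be sketched as component-wise Welch-type studentized mean differences. The array layout (samples as rows, variables as columns) and the function name below are our assumptions, not the paper's notation:

```python
import numpy as np

def marginal_roots(X, Y):
    # Component-wise two-sample studentized roots: for each variable j,
    #   (xbar_j - ybar_j) / sqrt(s_{1j}^2 / n1 + s_{2j}^2 / n2).
    # No equal-variance assumption is made (Behrens-Fisher setting).
    n1, n2 = X.shape[0], Y.shape[0]
    num = X.mean(axis=0) - Y.mean(axis=0)
    den = np.sqrt(X.var(axis=0, ddof=1) / n1 + Y.var(axis=0, ddof=1) / n2)
    return num / den
```

Each entry of the returned vector is one marginal root; these are the quantities that are subsequently prepivoted and combined by the max-norm.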
Suppose that denotes the cdf of , which is generally unknown. Let denote the bootstrap estimator of . Instead of the ’s, we consider the prepivots, or transformed roots, to construct a test for the two-sample mean vector problem. As discussed in Subsection 2.1, is a better pivot than the original root .
Our test procedure is based on the intuition that if only very few components of and are unequal, then a test based on the max-norm should detect that and are different with greater power than tests based on sums of squares of sample mean differences. With this intuition, a natural choice is the test based on the maximum norm of . Thus, we define the test statistic for the two-sample mean vector test as
(3) 
The main advantage of over is that is less dependent on the characteristics of the population distributions and . Because the sampling distribution of is unavailable, we can approximate the critical value or the p-value of a test based on by means of the nonparametric bootstrap, where is the cdf of . This approach can be challenging because the computational complexity grows linearly with the dimension. To avoid this computational burden, we replace each by an analytical approximation based on Edgeworth expansion theory.
2.3 Edgeworth Expansion Based Prepivots and the Suggested Test Statistic
The Edgeworth expansion sharpens the central limit approximation by including higher-order moment terms involving the skewness and kurtosis of the population distribution. The development of the Edgeworth expansion begins with the derivation of series expansions for the density and distribution functions of normalized i.i.d. sample means [16]. A rigorous treatment of these expansions was given by Cramér [17]. Later, Bhattacharya and Ghosh [18] developed Edgeworth expansion theory for a smooth function of i.i.d. sample mean vectors. To construct Edgeworth expansion based prepivots, it is necessary to obtain the Edgeworth expansion of , the cdf of . Theorem 2.1 provides such asymptotic expansions. Throughout this paper, we assume that

, and . That is, the marginal eighth-order moments of the random vectors and are uniformly bounded.

as .
Theorem 2.1.
Under the assumptions (A1), (A2), and (A3), the CDF of has the following Edgeworth expansion
(4) 
holds uniformly in any ; where
Proof. See Appendix.
In Theorem 2.1, and denote the skewness of the component of and , respectively, and and denote the kurtosis of the component of and , respectively; where
The bootstrap estimators have asymptotic expansions similar to those in Theorem 2.1, but with the population moments replaced by the corresponding sample moments. The sample analogues of the asymptotic expansions in Theorem 2.1 are
(5) 
for any , where are the sample versions of . The asymptotic expansions in (5) suggest the following second-order analytical approximations to the prepivoted test statistics:
(6) 
Thus, (6) indicates that the corresponding analytical approximation to can be expressed as
The approximation is a better choice than since it is computationally more attractive. However, it may be difficult to obtain the distribution of . One approach is to use the bootstrap to estimate the cdf . To avoid the Monte Carlo approximation associated with the bootstrap technique, we consider a transformation-based approach. The idea is as follows.
If a monotone transformation of has a limiting distribution, then we can use that transformation to develop the rejection region of the test. Due to the monotonicity of this transformation, inference based on or on the monotone transformation of will be the same. Let
denote the inverse function of the cdf of a standard normal distribution. Theorem
2.2 shows that
(7) 
has an extreme value distribution as . Thus, is a better choice than , at least from the perspective of computational cost.
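As a concrete illustration of an Edgeworth-based analytical prepivot, the sketch below applies the classical one-term expansion for a studentized one-sample mean, with the population skewness replaced by its sample estimate. This is a simplified stand-in under stated assumptions, not the paper's exact two-sample polynomials in (4)–(6), which combine the moments of both groups in the same spirit:

```python
from math import erf, exp, pi, sqrt
import numpy as np

def phi(t):   # standard normal density
    return exp(-0.5 * t * t) / sqrt(2.0 * pi)

def Phi(t):   # standard normal cdf
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def edgeworth_prepivot(t, sample):
    # One-term Edgeworth approximation to the cdf of a studentized mean:
    #   G(t) ~= Phi(t) + n^{-1/2} (gamma / 6) (2 t^2 + 1) phi(t),
    # with the population skewness gamma replaced by its sample estimate.
    # (Classical one-sample analogue, used here only for illustration.)
    n = len(sample)
    z = (sample - sample.mean()) / sample.std(ddof=0)
    gamma_hat = float(np.mean(z ** 3))
    g = Phi(t) + (gamma_hat / 6.0) * (2.0 * t * t + 1.0) * phi(t) / sqrt(n)
    return min(max(g, 0.0), 1.0)  # clip to a valid probability
```

Because the correction is an explicit polynomial in the sample moments, evaluating such a prepivot is a constant-time operation per component, which is the computational advantage over the nested bootstrap.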
To obtain the limiting distribution of , we introduce two more assumptions, similar to the assumptions of Lemma 6 of [4]. Expression (A.5) in the Appendix shows that , where goes to
in probability faster than
. The probability integral transformation implies that under , and further jointly follow a -dimensional multivariate normal distribution with zero mean vector and covariance matrix (see [20]). Here is the Pearson correlation between and . For fixed , converges weakly to by a multivariate version of Slutsky’s theorem. Thus, to establish the weak convergence of when , assumptions (A4) and (A5) are imposed on , the covariance matrix of the random vector .
for some . Assumption (A4) is mild, since would imply that is singular.

for some .
Theorem 2.2.
Let . Then under the assumptions (A1)–(A5), for any ,
Proof. See Appendix.
The limiting distribution is the type I extreme value (Gumbel) distribution. On the basis of the limiting null distribution, the approximate p-value is when grows at a rate smaller than . Based on Theorem 2.2, we can also construct an asymptotically
level test that rejects the null hypothesis in (
1) if , where . Our test for small sample sizes and large dimension is in the same spirit as the test proposed by Cai et al. [4] for the analysis of rare signals. Their test involves estimating the inverses of covariance matrices. Matrix inversion is time-consuming when the dimension is large, whereas our test is computationally more efficient.
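The level-α rejection rule based on Theorem 2.2 can be sketched as follows. We assume here the classical centering 2 log p − log log p and the Gumbel cdf exp(−π^(−1/2) e^(−x/2)) for the maximum of p squared standard normals; the paper's theorem states the precise constants, so this is an illustrative stand-in:

```python
import math
import numpy as np
from statistics import NormalDist

def max_prepivot_test(prepivots, alpha=0.05):
    # Transform each (approximately U(0,1)) prepivot through the standard
    # normal quantile, square, take the maximum, and compare the centered
    # maximum against a type I extreme value (Gumbel) critical value.
    p = len(prepivots)
    inv = NormalDist().inv_cdf
    z = np.array([inv(min(max(u, 1e-12), 1.0 - 1e-12)) for u in prepivots])
    M = float(np.max(z ** 2))
    center = 2.0 * math.log(p) - math.log(math.log(p))
    # Solve exp(-exp(-q/2)/sqrt(pi)) = 1 - alpha for the critical value q:
    q_alpha = -2.0 * math.log(-math.sqrt(math.pi) * math.log(1.0 - alpha))
    p_value = 1.0 - math.exp(-math.exp(-(M - center) / 2.0) / math.sqrt(math.pi))
    return M, p_value, (M - center) > q_alpha
```

No bootstrap resampling or matrix inversion appears anywhere in this rule, which is the computational advantage over precision-matrix-based sup-norm tests noted above.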
We now analyze the power of the test. To study the asymptotic power, we consider a local alternative condition under which the maximum absolute value of the standardized signals is of higher order than , that is,
(8) 
Theorem 2.3 shows that if (8) holds, then the power of the test, , converges to 1 as .
Proof. See Appendix.
Theorem 2.3 indicates that the proposed test procedure detects the discrepancy between the two mean vectors and with probability tending to 1 if the maximum of the absolute standardized signals is of order higher than as . In comparison, the test of Cai et al. [4], which is based on a supremum-type test statistic, has asymptotic power 1 provided the maximum of the absolute standardized signals is of higher order than , for a certain unknown constant . The asymptotic powers of sum-of-squares-type tests (Bai and Saranadasa [6]; Srivastava and Du [8]; Chen and Qin [7]) converge to 1 when as . The asymptotic power of the GCT (Gregory et al. [11]) converges to 1 provided as .
To better understand the asymptotic power of the proposed test in comparison with the other tests, consider the following scenario. Suppose that all the signals are of equal strength , that all variances are one, and that the two groups have equal sample sizes, . Under the alternative, let denote the set of locations of the signals, and let the cardinality of be , where . Under this scenario, the power of the sum-of-squares-type tests converges to 1 as if , that is, if . Similarly, the power of the GCT converges to 1 if as . On the other hand, the power of our suggested test procedure goes to 1 as if . Hence the power of the sum-of-squares-type tests depends on how large is compared to . In comparison, the power of our proposed test depends on how large is compared to , which has a much smaller rate of increase than . In this particular example, the sum-of-squares-type tests are less powerful compared to . Extending further, we may say that our test procedure is asymptotically more powerful than the sum-of-squares-type tests when .
3 Simulation Study
In this section we present empirical results based on simulated data. We compare the empirical type I error rate and power of the proposed test procedure, denoted by PREPR, under different models against the following test methods: BS (Bai and Saranadasa [6]), SD (Srivastava and Du [8]), CQ (Chen and Qin [7]), CLX (Cai et al. [4]), and GM and GL (Gregory et al. [11]). GM and GL are the moderate-p and large-p variants of [11], respectively. Following [11], we chose the Parzen window with lag window size for both GM and GL. The variety of multivariate distributions and parameters is too large to allow a comprehensive, all-encompassing comparison. Hence we have chosen certain representative examples for illustration. We considered two setups to generate the data.
3.1 Simulation setup 1
In setup 1, we considered two scenarios for the marginal distributions of the elements of the random vectors and . In Scenario 1, each component of the random vectors and follows a standard normal distribution. In Scenario 2, each component of and follows a centralized Gamma(2,1) distribution.
To impose a dependence structure on the joint distributions of
and , we used a factor model as described in [7]. Simulation setup 1 assumes equal variances for all components of and . Under each of the above two scenarios, we consider three dependence models to generate correlated observations:
(M1) Model 1 considers a moving average process of order 10. The coefficient vector was a normalized vector of dimension 10, whose components were generated independently from Uniform(2, 3) and kept fixed once generated throughout our simulations.

(M3) In Model 3, we set , where and is the all ones matrix.
We considered three values for the dimension: and . For each combination of scenario (1 or 2), model (M1/M2/M3), and , the empirical type I error and power were computed from 2000 randomly generated data sets. The empirical powers were computed against the number of signals , where we considered and . The parameter denotes the sparsity of the signal. The sample sizes were set as and . In all scenarios, we set , where and denotes a vector with entries equal to 0. In setup 1, we computed powers for . For the empirical type I error calculation, we assumed . Without loss of generality, we set , the mean vector of , to be the zero vector in all simulations.
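Data generation for Scenario 1 under model (M1) can be sketched as follows. This is a sketch under stated assumptions (the function name is ours); the normalization of the coefficient vector makes every component have unit variance, matching the equal-variance setting of setup 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_m1(n, p, order=10, rng=rng):
    # Model M1: each variable is a moving average of order 10 of i.i.d.
    # standard normal innovations. The coefficient vector is drawn once
    # from Uniform(2, 3) and normalized (so each component has unit
    # variance), then held fixed across simulated data sets.
    theta = rng.uniform(2.0, 3.0, size=order)
    theta /= np.linalg.norm(theta)
    z = rng.standard_normal((n, p + order - 1))
    X = np.empty((n, p))
    for j in range(p):
        X[:, j] = z[:, j:j + order] @ theta
    return X
```

Adjacent variables share nine of their ten innovations, so nearby components are strongly positively correlated while components more than ten apart are independent.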
r  Model  
PREPR  BS  SD  CQ  GM  GL  CLX  PREPR  BS  SD  CQ  GM  GL  CLX  
M1  0.6  0.056  0.066  0.045  0.067  0.142  0.158  0.093  0.086  0.092  0.059  0.100  0.146  0.149  0.129  
M2  0.138  0.105  0.093  0.106  0.065  0.049  0.222  0.219  0.147  0.122  0.146  0.0645  0.053  0.294  
M3  0.122  0.104  0.076  0.103  0.066  0.064  0.205  0.218  0.126  0.100  0.124  0.044  0.044  0.284  
M1  0.9  0.142  0.082  0.061  0.082  0.135  0.146  0.213  0.271  0.112  0.074  0.111  0.116  0.122  0.400  
M2  0.475  0.221  0.191  0.221  0.056  0.039  0.569  0.731  0.313  0.289  0.314  0.046  0.033  0.788  
M3  0.460  0.201  0.165  0.201  0.059  0.042  0.560  0.705  0.279  0.244  0.280  0.047  0.034  0.7660  
M1  0.6  0.069  0.100  0.073  0.099  0.132  0.133  0.117  0.114  0.123  0.093  0.122  0.122  0.127  0.165  
M2  0.211  0.266  0.224  0.2667  0.083  0.049  0.319  0.372  0.366  0.340  0.365  0.063  0.041  0.461  
M3  0.186  0.217  0.179  0.219  0.060  0.041  0.2725  0.333  0.302  0.272  0.303  0.047  0.036  0.420  
M1  0.9  0.136  0.098  0.059  0.098  0.077  0.064  0.236  0.327  0.187  0.138  0.188  0.082  0.075  0.402  
M2  0.704  0.642  0.599  0.641  0.049  0.027  0.786  0.887  0.825  0.792  0.824  0.033  0.018  0.927  
M3  0.628  0.533  0.482  0.532  0.049  0.029  0.726  0.850  0.735  0.699  0.733  0.028  0.017  0.908  
M1  0.6  0.101  0.190  0.145  0.190  0.121  0.119  0.168  0.171  0.260  0.208  0.258  0.126  0.113  0.229  
M2  0.375  0.708  0.669  0.708  0.153  0.103  0.509  0.630  0.885  0.864  0.884  0.129  0.091  0.725  
M3  0.360  0.654  0.607  0.654  0.139  0.089  0.4945  0.568  0.823  0.794  0.825  0.125  0.089  0.664  
M1  0.6  0.334  0.400  0.332  0.400  0.127  0.107  0.436  0.519  0.565  0.494  0.561  0.147  0.128  0.598  
M2  0.916  0.996  0.993  0.996  0.123  0.069  0.969  0.993  0.999  0.999  0.999  0.069  0.049  0.997  
M3  0.901  0.992  0.986  0.992  0.109  0.074  0.944  0.985  0.999  0.999  0.999  0.089  0.067  0.993 
r  Model  
PREPR  BS  SD  CQ  GM  GL  CLX  PREPR  BS  SD  CQ  GM  GL  CLX  
M1  0.6  0.050  0.077  0.060  0.077  0.157  0.178  0.088  0.083  0.079  0.059  0.078  0.135  0.153  0.127  
M2  0.138  0.0987  0.085  0.104  0.058  0.052  0.225  0.264  0.131  0.127  0.138  0.071  0.0507  0.335  
M3  0.124  0.095  0.082  0.100  0.058  0.064  0.199  0.250  0.135  0.108  0.138  0.056  0.055  0.326  
M1  0.9  0.161  0.091  0.063  0.091  0.122  0.136  0.231  0.276  0.104  0.075  0.105  0.115  0.118  0.346  
M2  0.498  0.240  0.207  0.251  0.065  0.0373  0.595  0.700  0.290  0.263  0.293  0.0507  0.0287  0.782  
M3  0.473  0.193  0.177  0.200  0.056  0.033  0.580  0.692  0.267  0.234  0.269  0.044  0.027  0.769  
M1  0.6  0.213  0.159  0.111  0.160  0.104  0.103  0.290  0.107  0.123  0.086  0.125  0.117  0.126  0.160  
M2  0.192  0.251  0.222  0.263  0.0767  0.037  0.300  0.432  0.375  0.369  0.383  0.085  0.037  0.534  
M3  0.186  0.217  0.179  0.219  0.060  0.041  0.273  0.382  0.308  0.267  0.312  0.057  0.026  0.465  
M1  0.9  0.136  0.098  0.059  0.098  0.077  0.064  0.236  0.361  0.192  0.151  0.195  0.089  0.081  0.440  
M2  0.674  0.611  0.591  0.619  0.043  0.0127  0.787  0.890  0.816  0.800  0.817  0.024  0.0087  0.934  
M3  0.624  0.537  0.513  0.545  0.044  0.014  0.753  0.869  0.757  0.728  0.756  0.025  0.011  0.912  
M1  0.6  0.107  0.182  0.145  0.184  0.139  0.134  0.171  0.211  0.242  0.187  0.241  0.123  0.110  0.274  
M2  0.384  0.710  0.696  0.715  0.146  0.056  0.565  0.690  0.893  0.879  0.892  0.147  0.067  0.787  
M3  0.371  0.635  0.607  0.639  0.124  0.054  0.531  0.666  0.829  0.811  0.829  0.139  0.078  0.762  
M1  0.9  0.327  0.388  0.320  0.390  0.122  0.090  0.443  0.545  0.557  0.488  0.557  0.146  0.123  0.634  
M2  0.228  0.403  0.313  0.404  0.169  0.053  0.398  0.299  0.711  0.553  0.711  0.290  0.095  0.555  
M3  0.922  0.993  0.995  0.994  0.115  0.039  0.968  0.993  1.000  1.000  1.000  0.061  0.033  0.998 
r  Model  
PREPR  BS  SD  CQ  GM  GL  CLX  PREPR  BS  SD  CQ  GM  GL  CLX  
M1  0.6  0.059  0.067  0.039  0.067  0.074  0.078  0.126  0.061  0.062  0.025  0.063  0.074  0.044  0.166  
M2  0.095  0.096  0.064  0.097  0.121  0.051  0.202  0.114  0.112  0.051  0.112  0.235  0.046  0.258  
M3  0.104  0.081  0.047  0.081  0.083  0.047  0.209  0.106  0.109  0.049  0.108  0.194  0.041  0.264  
M1  0.9  0.105  0.085  0.047  0.085  0.081  0.078  0.2079  0.116  0.086  0.037  0.086  0.095  0.057  0.252  
M2  0.113  0.079  0.049  0.078  0.081  0.077  0.215  0.391  0.244  0.125  0.245  0.245  0.037  0.670  
M3  0.449  0.255  0.180  0.256  0.094  0.032  0.640  0.401  0.227  0.109  0.227  0.210  0.040  0.655  
M1  0.6  0.060  0.074  0.041  0.073  0.082  0.083  0.128  0.072  0.085  0.036  0.084  0.095  0.060  0.179  
M2  0.120  0.136  0.090  0.137  0.131  0.053  0.247  0.152  0.204  0.105  0.205  0.267  0.041  0.339  
M3  0.124  0.117  0.074  0.116  0.094  0.042  0.230  0.163  0.188  0.086  0.188  0.219  0.05  0.342  
M1  0.9  0.136  0.098  0.059  0.098  0.077  0.064  0.236  0.159  0.135  0.065  0.134  0.111  0.050  0.303  
M2  0.463  0.308  0.228  0.308  0.119  0.036  0.670  0.574  0.534  0.375  0.536  0.193  0.027  0.838  
M3  0.447  0.266  0.194  0.265  0.097  0.034  0.624  0.562  0.457  0.293  0.459  0.169  0.029  0.809  
M1  0.6  0.082  0.106  0.065  0.106  0.080  0.065  0.168  0.092  0.156  0.071  0.157  0.134  0.050  0.210  
M2  0.228  0.403  0.313  0.404  0.169  0.053  0.398  0.299  0.711  0.553  0.711  0.290  0.095  0.555  
M3  0.215  0.334  0.245  0.335  0.139  0.0545  0.392  0.323  0.633  0.462  0.634  0.361  0.085  0.546  
M1  0.9  0.225  0.215  0.144  0.214  0.088  0.056  0.357  0.289  0.378  0.224  0.380  0.184  0.052  0.503  
M2  0.769  0.880  0.821  0.882  0.084  0.023  0.917  0.893  0.997  0.991  0.997  0.248  0.046  0.990  
M3  0.746  0.809  0.732  0.809  0.095  0.032  0.887  0.884  0.996  0.980  0.996  0.240  0.057  0.988  
M1  0.6  0.062  0.075  0.051  0.076  0.080  0.077  0.117  0.075  0.079  0.048  0.076  0.079  0.076  0.137  
M2  0.167  0.111  0.084  0.113  0.100  0.055  0.271  0.190  0.149  0.090  0.150  0.195  0.052  0.347  
M3  0.141  0.103  0.071  0.102  0.087  0.049  0.245  0.171  0.140  0.078  0.140  0.163  0.053  0.318  
M1  0.9  0.180  0.088  0.058  0.087  0.072  0.070  0.281  0.184  0.101  0.052  0.100  0.081  0.055  0.316  
M2  0.536  0.229  0.176  0.227  0.091  0.040  0.754  0.639  0.374  0.251  0.373  0.160  0.036  0.885  
M3  0.689  0.389  0.317  0.390  0.086  0.033  0.839  0.630  0.310  0.193  0.307  0.142  0.046  0.853  
M1  0.6  0.075  0.082  0.052  0.083  0.077  0.076  0.132  0.089  0.096  0.054  0.096  0.089  0.059  0.175  
M2  0.224  0.189  0.144  0.188  0.114  0.050  0.347  0.254  0.166  0.101  0.167  0.098  0.048  0.406  
M3  0.196  0.164  0.122  0.162  0.100  0.047  0.310  0.274  0.250  0.160  0.249  0.175  0.049  0.431  
M1  0.9  0.224  0.117  0.079  0.114  0.069  0.059  0.323  0.254  0.166  0.101  0.167  0.099  0.048  0.406  
M2  0.709  0.447  0.371  0.448  0.068  0.026  0.875  0.840  0.767  0.660  0.768  0.0882  0.016  0.976  
M3  0.678  0.386  0.325  0.386  0.072  0.034  0.837  0.833  0.681  0.549  0.675  0.091  0.025  0.961  
M1  0.6  0.101  0.134  0.092  0.134  0.073  0.059  0.182  0.137  0.211  0.130  0.212  0.130  0.063  0.249  
M2  0.422  0.582  0.502  0.581  0.116  0.050  0.596  0.534  0.889  0.817  0.889  0.407  0.089  0.757  
M3  0.376  0.493  0.423  0.495  0.117  0.057  0.531  0.522  0.865  0.761  0.862  0.279  0.094  0.747  
M1  0.9  0.382  0.303  0.229  0.303  0.086  0.055  0.507  0.893  0.997  0.991  0.997  0.248  0.046  0.990  
M2  0.946  0.981  0.967  0.980  0.0304  0.011  0.989  0.991  1.000  1.000  1.000  0.109  0.025  1.000  
M3  0.943  0.964  0.943  0.963  0.038  0.016  0.984  0.990  1.000  1.000  1.000  0.124  0.030  1.000 
r  Model  
PREPR  BS  SD  CQ  GM  GL  CLX  PREPR  BS  SD  CQ  GM  GL  CLX  
M1  0.6  0.058  0.060  0.034  0.061  0.073  0.095  0.111  0.068  0.073  0.031  0.074  0.089  0.087  0.132  
M2  0.080  0.086  0.054  0.091  0.113  0.100  0.185  0.077  0.109  0.046  0.114  0.241  0.191  0.207  
M3  0.078  0.086  0.051  0.089  0.097  0.074  0.177  0.089  0.097  0.041  0.099  0.183  0.094  0.228  
M1  0.9  0.105  0.080  0.044  0.082  0.082  0.080  0.209  0.109  0.100  0.040  0.101  0.096  0.061  0.238  
M2  0.325  0.156  0.107  0.164  0.112  0.042  0.556  0.376  0.233  0.133  0.244  0.227  0.047  0.688  
M3  0.315  0.139  0.099  0.147  0.101  0.035  0.524  0.375  0.208  0.107  0.212  0.203  0.024  0.652  
M1  0.6  0.059  0.078  0.045  0.081  0.079  0.086  0.117  0.062  0.080  0.030  0.080  0.092  0.056  0.141  
M2  0.108  0.129  0.088  0.134  0.123  0.066  0.227  0.140  0.212  0.115  0.217  0.287  0.067  0.328  
M3  0.111  0.111  0.075  0.115  0.096  0.050  0.229  0.134  0.158  0.072  0.164  0.202  0.035  0.309  
M1  0.9  0.133  0.100  0.059  0.101  0.081  0.073  0.243  0.146  0.123  0.059  0.123  0.112  0.050  0.305  
M2  0.448  0.294  0.234  0.305  0.115  0.013  0.698  0.574  0.530  0.370  0.543  0.157  0.035  0.876  
M3  0.446  0.261  0.192  0.270  0.114  0.017  0.652  0.540  0.455  0.308  0.461  0.186  0.004  0.820  
M1  0.6  0.079  0.110  0.063  0.112  0.078  0.065  0.158  0.287  0.389  0.249  0.392  0.194  0.056  0.509  
M2  0.221  0.394  0.324  0.405  0.167  0.014  0.424  0.279  0.700  0.559  0.710  0.226  0.055  0.584  
M3  0.208  0.308  0.239  0.312  0.134  0.018  0.385  0.283  0.621  0.473  0.630  0.359  0.015  0.567  
M1  0.9  0.219  0.215  0.148  0.217  0.094  0.054  0.349  0.070  0.151  0.070  0.156  0.133  0.056  0.183  
M2  0.765  0.875  0.840  0.880  0.074  0.058  0.933  0.884  0.996  0.992  0.996  0.381  0.047  0.994  
M3  0.764  0.801  0.753  0.809  0.099  0.009  0.917  0.888  0.996  0.990  0.996  0.251  0.005  0.994  
M1  0.6  0.064  0.067  0.042  0.066  0.077  0.084  0.121  0.088  0.071  0.044  0.071  0.079  0.066  10.46  
M2  0.191  0.114  0.082  0.118  0.116  0.05  0.3147  0.233  0.146  0.082  0.153  0.232  0.056  0.415  
M3  0.200  0.087  0.063  0.090  0.075  0.044  0.311  0.206  0.119  0.063  0.120  0.172  0.041  0.376  
M1  0.9  0.184  0.087  0.058  0.090  0.071  0.069  0.291  0.202  0.095  0.049  0.093  0.079  0.051  0.334  
M2  0.543  0.232  0.190  0.237  0.112  0.021  0.787  0.662  0.361  0.258  0.369  0.176  0.033  0.913  
M3  0.542  0.214  0.173  0.214  0.102  0.024  0.744  0.614  0.299  0.200  0.301  0.163  0.009  0.869  
M1  0.6  0.082  0.079  0.053  0.078  0.079  0.084  0.144  0.103  0.095  0.047  0.094  0.085  0.053  0.188  
M2  0.253  0.174  0.134  0.181  0.120  0.030  0.409  0.331  0.280  0.185  0.290  0.223  0.011  0.568  
M3  0.235  0.145  0.110  0.150  0.091  0.031  0.363  0.318  0.249  0.172  0.253  0.212  0.013  0.517  
M1  0.9  0.256  0.117  0.082  0.118  0.073  0.065  0.360  0.281  0.169  0.098  0.166  0.106  0.044  0.434  
M2  0.708  0.450  0.400  0.457  0.079  0.010  0.889  0.983  0.774  0.689  0.781  0.106  0.027  0.983  
M3  0.693  0.387  0.326  0.393  0.065  0.013  0.870  0.836  0.677  0.567  0.683  0.110  0.004  0.971  
M1  0.6  0.131  0.139  0.098  0.139  0.074  0.058  0.217  0.161  0.205  0.120  0.207  0.126  0.052  0.290  
M2  0.502  0.557  0.504  0.562  0.129  0.013  0.685  0.665  0.894  0.840  0.893  0.329  0.046  0.870  
M3  0.478  0.476  0.407  0.480  0.116  0.016  0.640  0.632  0.847  0.763  0.848  0.299  0.033  0.854  
M1  0.9  0.418  0.303  0.230  0.306  0.087  0.0485  0.545  0.576  0.570  0.443  0.571  0.186  0.065  0.736  
M2  0.950  0.977  0.971  0.978  0.030  0.011  0.994  0.994  1.000  1.000  1.000  0.122  0.060  1.000  
M3  0.951  0.955  0.934  0.955  0.043  0.010  0.995  0.997  1.000  1.000  1.000  0.132  0.008  1.000 
3.2 Simulation setup 2
In Simulation setup 2, we use the same setup as in Simulation setup 1, except that we now use covariance matrices with unequal variances, as described in (M4) and (M5).

(M4) Model 4 considers , where D is a diagonal matrix with diagonal elements =Unif(1,5), , , and if . Such a covariance matrix was also used in [4].

(M5) In Model 5, , where D is a diagonal matrix with diagonal elements =Unif(1,5), and has a long-range dependence structure with a diagonal of all ones. has a structure similar to that of (M2) in Simulation setup 1.
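A covariance matrix in the spirit of model (M4) can be sketched as follows. The banding rule for the correlation part is partly elided in the text, so the lag-one correlation of 0.5 used below is purely a hypothetical placeholder; only the Sigma = D^(1/2) R D^(1/2) form with Uniform(1, 5) variances follows the description above:

```python
import numpy as np

rng = np.random.default_rng(2)

def m4_covariance(p, rng=rng):
    # Sketch of model M4: Sigma = D^{1/2} R D^{1/2}, with the diagonal of
    # D drawn from Uniform(1, 5) so that the variances are unequal.
    # R is a banded correlation matrix; the entry 0.5 at lag one is a
    # placeholder assumption, not the paper's exact banding rule.
    d = rng.uniform(1.0, 5.0, size=p)
    R = np.eye(p)
    i = np.arange(p - 1)
    R[i, i + 1] = R[i + 1, i] = 0.5
    D_half = np.diag(np.sqrt(d))
    return D_half @ R @ D_half
```

Because R is congruent to a positive definite tridiagonal matrix, the resulting Sigma is a valid (positive definite) covariance matrix with heterogeneous variances, as required for setup 2.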
Model  r    PREPR  BS     SD     CQ     GM     GL     CLX    PREPR  BS     SD     CQ     GM     GL     CLX
M4     1.5  0.280  0.165  0.126  0.165  0.052  0.041  0.364  0.301  0.107  0.067  0.106  0.098  0.051  0.479
M5     1.5  0.263  0.076  0.023  0.076  0.582  0.606  0.338  0.276  0.074  0.016  0.074  0.862  0.874  0.451
M4     1.5  0.751  0.464  0.487  0.464  0.042  0.024  0.830  0.694  0.341  0.337  0.341  0.088  0.028  0.934
M5     1.5  0.727  0.102  0.032  0.102  0.256  0.254  0.798  0.653  0.082  0.013  0.082  0.690  0.718  0.908
M4     1.5  0.909  0.968  0.973  0.968  0.116  0.076  0.956  0.992  0.996  1.000  0.996  0.158  0.074  1.000
M5     1.5  0.831  0.213  0.085  0.213  0.253  0.239  0.892  0.944  0.109  0.031  0.109  0.284  0.268  0.989
M4     1.5  0.480  0.226  0.173  0.226  0.050  0.038  0.555  0.449  0.111  0.080  0.112  0.070  0.036  0.710
M5     1.5  0.441  0.088  0.035  0.090  0.518  0.546  0.527  0.447  0.081  0.016  0.080  0.864  0.870  0.701
M4     1.5  0.923  0.651  0.701  0.650  0.023  0.017  0.954  0.840  0.531  0.559  0.529  0.053  0.021  0.989
M5     1.5  0.927  0.121  0.045  0.121  0.213  0.205  0.954  0.800  0.092  0.015  0.091  0.552  0.590  0.984
M4     1.5  0.988  0.998  0.999  0.998  0.068  0.051  0.995  1.000  1.000  1.000  1.000  0.076  0.040  1.000
M5     1.5  0.956  0.246  0.095  0.251  0.226  0.212  0.973  0.992  0.131  0.030  0.135  0.265  0.258  0.999
Model  r    PREPR  BS     SD     CQ     GM     GL     CLX    PREPR  BS     SD     CQ     GM     GL     CLX
M4     1.5  0.252  0.166  0.136  0.173  0.059  0.042  0.3615 0.291  0.107  0.077  0.113  0.108  0.050  0.508
M5     1.5  0.250  0.085  0.030  0.084  0.615  0.650  0.343  0.288  0.075  0.013  0.076  0.875  0.882  0.472
M4     1.5  0.753  0.451  0.506  0.460  0.040  0.019  0.841  0.645  0.340  0.354  0.351  0.090  0.009  0.934
M5     1.5  0.717  0.109  0.045  0.109  0.241  0.236  0.804  0.638  0.070  0.012  0.070  0.676  0.726  0.901
M4     1.5  0.918  0.961  0.977  0.963  0.089  0.044  0.966  0.994  0.996  1.000  0.996  0.146  0.020  1.000
M5     1.5  0.825  0.194  0.074  0.194  0.246  0.225  0.893  0.949  0.125  0.02758 0.125  0.299  0.272  0.988
M4     1.5  0.457  0.223  0.186  0.235  0.048  0.030  0.551  0.440  0.140  0.119  0.142  0.117  0.035  0.700
M5     1.5  0.467  0.083  0.028  0.083  0.519  0.549  0.543  0.430  0.073  0.013  0.075  0.868  0.872  0.680
M4     1.5  0.925  0.661  0.715  0.667  0.020  0.011  0.955  0.864  0.516  0.567  0.520  0.055  0.011  0.984
M5     1.5  0.895  0.106  0.041  0.111  0.210  0.199  0.936  0.793  0.072  0.014  0.071  0.542  0.591  0.977
M4     1.5  0.991  0.997  0.999  0.997  0.063  0.034  0.998  1.000  1.000  1.000  1.000  0.068  0.015  1.000
M5     1.5  0.958  0.245  0.093  0.255  0.219  0.208  0.977  0.995  0.129  0.029  0.132  0.256  0.238  0.999
3.3 Discussion of the simulation results
Results from Simulation setup 1 presented in Tables 1–3 display the empirical type I errors of the new test PREPR along with those of the tests BS, CQ, SD, GL, GM, and CLX. These tables show that the empirical type I errors of PREPR are close to the nominal size 0.05 under all scenarios and all models, ranging from 0.032 to 0.051. The type I errors of BS and CQ are comparable; in some cases they are liberal, with values ranging from 0.041–0.071 and 0.043–0.070, respectively. The test SD appears to become conservative as the dimension increases, with empirical type I errors varying from 0.012 to 0.05, whereas the tests CLX, GL, and GM are generally liberal under all scenarios and models, with empirical type I errors ranging from 0.056–0.142, 0.059–0.191, and 0.051–0.165, respectively. Overall, across all of the scenarios and models considered under Simulation setup 1, the proposed test PREPR outperforms BS, CQ, SD, GM, GL, and CLX in terms of size control, as indicated by its better type I error accuracy.
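For concreteness, the empirical type I error of any of these tests is the Monte Carlo rejection rate under the null hypothesis. A minimal sketch, using a simple two-sample z-test with known unit variance as a stand-in for the tests above (the function name and settings are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_size(n_sims=4000, n=50, z_crit=1.96):
    """Monte Carlo estimate of the type I error at nominal level 0.05:
    simulate both samples from the same distribution and record how
    often the test rejects equality of means."""
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = rng.normal(size=n)
        z = (x.mean() - y.mean()) / np.sqrt(2.0 / n)  # known-variance z-test
        rejections += abs(z) > z_crit
    return rejections / n_sims

size = empirical_size()
```

A test with accurate size control, like PREPR in the tables above, produces rejection rates close to 0.05 under this kind of null simulation; liberal tests exceed it and conservative tests fall below it.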
Empirical type I errors corresponding to Simulation setup 2 are presented in Tables 8–9. These tables show that the type I errors of these tests are consistent with the empirical type I errors reported for Simulation setup 1. For example, the type I errors of PREPR range from 0.034–0.055, whereas those of BS, CQ, SD, GM, GL, and CLX range from 0.047–0.078, 0.048–0.078, 0.024–0.050, 0.059–0.088, 0.057–0.116, and 0.066–0.115, respectively.
The empirical powers of the different tests under Simulation setup 1 are presented in Tables 4–7, which report the powers under the different parameter settings considered. The tables show that PREPR has more power overall than the tests BS, SD, CQ, GM, and GL when the signals are sparse. As the signals become less sparse, the tests BS, SD, and CQ are more powerful than PREPR, particularly when the signals are moderately strong. Although the tests GM and GL are liberal, Tables 4–7 show that they have the smallest power in most cases. The test CLX enjoys the maximum power in all considered scenarios and models under Simulation setup 1, irrespective of whether the signals are sparse or dense; such performance of CLX is not surprising, since it also showed inflated type I errors. Tables 10–11 display the empirical powers corresponding to Simulation setup 2, and the comparisons there are similar to those reported for Simulation setup 1.
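The sparse-versus-dense contrast in these power comparisons can be reproduced in miniature: against a sparse mean shift, a max-type statistic (in the spirit of CLX) tends to beat a sum-of-squares statistic (in the spirit of BS, SD, and CQ). The statistics below are simplified one-sample stand-ins we chose for illustration, not the actual tests compared in the tables:

```python
import numpy as np

rng = np.random.default_rng(2)

def sum_stat(x):
    # sum of squares of coordinate-wise scaled means (dense-signal friendly)
    t = np.sqrt(x.shape[0]) * x.mean(axis=0)
    return np.sum(t ** 2)

def max_stat(x):
    # max-type statistic (sensitive to a few strong signals)
    t = np.sqrt(x.shape[0]) * x.mean(axis=0)
    return np.max(np.abs(t))

def mc_crit(stat, n=40, p=200, n_sims=400, alpha=0.05):
    # empirical critical value from null simulations (mean zero)
    null = [stat(rng.normal(size=(n, p))) for _ in range(n_sims)]
    return np.quantile(null, 1.0 - alpha)

def mc_power(stat, crit, shift=0.7, sparse=2, n=40, p=200, n_sims=400):
    # power against a sparse alternative: only `sparse` coordinates shifted
    mu = np.zeros(p)
    mu[:sparse] = shift
    hits = sum(stat(rng.normal(size=(n, p)) + mu) > crit for _ in range(n_sims))
    return hits / n_sims

sum_power = mc_power(sum_stat, mc_crit(sum_stat))
max_power = mc_power(max_stat, mc_crit(max_stat))
```

With only 2 of the 200 coordinates carrying a shift, the shift is averaged away inside the sum-of-squares statistic but stands out in the maximum, mirroring the pattern in Tables 4–7.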
In summary, based on the numerical studies we have performed, PREPR shows better control of the type I error than the remaining tests, and is more powerful than the sum-of-squares-type tests BS, SD, CQ, GL, and GM against sparse alternatives.
4 Data Analysis/Application
We consider an application of the PREPR test to analyzing gene expression data in terms of gene sets. A gene-set analysis refers to a statistical analysis that identifies whether some functionally predefined sets of genes are expressed differently under different experimental conditions, and it is a starting point for biologists to understand patterns in differential expression. There are two major categories of gene-set tests: competitive gene-set tests and self-contained gene-set tests ([22]). The first compares a set of genes of interest with the complementary set of genes that are not in it, while a self-contained gene-set test considers the null hypothesis that none of the genes in the set of interest are differentially expressed. The proposed two-sample test PREPR is applicable to the self-contained gene-set test. We apply
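A self-contained gene-set test of this kind can be prototyped with a simple permutation scheme. The sketch below uses the squared Euclidean distance between group mean vectors as a placeholder statistic (it is not the PREPR statistic), and the function name and toy data are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def self_contained_pvalue(X, Y, n_perm=999):
    """Permutation p-value for H0: no gene in the set is differentially
    expressed. X (n1 x p) and Y (n2 x p) hold expression values for the
    p genes of one predefined gene set under the two conditions."""
    n1 = X.shape[0]
    Z = np.vstack([X, Y])

    def stat(a, b):
        # placeholder statistic: squared distance between group mean vectors
        return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

    obs = stat(X, Y)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(Z.shape[0])  # relabel samples under H0
        exceed += stat(Z[perm[:n1]], Z[perm[n1:]]) >= obs
    return (1 + exceed) / (1 + n_perm)

# toy gene set: 20 genes, a clear mean shift under the second condition
X = rng.normal(size=(15, 20))
Y = rng.normal(loc=1.0, size=(15, 20))
pval = self_contained_pvalue(X, Y)
```

Because the null hypothesis concerns only the genes inside the set, permuting the sample labels (rather than resampling genes, as a competitive test would) is the natural resampling scheme for the self-contained formulation.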