Due to the advancement of technologies, researchers in various fields of science often need to analyze massive data sets to extract useful information for new scientific discoveries. Many of these data sets contain a large number of features, often exceeding the number of observations. Such data sets are common in microarray gene expression analysis [1, 2], medical image analysis, signal processing, astrometry, and finance. Analysis of such high-dimension, low-sample-size data sets presents a substantial challenge, known as the "large p, small n" problem. One of the most prominent problems researchers are interested in is making inferences on the mean structure of the population. However, many well-known classical multivariate methods cannot be used to make inferences on mean structures for large p, small n
data. For example, because of the singularity of the estimated pooled covariance matrix, the classical Hotelling's T^2 statistic breaks down for the two-sample test when the dimension of the data exceeds the combined sample size.
Over the last few years, researchers have turned their attention to developing statistical methods that can handle the large p, small n problem. A series of important works has addressed the two-sample location problem in the large p, small n setup, and several parametric and non-parametric tests have been developed. Most of these prior works are primarily motivated by avoiding the issues that Hotelling's T^2 statistic faces in the large p, small n scenario, particularly the singularity of the pooled sample covariance matrix. An early important work in this direction was developed by Bai and Saranadasa , who considered the squared Euclidean norm between the sample means of the two populations as an alternative to the Mahalanobis distance used in Hotelling's T^2 statistic. However, one of the criticisms (see ) of this method is the assumption of equal variance structures for the two populations, which is hard to verify. Addressing this shortcoming, Chen and Qin  suggested an alternative method that does not assume equality of the variance-covariance matrices and removes the cross-product terms from the squared Euclidean norm of the difference of sample means between the two populations. Another method, proposed by Srivastava , replaces the inverse of the sample covariance matrix in Hotelling's T^2 statistic with the inverse of the diagonal of the covariance matrix. Modifications to this work [9, 10] have relaxed some of the assumptions, resulting in relatively similar test statistics with improved performance. Recently, Gregory et al.  suggested a test procedure, known as the generalized component test (GCT), that bypasses full estimation of the covariance matrix by assuming a logical ordering among the components such that the dependence between any two components is related to their displacement. In a different direction, Cai et al.  suggested a test procedure based on a linear transformation of the data by a precision matrix, where the test statistic is the sup-norm of marginal statistics of the transformed data. The tests developed in [6, 7, 8, 9, 11] are designed for testing dense but possibly weak signals, i.e., when there are a large number of small non-zero mean differences. The test suggested in  is designed for testing sparse signals, i.e., when there are a small number of large non-zero mean differences. Its implementation requires sparsity assumptions on the covariance or precision matrix, which may not be satisfied in real applications. Furthermore, it requires an estimate of the precision matrix, which is time-consuming for large p; see .
Driven by the above two concerns about that test, we reconsider the problem of testing two mean vectors under sparse alternatives. We propose a new test based on the prepivoting approach. The concept of prepivoting was introduced by Beran . Prepivoting is a mechanism that transforms a root, a function of the samples and parameters, by its estimated distribution function into a random variable known as the prepivoted root. The distribution of a prepivoted root is very similar to a uniform distribution on (0,1). Thus, the distribution of the prepivoted root depends less on population parameters than the distribution of the original root, and consequently approximate inference based on the prepivoted root is more accurate than approximate inference based on the original root. Prepivoting has not previously been considered in high-dimensional testing problems. A snapshot of the development of our proposed test method is as follows: we construct prepivots marginally for each variable individually using an asymptotic refinement, specifically the Edgeworth expansion. The marginal prepivoted roots are then combined using the max-norm.
The rest of the paper is organized as follows. In Section 2, we formulate the construction of the suggested test method for the hypothesis testing problem (1) based on the prepivoting technique and study the limiting null distribution of the test statistic. In this section, we also analyze the power of the suggested test. Simulation studies are presented in Section 3, and Section 4 presents applications of the proposed test to two microarray gene expression data sets. We conclude the article with some brief remarks in Section 5. Proofs and other theoretical details are relegated to the Appendix.
Let and be independently and identically distributed (i.i.d.) samples drawn from two p-variate distributions and , respectively. Let and denote the mean vector and covariance matrix of , respectively, and let and denote the mean vector and covariance matrix of , respectively. The primary focus is on testing the equality of the mean vectors of the two populations,
. This technique is useful for drawing more accurate statistical inferences in the absence of a pivotal quantity. The prepivoting technique transforms the original root, which can depend functionally on the data and parameters, into another value through its estimated cumulative distribution function (cdf). The new value is known as the prepivoted root, or prepivot.
2.1 Overview of Prepivoting
For the sake of simplicity, we first introduce the concept of prepivoting. Consider a statistical model , where is unknown, and suppose we are interested in the hypothesis test vs. . Let be a random sample from . In a frequentist inference problem, one often compares the observed value of a root, , under
to the quantile of that root's null distribution. We reject at significance level whenever is greater than , where is the cdf of the root under . Unless is a pivotal quantity, depends on . If is not a pivotal quantity, then its null distribution can be derived only through . Thus a hypothesis test based on a non-pivotal quantity is not exact; furthermore, the actual level of such a test differs from the nominal level.
In such cases, prepivoting is a useful technique for reducing the errors associated with inference. The main idea behind Beran's prepivoting technique is to transform the original root into a new root whose sampling distribution is "smoother" in the sense that its dependence on the population distribution is reduced, where is an estimate of .
If the distribution is unknown, then prepivots are constructed based on the empirical distribution function . Let denote the bootstrap estimate of , and let be a random sample from . The prepivot is given by , and it has been established  that is closer to being pivotal than the original root, or more precisely, is approximately a U(0,1) random variable. Let denote the bootstrap estimate of , the cdf of . The statistical test based on prepivoting rejects at level if . Prepivoting is usually accomplished by a Monte Carlo simulation ([12, 13]) known as the nested double-bootstrap algorithm. This algorithm consists of an outer and an inner level of nonparametric bootstrap, which give the estimated cdfs and , respectively. However, prepivoting has been criticized for its computational cost due to the nested double-bootstrap algorithm (; ).
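As a concrete illustration, the outer level of the double-bootstrap can be sketched as follows for a one-sample studentized-mean root. This is a minimal sketch, not the algorithm used later in the paper: the function names and the choice of root are ours, and the inner bootstrap level (which would estimate the cdf of the prepivot itself) is only noted in the docstring.

```python
import numpy as np

def root(sample, theta0):
    # Example root: absolute studentized deviation of the sample mean from theta0.
    n = len(sample)
    return abs(np.sqrt(n) * (sample.mean() - theta0) / sample.std(ddof=1))

def prepivot(sample, theta0, n_outer=200, rng=None):
    """Outer bootstrap level of Beran's prepivoting: estimate the cdf of the
    root and evaluate it at the observed root. A second (inner) bootstrap
    level would estimate the cdf of the resulting prepivot."""
    rng = np.random.default_rng(rng)
    r_obs = root(sample, theta0)
    # Resample under the null by recentering the sample at theta0.
    centered = sample - sample.mean() + theta0
    boot_roots = np.array([
        root(rng.choice(centered, size=len(sample), replace=True), theta0)
        for _ in range(n_outer)
    ])
    # Prepivoted root: estimated cdf of the root at its observed value,
    # approximately U(0,1) under the null.
    return (boot_roots <= r_obs).mean()
```

Under the null the returned value is approximately uniform on (0,1), while a strong departure from the null pushes it toward 1.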
2.2 Proposed Test Method
To facilitate the construction of the proposed test, consider the elements of the random vectors, and , . As a procedure for the two-sample mean vector test given in (1) based on prepivoting, consider the root
where is the component of
, which is an unbiased estimator for the element of , viz. . The quantities and are plug-in estimators of the diagonal elements of and , respectively. Let denote the version of under . Then , .
Here, we are interested in detecting relatively sparse signals, so maximum-type statistics such as are more useful than sum-of-squares-type statistics (see ). Thus, at first glance, the test statistic is a potential candidate for the testing problem (1). Under normality, the roots follow a distribution with degrees of freedom under , provided . For unequal variances, we encounter the Behrens-Fisher problem, and even under normality the roots are non-pivotal. For a general distribution, deriving a pivotal root is highly infeasible, and hence the corresponding inferences are approximate. To overcome the drawback of non-pivotal roots, the roots can be prepivoted using their empirical distributions to reduce the errors involved in approximate inference.
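The marginal roots can be illustrated with a short sketch. We assume here that they are Welch-type studentized mean differences, one per coordinate (a plausible reading, since no pooling is required under unequal variances; the paper's exact standardization may differ), together with the max-norm combination used below.

```python
import numpy as np

def marginal_roots(X, Y):
    """Welch-type studentized mean differences, one per coordinate.
    X is an n1 x p sample matrix, Y is n2 x p."""
    n1, n2 = X.shape[0], Y.shape[0]
    num = X.mean(axis=0) - Y.mean(axis=0)
    den = np.sqrt(X.var(axis=0, ddof=1) / n1 + Y.var(axis=0, ddof=1) / n2)
    return num / den

def max_stat(X, Y):
    # Maximum-norm combination of the marginal roots, suited to sparse signals.
    return np.abs(marginal_roots(X, Y)).max()
```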
Suppose that denotes the cdf of , which is generally unknown. Let denote the bootstrap estimator of . Instead of the 's, we consider the prepivots, or transformed roots, to construct a test for the two-sample mean vector problem. As discussed in Subsection 2.1, is a better pivot than the original root .
Our test procedure is based on the intuition that if only a few components of and are unequal, then a test based on the max-norm should detect that and are different with greater power than tests based on sums of squares of sample mean differences. With this intuition, a natural choice is a test based on the maximum norm of . Thus, we define the test statistic for the two-sample mean vector test as
The main advantage of over is that is less dependent on the characteristics of the population distributions and . Since the sampling distribution of is unavailable, we can approximate the critical value or the p-value of a test based on by means of the nonparametric bootstrap, where is the cdf of . This approach can be computationally challenging because its complexity grows linearly with the dimension. To avoid this burden, we replace each by an analytical approximation based on Edgeworth expansion theory.
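To illustrate, a one-sample analogue of an Edgeworth-based analytical prepivot can be sketched as follows. This is an assumption-laden sketch: it uses the standard one-term Edgeworth expansion for a studentized mean, P(T <= t) ~ Phi(t) + n^{-1/2} * skew * (2t^2 + 1)/6 * phi(t), rather than the paper's two-sample expansion from Theorem 2.1, which has an analogous but different form; the clipping to [0, 1] is our own safeguard.

```python
from math import erf, exp, pi, sqrt

def phi(x):
    # Standard normal density.
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def edgeworth_prepivot(t, n, skew_hat):
    """One-term Edgeworth approximation to the cdf of a studentized mean,
    evaluated at the observed root t; skew_hat is the estimated skewness."""
    g = Phi(t) + skew_hat * (2.0 * t * t + 1.0) / (6.0 * sqrt(n)) * phi(t)
    # Clip so the result is a valid prepivot in [0, 1].
    return min(max(g, 0.0), 1.0)
```

When the estimated skewness is zero the correction vanishes and the prepivot reduces to the normal cdf value.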
2.3 Edgeworth expansion based prepivots and the suggested test statistic
The Edgeworth expansion is a sharpening of the central limit approximation to include higher-order moment
terms involving the skewness and kurtosis of the population distribution. The development of the Edgeworth expansion begins with the derivation of series expansions for the density and distribution functions of normalized i.i.d. sample means . A rigorous treatment of these expansions was given by Cramér . Later, Bhattacharya and Ghosh () developed Edgeworth expansion theory for smooth functions of i.i.d. sample mean vectors.
To construct Edgeworth expansion based prepivots, it is necessary to obtain the Edgeworth expansion of , the cdf of . Theorem 2.1 provides such asymptotic expansions. Throughout this paper, we assume that
, and . That is, the marginal eighth-order moments of the random vectors and are uniformly bounded.
Theorem 2.1. Under assumptions (A1), (A2), and (A3), the cdf of has the following Edgeworth expansion
holds uniformly in any ; where
Proof. See Appendix.
In Theorem 2.1, and denote the skewness of the component of and , respectively, and and denote the kurtosis of the component of and , respectively, where
The bootstrap estimators have asymptotic expansions similar to those given in Theorem 2.1, but with the population moments in the expansion replaced by the corresponding sample moments. Sample analogues of the asymptotic expansions in Proposition 2.1 are
for any , where are the sample versions of . The asymptotic expansions in (5) suggest the following second-order analytical approximations to the prepivoted test statistics :
Thus, (6) indicates that the corresponding analytical approximation to can be expressed as
The approximation is a better choice than since it is computationally more attractive. However, it may be difficult to obtain the distribution of . One approach is to use the bootstrap to estimate the cdf . To avoid the Monte Carlo approximation associated with the bootstrap technique, we consider a transformation-based approach. The idea is as follows.
If a monotone transformation of has a limiting distribution, then we can use that transformation to develop the rejection region of the test. Due to the monotonicity of this transformation, inference based on or on a monotone transformation of will be the same. Let
denote the inverse function of the cdf of a standard normal distribution. Theorem 2.2 shows that
has an extreme value distribution as . Thus, is a better choice than , at least from the computational-cost perspective.
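The transformation step can be sketched as follows, using the standard normal quantile function from the Python standard library. The clipping of the prepivots away from 0 and 1 is our own numerical safeguard, not part of the theory.

```python
from statistics import NormalDist

def transformed_max(prepivots):
    """Map each marginal prepivot u_k through the standard normal quantile
    function and combine by the maximum of squares. Under the null each u_k
    is approximately Uniform(0,1), so each quantile is approximately N(0,1)."""
    nd = NormalDist()
    # Clip to avoid infinite quantiles at exactly 0 or 1.
    z = [nd.inv_cdf(min(max(u, 1e-12), 1.0 - 1e-12)) for u in prepivots]
    return max(v * v for v in z)
```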
in probability faster than . The probability integral transformation implies that under , and jointly follow a -dimensional multivariate normal distribution with zero mean vector and covariance matrix (see ). Here is the Pearson correlation between and . For fixed , converges weakly to by a multivariate version of Slutsky's theorem. Thus, to establish the weak convergence of when , assumptions (A4) and (A5) are imposed on , the covariance matrix of the random vector .
for some . Assumption (A4) is mild, since would imply that is singular.
for some .
Let . Then under the assumptions (A1)–(A5), for any ,
Proof. See Appendix.
The limiting distribution is the type I extreme value distribution. On the basis of this limiting null distribution, the approximate p-value is when grows at a rate smaller than . Based on Theorem 2.2, we can also construct an asymptotically -level test that rejects the null hypothesis in (1) if , where .
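For illustration, a critical value of this kind can be computed as below. The exact centering and scaling constants in Theorem 2.2 are not reproduced here, so we assume the form commonly used for max-of-squared-normals statistics in Cai et al.-type tests: M - 2 log p + log log p converging to a Gumbel law with cdf exp(-pi^{-1/2} e^{-x/2}).

```python
from math import log, pi

def gumbel_critical_value(p, alpha):
    """Level-alpha critical value for a max-of-squared-normals statistic M
    under the assumed type I extreme value limit: reject when
    M > 2*log(p) - log(log(p)) + q_alpha, where q_alpha is the upper
    alpha quantile of the Gumbel cdf exp(-pi^{-1/2} * e^{-x/2})."""
    q_alpha = -log(pi) - 2.0 * log(log(1.0 / (1.0 - alpha)))
    return 2.0 * log(p) - log(log(p)) + q_alpha
```

The critical value grows like 2 log p, which is the familiar rate for the maximum of p squared standard normals.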
Our test for small sample sizes and large dimension is in the same spirit as the test proposed by Cai et al.  for the analysis of rare signals. Their test involves estimating the inverses of the covariance matrices; matrix inversion is time-consuming when is large, whereas our test is computationally more efficient.
We now analyze the power of the test. To study the asymptotic power, we consider a local alternative condition under which the maximum absolute value of the standardized signals is of higher order than ; that is,
Proof. See Appendix.
Theorem 2.3 indicates that the proposed test procedure detects the discrepancy between the two mean vectors and with probability tending to 1 if the maximum of the absolute standardized signals is of order higher than as . In comparison, Cai et al.'s test , which is based on a supremum-type test statistic, has asymptotic power 1 provided the maximum of the absolute standardized signals is of higher order than , for a certain unknown constant . The asymptotic powers of sum-of-squares-type tests (Bai and Saranadasa ; Srivastava and Du ; Chen and Qin ) converge to 1 when as . The asymptotic power of the GCT (Gregory et al. ) converges to 1 provided as .
To better understand the asymptotic power of the proposed test in comparison to the other tests, consider the following scenario. Suppose that all the signals are of equal strength , have unit variances, and that the two groups have equal sample sizes, . Under the alternative, let denote the set of locations of the signals, and let the cardinality of be , where . Under this scenario, the power of the sum-of-squares-type tests converges to 1 as if , that is, if . Similarly, the power of the GCT converges to 1 if as . On the other hand, the power of our suggested test procedure goes to 1 as if . Hence the power of the sum-of-squares-type tests depends on how large is compared to , whereas the power of our proposed test depends on how large is compared to , which has a much smaller rate of increase than . In this particular example, the sum-of-squares-type tests are less powerful when compared to . Extending further, we may say that our test procedure is asymptotically more powerful than the sum-of-squares-type tests with .
3 Simulation Study
In this section, we present empirical results based on simulated data. We compare the empirical type I error rate and power of the proposed test procedure, denoted PREPR, under different models with the following test methods: BS (Bai and Saranadasa ), SD (Srivastava and Du ), CQ (Chen and Qin ), CLX (Cai et al. ), and GM and GL (Gregory et al. ), the moderate-p and large-p variants of the GCT. Following , we chose the Parzen window with lag window size for both GM and GL. The variety of multivariate distributions and parameters is too large to allow a comprehensive, all-encompassing comparison; hence we have chosen certain representative examples for illustration. We considered two setups to generate the data.
3.1 Simulation setup 1
In setup 1, we considered two scenarios for the marginal distributions of the elements of the random vectors and . In Scenario 1, each component of and follows a standard normal distribution. In Scenario 2, each component of and follows a centralized gamma(2,1) distribution. To impose a dependence structure on the joint distributions of and , we used a factor model as described in . Simulation setup 1 considers equal variances for all the components of and . Under each of the above two scenarios, we consider three dependence models to generate correlated observations:
(M1) Model 1 considers a moving average process of order 10. The vector of moving-average coefficients was a normalized vector of dimension 10, whose components were generated independently from Uniform(2, 3) and kept fixed, once generated, throughout our simulations.
(M3) In Model 3, we set , where and is the all-ones matrix.
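For concreteness, correlated observations under a moving-average construction in the spirit of (M1) might be generated as below. The function name and the exact way the 10 coefficients induce dependence across components are our own illustrative choices, not necessarily the paper's construction.

```python
import numpy as np

def ma10_sample(n, p, coeffs, rng):
    """Generate n observations of a p-vector whose components are formed as
    a moving average of order len(coeffs) over a common stream of i.i.d.
    standard normal innovations, inducing dependence between nearby
    components."""
    q = len(coeffs)  # q = 10 for an MA(10)-type model
    eps = rng.standard_normal((n, p + q - 1))
    X = np.empty((n, p))
    for j in range(p):
        # Component j averages innovations j, ..., j + q - 1.
        X[:, j] = eps[:, j:j + q] @ coeffs
    return X

rng = np.random.default_rng(0)
coeffs = rng.uniform(2.0, 3.0, size=10)
coeffs /= np.linalg.norm(coeffs)  # normalized, then held fixed across replications
X = ma10_sample(50, 100, coeffs, rng)
```

Because the coefficient vector is normalized, each component has unit variance, matching the equal-variance design of setup 1.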
We considered three values for the dimension: and . For each combination of scenario (1 or 2), model (M1/M2/M3), and , the empirical type I error and power were computed from 2000 randomly generated data sets. The empirical powers were computed against the number of signals , where we considered and . The parameter denotes the sparsity of the signal. The sample sizes were set as and . In all scenarios, we set , where and denotes a -vector with entries equal to 0. In setup 1, we computed powers for . For the empirical type I error calculation, we assumed . Without loss of generality, we set , the mean vector of , to the zero vector in all simulations.
3.2 Simulation setup 2
In Simulation setup 2, we use the same setup as in Simulation setup 1, except that we now use covariance matrices with unequal variances, as described in (M4) and (M5).
(M4) Model 4 considers , where D is a diagonal matrix with diagonal elements = Unif(1,5), , , and if . Such a covariance matrix was also used in .
(M5) In Model 5, , where D is a diagonal matrix with diagonal elements = Unif(1,5), and has a long-range dependence structure whose diagonal entries are all ones. has a structure similar to that of (M2) in Simulation setup 1; describes the structure.
3.3 Discussion of the simulation results
Results from Simulation setup 1, presented in Tables 1-3, display the empirical type I errors of the new test PREPR along with those of BS, CQ, SD, GL, GM, and CLX. These tables show that the empirical type I errors of PREPR are close to the nominal size 0.05 under all scenarios and models, ranging from 0.032 to 0.051. The type I errors of BS and CQ are comparable; in some cases they are liberal, with values ranging from 0.041–0.071 and 0.043–0.070, respectively. Test SD appears conservative as increases, with empirical type I errors varying from 0.012 to 0.05, whereas CLX, GL, and GM are generally liberal under all scenarios and models, with empirical type I errors ranging from 0.056–0.142, 0.059–0.191, and 0.051–0.165, respectively. Overall, in all of the considered scenarios and models under Simulation setup 1, the proposed test PREPR outperforms BS, CQ, SD, GM, GL, and CLX in terms of size control, as indicated by its better type I error accuracy.
Empirical type I errors corresponding to Simulation setup 2 are presented in Tables 8-9. These tables show that the type I errors of these tests are consistent with their empirical type I errors reported in Simulation setup 1. For example, the type I errors of PREPR range from 0.034–0.055, whereas those of BS, CQ, SD, GM, GL, and CLX range from 0.047–0.078, 0.048–0.078, 0.024–0.050, 0.059–0.088, 0.057–0.116, and 0.066–0.115, respectively.
The empirical powers of the different tests under Simulation setup 1 are presented in Tables 4-7. Tables 4 and 5 report the powers for , and Tables 6 and 7 report the powers for and , respectively. The tables show that PREPR has more power overall than BS, SD, CQ, GM, and GL when the signals are sparse, i.e., , and . As the signals become less sparse at , the tests BS, SD, and CQ are more powerful than PREPR, particularly when the signals are moderately strong with . Though GM and GL are liberal, Tables 4-7 show that they have the smallest power in most cases. The test CLX enjoys the maximum power in all considered scenarios and models under Simulation setup 1, irrespective of whether the signals are sparse or dense. Such performance of CLX is not surprising, since it showed inflated type I errors. Tables 10-11 display the empirical powers corresponding to Simulation setup 2; the comparisons are similar to those reported for Simulation setup 1.
In summary, based on the numerical results, PREPR shows better control of the type I error than the remaining tests and is more powerful than the tests BS, SD, CQ, GL, and GM against sparse alternatives.
4 Data Analysis/Application
We considered an application of the PREPR test to the analysis of gene expression data in terms of gene sets. Gene-set analysis refers to a statistical analysis that identifies whether some functionally pre-defined sets of genes are expressed differently under different experimental conditions. A gene-set analysis is a starting point for biologists to understand patterns in differential expression. There are two major categories of gene-set tests: competitive and self-contained (). A competitive test compares a set of genes of interest, say , with the complementary set of genes not in . A self-contained test considers the null hypothesis that none of the genes in is differentially expressed. The proposed two-sample test PREPR is applicable to the self-contained gene-set test. We apply