# Two-Sample High Dimensional Mean Test Based On Prepivots

Testing the equality of mean vectors is a common criterion when comparing two multivariate random variables. Traditional tests such as Hotelling's T-squared become either unusable or suffer from low power when the number of variables exceeds the combined sample size. In this paper, we propose a test based on both prepivoting and the Edgeworth expansion for testing the equality of two population mean vectors in the "large p, small n" setting. The asymptotic null distribution of the test statistic is derived, and the power of the suggested test is shown to converge to one against sparse alternatives as both n and p increase to infinity. The finite sample performance of the proposed test statistic is compared through simulations with other recently developed tests designed to handle the "large p, small n" situation. The proposed test achieves competitive rates for both type I error rate and power. The usefulness of our test is illustrated by applications to two microarray gene expression data sets.


## 1 Introduction

Due to the advancement of technologies, researchers in various fields of science often encounter the analysis of massive data sets to extract useful information for new scientific discoveries. Many of these data sets contain a large number of features, often exceeding the number of observations. Such data sets are common in microarray gene expression data analysis [1, 2], medical image analysis [3], signal processing, astrometry, and finance [4]. Analysis of such high dimension, low sample size data sets presents a substantial challenge, known as the "large p, small n" problem. One of the most prominent problems that researchers are interested in is making inferences on the mean structure of the population. However, many well known classical multivariate methods cannot be used to make inferences on the mean structure for large p, small n data. For example, because of the singularity of the estimated pooled covariance matrix, the classical Hotelling's T² statistic [5] breaks down for the two sample test when the dimension of the data exceeds the combined sample size.

Over the last few years, researchers have turned their attention to developing statistical methods that can handle the large p, small n problem. A series of important works have addressed the two-sample location problem in the large p, small n setup, and several parametric and non-parametric tests have been developed. Most of these prior works are primarily motivated by avoiding the issues that Hotelling's T²-statistic faces in the large p, small n scenario, particularly the singularity of the pooled sample covariance matrix. An early important work in this direction was developed by Bai and Saranadasa [6], who considered the squared Euclidean norm between the sample means of the two populations as an alternative to the Mahalanobis distance used in Hotelling's T²-statistic. However, one of the criticisms (see [7]) of this method is the assumption of equal variance structures for the two populations, which is hard to verify. Addressing this shortcoming, Chen and Qin [7] suggested an alternative method that does not assume the equality of variance-covariance matrices and removes the cross-product terms from the squared Euclidean norm of the difference of sample means between the two populations. Another method, proposed by Srivastava [8], replaces the inverse of the sample covariance matrix in Hotelling's T²-statistic with the inverse of the diagonal of the covariance matrix. Modifications to this work [9, 10] have relaxed some of the assumptions, resulting in relatively similar test statistics with improved performance. Recently, Gregory et al. [11] suggested a test procedure, known as the generalized component test (GCT), that bypasses the full estimation of the covariance matrix by assuming that there is a logical ordering among the components, in such a way that the dependence between any two components is related to their displacement. In a development in another direction, Cai et al. [4] suggested a test procedure based on a linear transformation of the data by a precision matrix, where the test statistic is the sup-norm of marginal t-statistics of the transformed data. The tests developed in [6, 7, 8, 9, 11] are designed for testing dense but possibly weak signals, i.e., when there are a large number of small non-zero mean differences. The test suggested in [4] is designed for testing sparse signals, i.e., when there are a small number of large non-zero mean differences. The implementation of the test in [4] requires sparsity assumptions on the covariance or precision matrix, which may not be satisfied in real applications. Furthermore, it also requires an estimate of the precision matrix, which is time-consuming for large p; see [12].

Driven by the above two concerns about the test in [4], we reconsider the problem of testing two mean vectors under sparse alternatives. We propose a new test based on the prepivoting approach. The concept of prepivoting was introduced by Beran [12]. Prepivoting is a mechanism that transforms a root, a function of samples and parameters, by its estimated distribution function into a random variable known as the prepivoted root. The characteristics of the distribution of a prepivoted root are very similar to those of a uniform distribution on (0,1). Thus, the distribution of the prepivoted root is less dependent on population parameters than the distribution of the root. Consequently, approximate inference based on the prepivoted root is more accurate than approximate inference based on the original root. Prepivoting has not been considered previously in high-dimensional testing problems. A snapshot of the development of our proposed test method is as follows: we construct prepivots marginally for each of the p variables individually using an asymptotic refinement, namely the Edgeworth expansion. The marginal prepivoted roots are then combined using the max-norm.

The rest of the paper is organized as follows. In Section 2, we formulate the construction of the suggested test method for the hypothesis testing problem (1) based on the prepivoting technique and study the limiting null distribution of the test statistic. In this section, we also analyze the power of the suggested test. Simulation studies are presented in Section 3, and Section 4 presents applications of the proposed test to two microarray gene expression data sets. We conclude the article with some brief remarks in Section 5. Proofs and other theoretical details are relegated to the Appendix.

## 2 Methodology

Let X1,…,Xn and Y1,…,Ym be independently and identically distributed (i.i.d.) samples drawn from two p-variate distributions Fx and Fy, respectively. Let μx and Σx denote the mean vector and covariance matrix of Fx, and let μy and Σy denote the mean vector and covariance matrix of Fy. The primary focus is on testing the equality of the mean vectors of the two populations,

 H0: μx = μy  vs.  H1: μx ≠ μy. (1)

The proposed method for the two sample mean vector test (1) adopts the prepivoting technique introduced by Beran [12]. This technique is useful for drawing more accurate statistical inferences in the absence of a pivotal quantity. The prepivoting technique transforms the original root, which may depend on both data and parameters, into another value through its estimated cumulative distribution function (cdf). The new value is known as the prepivoted root, or prepivot.

### 2.1 Overview of Prepivoting

For the sake of simplicity, we first introduce the concept of prepivoting in a generic setting. Consider a statistical model indexed by an unknown parameter, and suppose we are interested in a hypothesis test of H0 versus H1 based on a random sample from the model. In a frequentist inference problem, one often compares the observed value of a root, a function of the sample and the parameter, under H0 to a quantile of that root's null distribution. We reject H0 at significance level α whenever the root exceeds the corresponding quantile of its cdf under H0. Unless the root is a pivotal quantity, this cdf depends on the unknown parameter, and the null distribution can be derived only through an estimate of it. Thus a hypothesis test based on a non-pivotal quantity is not exact. Furthermore, the actual level of a test based on a non-pivotal root differs from the nominal level.

In such cases, prepivoting is a useful technique for reducing the errors associated with inference. The main idea behind Beran's prepivoting technique is to transform the original root into a new root, obtained by evaluating an estimated cdf of the root at its observed value, whose sampling distribution is "smoother" in the sense that its dependence on the population distribution is reduced.

If the population distribution is unknown, then prepivots are constructed based on the empirical distribution function, with the bootstrap estimate of the root's cdf computed from resamples drawn from it. The prepivot is the value of this bootstrap-estimated cdf at the observed root, and it has been established [12] that the prepivot is closer to being pivotal than the original root; more precisely, it is approximately a U(0,1) random variable. The statistical test based on prepivoting rejects H0 at level α if the prepivot exceeds the corresponding quantile of its own bootstrap-estimated cdf. Prepivoting is usually accomplished by a Monte Carlo simulation ([12, 13]) known as the nested double-bootstrap algorithm. This algorithm consists of an outer and an inner level of nonparametric bootstrap, which give the two estimated cdfs, respectively. However, prepivoting has been criticized for its computational cost due to the nested double-bootstrap algorithm ([14]; [15]).
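As an illustration of the mechanism just described, the following Python sketch implements the outer bootstrap level for a simple one-sample studentized root (the one-sample setup, function names, and replication count are our illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def root(sample, mu0):
    """Studentized root R = |mean - mu0| / (s / sqrt(n)); non-pivotal in general."""
    n = len(sample)
    return abs(sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))

def prepivot(sample, mu0, B=499):
    """Evaluate the bootstrap-estimated cdf of the root at the observed root.

    In the bootstrap world the 'true' mean is the sample mean, so resampled
    roots are centered there; the returned value is approximately U(0,1)
    under H0: mu = mu0."""
    r_obs = root(sample, mu0)
    center = sample.mean()
    boot = np.array([root(rng.choice(sample, len(sample), replace=True), center)
                     for _ in range(B)])
    return (boot <= r_obs).mean()
```

A full nested double bootstrap would wrap this in an inner resampling loop to estimate the cdf of the prepivot itself, multiplying the cost by the inner replication count; this is exactly the computational burden criticized in [14] and [15].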

### 2.2 Proposed Test Method

To facilitate the construction of the proposed test, consider the components Xi and Yi, i = 1,…,p, of the random vectors X and Y. As a procedure for the two-sample mean vector test given in (1) based on prepivoting, consider the roots

 Ri = Ri(Xi, Yi, μxi, μyi) = |(X̄i − Ȳi) − (μxi − μyi)| / √(σ̂²xi/n + σ̂²yi/m),  i = 1,…,p; (2)

where X̄i − Ȳi is the i-th component of X̄ − Ȳ, which is an unbiased estimator of the i-th element of μx − μy. The quantities σ̂²xi and σ̂²yi are plug-in estimators of the i-th diagonal elements of Σx and Σy, respectively. Let R0i denote the version of Ri under H0; that is, R0i = |X̄i − Ȳi| / √(σ̂²xi/n + σ̂²yi/m), i = 1,…,p.

Here we are interested in detecting relatively sparse signals, so maximum-type statistics are more useful than sum-of-squares-type statistics (see [4]). Thus, at first glance, the statistic max1≤i≤p R0i is a potential candidate for the testing problem (1). Under normality, the roots follow a t distribution under H0, provided the two populations have equal variances. For unequal variances, we encounter the Behrens-Fisher problem, and even under normality the roots are non-pivotal. For a general distribution, deriving a pivotal root is highly infeasible, and hence the corresponding inferences are approximate. To overcome the drawback of non-pivotal roots, the roots can be prepivoted using their estimated distributions to reduce the errors involved in approximate inference.
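For concreteness, the roots under H0 can be computed coordinate-wise in a vectorized way. In this sketch (the helper name and the n × p / m × p array layout are our assumptions):

```python
import numpy as np

def marginal_roots(X, Y):
    """Welch-type roots under H0: |Xbar_i - Ybar_i| / sqrt(s2x_i/n + s2y_i/m),
    one per coordinate i = 1, ..., p; X is n x p and Y is m x p."""
    n, m = X.shape[0], Y.shape[0]
    diff = X.mean(axis=0) - Y.mean(axis=0)
    se = np.sqrt(X.var(axis=0, ddof=1) / n + Y.var(axis=0, ddof=1) / m)
    return np.abs(diff) / se
```

The naive max-type candidate mentioned above is then simply `marginal_roots(X, Y).max()`.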

Suppose that Ji denotes the cdf of R0i, which is generally unknown. Let Ĵi denote the bootstrap estimator of Ji. Instead of the R0i's, we consider the prepivots, or transformed roots, Ĵi(R0i) to construct a test for the two-sample mean vector problem. As discussed in Subsection 2.1, Ĵi(R0i) is a better pivot than the original root R0i.

Our test procedure is based on the intuition that if only a very few components of μx and μy are unequal, then a test based on the max-norm should detect that μx and μy are different with greater power than tests based on sums of squares of sample mean differences. With this intuition, a natural choice is the test based on the maximum of the prepivots Ĵi(R0i). Thus, we define the test statistic for the two-sample mean vector test as

 T = max1≤i≤p Ĵi(R0i). (3)

The main advantage of T over max1≤i≤p R0i is that T is less dependent on the characteristics of the population distributions Fx and Fy. Due to the unavailability of the sampling distribution of T, the critical value or p-value of a test based on T can be approximated by means of a nonparametric bootstrap. This approach can be challenging because the computational complexity grows linearly with the dimension p. To avoid this computational burden, we replace each Ĵi by an analytical approximation based on Edgeworth expansion theory.

### 2.3 Edgeworth Expansion Based Prepivots and the Suggested Test Statistic

The Edgeworth expansion sharpens the central limit approximation by including higher-order moment terms involving the skewness and kurtosis of the population distribution. The development of the Edgeworth expansion begins with the derivation of series expansions for the density and distribution functions of normalized i.i.d. sample means [16]. A rigorous treatment of these expansions was given by Cramér [17]. Later, Bhattacharya and Ghosh [18] developed Edgeworth expansion theory for smooth functions of i.i.d. sample mean vectors.

To construct Edgeworth expansion based prepivots, it is necessary to obtain the Edgeworth expansion of Ji, the cdf of Ri. Theorem 2.1 provides such asymptotic expansions. Throughout this paper, we assume that

• (A1) The marginal eighth-order moments of the random vectors X and Y are uniformly bounded.

• (A2) For each i, the distributions of Xi and Yi satisfy the bivariate Cramér continuity condition ([18]; pp. 66-67 in [19]).

• (A3) n/N converges to a constant in (0, 1) as N = n + m → ∞.

###### Theorem 2.1.

Under assumptions (A1), (A2), and (A3), the cdf of Ri has the following Edgeworth expansion:

 P(Ri ≤ x) = 2Φ(x) − 1 + 2N⁻¹qi(x)ϕ(x) + o(N⁻¹), (4)

which holds uniformly in x, where

 qi(x) = x[ (1/12)η1,i⁻²η3,i(x² − 3) − (1/18)η1,i⁻³η2,i²(x⁴ + 2x² − 3) − (1/4)η1,i⁻²{η4,i(x² + 3) + 2σ²x,iσ²y,i r²x r²y} ],
 η1,i = σ²x,i rx + σ²y,i ry,  η2,i = γx,i r²x − γy,i r²y,  η3,i = κx,i r³x + κy,i r³y,  and  η4,i = σ⁴x,i r³x + σ⁴y,i r³y.

Proof. See Appendix

In Theorem 2.1, γx,i and γy,i denote the skewness of the i-th components of X and Y, respectively, and κx,i and κy,i denote the kurtosis of the i-th components of X and Y, respectively; where

 γx,i = E(Xi − μx,i)³,  γy,i = E(Yi − μy,i)³,
 κx,i = E(Xi − μx,i)⁴ − 3σ⁴x,i  and  κy,i = E(Yi − μy,i)⁴ − 3σ⁴y,i.

The bootstrap estimators Ĵi have asymptotic expansions similar to those in Theorem 2.1, but with the population moments replaced by the corresponding sample moments. The sample analogues of the asymptotic expansions in Theorem 2.1 are

 Ĵi(x) = 2Φ(x) − 1 + 2N⁻¹q̂i(x)ϕ(x) + op(N⁻¹), (5)

for any x, where q̂i are the sample versions of qi. The asymptotic expansions in (5) suggest the following second-order analytical approximations to the prepivoted test statistics Ĵi(R0i):

 J̃i(R0i) = 2Φ(R0i) − 1 + 2N⁻¹q̂i(R0i)ϕ(R0i). (6)

Thus, (6) indicates that the corresponding analytical approximation to T can be expressed as

 T′ = max1≤i≤p J̃i(R0i) = max1≤i≤p {2Φ(R0i) − 1 + 2N⁻¹q̂i(R0i)ϕ(R0i)}.
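To make (6) concrete, the sketch below plugs sample moments into the qi polynomial of Theorem 2.1 for a single coordinate. It assumes rx = N/n and ry = N/m with N = n + m, so that η1,i equals N times the squared denominator of (2); the function name and conventions are ours, not the paper's:

```python
import numpy as np
from math import erf, sqrt, pi, exp

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):
    """Standard normal pdf."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def edgeworth_prepivot(x_col, y_col, r0):
    """Second-order prepivot J~_i(R0_i) of (6) for one coordinate.

    x_col, y_col: the i-th coordinate samples; r0: the observed root R0_i.
    The q polynomial follows Theorem 2.1 as printed; rx = N/n, ry = N/m is
    our assumption about the notation."""
    n, m = len(x_col), len(y_col)
    N = n + m
    rx, ry = N / n, N / m
    s2x, s2y = x_col.var(ddof=1), y_col.var(ddof=1)
    gx = ((x_col - x_col.mean()) ** 3).mean()                 # sample third central moment
    gy = ((y_col - y_col.mean()) ** 3).mean()
    kx = ((x_col - x_col.mean()) ** 4).mean() - 3.0 * s2x ** 2  # sample fourth cumulant
    ky = ((y_col - y_col.mean()) ** 4).mean() - 3.0 * s2y ** 2
    e1 = s2x * rx + s2y * ry
    e2 = gx * rx ** 2 - gy * ry ** 2
    e3 = kx * rx ** 3 + ky * ry ** 3
    e4 = s2x ** 2 * rx ** 3 + s2y ** 2 * ry ** 3
    q = r0 * (e3 * (r0 ** 2 - 3.0) / (12.0 * e1 ** 2)
              - e2 ** 2 * (r0 ** 4 + 2.0 * r0 ** 2 - 3.0) / (18.0 * e1 ** 3)
              - (e4 * (r0 ** 2 + 3.0) + 2.0 * s2x * s2y * rx ** 2 * ry ** 2) / (4.0 * e1 ** 2))
    return 2.0 * Phi(r0) - 1.0 + 2.0 * q * phi(r0) / N
```

The statistic T′ is then the maximum of `edgeworth_prepivot` over the p coordinates.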

The approximation T′ is a better choice than T since it is computationally more attractive. However, it may still be difficult to obtain the distribution of T′. One approach is to use the bootstrap to estimate its cdf. To avoid the Monte Carlo approximation associated with the bootstrap technique, we instead consider a transformation-based approach. The idea is as follows.

If a monotone transformation of T′ has a limiting distribution, then we can use that transformation to develop the rejection region of the test. Due to the monotonicity of the transformation, inference based on T′ or on a monotone transformation of T′ will be the same. Let Φ⁻¹ denote the inverse of the cdf of the standard normal distribution. Theorem 2.2 shows that

 T″ = [max1≤i≤p Φ⁻¹(J̃i(R0i))]² − 2log(p) + loglog(p) (7)

has an extreme value limiting distribution as N → ∞. Thus, T″ is a better choice than T′, at least from the computational cost perspective.

To obtain the limiting distribution of T″, we introduce two more assumptions, which are similar to the assumptions of Lemma 6 of [4]. Expression (A.5) in the Appendix shows that Φ⁻¹(J̃i(R0i)) equals R0i plus a remainder term that goes to zero in probability sufficiently fast. The probability integral transformation implies that, under H0, the prepivots are approximately U(0,1), and the transformed roots jointly follow a p-dimensional multivariate normal distribution with zero mean vector and a covariance matrix whose (i, j) entry is the Pearson correlation between the corresponding transformed roots (see [20]). For fixed p, T″ converges weakly by a multivariate version of Slutsky's theorem. Thus, to establish the weak convergence of T″ when p → ∞, assumptions (A4) and (A5) are imposed on the covariance matrix of the vector of transformed roots.

• (A4) The maximum absolute off-diagonal entry of this covariance matrix is bounded by a constant strictly less than 1. (A4) is mild, since a bound of 1 would imply that the matrix is singular.

• (A5) for some .

###### Theorem 2.2.

Let N = n + m. Then, under assumptions (A1)–(A5), for any x,

 P{T″ ≤ x} → exp{−(1/(2√π)) exp(−x/2)}  as N → ∞.

Proof See Appendix

The limiting distribution is a type I extreme value distribution. On the basis of the limiting null distribution, the approximate p-value is 1 − exp{−(1/(2√π)) exp(−T″/2)} when p grows at a suitably slow rate relative to N. Based on Theorem 2.2, we can also construct an asymptotically α-level test that rejects the null hypothesis in (1) if T″ > qα, where qα = −2log(−2√π log(1 − α)) is the (1 − α) quantile of the limiting distribution.
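Calibrating T″ by Theorem 2.2 requires only Φ⁻¹, the limit cdf, and its quantile. A sketch using the standard library's `statistics.NormalDist` for Φ⁻¹ (the clipping of the prepivots away from 0 and 1 is our numerical safeguard, and the function name is ours):

```python
from math import sqrt, pi, log, exp
from statistics import NormalDist

def extreme_value_test(J_vals, p, alpha=0.05):
    """Compute T'' of (7) from the prepivots J~_i(R0_i) and calibrate it by
    the limit P(T'' <= x) -> exp(-(1/(2*sqrt(pi))) * exp(-x/2)) of Theorem 2.2."""
    eps = 1e-12
    z = [NormalDist().inv_cdf(min(max(u, eps), 1.0 - eps)) for u in J_vals]
    t2 = max(z) ** 2 - 2.0 * log(p) + log(log(p))
    pval = 1.0 - exp(-exp(-t2 / 2.0) / (2.0 * sqrt(pi)))       # approximate p-value
    q_alpha = -2.0 * log(-2.0 * sqrt(pi) * log(1.0 - alpha))   # reject if t2 > q_alpha
    return t2, pval, q_alpha
```

Note that qα is obtained by inverting the limit cdf at 1 − α, which gives the closed form −2log(−2√π log(1 − α)); no resampling is needed.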

Our test for small sample sizes and large dimension is in the same spirit as the test proposed by Cai et al. [4] for the analysis of rare signals. Their test involves estimation of the inverse of the covariance matrices. Matrix inversion is time-consuming when p is large, whereas our test is computationally more efficient.

We now analyse the power of the test. To study the asymptotic power, we consider a local alternative condition under which the maximum absolute value of the standardized signals is of higher order than √(log(p²/log(p))); that is,

 √(log(p²/log(p))) = o( max1≤j≤p |μxj − μyj| / √(σ²xj/n + σ²yj/m) ). (8)

Theorem 2.3 shows that if (8) holds, then the power of the test, P(T″ > qα | H1), converges to 1 as N → ∞.

###### Theorem 2.3.

Under the assumptions of Theorem 2.2, if μx and μy satisfy the condition in (8), then

 P(T″ > qα | H1) → 1.

Proof See Appendix

Theorem 2.3 indicates that the proposed test procedure detects the discrepancy between the two mean vectors μx and μy with probability tending to 1 if the maximum of the absolute standardized signals is of higher order than √(log(p²/log(p))) as N → ∞. In comparison, Cai et al.'s test [4], which is based on a supremum-type test statistic, has asymptotic power 1 provided the maximum of the absolute standardized signals is of higher order than a similar logarithmic threshold involving a certain unknown constant. The asymptotic powers of sum-of-squares-type tests (Bai and Saranadasa [7]; Srivastava and Du [8]; Chen and Qin [9]) converge to 1 when the aggregate signal strength grows sufficiently fast as N → ∞, and the asymptotic power of the GCT (Gregory et al. [12]) converges to 1 under an analogous growth condition.

To better understand the asymptotic power of the proposed test in comparison with the other tests, consider the following scenario. Suppose that all the signals are of equal strength, have unit variances, and that the sample sizes of the two groups are equal. Under the alternative, let the set of signal locations have cardinality governed by a sparsity parameter. Under this scenario, the power of the sum-of-squares-type tests converges to 1 as N → ∞ only if the aggregate signal is large relative to a polynomial rate in p, and the power of the GCT converges to 1 under a similar condition. On the other hand, the power of our suggested test procedure goes to 1 as N → ∞ once the maximum standardized signal exceeds the logarithmic threshold in (8). Hence the power of the sum-of-squares-type tests depends on how large the signal is compared to a polynomial rate in p, whereas the power of our proposed test depends on how large it is compared to a logarithmic rate in p, which grows much more slowly. Under this particular example, the sum-of-squares-type tests are therefore less powerful under sparsity, and we may say that our test procedure is asymptotically more powerful than the sum-of-squares-type tests for sufficiently sparse alternatives.

## 3 Simulation Study

In this section we present empirical results based on simulated data. We compare the empirical type I error rate and power of the proposed test procedure, denoted by PREPR, under different models with the following test methods: BS (Bai and Saranadasa [7]), SD (Srivastava and Du [8]), CQ (Chen and Qin [9]), CLX (Cai et al. [4]), and GM and GL (Gregory et al. [12]). GM and GL are two variants of [12]: moderate-p and large-p, respectively. Following [12], we chose the Parzen window, with the lag window size fixed, for both GM and GL. The variety of multivariate distributions and parameters is too large to allow a comprehensive, all-encompassing comparison; hence we have chosen certain representative examples for illustration. We considered two setups to generate the data.

### 3.1 Simulation setup 1

In setup 1, we considered two scenarios for the marginal distributions of the components of the random vectors X and Y. In Scenario 1, each component of X and Y follows a standard normal distribution. In Scenario 2, each component of X and Y follows a centralized gamma(2,1) distribution.

To impose a dependence structure on the joint distributions of X and Y, we used a factor model as described in [9]. Simulation setup 1 uses unit variances for all components of X and Y. Under each of the above two scenarios, we consider three dependence models to generate correlated observations:

• (M1) Model 1 considers a moving average process of order 10. The vector of moving average coefficients was a normalized vector of dimension 10, whose components were generated independently from Uniform(2, 3) and were kept fixed once generated throughout our simulations.

• (M2) In Model 2, we consider a long-range (LR) dependent structure that was also used in [12]. The self-similarity parameter of the LR dependent structure was set at 0.625. The algorithm to generate LR dependent samples can be found in [21].

• (M3) In Model 3, we set , where and is the all ones matrix.

We considered three values for the dimension p. For each combination of scenario (1 or 2), model (M1/M2/M3), and dimension, the empirical type I error and power were computed based on 2000 randomly generated data sets. The empirical powers were computed against the number of signals, where a sparsity parameter determines how many components carry a signal. The sample sizes were held fixed across these settings. For the empirical type I error calculation, we assumed μx = μy. Without loss of generality, we set μy, the mean vector of Y, to the zero vector for all simulations.
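As an illustration of the (M1) mechanism, the following sketch generates one n × p sample whose coordinates are order-10 moving averages of i.i.d. innovations, under either Scenario 1 (normal) or Scenario 2 (centered gamma(2,1)); normalizing the fixed coefficient vector gives unit marginal variances under normal innovations (the function and array layout are our own):

```python
import numpy as np

def ma10_sample(n, p, coefs, rng, innov="normal"):
    """One n x p sample from the MA(q) dependence model (M1): coordinate j is
    X_j = sum_k coefs[k] * Z_{j+k}, with innovations Z standard normal
    (Scenario 1) or centered gamma(2,1) (Scenario 2)."""
    q = len(coefs)
    if innov == "normal":
        Z = rng.normal(size=(n, p + q - 1))
    else:
        Z = rng.gamma(2.0, 1.0, size=(n, p + q - 1)) - 2.0  # centered gamma(2,1)
    X = np.empty((n, p))
    for j in range(p):
        X[:, j] = Z[:, j:j + q] @ coefs
    return X

rng = np.random.default_rng(0)
coefs = rng.uniform(2.0, 3.0, 10)
coefs = coefs / np.linalg.norm(coefs)   # normalized, then held fixed
X = ma10_sample(30, 200, coefs, rng)    # group 1 under H0
Y = ma10_sample(30, 200, coefs, rng)    # group 2 under H0
```

Two independent calls with the same `coefs` give the two groups under H0; adding a sparse shift to a few columns of X produces the alternatives studied above.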

### 3.2 Simulation setup 2

In Simulation setup 2, we use the same setup as in Simulation setup 1, except that we now use covariance matrices with unequal variances, as described in (M4) and (M5).

• (M4) Model 4 builds the covariance matrix from a diagonal matrix D, with diagonal elements drawn from Unif(1,5), and a structured correlation component. Such a covariance matrix was also used in [3].

• (M5) In Model 5, the covariance matrix is built from a diagonal matrix D, with diagonal elements drawn from Unif(1,5), and a correlation matrix that has a long-range dependence structure with an all-ones diagonal; the latter has the same structure as in (M2) of Simulation setup 1.

### 3.3 Discussion of the Simulation Results

Tables 1-3 present the empirical type I errors from Simulation setup 1 for the new test PREPR along with the tests BS, CQ, SD, GL, GM, and CLX. These tables show that the empirical type I errors of the new test PREPR are close to the nominal size 0.05 under all scenarios and all models, ranging from 0.032 to 0.051. The type I errors of tests BS and CQ are comparable; in some cases they are liberal, with values ranging from 0.041–0.071 and 0.043–0.070, respectively. Test SD appears to become conservative as the dimension increases, with empirical type I errors varying from 0.012 to 0.05, whereas the tests CLX, GL, and GM are generally liberal under all scenarios and models, with empirical type I errors ranging from 0.056–0.142, 0.059–0.191, and 0.051–0.165, respectively. Overall, in all of the considered scenarios and models under Simulation setup 1, the proposed test PREPR outperforms BS, CQ, SD, GM, GL, and CLX in terms of size control, as indicated by its better type I error accuracy.

Empirical type I errors corresponding to Simulation setup 2 are presented in Tables 8-9. These tables show that the type I errors of the tests are consistent with those reported for Simulation setup 1. For example, the type I errors of PREPR range from 0.034–0.055, whereas those of BS, CQ, SD, GM, GL, and CLX range from 0.047–0.078, 0.048–0.078, 0.024–0.050, 0.059–0.088, 0.057–0.116, and 0.066–0.115, respectively.

The empirical powers of the different tests under Simulation setup 1 are presented in Tables 4-7 across the considered settings. The tables show that PREPR has more power overall than the tests BS, SD, CQ, GM, and GL when the signals are sparse. As the signals become less sparse, the tests BS, SD, and CQ are more powerful than PREPR, particularly when the signals are moderately strong. Though the tests GM and GL are liberal, Tables 4-7 show that GM and GL have the smallest power in most cases. The test CLX enjoys the maximum power in all considered scenarios and models under Simulation setup 1, irrespective of whether the signals are sparse or dense. Such performance of CLX is not surprising, since it showed inflated type I errors. Tables 10-11 display the empirical powers corresponding to Simulation setup 2. From these tables, we can observe that the comparisons are similar to those reported for Simulation setup 1.

In summary, based on the numerical results, PREPR shows better control of the type I error than the remaining tests and is more powerful than the sum-of-squares-type tests BS, SD, CQ, GL, and GM against sparse alternatives.

## 4 Data Analysis/Application

We considered an application of the PREPR test to analyzing gene expression data in terms of gene sets. A gene-set analysis refers to a statistical analysis that identifies whether some functionally pre-defined sets of genes express differently under different experimental conditions, and it is a starting point for biologists to understand patterns of differential expression. There are two major categories of gene-set tests: competitive and self-contained ([22]). The first compares a set of genes of interest with the complementary set of genes not in it. A self-contained gene-set test considers the null hypothesis that none of the genes in the set are differentially expressed. The proposed two sample test PREPR is applicable to the self-contained gene-set test. We apply