A rank-based Cramér-von-Mises-type test for two samples

02/18/2018 ∙ by Jamye Curry, et al. ∙ The University of Mississippi, Georgia Gwinnett College

We study a rank-based, distribution-free test for the univariate two-sample problem. The test statistic is the difference between the average of between-group rank distances and the average of within-group rank distances, and it is closely related to the two-sample Cramér-von Mises criterion: the two are different empirical versions of the same quantity for testing the equality of two population distributions. Although they may differ for finite samples, they share the same expected value, variance and asymptotic properties. The advantage of the new rank-based test over the classical one is that it generalizes easily to the multivariate case. Rather than using the empirical process approach, we provide a different, easier proof, bringing in a different perspective and insight. In particular, we apply the Hájek projection and an orthogonal decomposition technique in deriving the asymptotics of the proposed rank-based statistic. A numerical study compares the power of the rank formulation with that of other commonly used nonparametric tests, and recommendations on those tests are provided. Lastly, we propose a multivariate extension of the test based on the spatial rank.


1 Introduction

To test whether two samples come from the same or different populations, several distribution-free tests, such as the Kolmogorov-Smirnov test, the Cramér-von Mises test and their variations, have been proposed and widely used. Let X_1, …, X_m and Y_1, …, Y_n be two independent random samples with continuous distribution functions F and G, respectively. The two-sample problem is to test

H_0 : F = G versus H_1 : F ≠ G.   (1)

Denote by F_m and G_n the empirical distribution functions of the two samples, and by H_N the empirical distribution function of the combined sample, where N = m + n. The Kolmogorov-Smirnov (KS) two-sample test uses the maximum distance (difference) between F_m and G_n. The classical Cramér-von Mises test statistic has the form

(2)

This test statistic and its asymptotics have been well studied in the literature, for example, Lehmann [22], Rosenblatt [26], Darling [9], Fisz [14] and Anderson [2].
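As an illustration of the quantities above, the two empirical distribution functions and the KS distance can be computed directly; the helper names `ecdf` and `ks_two_sample` below are our own, not from the paper:

```python
import numpy as np

def ecdf(sample):
    """Return the empirical distribution function of a sample as a callable."""
    s = np.sort(np.asarray(sample, float))
    # F_m(t) = (number of observations <= t) / m
    return lambda t: np.searchsorted(s, t, side="right") / len(s)

def ks_two_sample(x, y):
    # maximum absolute difference between the two ECDFs,
    # evaluated over the combined sample (where the supremum is attained)
    Fm, Gn = ecdf(x), ecdf(y)
    z = np.concatenate([x, y])
    return np.max(np.abs(Fm(z) - Gn(z)))
```

The supremum of |F_m − G_n| over the real line is attained at an observed data point, which is why evaluating on the combined sample suffices.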

Both the KS test statistic and the Cramér-von Mises test statistic are formulated based on the empirical distributions. Székely and Rizzo [29] and Baringhaus and Franz [3] studied a test statistic based on the original data, that is,

(3)

This test has a direct generalization to the multivariate case. However, it requires an assumption on the first moment, and it is not distribution free in the univariate case. It is worth noting that the test statistic (3) falls in the unified framework of energy statistics studied by Székely and Rizzo [30, 31] and can be easily generalized to the k-sample problem. Other similar tests include [13] and [16], although they are derived under different motivations. Fernández, Gamero, and García [13] developed a statistic based on the empirical characteristic functions of the observations: a weighted integral of the difference between the empirical characteristic functions of the two samples. Gretton et al. [16] proposed a kernel-based test in which the test statistic is the maximum difference in expectations over functions evaluated on the two samples. All of these test statistics take the form of a difference between a between-group measure and a within-group measure.

In this paper, we propose a new rank-based test of the same form that nevertheless overcomes the limitations of (3). It is formulated based on the ranks of the two samples with respect to the combined sample. Denote by R(x) the standardized rank of x with respect to the empirical distribution H_N of the combined sample, i.e., R(x) = H_N(x). For testing the hypothesis (1), we use the following test statistic.

(4)

The statistic is interpreted as the difference between the average of between-group rank differences and the average of within-group rank differences. A large value indicates a deviation between the two groups. The test based on this statistic is distribution free and does not require any moment condition.
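A minimal sketch of a statistic of this between-minus-within form, computed from standardized ranks of the combined sample, is given below. The equal weighting of the two within-group averages is our illustrative choice; the exact coefficients of (4) are not reproduced in the text.

```python
import numpy as np

def standardized_ranks(z):
    # rank of each observation within the combined sample, scaled to (0, 1]
    return (np.argsort(np.argsort(z)) + 1) / len(z)

def rank_stat(x, y):
    # between-group minus within-group average absolute rank difference;
    # the equal weighting of the two within-group averages is an
    # illustrative assumption, not the paper's exact normalization
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = standardized_ranks(np.concatenate([x, y]))
    rx, ry = r[:len(x)], r[len(x):]
    between = np.abs(rx[:, None] - ry[None, :]).mean()
    within = 0.5 * (np.abs(rx[:, None] - rx[None, :]).mean()
                    + np.abs(ry[:, None] - ry[None, :]).mean())
    return between - within
```

On fully separated samples the statistic is large, while on interleaved samples of the same sizes it is near zero, matching the interpretation above.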

For balanced samples (m = n), one can consider an equivalent but simpler statistic

(5)

This is the average of rank differences between the two groups. The two statistics are equivalent when m = n.

As we will see later, the test statistic is closely related to the classical nonparametric Cramér-von Mises criterion. They are different empirical plug-in versions of the same population quantity. The rank-based test statistic and the Cramér-von Mises criterion may not be equal for finite samples, but they are asymptotically equivalent. The advantage of the new rank-based test over the classical one is that it generalizes easily to the multivariate case. Multivariate generalizations of Cramér-von Mises tests have been considered by many researchers, but they are either applied to independent data [8], used for testing independence [15], or used in the goodness-of-fit test of the uniform distribution on transformed data [7]. For the rank-based formulation, generalizations to the multivariate two-sample problem are straightforward by applying notions of multivariate rank functions. In this paper, rather than using the empirical process approach, we provide a different, easier proof, bringing in a different perspective and insight. In particular, we apply the Hájek projection and an orthogonal decomposition technique in deriving the asymptotics of the proposed statistic.

Some related works include Pettitt [25] and Baumgartner, Weiß, and Schindler [4], who considered statistics of Anderson-Darling type that can be viewed as standardized versions of Cramér-von Mises statistics. Schmid and Trede [27] utilized Cramér-von Mises statistics. A rank-based representation of a Cramér-von Mises statistic under balanced sample sizes and its generalizations are studied by Borroni [5]. Albers, Kallenberg and Martini [1] studied rank procedures for detecting shift alternatives with increasing shift in the tail of the distribution. Janic-Wróblewska and Ledwina [21] considered a test based on a combination of several linear rank statistics. Related to the rank procedures, other nonparametric tests include those based on the empirical likelihood approach. Einmahl and McKeague [12] considered test statistics based on empirical likelihood ratios for the goodness-of-fit and two-sample problems; those tests have been proved asymptotically equivalent to the one-sample and two-sample Anderson-Darling tests. Cao and Van Keilegom [6] proposed an empirical likelihood ratio test via kernel density estimation. Gurevich and Vexler [17] utilized an empirical likelihood ratio test based on sample entropy.

The paper is organized as follows. Section 2 presents the main results, including the formulation of the test statistic and its properties. A simulation study is performed in Section 3. We propose a multivariate extension of the test in Section 4. We summarize and conclude the paper in Section 5. All proofs are given in Section 6.

2 Main Results

To formulate the rank-based test statistic in (4), we first establish its population version. We provide a result on the population version, from which we can see the relationship between our statistic and the Cramér-von Mises criterion.

Theorem 2.1

Let X and Y be independent continuous random variables distributed according to F and G, respectively. Let H = λF + (1 − λ)G with λ ∈ (0, 1) be the mixture distribution. Then

(6)

and the equality holds if and only if F = G.

The above result is based on the following identity, which is obtained from Lemma 6.1 in the Appendix.

(7)

The result of Theorem 2.1 suggests two possible statistics for testing the hypothesis (1). The first version is the sample plug-in version of the left side of (7); plugging in the empirical distributions and multiplying by an appropriate normalizing constant yields our test statistic defined in (4). H_0 is rejected if the statistic is large, i.e., exceeds a critical value determined by the significance level and the null distribution of the statistic. The test statistic is the difference between the average of between-group rank differences and the average of within-group rank differences, and a large value indicates a deviation between the two groups.

The two-sample Cramér-von Mises statistic in (2) is the empirical version of the right side of (7). Hence the two statistics are plug-in versions of the same quantity. Nevertheless, they may take different values; we thank one of the referees for pointing out this possibility. For example, one can construct samples with m = n = 2 for which the Cramér-von Mises statistic and our test statistic take different values. Next, we study the properties of the test statistic.

With the notation above, we have the following theorem.

Theorem 2.2

For , if , then

By this theorem and Theorem 2.1, it is easy to see that our test statistic is consistent against the alternative F ≠ G.

Theorem 2.3

Under H_0, the test statistic is distribution free.

Under H_0, the combined samples constitute a random sample of size N from the common distribution. So any assignment of m numbers to the first sample and n numbers to the second sample from the set of integers {1, …, N} is equally likely, i.e., each assignment has probability 1/C(N, m), which does not depend on the underlying distribution. Since these number assignments are in one-to-one linear correspondence with the standardized ranks, the test statistic is distribution free.

Figure 1: The exact null distribution of obtained from all combinations (Left: , ; Right: , ). The vertical line in each graph indicates the 5% critical value.

The exact null distribution can be found by enumerating all possible values of the statistic over the orderings of the X's and Y's. Figure 1 provides the exact null distribution for two small sample size settings by considering all combinations. However, the exact null distribution is infeasible to obtain for large sample sizes because the number of combinations increases dramatically as m and n increase. For large samples, we can use a Monte Carlo method on randomly sampled combinations to approximate the null distribution. Alternatively, the limiting distribution can be used to determine the critical values of the test. Next we study the asymptotic behavior of the statistic.
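The enumeration argument can be sketched as follows; the statistic used here is the illustrative between-minus-within rank statistic with equal within-group weighting (an assumption, since the exact coefficients of (4) are not reproduced in the text):

```python
from itertools import combinations

def rank_stat(rx, ry):
    # between-group minus within-group mean absolute rank difference;
    # the equal within-group weighting is an illustrative assumption
    between = sum(abs(a - b) for a in rx for b in ry) / (len(rx) * len(ry))
    wx = sum(abs(a - b) for a in rx for b in rx) / len(rx) ** 2
    wy = sum(abs(a - b) for a in ry for b in ry) / len(ry) ** 2
    return between - (wx + wy) / 2

def exact_null(m, n):
    # under H0, every assignment of m of the N standardized ranks to the
    # first sample is equally likely, so enumerate all C(N, m) assignments
    N = m + n
    ranks = [(k + 1) / N for k in range(N)]
    vals = []
    for idx in combinations(range(N), m):
        chosen = set(idx)
        rx = [ranks[k] for k in idx]
        ry = [ranks[k] for k in range(N) if k not in chosen]
        vals.append(rank_stat(rx, ry))
    return sorted(vals)

vals = exact_null(4, 4)
crit = vals[int(0.95 * len(vals))]   # upper-5% point of the exact null law
```

The distribution-free property is visible here: the null distribution depends only on m and n, never on the underlying F.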

Since our statistic and the Cramér-von Mises statistic are different sample plug-in versions of the same quantity for checking the equality of two distributions, we expect that they should have the same expectation, variance and asymptotic distribution under the null hypothesis. The expectation and variance of the Cramér-von Mises statistic are provided by Anderson [2]. In the next theorem, we obtain the same results for our statistic, but provide a more straightforward derivation and simpler proof.

Theorem 2.4

Under ,

and

Remark 2.1

In particular, if , and .

Rosenblatt [26] and Fisz [14] derived the limiting distribution of the Cramér-von Mises statistic, which is a weighted mixture of independent chi-square distributions. It is necessary to check whether or not our test statistic has the same limiting distribution. Rather than using a stochastic process method, we use a Hájek projection approach to obtain the limiting distribution of our statistic, which agrees with that of the Cramér-von Mises statistic.

We obtain the first order Hájek projection in Lemma 6.2 as

where . Under , has variance

Clearly, this variance vanishes as N → ∞. Therefore the first-order Hájek projection is not sufficient for deriving the asymptotics of the statistic.

To derive the asymptotics of the statistic under the null hypothesis, it is necessary to use the second-order projection.

By Lemma 6.3 and Lemma 6.4, it can be verified that

where and are three different variables from , , and

Then this term vanishes as N → ∞ under the condition that m/N converges to a constant λ ∈ (0, 1). We shall always assume this condition in the following analysis.

Efron and Stein [11] discussed a general orthogonal decomposition of a statistic. Here, our statistic is decomposed as , where is the first order projection and is a negligible term. Hence the limiting distribution of is determined by the limiting distribution of .

To determine the limiting distribution under H_0, define the kernel as indicated. It is a degenerate kernel function since it is symmetric with zero conditional mean. By Lemma 6.3 and Lemma 6.4, we have, as N → ∞,

It is not difficult to verify that

(8)

and , i.e., and are orthogonal, where and are three different variables from , .

Now we define an operator on the function space by

This operator has only real eigenvalues since the kernel is symmetric. Let the non-zero eigenvalues of the operator be obtained by solving the eigenvalue equation; after a suitable substitution, this is equivalent to solving

(9)

where the substituted functions are as above. Differentiating both sides of (9) twice, we obtain a second-order differential equation; solving it and substituting back yields the eigenvalues of the operator and the corresponding eigenfunctions. The eigenfunction associated with the zero eigenvalue is constant. Note that the eigenvalues do not depend on the mixture proportion, but the eigenfunctions do, and they form an orthonormal basis for the space. Then we have the following theorem.

Theorem 2.5

Under and the condition ,

where the summands are independent χ²₁ random variables weighted by the eigenvalues. Hence the result extends to the statistic itself, since the remainder term tends to zero in probability.

As expected, this asymptotic result agrees with the one for the Cramér-von Mises statistic proved with a stochastic process method in Rosenblatt [26] and Fisz [14]. The projection approach applied here is typically useful in U-statistic theory, but we emphasize that our statistic is not a U-statistic.

Variance ratio: 0.9239 0.9819 0.9967 0.9997 1.0000
95% quantile: 1.9298 1.9676 1.9772 1.9779 1.9780
Approximated critical values of the test, based on the truncated mixture:
0.4545 0.4601 0.4617 0.4617 0.4617
, 0.4545 0.4601 0.4615 0.4616 0.4616
0.4544 0.4600 0.4614 0.4615 0.4615
0.4543 0.4597 0.4610 0.4611 0.4611
, 0.4540 0.4594 0.4608 0.4609 0.4609
Table 1: First part: variance ratios of the truncated mixture over the infinite mixture, and 95% quantiles. Second part: approximated critical values for the test. Compared with the exact critical values 0.4643 and 0.4678 for the two small-sample cases considered, the approximations are quite accurate even for small sample sizes. In practice, a moderate number of terms is recommended.

In practice, we may approximate the limiting distribution by the distribution of a finite linear combination of independent chi-square random variables, i.e.

The accuracy of the approximation depends on the truncation level. Table 1 provides ratios of the variance of the truncated mixture to that of the infinite mixture. The table also lists 95% quantiles of the truncated mixture, estimated by averaging 10 sample quantiles, each based on a large number of random samples. Those quantile values can be used to approximate the critical values of the test, given in the second part of Table 1. Even for small sample sizes, the approximated critical values are quite accurate and close to the exact values. For one case, the true size of the test is 0.056 if the approximated critical value 0.4611 is used; for another, the true size is 0.052 if 0.4609 is used. In summary, a moderate truncation level is recommended as a compromise between computation and accuracy.
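The truncated-mixture approximation can be sketched by Monte Carlo. The eigenvalues 1/(k²π²) used below are an assumption (the classical Cramér-von Mises weights), but one consistent with the reported first variance ratio, which equals 90/π⁴ ≈ 0.9239; the mean of the discarded tail is added back so the truncated quantile tracks the full one.

```python
import numpy as np

# assumed eigenvalues: the classical Cramér-von Mises weights 1/(k^2 pi^2)
K = 10
k = np.arange(1.0, K + 1)
lam = 1.0 / (k ** 2 * np.pi ** 2)

# ratio of the truncated mixture's variance to the full one:
# Var(sum lam_k * chi2_1) = 2 * sum(lam_k^2), and sum_{k>=1} k^-4 = pi^4/90
var_ratio = (k ** -4).sum() / (np.pi ** 4 / 90)

# Monte Carlo 95% quantile of the truncated mixture, adding back the mean
# of the discarded tail (sum_{k>=1} lam_k = 1/6)
rng = np.random.default_rng(0)
mix = rng.chisquare(df=1, size=(200_000, K)) @ lam
crit95 = np.quantile(mix, 0.95) + (1.0 / 6 - lam.sum())
```

Under these assumed weights the approximated 95% critical value lands near 0.46, the same order as the values reported in Table 1.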

3 Simulations

The simulation study in this section demonstrates the performance of the T test. There are many nonparametric tests available for the two-sample problem, and we do not intend to conduct a comprehensive comparison. Here we include the Kolmogorov-Smirnov test (KS), the Wilcoxon rank sum test (W) or Mood's test (M), the empirical likelihood ratio test (ELR) proposed by Gurevich and Vexler [17], the empirical likelihood test (ELT) proposed by Einmahl and McKeague [12], Baringhaus and Franz's Cramér test (CT), and the test studied in Fernández et al. [13] (DT). It is necessary to note that the CT and DT tests are not distribution free; their critical values and p-values are based on a Monte Carlo permutation method in each sample, as implemented in the R package "cramer". The R package "dbEmpLikeGOF" is used for the ELR test, in which the tuning parameter is set to 0.1 as suggested in [17]. The critical values of the ELT and T tests are computed through random combinations.

KS W ELR ELT CT DT T
0 0.040 0.050 0.057 0.050 0.050 0.051 0.050
0.041 0.047 0.031 0.047 0.048 0.048 0.049
0.25 0.162 0.228 0.182 0.224 0.226 0.191 0.217
0.160 0.208 0.119 0.196 0.203 0.171 0.198
0.5 0.534 0.681 0.578 0.670 0.671 0.582 0.652
0.498 0.621 0.446 0.603 0.615 0.526 0.600
0.75 0.875 0.949 0.902 0.945 0.943 0.901 0.936
0.851 0.926 0.829 0.919 0.922 0.871 0.912
1 0.988 0.998 0.994 0.997 0.998 0.991 0.996
0.979 0.995 0.976 0.994 0.994 0.984 0.993
Table 2: Power performance of each test with significance level 0.05 for the normal distribution with location alternatives. Row 1: equal sample sizes; Row 2: unequal sample sizes.

Various alternative distributions are considered. For each case, powers are estimated by calculating the fraction of p-values less than or equal to the level of significance over repeated iterations. The Monte Carlo errors can be estimated by twice the binomial standard error. In particular, the size of each test should fall in the interval (0.046, 0.054).
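The interval (0.046, 0.054) is consistent with 10,000 Monte Carlo iterations at nominal level 0.05; the iteration count below is our inference from that interval, not stated explicitly in the text.

```python
from math import sqrt

alpha = 0.05
n_iter = 10_000  # assumed iteration count; it reproduces the interval below

se = sqrt(alpha * (1 - alpha) / n_iter)  # binomial standard error of a size estimate
lo, hi = alpha - 2 * se, alpha + 2 * se  # two-standard-error band around the level
```

With these values, se ≈ 0.0022 and the band rounds to (0.046, 0.054), matching the text.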

KS W ELR ELT CT DT T
0 0.036 0.048 0.052 0.048 0.049 0.046 0.047
0.045 0.054 0.036 0.054 0.055 0.051 0.054
0.25 0.130 0.163 0.124 0.160 0.158 0.142 0.165
0.135 0.157 0.084 0.152 0.154 0.137 0.162
0.5 0.422 0.488 0.367 0.486 0.481 0.434 0.501
0.402 0.449 0.268 0.440 0.439 0.394 0.462
0.75 0.776 0.830 0.710 0.825 0.818 0.783 0.836
0.741 0.786 0.586 0.780 0.776 0.734 0.799
1 0.947 0.966 0.917 0.966 0.965 0.950 0.971
0.929 0.946 0.842 0.946 0.944 0.925 0.954
Table 3: Power performance of each test with significance level 0.05 for the t-distribution with 3 degrees of freedom with location alternatives. Row 1: equal sample sizes; Row 2: unequal sample sizes.

Table 2 shows the size and power performance of each test under normal distributions, where the second sample has a location shift of 0, 0.25, 0.5, 0.75, or 1. Under the null (shift 0), the KS test is undersized in both the equal and unequal sample size cases; the ELR test is oversized in the equal sample size case and seriously undersized in the unequal sample size case; all other tests keep a desirable size. As expected, the W test is the best among all tests, since it is well known to be powerful for the two-sample problem with a constant location shift, especially when the data follow logistic or normal distributions. The CT and ELT tests are comparable to W. The T test is more powerful than the DT, KS and ELR tests. In the unequal sample size case, the W test is the best, followed by the CT test. The ELT and T tests are comparable and significantly better than the DT, KS and ELR tests.

The experiment is repeated for the t-distribution with 3 degrees of freedom, and the results are presented in Table 3. Although the statistical power of the T test is the highest among all tests in all cases, its power differences with the W test and the ELT test are small, so those three tests are comparable.

KS W ELR ELT CT DT T
0 0.040 0.052 0.058 0.051 0.052 0.050 0.051
0.044 0.050 0.032 0.050 0.050 0.053 0.052
0.25 0.417 0.443 0.906 0.599 0.205 0.287 0.472
0.377 0.405 0.815 0.516 0.186 0.265 0.428
0.5 0.960 0.886 0.999 0.978 0.655 0.843 0.968
0.940 0.859 0.998 0.958 0.598 0.798 0.948
0.75 0.999 0.989 1.000 0.999 0.945 0.993 0.999
0.998 0.980 1.000 0.998 0.908 0.988 0.997
1 1.000 0.999 1.000 1.000 0.993 1.000 1.000
1.000 0.998 1.000 1.000 0.988 1.000 1.000
Table 4: Power performance of each test with significance level 0.05 for the Pareto distributions with location alternatives. Row 1: equal sample sizes; Row 2: unequal sample sizes.

Table 4 shows the power performance for the Pareto distribution, where the first sample is generated from Pa(2, 2) and the second sample from a location-shifted Pa distribution, with shifts 0, 0.25, 0.5, 0.75, and 1. The power of the ELR test is much higher than that of all others. For the smallest non-zero shift, the power of the ELR test is as high as 90%, about 30% higher than the second-best ELT test. The T test is the third best. The power advantage of the ELR test can be as large as 27% for equal sample sizes and as large as 32% for unequal sample sizes.

All tests considered in the experiment for location alternatives are used for scale alternatives except the Wilcoxon test (W), as it is a test for location; instead, Mood's test, a well-known scale test, is used and referred to as the M test. Table 5 displays the results when samples of size 50 are generated from normal or Pareto distributions with scale factors 1, 1.5, 2, 2.5, and 3. In the normal case, the T test does not compare favorably to the other tests except the KS test: it performs significantly better than the KS test, but its power is 2-5 times smaller than that of the others. It is interesting that the M test outperforms all tests in the normal case but is the worst in the Pareto case. The T test performs better for Pareto samples than for normal samples due to the heavy tails of Pareto distributions. In the Pareto case, all tests outperform the M test by a great margin, and the CT test is the best. As suggested by a reviewer, we add one more case to the simulation in which F is exponential and G is lognormal. The Monte Carlo powers of the seven tests are listed in Table 5. In this scenario, the T test performs better than KS and DT, but does not compare as favorably to the CT, W, ELT and ELR tests.

Distribution KS M ELR ELT CT DT T
1 0.039 0.051 0.056 0.047 0.049 0.051 0.047
1.5 0.118 0.663 0.542 0.238 0.251 0.431 0.138
Normal 2 0.374 0.979 0.965 0.746 0.792 0.915 0.479
2.5 0.681 0.999 0.999 0.962 0.981 0.994 0.815
3 0.881 1.000 1.000 0.996 0.999 1.000 0.957
1 0.040 0.054 0.055 0.049 0.052 0.049 0.050
1.5 0.307 0.098 0.378 0.418 0.487 0.356 0.398
Pareto 2 0.741 0.165 0.828 0.857 0.909 0.815 0.831
2.5 0.937 0.214 0.973 0.980 0.992 0.974 0.978
3 0.988 0.234 0.997 0.998 0.999 0.997 0.998
Exp vs Lgnorm 0.336 0.535 0.654 0.555 0.502 0.315 0.476
Table 5: Power performance of each test with significance level 0.05 for normal and Pareto scale alternatives, and for the case F = Exponential, G = Lognormal.

In general, the T test is not recommended for scale alternatives; neither is the Kolmogorov-Smirnov test. The empirical likelihood ELR test is more suitable for a general scale alternative, but is not recommended for a location alternative with symmetric distributions. The T test performs better for location alternatives than for scale alternatives. The power performance of the rank-based formulation (4) for location alternatives is easy to explain: for two samples from the same family of distributions (normal, t, Pareto, and so on) but with different locations, the ranks in the combined sample are quite different, so the test easily distinguishes the samples and has good power. We recommend applying the T test for location alternatives, especially for heavy-tailed distributions.

4 Multivariate Extension

The proposed rank test statistic is closely related to the two-sample Cramér-von Mises criterion; the two statistics are different sample plug-in forms of the same population quantity. The advantage of our rank test is that it allows straightforward generalizations to the multivariate case by using different multivariate rank functions. Among them, the spatial rank is appealing due to its computational ease, efficiency and other nice properties [23], [24]. The sample version of the spatial rank function with respect to the empirical distribution of the combined sample in R^d is defined as

where u(v) = v/||v|| for v ≠ 0, u(0) = 0, and ||·|| is the Euclidean norm. Then the multivariate two-sample spatial rank statistic is defined as

(10)

The test statistic is the difference between the average of the between-group rank distances and the average of the within-group rank distances. A large value indicates a deviation between the two groups and leads to rejection of the null hypothesis. The multivariate counterpart of Theorem 2.1 is as follows.
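A sketch of the spatial rank construction above; as before, the equal weighting of the two within-group averages is an illustrative assumption, since the exact coefficients of (10) are not reproduced in the text.

```python
import numpy as np

def spatial_ranks(z):
    # sample spatial rank: R(z_i) = average over j of u(z_i - z_j),
    # where u(v) = v/||v|| and u(0) = 0 (the self term contributes zero)
    diff = z[:, None, :] - z[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, 1.0)  # avoid 0/0; the diagonal difference is 0 anyway
    return (diff / dist[:, :, None]).mean(axis=1)

def spatial_rank_stat(x, y):
    # between-group minus within-group average spatial-rank distance;
    # the equal within-group weighting is an illustrative assumption
    z = np.vstack([x, y])
    r = spatial_ranks(z)
    rx, ry = r[:len(x)], r[len(x):]
    between = np.linalg.norm(rx[:, None] - ry[None, :], axis=2).mean()
    wx = np.linalg.norm(rx[:, None] - rx[None, :], axis=2).mean()
    wy = np.linalg.norm(ry[:, None] - ry[None, :], axis=2).mean()
    return between - (wx + wy) / 2
```

Because spatial ranks are averages of unit vectors, they are bounded regardless of how heavy-tailed the data are, which is the source of the robustness discussed below.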

Theorem 4.1

Let X and Y be independent d-variate continuous random vectors distributed according to F and G, respectively. Let H = λF + (1 − λ)G with λ ∈ (0, 1). Then

(11)

where the equality holds if and only if F = G.

The multivariate spatial rank test loses the distribution-free property under the null hypothesis, so it relies on the permutation method to determine critical values or compute p-values. But the test is robust: it does not require the assumption of a finite second moment, as Hotelling's T² test does, nor the assumption of a finite first moment, as the test (CT) considered by Baringhaus and Franz [3] does.

A simulation is conducted to compare the performance of the spatial rank test, CT and Hotelling's test under multivariate normal, t and Pareto distributions. Location and scatter alternatives are considered. For location alternatives in the normal and t distributions, the two samples are generated with a constant mean shift between the groups. For the Pareto distribution, each component of the first sample is generated from Pareto(1,1) and each component of the second sample from a location-shifted Pareto distribution. The R package "Hotelling" is used for Hotelling's test. The spatial rank and CT tests use the permutation method to compute p-values, and powers are estimated by the fraction of p-values less than or equal to 0.05 over repeated iterations. Results for the location alternatives are listed in Table 6.
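The permutation method referred to above can be sketched generically as follows; `mean_diff` is a hypothetical stand-in statistic for illustration, not the paper's spatial rank statistic.

```python
import numpy as np

def perm_pvalue(x, y, stat, B=199, seed=0):
    # permutation p-value: re-randomize the group labels B times and count
    # permuted statistics at least as large as the observed one
    rng = np.random.default_rng(seed)
    z = np.vstack([x, y])
    m = len(x)
    obs = stat(z[:m], z[m:])
    count = sum(
        stat(zp[:m], zp[m:]) >= obs
        for zp in (z[rng.permutation(len(z))] for _ in range(B))
    )
    return (1 + count) / (B + 1)

# stand-in statistic for illustration (NOT the paper's rank statistic):
# Euclidean norm of the mean difference between the two groups
def mean_diff(a, b):
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
```

The "+1" in numerator and denominator guarantees a valid p-value in (0, 1] regardless of the number of permutations B.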

Dist Dim Method
0.0550 0.3000 0.8688 0.9966 1
CT 0.0556 0.3090 0.8818 0.9972 1
Hotelling 0.0518 0.3226 0.8900 0.9976 1
Norm 0.0484 0.5178 0.9944 1 1
CT 0.0500 0.5332 0.9958 1 1
Hotelling 0.0494 0.5248 0.9942 1 1
0.0538 0.1574 0.4898 0.8212 0.9644
CT 0.0596 0.0820 0.226 0.4504 0.7134
Hotelling 0.0546 0.0562 0.0934 0.1360 0.2058
0.0478 0.2382 0.7986 0.9888 0.9996
CT 0.0546 0.0858 0.288 0.6200 0.8568
Hotelling 0.0472 0.0742 0.1622 0.2990 0.4608
0.0492 0.3470 0.8682 0.9886 0.9998
CT 0.0560 0.1146 0.2850 0.5330 0.7298
Hotelling 0.0484 0.0986 0.1858 0.3076 0.4188
Pareto 0.0522 0.2892 0.7942 0.9784 0.9996
CT 0.0492 0.1142 0.2942 0.5184 0.7128
Hotelling 0.0528 0.1108 0.2614 0.4462 0.6046
Table 6: Power performance of the spatial rank, CT and Hotelling tests with significance level 0.05 for multivariate normal, t and Pareto distributions with location alternatives.

From Table 6, all three tests keep the 5% size well. For each test and each distribution, powers are higher in the higher-dimensional case. In the normal cases, the spatial rank test performs slightly worse than Hotelling's test and CT. However, the power gain of the spatial rank test over CT and Hotelling's test is substantial for the t-distributions: for some alternatives it is about twice as powerful as CT and about three times as powerful as Hotelling's test. The advantage of the proposed test over CT and Hotelling's test is even more significant for the asymmetric Pareto distributions than for the t distributions under location alternatives.

Dist Dim Method Orient
0.0468 0.0640 0.1064 0.1902 0.2992 0.3179
CT 0.0474 0.0982 0.2716 0.5660 0.8072 0.3016
Hotelling 0.048 0.0472 0.0598 0.0496 0.0524 0.0493
Norm 0.0476 0.0748 0.1272 0.2600 0.4124 0.9678
CT 0.0470 0.1510 0.5580 0.9192 0.9948 0.8188
Hotelling 0.045 0.0568 0.0526 0.0576 0.0540 0.0538
0.0486 0.0580 0.0698 0.0948 0.1256 0.2366
CT 0.0482 0.0680 0.1182 0.1754 0.2370 0.0916
Hotelling 0.0506 0.0476 0.0488 0.0530 0.0544 0.0495
0.0514 0.0648 0.0900 0.1286 0.1742 0.5896
CT 0.0512 0.0836 0.1510 0.2320 0.3344 0.1984
Hotelling 0.0528 0.0494 0.0492 0.0556 0.0550 0.0468
0.0550 0.6164 0.9802 1 1 -
CT 0.0540 0.6896 0.9896 1 1 -
Hotelling 0.0498 0.5148 0.9354 0.9876 0.9976 -
Pareto 0.0504 0.9158 1 1 1 -
CT 0.0512 0.9268 0.9996 1 1 -
Hotelling 0.0566 0.7616 0.9960 0.9998 1 -
Table 7: Power performance of the spatial rank, CT and Hotelling tests with significance level 0.05 for multivariate normal, t and Pareto distributions with scatter alternatives.

Results for scatter alternatives are listed in Table 7. For the multivariate normal and t distributions, we first consider scatter matrices that differ only in scale, with scale factors 1, 1.5, 2, 2.5, and 3. We then consider alternatives in which the scatter matrices differ in orientation: the two components are positively correlated in one sample and negatively correlated in the other, so the results for these alternatives are listed in the last column "Orient" of Table 7. In the higher-dimensional case, one scatter matrix has diagonal elements 1 and off-diagonal elements 0.5, and the other is constructed to have the same eigenvectors with eigenvalues equal to the reciprocals of the eigenvalues of the first. For the Pareto distributions, each component of one sample is generated from Pareto(1,1) and each component of the other sample from a Pareto distribution with a different scale.

From Table 7, all tests maintain the 5% size well. For the asymmetric Pareto distributions, CT is slightly better than the spatial rank test, which in turn is better than Hotelling's test. For the normal and t distributions, Hotelling's test fails completely under scatter alternatives, since it is a test for location differences. The CT test is much better than the spatial rank test for scale alternatives; in particular, CT is about three times as powerful in the normal case and about twice as powerful in the t case. This result is not surprising, since the spatial rank test is based on spatial ranks, which lose most of the information on distances or scales. However, when the two scatter matrices differ in orientation, the spatial rank test performs better than CT; for the t distribution in particular, its power is two to three times that of CT.

5 Summary

The problem of testing whether two samples come from the same or different populations is a classical one in statistics. In this paper, we have studied a rank-based test for the univariate two-sample problem. The test statistic is the difference between the average of between-group rank distances and the average of within-group rank distances. Under the null hypothesis, it is distribution free. The limiting null distribution was explored through the techniques of Hájek projection and orthogonal decomposition. It was proved that the limiting distribution is not normal, since the projection on one variable is insufficient to represent the variation of the test statistic. By taking the second-order projection, an operator on the function space was defined, and its eigenfunctions and eigenvalues were applied to derive the limiting distribution, which is a weighted mixture of independent chi-square distributions with the weights being the eigenvalues of the operator. We provided a recommendation on how to use the limiting distribution to obtain critical values of the proposed test in practice.

The proposed rank test statistic is closely related to the two-sample Cramér-von Mises criterion; both statistics are sample plug-in forms of the same population quantity. We have provided a counterexample showing that they can differ for finite samples. However, they have the same expectation, variance and limiting distribution. The advantage of our rank test is that it allows straightforward generalizations to the multivariate case by using different multivariate rank functions. A continuation of this work is to study the properties of the multivariate Cramér-von Mises test. Generalizations based on other multivariate rank functions also deserve further investigation.

6 Proofs

The following lemma gives the expected value of the absolute difference between the standardized ranks of X and Y.

Lemma 6.1

Let X and Y be independent continuous random variables from F and G, respectively, and let H = λF + (1 − λ)G with λ ∈ (0, 1) be the mixture distribution. Then

(12)

Proof of Lemma 6.1. Notice that

Since H is continuous, the required identity holds for any fixed argument, and (12) then follows by Fubini's theorem.

Proof of Theorem 2.2. Define

Conditioning on ,

, by the law of large numbers,

and

By the strong law of large numbers for U-statistics [20],

and