# Optimal Sample Size Planning for the Wilcoxon-Mann-Whitney-Test

There are many different proposed procedures for sample size planning for the Wilcoxon-Mann-Whitney test at given type-I and type-II error rates α and β, respectively. Most methods assume very specific models or types of data in order to simplify calculations (for example, ordered categorical or metric data, location shift alternatives, etc.). We present a unified approach that covers metric data with and without ties, count data, ordered categorical data, and even dichotomous data. For that, we calculate the unknown theoretical quantities, such as the variances under the null and relevant alternative hypotheses, by considering the following “synthetic data” approach. We evaluate data whose empirical distribution functions match the theoretical distribution functions involved in the computations of the unknown theoretical quantities. Then, well-known relations for the ranks of the data are used for the calculations. In addition to computing the necessary sample size N for a fixed allocation proportion t = n_1/N, where n_1 is the sample size in the first group and N = n_1 + n_2 is the total sample size, we provide an interval for the optimal allocation rate t which minimizes the total sample size N. It turns out that for certain distributions, a balanced design is optimal. We give a characterization of these distributions. Furthermore, we show that the optimal choice of t depends on the ratio of the two variances which determine the variance of the Wilcoxon-Mann-Whitney statistic under the alternative. This is different from an optimal sample size allocation in case of the normal distribution model.



## 1 Introduction

The comparison of two independent samples is widespread in medicine, the life sciences in general, and other fields of research. Arguably, the most popular method for two sample comparisons is the unpaired t-test. However, its application is limited. For heavy-tailed or very skewed distributions, use of the t-test is not recommended, especially for small sample sizes. For ordered categorical data, comparing averages by means of t-tests is not appropriate at all. For those situations, a nonparametric test such as the Wilcoxon-Mann-Whitney test is much preferred.

In order to plan a study for such a two sample comparison, we need to know how many subjects are needed to detect a pre-specified effect at least with probability 1−β, where β denotes the type-II error probability. If the underlying distributions are normal, a pre-specified effect might be formulated as a difference of means. Within a general nonparametric framework, the relative effect p (see Section 2) is very often used. But for a statistics practitioner, it is sometimes difficult to state a relevant effect size to be detected in terms of the nonparametric relative effect. Therefore, we will be using a slightly different approach. Based on prior information regarding one group, for example the standard treatment or the control group, one can derive the distribution F_2 under a conjectured (relevant) alternative in cooperation with a subject-matter expert. This distribution is established in such a way that it features what the subject-matter expert would quantify as a relevant effect. In other words, the expert may, but does not necessarily have to, provide a (standardized) difference of means or a relevant nonparametric relative effect on which the Wilcoxon-Mann-Whitney test is based. Alternatively, the subject-matter expert may simply provide information on an F_2 configuration that the expert would consider relevant in terms of providing evidence in favour of the research hypothesis. This information will then be translated into a relevant nonparametric effect. More details on deriving F_2 based on an interpretable effect, in order to compute the nonparametric effect and the variances involved in the sample size planning, are given in Section 4.

For the Wilcoxon-Mann-Whitney test, there already exist many sample size formulas. However, most of them require either continuous data, as used in Bürkner et al. [5], Wang et al. [29], or Noether [18], or ordered categorical data, as in Fan [9], Tang [27], Lachin [14], Hilton et al. [11], or Whitehead [30]. For a review of different methods, we refer to Rahardja et al. [22]. A rather well-known method for sample size calculation in the case of continuous data is given by Noether [18], who approximated the variance under the alternative by the variance under the null hypothesis. A similar approximation was also used by Zhao et al. [31], who generalized Noether's formula to allow for ties. For practical applications, however, this approximation may not always be appropriate because the variances under the null hypothesis and under the alternative can be very different, thus potentially leading to an under- or overpowered study. See, for example, Shieh et al. [26] for a comparison of Noether's formula with different alternative methods.

In some other approaches, the sample size is only calculated under the assumption of a proportional odds model for ordered categorical data (e.g., Kolassa [12] or Whitehead [30]), or considering only location shift models for continuous metric data (see, e.g., Rosner and Glynn [23], Chakraborti et al. [6], Lesaffre et al. [16], Hamilton [10], or Collings and Hamilton [7], among others). An advantage of our Formula (9) in Section 2 for the sample size calculation is its generality and practicality. It can be used for metric data as well as for ordered categorical data, and it even works very well for dichotomous data. Furthermore, our formula does not assume any special model for the alternatives.

Within the published literature, the sample size formula bearing most similarity to ours is the one by Wang et al. [29]. However, their approach is limited to continuous distributions, whereas ours is a unified approach allowing for both discrete and continuous data.

A completely different way to approach optimality of Wilcoxon-Mann-Whitney tests has been pursued by Matsouaka et al. [17]. They use a weighted sum of multiple Wilcoxon-Mann-Whitney tests and determine the optimal weight for each test. Their aim is not an optimal sample size planning including optimization of the ratio of sample sizes, but instead they try to optimally combine a primary endpoint with mortality.

In a two sample setting, we sometimes can choose the proportion of subjects in the first group. That is, we can choose t = n_1/N, where n_1 is the number of subjects in the first group and N is the total number of subjects. The question that arises is how to choose t in an optimal way. In Bürkner et al. [5], the optimal t is chosen such that the power of the Wilcoxon-Mann-Whitney test is maximized for a given sample size N. On the other hand, in practice, we prefer to choose t in such a way that the total sample size N is minimized for a specified power 1−β. For the two sample t-test with unequal variances, Dette and O'Brien [8] showed that the optimal t to maximize the power of the test is approximately

$$t \approx \frac{1}{1+\tau},$$

where τ is the ratio of the standard deviations of the two groups under the hypothesis and under the alternative, respectively. This means that when applying the t-test, more subjects should be allocated to the group with the higher variance. Bürkner et al. [5] showed for symmetric, continuous distributions under a location shift model that a balanced design is optimal for the Wilcoxon-Mann-Whitney test. For general distributions, they observed in simulation studies that in many situations, the difference between using the optimal t and using a balanced design is negligible.

In most publications, the generation of the alternative distribution from the reference group is not discussed; instead, the distribution under the alternative is assumed to be known. Here, however, we also want to discuss how we can generate the distribution under the alternative based on the distribution in the reference group and an interpretable relevant effect. In order to motivate the method derived in this paper, let us consider an example with count data, as it appears that most publications on sample size planning focus on ordered categorical or continuous metric data. Table 1 shows prior information on a placebo group regarding the number of seizures in an epilepsy trial. We want to base the sample size planning for a new drug on these prior data, which come from a study published by Leppik et al. [15], as well as Thall and Vail [28]. For these data, we cannot assume a location shift model, as an absolute reduction of two seizures would be very good for someone with three seizures, but not really helpful for someone with 20 or more seizures. More appropriate would probably be a reduction of the number of seizures by some specified percentage. Based on this specified relevant effect, we artificially generate a new data set whose empirical distribution function is exactly equal to the conjectured alternative distribution F_2. The number of artificially generated observations is essentially arbitrary, as long as their empirical distribution function equals F_2 exactly. We will refer to such data as “synthetic” data.
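As a minimal Python sketch (the paper's own code is in R), such synthetic count data can be generated mechanically from the placebo counts; the rounding rule below (taking the floor) and all names are our illustrative assumptions, not prescribed by the method:

```python
import numpy as np

def synthetic_reduction(placebo_counts, delta):
    """Generate synthetic treatment-group counts by reducing each observed
    placebo count by the fraction `delta`. Rounding down is an illustrative
    assumption; a subject-matter expert would fix this choice."""
    placebo = np.asarray(placebo_counts)
    return np.floor(placebo * (1.0 - delta)).astype(int)

# hypothetical placebo seizure counts; a 50% reduction as the relevant effect
placebo = np.array([3, 5, 20, 8, 0, 11])
treatment = synthetic_reduction(placebo, 0.5)
```

The resulting vector plays the role of the synthetic treatment-group data; its empirical distribution function defines the conjectured alternative.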

Most of the methods mentioned before cannot be applied to data such as these as they have been derived under different, restrictive assumptions. In particular, methods assuming a location-shift model cannot be used here. However, application of the method proposed in the present paper does not require specific types of data or a specific alternative because it is based on the observed data and the generated synthetic data, which do not need to follow any particular model. See also the Chapter “Keeping Observed Data as a Theoretical Distribution” in Puntanen et al. [21] for a similar approach in the parametric case. More details regarding this data set and the sample size calculation can be found in Section 4.

The rest of this paper is organized as follows. We first derive a general sample size formula and investigate the behavior of the optimal allocation rate t. That is, we show in which cases more subjects should be allocated to the first or to the second group. Then, we apply this method to several data examples with different types of data and provide power simulations to show that with the sample size calculated by our method, the simulated power is at least 1−β. Furthermore, we simulate how the chosen type-I and type-II error rates affect the value of the optimal allocation rate t.

## 2 Sample Size Formula

Let X_11, …, X_1n_1 and X_21, …, X_2n_2 be independent random samples obtained on different subjects, with X_ik ∼ F_i, i = 1, 2. The distribution functions F_i are understood as their normalized versions, that is, F_i = ½(F_i⁺ + F_i⁻), where F_i⁺ denotes the right-continuous and F_i⁻ the left-continuous cumulative distribution function. By using the normalized version, we can pursue a unified approach for continuous and discrete data; no separate formulas “correcting for ties” are necessary. This unified approach naturally results in the use of midranks in the formulas for the test statistics; see Ruymgaart [24], Akritas, Arnold and Brunner [1], and Akritas and Brunner [2] for details. With t = n_1/N, we denote the proportion of the subjects that is allocated to the first group, that is, n_1 = tN and n_2 = (1−t)N. Without loss of generality, the first group may be regarded as the reference group, and the second group as the (experimental) treatment group. The Wilcoxon-Mann-Whitney test is based on the nonparametric relative treatment effect

$$p = \int F_1 \, dF_2 = P(X_{11} < X_{21}) + \tfrac{1}{2}\,P(X_{11} = X_{21}), \qquad (1)$$

which can be estimated in a natural way by its empirical analog p̂ = ∫ F̂_1 dF̂_2. Here, F̂_i = ½(F̂_i⁺ + F̂_i⁻) is the normalized empirical cumulative distribution function, with F̂_i⁺ and F̂_i⁻ the right- and left-continuous empirical cumulative distribution functions for group i, respectively; these are defined in terms of the usual indicator functions of the events {X_ik ≤ x} and {X_ik < x}. Using the asymptotic equivalence theorem, see for example Brunner and Munzel [3] or Brunner and Puri [4], it can be shown that the statistic

$$T_N = \sqrt{N}\,(\hat{p} - p) \qquad (2)$$

is asymptotically normal under mild regularity assumptions. Let us denote by

$$U_N = \sqrt{N}\left(n_2^{-1}\sum_{j=1}^{n_2} F_1(X_{2j}) - n_1^{-1}\sum_{j=1}^{n_1} F_2(X_{1j}) + 1 - 2p\right) \qquad (3)$$

the statistic that is asymptotically equivalent to T_N, but based on independent random variables. Then, under the null hypothesis H_0: F_1 = F_2, the variance of U_N can be written as

$$\sigma_0^2 = \frac{N^2}{n_1 n_2}\,\sigma^2 = \frac{1}{t(1-t)}\,\sigma^2, \qquad (4)$$

where σ² = ∫ F² dF − ¼ with F = F_1 = F_2. This means that T_N has asymptotically the same distribution as U_N, and the standardized statistic U_N/σ_0 is asymptotically standard normal. To compute the variance of U_N under the alternative hypothesis, we again take advantage of the asymptotic equivalence in (3) and obtain the following asymptotic variance under the alternative:

$$\sigma_N^2 = \frac{N}{n_1 n_2}\left(n_2\,\sigma_1^2 + n_1\,\sigma_2^2\right), \qquad (5)$$

where

$$\sigma_1^2 = \operatorname{Var}(F_2(X_{11})) = \int F_2^2 \, dF_1 - (1-p)^2, \qquad (6)$$
$$\sigma_2^2 = \operatorname{Var}(F_1(X_{21})) = \int F_1^2 \, dF_2 - p^2. \qquad (7)$$

Clearly, the variance under the alternative is a weighted sum of the two components σ_1² and σ_2². Both of these components are important for minimizing the sample size, as carried out in Section 3, unlike the parametric case, where only the two variances under the null and under the alternative hypothesis are considered.

Based on these considerations, an approximate sample size formula for the Wilcoxon-Mann-Whitney test can be obtained, similar to the one calculated by Wang et al. [29] for continuous data. Namely, we obtain

$$N = \frac{\left(\sigma_0\,u_{1-\alpha/2} + \sigma_N\,u_{1-\beta}\right)^2}{\left(p - \tfrac{1}{2}\right)^2}, \qquad (8)$$

where α and β denote the type-I and type-II error rates, respectively, and u_ξ is the ξ-quantile of the standard normal distribution.

The quantities p, σ_0, and σ_N in Equation (8) are unknown in general. Moreover, σ_N² is a linear combination of the two unknown variances σ_1² and σ_2² in Equations (6) and (7). To compute these quantities from the distribution F_1 of the prior information in the reference group and the distribution F_2 generated by an intuitive and easy-to-interpret relevant effect, we proceed as follows.

We interpret the distributions of the data as fixed theoretical distributions, similar to the parametric case in Seber [25] on page 433 and Puntanen et al. [21] on pages 27 and 28. Therefore, we denote the data from the prior information by X*_11, …, X*_1n_1 and the synthetic data for the treatment group by X*_21, …, X*_2n_2. The corresponding cumulative distribution functions are denoted by F*_1 and F*_2, respectively. Here, F*_1 denotes the empirical distribution function of the available data in the reference group, and F*_2 the empirical distribution function of the synthetic data in the treatment group. In this context, “synthetic” means that the data for the treatment group are artificially generated based on the prior information and some interpretable relevant effect. We can generate data sets of arbitrary size for F*_1 and F*_2, as long as the relative frequencies or probabilities remain unchanged. Because we assume that our synthetic data represent fixed distributions and not a sample, we can calculate the variances σ², σ_1², and σ_2², as well as the relative effect p, exactly. To emphasize that these quantities are not estimators but rather the true parameters based on the synthetic data, we will denote them by p*, σ²*, σ_1²*, and σ_2²*.

By using the relations n_1 = tN and n_2 = (1−t)N, the sample size formula from Equation (8) can then be rewritten as

$$N = \frac{\left(\sigma^{*}\,u_{1-\alpha/2} + u_{1-\beta}\sqrt{t\,\sigma_2^{2*} + (1-t)\,\sigma_1^{2*}}\right)^2}{t(1-t)\left(p^{*} - \tfrac{1}{2}\right)^2}. \qquad (9)$$
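For illustration, Formula (9) can be evaluated directly; the following Python sketch (function and argument names are ours) computes the total sample size N from the synthetic-data quantities:

```python
import math
from scipy.stats import norm

def wmw_sample_size(p, sigma_sq, s1_sq, s2_sq, t=0.5, alpha=0.05, beta=0.2):
    """Total sample size N from Formula (9); p, sigma_sq, s1_sq, s2_sq are
    the synthetic-data quantities p*, sigma*^2, sigma1*^2, sigma2*^2.
    The group sizes n1 = t*N and n2 = (1-t)*N are rounded up afterwards."""
    u_a = norm.ppf(1 - alpha / 2)   # u_{1-alpha/2}
    u_b = norm.ppf(1 - beta)        # u_{1-beta}
    num = (math.sqrt(sigma_sq) * u_a
           + u_b * math.sqrt(t * s2_sq + (1 - t) * s1_sq)) ** 2
    return num / (t * (1 - t) * (p - 0.5) ** 2)
```

Note that for continuous data with σ²* = σ_1²* = σ_2²* = 1/12, this expression reduces to Noether's formula, which approximates the variance under the alternative by the variance under the null hypothesis.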

The variances and the relative effect can easily be calculated by using a simple relation between ranks and the so-called placements P*_1k and P*_2k, which were introduced by Orban and Wolfe [20, 19]. The placements were first defined only for continuous distributions, but were later generalized to include discrete distributions; for details see, for example, Brunner and Munzel [3]. To this end, let R*_ik denote the overall rank (midrank) of X*_ik among all N = n_1 + n_2 synthetic observations, and R^(i)_ik the rank of X*_ik within the i-th group, i = 1, 2. Further, let R̄*_i· = n_i⁻¹ Σ_k R*_ik, i = 1, 2, denote the rank means. Then, the placements can be represented by these ranks as P*_ik = R*_ik − R^(i)_ik, i = 1, 2. Finally, by letting P̄*_i· = n_i⁻¹ Σ_k P*_ik, the quantities in the sample size formula (9) can be calculated directly as follows:

$$p^* = \int F_1^* \, dF_2^* = \frac{1}{N}\left(\bar{R}^*_{2\cdot} - \bar{R}^*_{1\cdot}\right) + \frac{1}{2}, \qquad (10)$$
$$\sigma^{2*} = \int F^{*2} \, dF^* - \frac{1}{4} = \frac{1}{N^3}\sum_{i=1}^{2}\sum_{k=1}^{n_i}\left(R^*_{ik} - \frac{N+1}{2}\right)^2, \qquad (11)$$
$$\sigma_1^{2*} = \int F_2^{*2} \, dF_1^* - (1-p^*)^2 = \frac{1}{n_1 n_2^2}\sum_{k=1}^{n_1}\left(P^*_{1k} - \bar{P}^*_{1\cdot}\right)^2, \qquad (12)$$
$$\sigma_2^{2*} = \int F_1^{*2} \, dF_2^* - p^{*2} = \frac{1}{n_1^2 n_2}\sum_{k=1}^{n_2}\left(P^*_{2k} - \bar{P}^*_{2\cdot}\right)^2. \qquad (13)$$

Note that for computing the variances, we do not divide by n_i − 1 or N − 1, but rather by n_i or N, because the distributions of the synthetic data are considered as fixed theoretical distributions, similar to the parametric case in Puntanen et al. [21] (pages 27 and 28).
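The rank and placement relations (10)–(13) translate directly into code. The following Python sketch (our own naming; midranks are obtained via the default "average" tie method) computes all four quantities from the two synthetic samples:

```python
import numpy as np
from scipy.stats import rankdata  # default "average" method yields midranks

def wmw_quantities(x1, x2):
    """Compute p*, sigma*^2, sigma1*^2, sigma2*^2 from the synthetic data
    of the two groups via the rank/placement relations (10)-(13)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    N = n1 + n2
    R = rankdata(np.concatenate([x1, x2]))  # overall midranks
    R1, R2 = R[:n1], R[n1:]
    P1 = R1 - rankdata(x1)                  # placements of group 1
    P2 = R2 - rankdata(x2)                  # placements of group 2
    p = (R2.mean() - R1.mean()) / N + 0.5                     # (10)
    sigma_sq = np.sum((R - (N + 1) / 2) ** 2) / N**3          # (11)
    s1_sq = np.sum((P1 - P1.mean()) ** 2) / (n1 * n2**2)      # (12)
    s2_sq = np.sum((P2 - P2.mean()) ** 2) / (n1**2 * n2)      # (13)
    return p, sigma_sq, s1_sq, s2_sq
```

Because the synthetic data are treated as fixed distributions, the variances above are population quantities (division by n_i and N, not n_i − 1 or N − 1).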

## 3 Minimizing N

### 3.1 Interval for the optimal design

In Section 2, we derived a formula for the sample size N at given type-I and type-II error rates α and β, respectively. In practice, we sometimes have the opportunity to choose how many subjects should be allocated to the first group and how many to the second. The question in such a situation is how the proportion t should be chosen in order to minimize N. Bürkner et al. [5] aimed at finding the optimal t such that the power is maximized for a given sample size N. Although both questions lead to essentially the same answer, we prefer to minimize the sample size, as this question arises more naturally in sample size planning.

Technically, an exact solution to this problem is possible, but the solution can no longer be written down in a feasible closed form, and it would not give us much information about the behaviour of the solution. However, it is possible to provide an interpretable interval for the optimal allocation rate t_0. For that, we only have to assume that the power 1−β is greater than 50%, and we distinguish between the case where both variances σ_1² and σ_2² are positive and the case where one of them is zero. Note that σ_1² and σ_2² can be quite different even if the variances of X_11 and X_21 are the same. If we allow unequal variances for X_11 and X_21, it is even possible that one of σ_1² and σ_2² is zero while the other attains the largest possible value for the variances σ_i², i = 1, 2.

The assumption on the minimal power could be weakened to assuming that the numerator of the bound I_2 in (15) is not zero. One would then only need to distinguish the cases where the power is below, equal to, or above 50%, that is, u_{1−β} < 0, u_{1−β} = 0, and u_{1−β} > 0. For practical considerations, however, only u_{1−β} > 0 is of relevance; therefore, we only consider this situation.

Now, regarding the case σ_1 = σ_2, it is clear from Formula (9) that the optimal allocation rate is t_0 = 1/2, because the numerator of (9) then does not depend on t, and t(1−t) is maximized at t = 1/2. For the case σ_1 ≠ σ_2, we consider first σ_1 < σ_2. Then it is possible to show (see Appendix, Result 2) that the sample size is minimized by a t_0 with t_0 ∈ [I_1, I_2]. The minimizer is unique in the interval (0, 1), and the bounds I_1 and I_2 are given by

$$I_1 = \frac{1}{\kappa + 1}, \qquad (14)$$
$$I_2 = \frac{\sqrt{z}}{\sqrt{z} + \left(u_{1-\alpha/2}\,\sqrt{q}\,\sigma + u_{1-\beta}\,\sigma_2^2\right)}, \qquad (15)$$

where σ² is as in (4), κ and q are as defined in the Appendix (Result 2), and

$$z = \left(u_{1-\alpha/2}\,\sqrt{q}\,\sigma + u_{1-\beta}\,\sigma_1^2\right)\left(u_{1-\alpha/2}\,\sqrt{q}\,\sigma + u_{1-\beta}\,\sigma_2^2\right).$$

Moreover, in this case,

$$t_0 < \tfrac{1}{2} \iff \sigma_1 < \sigma_2. \qquad (16)$$

In the case σ_1 > σ_2, we obtain an analogous result for the minimizer t_0, where the bounds are the same as before. Moreover, we have a similar equivalence, namely

$$t_0 > \tfrac{1}{2} \iff \sigma_1 > \sigma_2. \qquad (17)$$

The derivation of these two equivalences can be found in the Appendix in Results 2 and 3.

From the form of the interval we can see that if σ_1 ≈ σ_2, then t_0 ≈ 1/2. In most cases this means that the minimum total sample size is obtained for allocation rates close to 1/2, or the allocation rate is effectively 1/2 because of rounding. Larger values for the type-I error rate α or for the power 1−β lead in general to more extreme values for t_0, that is, |t_0 − 1/2| gets larger. This can be seen from the upper bound I_2: by increasing α or the power 1−β, the bound I_2 decreases (or increases, in the case σ_1 > σ_2). Typically this means that the difference |t_0 − 1/2| tends to get larger. Note that t_0 is bounded from below (above) by I_1, that is, t_0 cannot become arbitrarily small (or large). The impact of α and β is demonstrated in simulations in Section 5.

Next, we consider the case σ_1 = 0. In the same way as before, it is possible to construct an interval [I_1^(0), I_2] for the optimal allocation rate t_0, where the lower bound is

$$I_1^{(0)} = \frac{u_{1-\alpha/2}\,\sigma_2}{u_{1-\alpha/2}\,\sigma + u_{1-\beta}\,\sigma_2}, \qquad (18)$$

and the upper bound is the same as in the case σ_1 > 0. More details are given in the Appendix in Result 4. An analogous result can be obtained for σ_2 = 0.

Therefore, the value of t_0 is mainly determined by the ratio σ_1/σ_2 of the standard deviations σ_1 and σ_2 under the alternative hypothesis. This is qualitatively different from the result of Dette and O'Brien [8] for the t-test in a parametric location-scale model, where the optimal allocation value is determined by the ratio of the standard deviations under the null and under the alternative hypothesis. For the Wilcoxon-Mann-Whitney test, the variance under the null hypothesis is not really important for determining t_0; in the case of continuous distributions, for example, the variance under the null hypothesis is always σ² = 1/12.
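The behaviour of t_0 can also be checked numerically by evaluating Formula (9) on a grid of allocation rates. In the Python sketch below, the variance values are arbitrary illustrative choices with σ_1 < σ_2, so the minimizer should lie below 1/2 in accordance with (16):

```python
import numpy as np
from scipy.stats import norm

def N_of_t(t, p, sigma_sq, s1_sq, s2_sq, alpha=0.05, beta=0.2):
    """Total sample size N(t) from Formula (9), vectorized over t."""
    u_a, u_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)
    num = (np.sqrt(sigma_sq) * u_a
           + u_b * np.sqrt(t * s2_sq + (1 - t) * s1_sq)) ** 2
    return num / (t * (1 - t) * (p - 0.5) ** 2)

ts = np.linspace(0.01, 0.99, 9801)
# illustrative values: sigma^2 = 1/12 (continuous null variance), s1 < s2
N = N_of_t(ts, p=0.65, sigma_sq=1/12, s1_sq=0.05, s2_sq=0.12)
t0 = ts[np.argmin(N)]  # with sigma1 < sigma2, t0 lies below 1/2, cf. (16)
```

A grid search like this is, of course, only a check; the interval bounds above localize t_0 without any optimization.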

### 3.2 Optimality of a Balanced Design

In the previous section, we have provided ranges for the optimal allocation proportion t_0. There are many situations in which balanced designs are optimal or close to optimal. In this section, we describe classes of situations in which a balanced design minimizes the sample size. From Section 3.1 we know that

$$t_0 = \tfrac{1}{2} \iff \sigma_1 = \sigma_2. \qquad (19)$$

The right-hand side of this equivalence can be rewritten as

$$t_0 = \tfrac{1}{2} \iff \sigma_1 = \sigma_2 \iff \int F_1^2 \, dF_2 = \int (1 - F_2)^2 \, dF_1. \qquad (20)$$

Bürkner et al. [5] showed analytically that for symmetric, continuous distributions under a location shift model, the minimal sample size is attained at t_0 = 1/2. Such distributions satisfy the integral equation

$$\int F_1^2 \, dF_2 = \int (1 - F_2)^2 \, dF_1. \qquad (21)$$

But the class of distributions satisfying Equation (21) is actually larger. Consider normalized cumulative distribution functions for which an a ∈ ℝ exists such that for all x the following equality holds:

$$F_1(a + x) = 1 - F_2(a - x). \qquad (22)$$

Then the minimum of N(t), t ∈ (0, 1), is attained at t_0 = 1/2. This means that (22) is a sufficient, but not necessary, condition for t_0 = 1/2. As an example of distributions that satisfy Equation (21) but not (22), consider F_1 = F_2 with a non-symmetric distribution.

Note that we do not assume for (22) that the distributions are stochastically ordered or symmetric. If we assume finite third moments, then Equation (22) only implies that both distributions have the same variance and that their skewnesses have opposite signs; that is, if we denote by γ_i the skewness of the distribution with cdf F_i, then γ_1 = −γ_2.

Obviously, for a large class of distributions, the optimal allocation rate is exactly t_0 = 1/2. Bürkner et al. [5] already noticed the robustness of the Wilcoxon-Mann-Whitney test regarding the optimal allocation rate: when the optimal t is not equal to 1/2, it is often close to 1/2. Furthermore, the exact choice of t typically has only a small influence on the required total sample size. This applies not only to continuous and symmetric distributions but in general to arbitrary distributions.

## 4 Data Examples

The generality of the approach proposed in this paper is demonstrated using different data examples with continuous metric, discrete metric, and ordered categorical data. In this section, we first describe the data sets. Then, the calculated sample sizes, along with the actually achieved power, are given in comparison with other sample size calculation methods. For all data sets, we used the prior information from one group (e.g., from a previous study or from the literature) to generate synthetic data for the second group based on an interpretable effect specified by a subject-matter expert. For ordered categorical data, such an effect might be that a certain percentage of subjects in each category is moved to a better or worse category. For metric data, it is possible to simply use a location shift as the effect of interest. Regardless of how the effects are chosen, in the end they are all translated into the nonparametric relative effect, which itself provides another interpretable effect quantification that might be useful for practitioners, in addition to, for example, a location shift effect.

For all examples, we used a fixed type-I error rate α and provide the output from an R function which shows the optimal t and the sample size determined for each group. Furthermore, we provide simulation results to assess the actually achieved power. The R code is given in the appendix. For calculating the asymptotic Wilcoxon-Mann-Whitney test, we used the function rank.two.samples from the R package rankFD [13]. For all simulations performed with the statistical software R, we generated a large number of data sets, with a fixed starting seed value, by drawing from the synthetic data. To compute the optimal allocation rate and the sample sizes for each group, the function WMWssp_Minimize from the R package rankFD [13] can be used.
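The power simulations can be sketched as follows; this Python version (resampling scheme, function names, and the SciPy test are our simplification of the R setup, which uses rank.two.samples) estimates the achieved power by repeatedly drawing from the synthetic data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def simulated_power(x1, x2, n1, n2, alpha=0.05, nsim=2000, seed=1):
    """Estimate the achieved power of the asymptotic two-sided WMW test
    by resampling groups of size n1, n2 from the synthetic data."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(nsim):
        s1 = rng.choice(x1, size=n1, replace=True)
        s2 = rng.choice(x2, size=n2, replace=True)
        # normal approximation with tie correction handles discrete data
        pval = mannwhitneyu(s1, s2, alternative="two-sided",
                            method="asymptotic").pvalue
        rejections += (pval < alpha)
    return rejections / nsim
```

The proportion of rejections over the simulation runs is the simulated power that is compared with the nominal 1−β in the tables below.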

### 4.1 Number of Seizures in an Epilepsy Trial

The data for the placebo group of a clinical trial published in Thall and Vail [28] and Leppik et al. [15] are shown in Table 1. As mentioned in the Introduction, a relevant effect for a drug may be stated as a reduction of the number of seizures by a given percentage. A location-shift model is clearly not appropriate for these data. Based on the specified relevant effect size, we can generate synthetic data, which lead to a nonparametric relative effect that is inserted into the sample size formula.

In order to achieve the desired power, our method yields the number of subjects needed in each group for a balanced design. When using the optimal t, the optimal allocation only reduces the total number of subjects needed by one, in comparison with a balanced design. Applying Noether's formula in this case yields a slightly larger sample size. Table 2 presents results from a power simulation regarding the different sample size recommendations. Here, Noether's formula would lead to a slightly overpowered study.

### 4.2 Irritation of the Nasal Mucosa

In this study, two inhalable substances with different concentrations are compared with regard to the severity of nasal mucosa damage in rats (see Akritas, Arnold and Brunner [1]). The severity of irritation is described using an ordinal defect score, where the lowest score refers to no irritation and the highest to severe irritation. For the nasal mucosa data, we have prior information for substance 1 with 2 ppm concentration. A pathologist suggests, for example, that a worsening of one score unit for a certain percentage of the rats in categories 0, 1, and 2 is a relevant effect. This means that this percentage of the rats with score 0 will be assigned score 1, and so forth. The resulting synthetic data set for substance 2 is given in Table 3. The original data set for substance 1 has been augmented by a constant factor in order to obtain integer values for the sample sizes of the synthetic data for substance 2. The result of the sample size calculation is not affected by this, because the relative frequencies for substance 1 remain unchanged.
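The category-shifting construction can be sketched in Python as follows (our own helper; the chosen counts and fraction are illustrative, and the counts must be such that the moved fractions are integers):

```python
import numpy as np

def shift_category_counts(counts, frac, upto):
    """Move `frac` of the subjects in categories 0..upto one category up
    (toward a worse score), based on the original counts per category."""
    counts = np.array(counts, dtype=int)
    out = counts.copy()
    for c in range(upto + 1):
        moved = int(round(frac * counts[c]))  # fraction of the original count
        out[c] -= moved
        out[c + 1] += moved
    return out

# hypothetical score frequencies; shift half of categories 0..2 one score up
synthetic = shift_category_counts([10, 20, 30, 40], 0.5, upto=2)
```

The total number of subjects is unchanged; only the relative frequencies of the score categories define the synthetic alternative distribution.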

Based on the synthetic data in Table 3, the relative effect p* can be computed. Performing a sample size calculation with balanced groups yields the required group sizes. For this data set, the ratio of the two variances is larger than 1; therefore, it is beneficial to assign fewer subjects to the first group (substance 1). To be more precise, the optimal allocation rate leads to a smaller first group, but in both cases the total sample size is the same. If we apply Noether's formula [18], we arrive at a total sample size which is considerably larger than the estimated minimal sample size based on our method and which leads to a remarkably overpowered study (see Table 4 for the simulation results). This is mainly due to ties in the data; recall that Noether's formula was derived for continuous distributions. Our method achieves the intended power for both the balanced and the unbalanced design. Tang [27] derived a sample size formula for ordered categorical data. If we use his method, we obtain that 86 rats per group are needed. The closeness of his result to ours may be taken as confirmation that our unified approach also produces appropriate results in the case of ordered categorical data.

### 4.3 Kidney Weights

In this placebo-controlled toxicity trial, female and male Wistar rats were given a drug at four different dose levels. The primary outcome is the relative kidney weight in [‰], that is, the sum of the two kidney weights divided by the total body weight, multiplied by 1,000. For calculating the sample size, we consider only male rats from the placebo group and generate a suitable data set exhibiting a relevant effect for the treatment group. For generating the synthetic data of the treatment group, an expert considers a location shift by a certain percentage of the mean of the placebo group as a relevant effect. The data are displayed in Table 5.

Using the data from Table 5 as our synthetic data, the nonparametric relative effect p* is calculated, and the total number of Wistar rats needed for the desired power is obtained from Formula (9). In this example, there is again barely any difference between using the optimal design and a balanced allocation. Because of rounding, the optimal design in this case even leads to a larger total sample size than that obtained using a balanced design. Noether's formula again leads to larger sample sizes in this case. The simulated power is given in Table 6. Clearly, Noether's formula again exceeds the intended power. Our method maintains the power quite well and leads to just a slight inflation of power in the unbalanced design.

### 4.4 Albumin in Urine

This data set was considered by Lachin [14] and contains albumin levels in the urine (albuminuria) of diabetic patients. The levels of albumin are rated as either normal, microalbuminuria, or macroalbuminuria. The goal of the study was to compare two treatments, with expected conditional probabilities as given in Table 7.

For power, Lachin [14] reports a required sample size of ( because of rounding to achieve balanced sample sizes). Using our proposed method, we obtain a necessary total sample size of in the balanced case. For the optimal design, we obtain with an optimal allocation rate of around . Simply using Noether’s formula despite the ties, one would calculate a required sample size of (!), clearly leading to a substantially overpowered study. The other three methods attain the nominal power in a simulation study. The relative effect for this data set is .
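For ordered categorical data, the synthetic data sets can be constructed directly from the expected category probabilities. The following sketch is our illustration of that construction (the probabilities are made up and are not the ones from Table 7): we build samples whose empirical distributions match the given probabilities and then estimate the relative effect from midranks.

```r
# Build synthetic ordered categorical samples from assumed category
# probabilities (illustrative values, NOT Table 7), then estimate the
# relative effect from midranks. Categories are coded 1 < 2 < 3.
p1 <- c(0.60, 0.30, 0.10)  # group 1: normal / micro / macro (assumed)
p2 <- c(0.40, 0.35, 0.25)  # group 2 (assumed)
n  <- 20                   # per-group size of the synthetic sample
x1 <- rep(1:3, times = round(n * p1))
x2 <- rep(1:3, times = round(n * p2))
R  <- rank(c(x1, x2), ties.method = "average")
m1 <- length(x1); m2 <- length(x2)
pStar <- (mean(R[m1 + (1:m2)]) - mean(R[1:m1])) / (m1 + m2) + 0.5
pStar  # estimated relative effect, here 0.62
```

The midrank estimator used here is the same one implemented in Appendix A; for purely categorical data it reduces to P(X1 < X2) + 0.5 P(X1 = X2) computed from the category probabilities.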

In the above four data examples, we have used and or for the sample size calculation and power simulation. According to Formula (9) and the intervals for (Equations (14) and (15) in Section 3.1), the choice of and has an influence not only on the total sample size , but also on the optimal allocation rate . In order to study the behaviour of these two parameters, we have performed two simulation studies which are described in Section 5.

## 5 Simulations for the Optimal Design

In this section, we assess in different simulations the behaviour of the optimal allocation rate when changing the nominal type-I error rate , the power , and the ratio of standard deviations .

For simulating the influence of , we used and distributed random numbers in the first and second group, respectively. For each , we generated random numbers for each group and calculated the optimal allocation rate and the total sample sizes and (corresponding to a balanced design) needed to achieve at least power. From the formula for the upper bound of we already saw (Section 3.1) that larger values of the type-I error rate lead to a larger difference . While this does not directly imply that will be more extreme, the optimal allocation rate will tend toward more extreme values, that is, the difference tends to become larger. This behaviour is confirmed in Table 9. In this simulation, we had and , implying . In the data examples, we had already found very little difference between the balanced design and the optimal design; the simulation study yielded a similar observation (see Table 9).

In a second simulation, we investigated the behaviour of for increasing power (or, equivalently, decreasing ). We used and the same distributions as before; therefore, and were the same as above. As power values, we chose and generated random numbers for each setting to calculate the optimal allocation rate . The results are displayed in Table 10. A larger power led to more extreme values of , but the difference in required sample size between the balanced and optimal designs was again negligible.

## 6 Discussion

In this paper, we propose a unified approach to sample size determination for the Wilcoxon-Mann-Whitney two-sample rank sum test. Our approach does not assume any specific type of data or a specific alternative hypothesis; in particular, the data distributions may be discrete or continuous. Based on the general formula, we have also derived an optimal allocation rate of subjects to the two groups, that is, such that is minimized. The value of this optimal allocation rate depends mainly on the ratio (see (12) and (13) for a definition of these variances) and on . The variance under the null hypothesis has no influence on . For we have , for we have , and for we have exactly , assuming . The nominal type-I error rate has only a small impact on the value of : the larger is, the larger the difference becomes.
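This dependence of the optimal allocation rate on the variance ratio can be illustrated numerically. The sketch below is our illustration with made-up values for the variances and the relative effect; the function `N_t` follows the sample size function implemented in Appendix A.

```r
# Hedged illustration: how the optimal allocation rate t0 reacts to the
# ratio sigma2/sigma1. The values of sigma (std. dev. under H0), sigma1,
# and the relative effect p are made up for illustration.
N_t <- function(t, sigma, sigma1, sigma2, p, alpha = 0.05, power = 0.8) {
  (sigma * qnorm(1 - alpha/2) +
     qnorm(power) * sqrt(t * sigma2^2 + (1 - t) * sigma1^2))^2 /
    (t * (1 - t) * (p - 0.5)^2)
}
for (r in c(0.5, 1, 2)) {  # ratio sigma2/sigma1
  t0 <- optimize(N_t, c(0.01, 0.99), sigma = 0.28, sigma1 = 0.25,
                 sigma2 = 0.25 * r, p = 0.65)$minimum
  cat(sprintf("sigma2/sigma1 = %.1f -> t0 = %.3f\n", r, t0))
}
```

For equal variances the minimizer is exactly 1/2, while a larger second-group variance pushes the optimal rate below 1/2, in line with the equivalence (25) derived in Appendix B.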

We can see from the interval for the optimal allocation rate derived in Section 3.1 that will typically be close to . This was also confirmed in the illustrative data examples in Section 4. Furthermore, the difference in required sample size between a balanced design and the optimal allocation design appears practically negligible. In other words, in most cases a balanced design can be recommended for the Wilcoxon-Mann-Whitney test. In extensive simulations, we have confirmed that the new procedure attains the nominal power at the calculated sample sizes quite well. The new procedure has been implemented in the R package WMWssp and will also be available through the package rankFD [13].

## 7 Acknowledgements

The research was supported by the Austrian Science Fund (FWF), grant I 2697-N31.

## References

• [1] Michael G Akritas, Steven F Arnold, and Edgar Brunner. Nonparametric hypotheses and rank statistics for unbalanced factorial designs. Journal of the American Statistical Association, 92(437):258–265, 1997.
• [2] Michael G Akritas and Edgar Brunner. A unified approach to rank tests for mixed models. Journal of Statistical Planning and Inference, 61(2):249–277, 1997.
• [3] Edgar Brunner and Ullrich Munzel. The nonparametric Behrens-Fisher problem: asymptotic theory and a small-sample approximation. Biometrical Journal, 42(1):17–25, 2000.
• [4] Edgar Brunner and Madan L Puri. Nonparametric methods in factorial designs. Statistical Papers, 42(1):1–52, 2001.
• [5] Paul-Christian Bürkner, Philipp Doebler, and Heinz Holling. Optimal design of the Wilcoxon–Mann–Whitney test. Biometrical Journal, 59(1):25–40, 2017.
• [6] Subhabrata Chakraborti, B Hong, and Mark A van de Wiel. A note on sample size determination for a nonparametric test of location. Technometrics, 48(1):88–94, 2006.
• [7] Bruce Jay Collings and Martin A Hamilton. Estimating the power of the two-sample Wilcoxon test for location shift. Biometrics, pages 847–860, 1988.
• [8] Holger Dette and Timothy E O’Brien. Efficient experimental design for the Behrens-Fisher problem with application to bioassay. The American Statistician, 58(2):138–143, 2004.
• [9] Chunpeng Fan and Donghui Zhang. A note on power and sample size calculations for the Kruskal–Wallis test for ordered categorical data. Journal of Biopharmaceutical Statistics, 22(6):1162–1173, 2012.
• [10] Martin A Hamilton and Bruce Jay Collings. Determining the appropriate sample size for nonparametric tests for location shift. Technometrics, 33(3):327–337, 1991.
• [11] Joan F Hilton and Cyrus R Mehta. Power and sample size calculations for exact conditional tests with ordered categorical data. Biometrics, 49(2):609–616, 1993.
• [12] John E Kolassa. A comparison of size and power calculations for the Wilcoxon statistic for ordered categorical data. Statistics in Medicine, 14(14):1577–1581, 1995.
• [13] Frank Konietschke, Sarah Friedrich, Edgar Brunner, and Markus Pauly. rankFD: Rank-Based Tests for General Factorial Designs, 2016. R package version 0.0.1.
• [14] John M Lachin. Power and sample size evaluation for the Cochran–Mantel–Haenszel mean score (Wilcoxon rank sum) test and the Cochran–Armitage test for trend. Statistics in Medicine, 30(25):3057–3066, 2011.
• [15] Ilo E Leppik, Fritz E Dreifuss, Terri Bowman, Nancy Santilli, Margaret Jacobs, Coral Crosby, James Cloyd, Judy Stockman, Nina Graves, Tom Sutula, et al. A double-blind crossover evaluation of progabide in partial seizures. Neurology, 35(4):285, 1985.
• [16] Emmanuel Lesaffre, Ilse Scheys, Jürgen Fröhlich, and Erich Bluhmki. Calculation of power and sample size with bounded outcome scores. Statistics in Medicine, 12(11):1063–1078, 1993.
• [17] Roland A Matsouaka, Aneesh B Singhal, and Rebecca A Betensky. An optimal Wilcoxon–Mann–Whitney test of mortality and a continuous outcome. Statistical Methods in Medical Research, 0(0):0962280216680524, 2016. PMID: 27920364.
• [18] Gottfried E Noether. Sample size determination for some common nonparametric tests. Journal of the American Statistical Association, 82(398):645–647, 1987.
• [19] John Orban and Douglas A Wolfe. Distribution-free partially sequential placement procedures. Communications in Statistics-Theory and Methods, 9(9):883–904, 1980.
• [20] John Orban and Douglas A Wolfe. A class of distribution-free two-sample tests based on placements. Journal of the American Statistical Association, 77(379):666–672, 1982.
• [21] Simo Puntanen, George PH Styan, and Jarkko Isotalo. Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty. Springer Science & Business Media, 2011.
• [22] Dewi Rahardja, Yan D Zhao, and Yongming Qu. Sample size determinations for the Wilcoxon–Mann–Whitney test: A comprehensive review. Statistics in Biopharmaceutical Research, 1(3):317–322, 2009.
• [23] B Rosner and RJ Glynn. Power and sample size estimation for the Wilcoxon rank sum test with application to comparisons of C statistics from alternative prediction models. Biometrics, 65(1):188–197, 2009.
• [24] Frits H Ruymgaart. A unified approach to the asymptotic distribution theory of certain midrank statistics. In Statistique non Parametrique Asymptotique, pages 1–18. Springer, 1980.
• [25] George AF Seber. A Matrix Handbook for Statisticians. John Wiley & Sons, 2008.
• [26] Gwowen Shieh, Show-li Jan, and Ronald H Randles. On power and sample size determinations for the Wilcoxon–Mann–Whitney test. Journal of Nonparametric Statistics, 18(1):33–43, 2006.
• [27] Yongqiang Tang. Size and power estimation for the Wilcoxon–Mann–Whitney test for ordered categorical data. Statistics in Medicine, 30(29):3461–3470, 2011.
• [28] Peter F Thall and Stephen C Vail. Some covariance models for longitudinal count data with overdispersion. Biometrics, pages 657–671, 1990.
• [29] Hansheng Wang, Bin Chen, and Shein-Chung Chow. Sample size determination based on rank tests in clinical trials. Journal of Biopharmaceutical Statistics, 13(4):735–751, 2003.
• [30] John Whitehead. Sample size calculations for ordered categorical data. Statistics in Medicine, 12(24):2257–2271, 1993.
• [31] Yan D Zhao, Dewi Rahardja, and Yongming Qu. Sample size calculation for the Wilcoxon–Mann–Whitney test adjusting for ties. Statistics in Medicine, 27(3):462–468, 2008.

## Appendix A R Code

### a.1 Power Simulation

x1 # vector of synthetic data of first group
x2 # vector of synthetic data of second group

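The body of the power-simulation script did not survive extraction. The following sketch is our reconstruction under assumptions, not the authors' original code: `x1` and `x2` are placeholder synthetic data, and the power of the two-sided WMW test at given group sizes is estimated by resampling from the empirical distributions of the synthetic data.

```r
# Hedged reconstruction (not the authors' original script): estimate the
# power of the two-sided Wilcoxon-Mann-Whitney test by drawing bootstrap
# samples from the synthetic data sets x1 and x2.
set.seed(1)
x1 <- c(1.2, 1.5, 1.7, 1.9, 2.1, 2.4, 2.6, 3.0)  # placeholder synthetic data
x2 <- c(2.0, 2.3, 2.5, 2.8, 3.1, 3.3, 3.6, 4.0)  # placeholder synthetic data
alpha <- 0.05
n1 <- 15; n2 <- 15   # sample sizes to evaluate (e.g., from the formula)
nsim <- 10000
rejections <- replicate(nsim, {
  y1 <- sample(x1, n1, replace = TRUE)
  y2 <- sample(x2, n2, replace = TRUE)
  suppressWarnings(wilcox.test(y1, y2)$p.value) < alpha
})
mean(rejections)  # simulated power
```

Resampling with replacement from the synthetic data reproduces ties in exactly the way the sample size formula accounts for them, which is what the simulation studies in the paper verify.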

### a.2 Minimize t

x1 # vector of synthetic data of first group
x2 # vector of synthetic data of second group
alpha <- 0.05
beta <- 0.8  # note: despite the name, this is the target power 1 - beta; it is used as qnorm(beta) below
m1 <- length(x1)
m2 <- length(x2)

# ranks among union of samples:
R <- rank(c(x1,x2), ties.method="average")
R1 <- R[1:m1]
R2 <- R[m1+(1:m2)]

# ranks within samples:
R11 <- rank(x1, ties.method="average")
R22 <- rank(x2, ties.method="average")

# placements:
P1 <- R1 - R11
P2 <- R2 - R22

# effect size:
pStar <- (mean(R2)-mean(R1)) / (m1+m2) + 0.5

# variances:
sigma1Star <- sqrt(sum((P1 - mean(P1))^2) / (m1 * m2^2))
sigma2Star <- sqrt(sum((P2 - mean(P2))^2) / (m1^2 * m2))
sigmaStar  <- sqrt(sum((R - (m1 + m2 + 1)/2)^2) / (m1 + m2)^3)  # variance under H0

ss = function(t){
return((sigmaStar*qnorm(1-alpha/2) + qnorm(beta)*sqrt(t*sigma2Star^2 +
(1-t)*sigma1Star^2))^2 / (t*(1-t)*(pStar-0.5)^2))
}

# sample size with balanced groups
ss(1/2)

# optimal t and the corresponding sample size
opt <- optimize(ss, interval = c(0, 1), maximum = FALSE, tol = .Machine$double.eps)
opt$minimum    # optimal allocation rate t
opt$objective  # total sample size given the optimal t


## Appendix B Derivation of the Results

### b.1 Interval for the Optimal Design

###### Result 1.

If we assume and , then the optimal design is given by . It is not necessary to assume , but it is convenient to do so in order to avoid a situation where for all .

###### Proof.

The numerator of does not depend on in this case; therefore, is minimized by . ∎

###### Result 2.

For and the sample size is minimized by with . The minimizer is unique in the interval . The bounds and are given by

 I_1 = \frac{1}{\kappa + 1}, \qquad (23)
 I_2 = \frac{\sqrt{z}}{\sqrt{z} + \left(u_{1-\alpha/2}\sqrt{q}\,\sigma + u_{1-\beta}\,\sigma_2^2\right)}, \qquad (24)

with , and . Additionally the following equivalence holds

 t_0 < \tfrac{1}{2} \iff \sigma_1 < \sigma_2. \qquad (25)
###### Proof.

First we calculate the derivative of which is given by

 \frac{d}{dt}N(t) = \left(u_{1-\alpha/2}\,\sigma + u_{1-\beta}\sqrt{\sigma_1^2(1-t) + \sigma_2^2 t}\right)\frac{g(t)}{f(t)}, \qquad (26)

where the functions and are defined by

 g(t) = u_{1-\alpha/2}\,\sigma\,(2t-1)\sqrt{\sigma_1^2(1-t) + \sigma_2^2 t} - u_{1-\beta}\left(\sigma_1^2(1-t)^2 - \sigma_2^2 t^2\right),
 f(t) = \left(p - \tfrac{1}{2}\right)^2 (1-t)^2\, t^2 \sqrt{\sigma_1^2(1-t) + \sigma_2^2 t}.

Only has a root in ; therefore, we only need to consider this function when finding the optimal . To prove the equivalence, we start with . In this case, . Because , it follows that . The other direction can be proved in a similar manner.

Now that we know we can easily construct an interval for . A lower bound is given by . For the upper bound we use the monotonic function

 h(t) = u_{1-\alpha/2}\,\sigma\,(2t-1)\sqrt{q} - u_{1-\beta}\left(\sigma_1^2(1-t)^2 - \sigma_2^2 t^2\right). \qquad (27)

This function satisfies for all and it has exactly one root in . From this it immediately follows that .

For the uniqueness in , consider a second solution . It follows immediately that and consequently . But is strictly monotone in , therefore both roots are equal. ∎

###### Result 3.

For and the sample size is minimized by with . The minimizer is unique in the interval . The bounds are the same as in the previous result. Additionally, the following equivalence holds

 t_0 > \tfrac{1}{2} \iff \sigma_1 > \sigma_2. \qquad (28)
###### Proof.

The proof is similar to the one for the case . ∎

###### Result 4.

For the case , we cannot apply the result from before. But using a similar idea we can find a lower bound for the function which is defined by

 l(t) = u_{1-\alpha/2}\,\sigma\,(2t-1)\,\sigma_2\, t + u_{1-\beta}\,\sigma_2^2\, t^2 \qquad (29)

and this function only has one root in , namely

 I_1^{(0)} = \frac{u_{1-\alpha/2}\,\sigma}{2\,u_{1-\alpha/2}\,\sigma + u_{1-\beta}\,\sigma_2} = \frac{1}{2 + \gamma}, \qquad (30)

where . Then an interval for the optimal design is given by .

### b.2 Optimality of a Balanced Design

From the construction of an interval for it is clear that if and only if . The equality of variances simply means

 \int F_2^2\, dF_1 - \left(\int F_2\, dF_1\right)^2 = \int F_1^2\, dF_2 - \left(\int F_1\, dF_2\right)^2. \qquad (31)

From that we can easily conclude the equivalence

 t_0 = \tfrac{1}{2} \iff \int F_1^2\, dF_2 = \int (1 - F_2)^2\, dF_1. \qquad (32)
###### Result 5.

Let us now consider normalized cumulative distribution functions for which an exists such that for all Equation (22) holds, that is,

 F_1(a + x) = 1 - F_2(a - x). \qquad (33)

Then the optimal design is given by . Furthermore if such an exists and the expectations of the two distributions are finite, then the constant can be explicitly calculated as

 a = \tfrac{1}{2}\left(\int x\, dF_1(x) + \int x\, dF_2(x)\right), \qquad (34)

that is, is the average of the expected values. If the third moments are finite, then it follows from (22) that the variances of the distributions and are equal and their skewness have opposite sign. In the case , the assumption (22) simply means that is a symmetric distribution.

###### Proof.

This equivalence holds since and satisfy . Equation (34) follows directly after some calculations, by first considering and to be either continuous or discrete; then (34) also holds for distributions with both a continuous and a discrete component. First, we prove (34) for the discrete case. Note that from (22) we can conclude that holds. Then, for discrete and , the result follows from

 E X_1 = \sum_i x_i\, P(X_1 = x_i) = -\sum_i (2a - x_i)\, P(X_2 = 2a - x_i) + 2a = -E X_2 + 2a.

The derivation for the continuous case is similar. ∎
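As a numerical sanity check of Result 5 (our illustration, not part of the paper): take F1 = N(0,1) and F2 = N(2,1). The mirror-symmetry condition (22) then holds with a = 1, the average of the two means, and the two alternative variances σ1² = Var F2(X1) and σ2² = Var F1(X2) coincide, so the balanced design is optimal.

```r
# Hedged numerical check of Result 5 for X1 ~ N(0,1), X2 ~ N(2,1):
# the condition F1(a + x) = 1 - F2(a - x) holds with a = 1, and the
# alternative variances agree, implying t0 = 1/2.
F1 <- function(x) pnorm(x, mean = 0, sd = 1)
F2 <- function(x) pnorm(x, mean = 2, sd = 1)
a <- 1
x <- seq(-4, 4, by = 0.5)
max(abs(F1(a + x) - (1 - F2(a - x))))  # numerically zero

# sigma1^2 = Var(F2(X1)) and sigma2^2 = Var(F1(X2)) via Monte Carlo:
set.seed(42)
s1 <- var(F2(rnorm(1e5, 0, 1)))
s2 <- var(F1(rnorm(1e5, 2, 1)))
c(s1, s2)  # approximately equal
```

The exact equality of the two variances follows because F1(X2) and 1 - F2(X1) have the same distribution under (22), which is precisely the mechanism behind the characterization.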