Should we really use post-hoc tests based on mean-ranks?

05/09/2015 ∙ by Alessio Benavoli, et al. ∙ IDSIA

The statistical comparison of multiple algorithms over multiple data sets is fundamental in machine learning. This is typically carried out by the Friedman test. When the Friedman test rejects the null hypothesis, multiple comparisons are carried out to establish which are the significant differences among algorithms. The multiple comparisons are usually performed using the mean-ranks test. The aim of this technical note is to discuss the inconsistencies of the mean-ranks post-hoc test, with the goal of discouraging its use in machine learning as well as in medicine, psychology, etc. We show that the outcome of the mean-ranks test depends on the pool of algorithms originally included in the experiment. In other words, the outcome of the comparison between algorithms A and B depends also on the performance of the other algorithms included in the original experiment. This can lead to paradoxical situations. For instance, the difference between A and B could be declared significant if the pool comprises algorithms C, D, E and not significant if the pool comprises algorithms F, G, H. To overcome these issues, we suggest instead performing the multiple comparisons using a test whose outcome only depends on the two algorithms being compared, such as the sign test or the Wilcoxon signed-rank test.


1 Introduction

The statistical comparison of multiple algorithms over multiple data sets is fundamental in machine learning; it is typically carried out by means of a statistical test. The recommended approach is the Friedman test (demvsar2006statistical). Being non-parametric, it does not require commensurability of the measures across different data sets, it does not assume normality of the sample means and it is robust to outliers.

When the Friedman test rejects the null hypothesis of no difference among the algorithms, post-hoc analysis is carried out to assess which differences are significant. A series of pairwise comparisons is performed, adjusting the significance level via the Bonferroni correction or other more powerful approaches (demvsar2006statistical; garcia2008extension) to control the family-wise Type I error.

The mean-ranks post-hoc test (McDonald1967; nemeneyi1963) is recommended as the pairwise test for multiple comparisons in most books of nonparametric statistics: see for instance (gibbons2011nonparametric, Sec. 12.2.1), (kvam2007nonparametric, Sec. 8.2) and (sheskin2003handbook, Sec. 25.2). It is also commonly used in machine learning (demvsar2006statistical; garcia2008extension). The mean-ranks test is based on the statistic:

$$z = \frac{\bar{R}_A - \bar{R}_B}{\sqrt{\frac{m(m+1)}{6n}}},$$

where $\bar{R}_A$ and $\bar{R}_B$ are the mean ranks (as computed by the Friedman test) of algorithms A and B, $m$ is the number of algorithms to be compared and $n$ the number of datasets. The mean ranks are computed considering the performance of all the algorithms. Thus the outcome of the comparison between A and B depends also on the performance of the other $m-2$ algorithms included in the original experiment. This can lead to paradoxical situations. For instance, the difference between A and B could be declared significant if the pool comprises algorithms C, D, E and not significant if the pool comprises algorithms F, G, H. The performance of the remaining algorithms should instead be irrelevant when comparing algorithms A and B. This problem has been pointed out several times in the past (miller1966simultaneous; gabriel1969simultaneous; Fligner1984) and also in (hollander2013nonparametric, Sec. 7.3). Yet it is ignored by most literature on nonparametric statistics. However, this issue should not be ignored, as it can increase the Type I error when comparing two equivalent algorithms and, conversely, decrease the power when comparing algorithms whose performance is truly different. In this technical note, all these inconsistencies of the mean-ranks test are discussed in detail and illustrated by means of highlighting examples, with the goal of discouraging its use in machine learning as well as in medicine, psychology, etc.

To avoid these issues, we instead recommend performing the pairwise comparisons of the post-hoc analysis using the Wilcoxon signed-rank test or the sign test. The decisions of such tests do not depend on the pool of algorithms included in the initial experiment. It is understood that, regardless of the specific test adopted for the pairwise comparisons, it is necessary to control the family-wise Type I error. This can be obtained through the Bonferroni correction or through more powerful approaches (demvsar2006statistical; garcia2008extension).

Even better would be the adoption of Bayesian methods for hypothesis testing. They overcome the many drawbacks (demvsar2008appropriateness; goodman1999toward; kruschke2010bayesian) of null-hypothesis significance tests. For instance, Bayesian counterparts of the Wilcoxon test and of the sign test have been presented in (benavoli2014a; IDP); a Bayesian approach for comparing cross-validated algorithms on multiple data sets is discussed in (ML).

2 Friedman test

The performance of multiple algorithms tested on multiple datasets can be organized in a matrix:

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{pmatrix}, \qquad (1)$$

where $x_{ij}$ denotes the performance of the $i$-th algorithm on the $j$-th dataset (for $i = 1, \dots, m$ and $j = 1, \dots, n$). The observations (performances) in different columns are assumed to be independent. The algorithms are ranked column-by-column and each entry $x_{ij}$ is replaced by its rank $r_{ij}$ relative to the other observations in the $j$-th column:

$$R = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & r_{mn} \end{pmatrix}, \qquad (2)$$

where $r_{ij}$ is the rank of the $i$-th algorithm in the $j$-th dataset. The sum of the $i$-th row, $R_i = \sum_{j=1}^{n} r_{ij}$, depends on how the $i$-th algorithm performs w.r.t. the other algorithms. Under the null hypothesis of the Friedman test (no difference between the algorithms), the average value of $R_i$ is $n(m+1)/2$. The statistic of the Friedman test is

$$\chi_F^2 = \frac{12}{n\,m\,(m+1)} \sum_{i=1}^{m} \left( R_i - \frac{n(m+1)}{2} \right)^2, \qquad (3)$$

which under the null hypothesis has a chi-squared distribution with $m-1$ degrees of freedom. For $m = 2$, the Friedman test corresponds to the sign test.
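
As a concrete illustration of equations (1)-(3), the following minimal Python sketch (not the authors' MATLAB code; variable names are ours) computes the Friedman statistic from an m x n performance matrix and cross-checks it against scipy's implementation on synthetic data.

```python
# Minimal sketch: Friedman statistic (3) from an m x n performance matrix,
# cross-checked against scipy on synthetic data (toy numbers, for illustration).
import numpy as np
from scipy import stats

def friedman_statistic(X):
    """X[i, j] = performance of the i-th algorithm on the j-th dataset."""
    m, n = X.shape
    # Rank within each dataset (column); higher performance -> higher rank.
    ranks = np.apply_along_axis(stats.rankdata, 0, X)
    R = ranks.sum(axis=1)                                  # row sums R_i
    chi2 = 12.0 / (n * m * (m + 1)) * np.sum((R - n * (m + 1) / 2.0) ** 2)
    return chi2, stats.chi2.sf(chi2, df=m - 1)             # statistic, p-value

# Toy data: 5 algorithms on 20 datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20)) + np.linspace(0.0, 1.0, 5)[:, None]
print(friedman_statistic(X))
print(stats.friedmanchisquare(*X))          # scipy's version, for comparison
```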

3 Mean ranks post-hoc test

If the Friedman test rejects the null hypothesis, one has to establish which are the significant differences among the algorithms. If all classifiers are compared to each other, one has to perform $m(m-1)/2$ pairwise comparisons.

When performing multiple comparisons, one has to control the family-wise error rate, namely the probability of at least one erroneous rejection of the null hypothesis among the $m(m-1)/2$ pairwise comparisons. In the following examples we control the family-wise error rate (FWER) through the Bonferroni correction, even though more powerful techniques are also available (demvsar2006statistical; garcia2008extension). However, our discussion of the shortcomings of the mean-ranks test is valid regardless of the specific approach adopted to control the FWER.

The mean-ranks test claims that the $i$-th and the $j$-th algorithm are significantly different if:

$$|\bar{R}_i - \bar{R}_j| > z_{\alpha/(m(m-1))} \sqrt{\frac{m(m+1)}{6n}}, \qquad (4)$$

where $\bar{R}_i = R_i/n$ is the mean rank of the $i$-th algorithm and $z_{\alpha/(m(m-1))}$ is the Bonferroni-corrected upper standard normal quantile (gibbons2011nonparametric, Sec. 12.2.1). Equation (4) is based on the large-sample ($n \to \infty$) approximation of the distribution of the statistic. The actual distribution of the statistic is derived assuming all the rank configurations in (2) to be equally probable. Under this assumption the variance of $\bar{R}_i - \bar{R}_j$ is $m(m+1)/(6n)$, which originates the term under the square root in (4).

The sampling distribution of the statistic assumes all rank configurations in (2) to be equally probable. Yet this assumption is not tenable: the post-hoc analysis is performed because the null hypothesis of the Friedman test has been rejected.
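
For illustration, a minimal sketch of the decision rule (4) follows. The function name and interface are ours, and the Bonferroni correction could be replaced by the more powerful procedures cited above.

```python
# Minimal sketch of the mean-ranks decision rule (4); names are ours.
import numpy as np
from scipy import stats

def mean_ranks_posthoc(X, alpha=0.05):
    """Return the mean ranks and an m x m boolean matrix of 'significant' decisions."""
    m, n = X.shape
    ranks = np.apply_along_axis(stats.rankdata, 0, X)      # ranks within each dataset
    mean_ranks = ranks.mean(axis=1)                        # \bar{R}_i
    # Bonferroni-corrected two-sided quantile over the m(m-1)/2 comparisons.
    z_crit = stats.norm.ppf(1 - alpha / (m * (m - 1)))
    threshold = z_crit * np.sqrt(m * (m + 1) / (6.0 * n))
    diff = np.abs(mean_ranks[:, None] - mean_ranks[None, :])
    return mean_ranks, diff > threshold
```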

4 Inconsistencies of the mean-ranks test

We illustrate the inconsistencies of the mean-ranks test by presenting three examples. All examples refer to the analysis of the accuracy of different classifiers on multiple data sets. We show that the outcome of the test depends both on the actual difference of accuracy between algorithms A and B and on the accuracy of the remaining algorithms.

4.1 Example 1: artificially increasing power

Assume we have tested five algorithms, A, B, C, D and E, on 20 datasets. The algorithms are ranked within each dataset, where better algorithms are given higher ranks. We aim at comparing A and B. Algorithm A is better than B in the first ten datasets, while B is better than A in the remaining ten. The two algorithms have the same mean performance and their differences are symmetrically distributed: each algorithm wins on half the data sets. Different types of two-sided tests (t-test, Wilcoxon signed-rank test, sign test) return the same p-value, $p = 1$. The mean-ranks test corresponds in this case to the sign test and thus its p-value is also 1. This is the most extreme result in favor of the null hypothesis.

Now assume that we compare A and B together with C, D and E. In the first ten datasets, algorithm B is worse than C, D and E, which in turn are worse than A; in the remaining ten datasets, A is worse than B, which in turn is worse than C, D and E. The p-value of the Friedman test is practically zero and, thus, it rejects the null hypothesis. We can thus perform the post-hoc test (4) with $z_{\alpha/(m(m-1))} = 2.807$ (the Bonferroni-corrected upper standard normal quantile for $\alpha = 0.05$ and $m = 5$): the significance level has been adjusted to $\alpha/(m(m-1)) = 0.0025$, since we are performing $m(m-1)/2 = 10$ two-sided comparisons. The mean ranks of A and B are respectively 3 and 1.5 and, thus, since $|\bar{R}_A - \bar{R}_B| = 1.5 > 2.807\sqrt{5 \cdot 6/(6 \cdot 20)} \approx 1.40$, we can reject the null hypothesis. The result of the post-hoc test is that A and B have significantly different performance.

The decisions of the mean-ranks test are not consistent:

  • if it compares A and B alone, it does not reject the null hypothesis;

  • if it compares A and B together with C, D and E, it rejects the null hypothesis, concluding that A and B have significantly different performance.

The presence of C, D and E artificially introduces a difference between A and B by changing their mean ranks. For instance, C, D and E rank always better than B, while they never outperform A when it works well (i.e., datasets from one to ten); in a real case study, a similar result would probably indicate that while A is well suited for the first ten datasets, C, D and E are better suited for the last ten. The difference (in rank) between A and B is artificially amplified by the presence of C, D and E only when A is better than B. The point is that a large difference in the global ranks of two classifiers does not necessarily correspond to a large difference in their accuracies (and vice versa, as we will see in the next example).

This issue can happen in practice (we thank the anonymous reviewer for suggesting this example). Assume that a researcher presents a new algorithm A and some of its weaker variations $A_1, A_2, \dots, A_k$, and compares the new algorithms with an existing algorithm B. When A is better than B, the weaker variations typically also beat B, so B is ranked last and the rank gap between A and B is large; when B is better than A, B is ranked first and the gap between A and B is only one position. Therefore, the presence of $A_1, A_2, \dots, A_k$ artificially increases the difference between the mean ranks of A and B.
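
The note's original accuracy table for this example is not reproduced above, so the following sketch builds a hypothetical configuration with the same structure (A and B each winning on ten datasets, with C, D and E lying between them only when A wins) and shows how the mean-rank gap between A and B is inflated while the sign test sees no difference. All numeric values below are ours.

```python
# Hypothetical data reproducing the *structure* of Example 1 (the note's actual
# accuracy table is not shown here; every number below is assumed).
import numpy as np
from scipy import stats

n = 20
acc = np.zeros((5, n))                                # rows: A, B, C, D, E
acc[0, :10], acc[0, 10:] = 0.90, 0.60                 # A wins on datasets 1-10
acc[1, :10], acc[1, 10:] = 0.60, 0.90                 # B wins on datasets 11-20
acc[2:, :10] = np.array([[0.70], [0.75], [0.80]])     # C, D, E between B and A
acc[2:, 10:] = np.array([[0.92], [0.94], [0.96]])     # C, D, E above both A and B

# A vs B alone: the sign test sees 10 wins each, hence p = 1.
wins_A = int((acc[0] > acc[1]).sum())
print("sign test p-value:", stats.binomtest(wins_A, n=n).pvalue)

# A vs B inside the pool {A, B, C, D, E}: the mean-rank gap is inflated.
ranks = np.apply_along_axis(stats.rankdata, 0, acc)   # higher accuracy -> higher rank
mean_ranks = ranks.mean(axis=1)
m = 5
threshold = stats.norm.ppf(1 - 0.05 / (m * (m - 1))) * np.sqrt(m * (m + 1) / (6.0 * n))
gap = abs(mean_ranks[0] - mean_ranks[1])
print("mean ranks:", mean_ranks, "gap:", gap, "reject:", gap > threshold)
```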

4.2 Example 2: low power due to the remaining algorithms

Assume the performance of algorithms A and B on the different data sets to be normally distributed, with two different means, so that the two algorithms are truly different. The pool of algorithms comprises also C, D and E, whose mean performances are much larger than those of both A and B. A collection of n data sets is considered.

For the sake of simplicity, assume we want to compare only A and B. There is thus no need of correcting for multiple comparisons.

When comparing A and B, the power of the two-sided sign test with $\alpha = 0.05$ is very high (we have evaluated the power numerically by Monte Carlo simulation), while the power of the mean-ranks test is much lower. We can explain the large difference in power as follows. The sign test (under the normal approximation of the distribution of the statistic) claims significance when:

$$|\bar{R}_A - \bar{R}_B| > z_{\alpha/2} \sqrt{\frac{1}{n}},$$

while the mean-ranks test (4) claims significance when:

$$|\bar{R}_A - \bar{R}_B| > z_{\alpha/2} \sqrt{\frac{m(m+1)}{6n}},$$

with $m = 5$. Since the algorithms C, D and E have mean performances that are much larger than those of A and B, they always occupy the top ranks, and thus the mean-ranks difference $\bar{R}_A - \bar{R}_B$ is equal for the two tests. However, the mean-ranks test estimates the variance of the statistic to be five times larger compared to the sign test ($m(m+1)/6 = 5$ for $m = 5$, against $1$ for $m = 2$). The critical value of the mean-ranks test is thus inflated by $\sqrt{5}$, largely decreasing the power of the test. In fact, for the mean-ranks test the variance of $\bar{R}_A - \bar{R}_B$ increases with the number of algorithms included in the initial experiment.
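
The exact distributions and number of data sets of this example are not reproduced above; the following Monte Carlo sketch therefore uses assumed Gaussians (all parameters are ours) merely to illustrate the mechanism: both tests compare the same mean-rank difference, but the mean-ranks critical value is inflated by $\sqrt{m(m+1)/6}$.

```python
# Monte Carlo sketch of the power gap in Example 2. The Gaussian parameters and
# the number of data sets are assumed, chosen only to expose the mechanism.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, m, alpha, reps = 30, 5, 0.05, 20_000
z = stats.norm.ppf(1 - alpha / 2)                     # single two-sided comparison

crit_sign = z * np.sqrt(1.0 / n)                      # threshold on |R_A - R_B|, m = 2
crit_ranks = z * np.sqrt(m * (m + 1) / (6.0 * n))     # mean-ranks threshold, m = 5

rej_sign = rej_ranks = 0
for _ in range(reps):
    A = rng.normal(0.85, 0.05, n)                     # A truly better than B
    B = rng.normal(0.80, 0.05, n)
    CDE = rng.normal(2.00, 0.05, (3, n))              # dominant: always ranked on top
    ranks = np.apply_along_axis(stats.rankdata, 0, np.vstack([A, B, CDE]))
    gap = abs(ranks[0].mean() - ranks[1].mean())      # same statistic for both tests
    rej_sign += gap > crit_sign
    rej_ranks += gap > crit_ranks

print("power of the sign test      :", rej_sign / reps)
print("power of the mean-ranks test:", rej_ranks / reps)
```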

4.3 Example 3: real classifiers on UCI data sets

Finally, we compare the accuracies of seven classifiers on 54 datasets. The classifiers are: J48 decision tree; hidden naive Bayes; averaged one-dependence estimator (AODE); naive Bayes; J48 graft; locally weighted naive Bayes; random forest. The whole set of results is given in the Appendix. Each classifier has been assessed via 10 runs of 10-fold cross-validation. We performed all the experiments using WEKA (http://www.cs.waikato.ac.nz/ml/weka/). All these classifiers are described in (witten2005data).

The accuracies are reported in Table 2. Assume first that our aim is to compare a given pair of classifiers within the pool formed by the first four classifiers only; we therefore consider just the first four columns of Table 2 and compute the mean ranks accordingly. The Friedman test rejects the null hypothesis. For the pair under consideration, the pairwise comparison gives a statistic larger than the Bonferroni-corrected upper standard normal quantile, so the mean-ranks procedure finds the two algorithms to be significantly different.

If we compare the same pair within the full pool of seven classifiers, the mean ranks change. Again, the Friedman test rejects the null hypothesis, but the pairwise comparison for the same pair now gives a statistic smaller than the Bonferroni-corrected quantile. Thus the difference between the two algorithms is not significant.

The accuracies of the two classifiers are the same in the two cases, but again the decisions of the mean-ranks test are conditional on the group of classifiers we are considering.

Consider building a set of four classifiers, formed by the pair under comparison plus two further classifiers. By differently choosing the two additional classifiers we can build ten different such sets. For each subset we run the mean-ranks test to check whether the difference between the two classifiers of the pair is significant. The difference is claimed to be significant in 7 cases and not significant in 3 cases.

Now consider a set of five classifiers, formed by the pair plus three further classifiers. By differently choosing the three additional classifiers we can build ten different such sets. This yields 10 further cases in which we compare again the same pair. Their difference is claimed to be significant in 9/10 cases.

Table 1 reports the pairwise comparisons for which the statistical decision changes with the pool of classifiers that are considered. The outcome of the mean-ranks test when comparing the same pair of classifiers clearly depends on the pool of alternative classifiers which is assumed.

          Card=2   Card=3   Card=4
Pair 1    7/10     9/10     3/5
Pair 2    1/10     -        -
Pair 3    2/10     -        -
Pair 4    9/10     5/10     -
Table 1: Pairwise comparisons whose outcome is affected by the performance of the other algorithms (number of subsets for which the difference is declared significant / total number of subsets). Here Card=k means that, for each pair in the left column, we consider all the subsets obtained by adding k of the remaining classifiers to the pair (10 subsets for Card=2, 10 for Card=3, 5 for Card=4). The symbol "-" means that the decision for that comparison does not depend on the subset of algorithms.
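
A sketch of the subset analysis behind Table 1 is given below. The accuracy matrix is assumed to be available as a plain numeric CSV with one row per dataset and one column per classifier; the file name, layout and pair indices are hypothetical, not the note's actual data format.

```python
# Sketch of the subset analysis behind Table 1. The CSV layout (rows = datasets,
# columns = classifiers, no header), file name and pair indices are hypothetical.
from itertools import combinations
import numpy as np
from scipy import stats

def mean_ranks_decision(acc, alpha=0.05):
    """Mean-ranks test (4) for the first two classifiers of the pool acc (m x n)."""
    m, n = acc.shape
    ranks = np.apply_along_axis(stats.rankdata, 0, acc)
    mr = ranks.mean(axis=1)
    thr = stats.norm.ppf(1 - alpha / (m * (m - 1))) * np.sqrt(m * (m + 1) / (6.0 * n))
    return abs(mr[0] - mr[1]) > thr

acc = np.loadtxt("accuracy.csv", delimiter=",").T     # -> (classifiers, datasets)
pair = (0, 1)                                         # indices of the pair of interest
others = [k for k in range(acc.shape[0]) if k not in pair]

for card in (2, 3, 4):                                # extra classifiers added to the pair
    decisions = [mean_ranks_decision(acc[list(pair) + list(extra)])
                 for extra in combinations(others, card)]
    print(f"Card={card}: significant in {sum(decisions)}/{len(decisions)} subsets")
```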

4.4 Maximum type I error

A further drawback of the mean-ranks test, which has not been discussed in the previous examples, is that it cannot control the maximum Type I error, that is, the probability of falsely declaring any pair of algorithms to be different regardless of the other algorithms. If the accuracies of all algorithms but one are equal, it does not guarantee the family-wise Type I error to be smaller than $\alpha$ when comparing the equivalent algorithms. We point the reader to (Fligner1984) for a detailed discussion of this aspect.

5 A suggested procedure

Given the above issues, we recommend avoiding the mean-ranks test for the post-hoc analysis. One should instead perform the multiple comparisons using tests whose decisions depend only on the two algorithms being compared, such as the sign test or the Wilcoxon signed-rank test. The sign test is more robust, as it only assumes the observations to be identically distributed; its drawback is low power. The Wilcoxon signed-rank test is more powerful and thus it is generally recommended (demvsar2006statistical). Compared to the sign test, the Wilcoxon signed-rank test makes the additional assumption that the distribution of the differences between the two algorithms being compared is symmetric. The choice between the sign test and the signed-rank test thus depends on whether the symmetry assumption is tenable for the analyzed data.

Regardless of the adopted test, the multiple comparisons should be performed adjusting the significance level to control the family-wise Type I error. This can be done using the corrections for multiple comparisons discussed in (demvsar2006statistical; garcia2008extension). If we adopt the Wilcoxon signed-rank test in Example 3 for comparing a given pair of classifiers, we obtain a p-value that does not depend on the performance of the other algorithms. Thus, for any pool of algorithms, we always report the same decision: the two classifiers are declared significantly different whenever the p-value is smaller than the Bonferroni-corrected significance level.
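
A minimal sketch of this suggested procedure, using scipy's Wilcoxon signed-rank test with a Bonferroni-corrected significance level, is given below; function and variable names are ours, and the Bonferroni correction can be replaced by the more powerful corrections cited above.

```python
# Sketch of the suggested procedure: pairwise Wilcoxon signed-rank tests with a
# Bonferroni-corrected level. Names are ours.
from itertools import combinations
from scipy import stats

def pairwise_wilcoxon(acc, alpha=0.05):
    """acc: (m, n) array of per-dataset scores. Returns {(i, j): (p-value, significant)}."""
    m = acc.shape[0]
    corrected_alpha = alpha / (m * (m - 1) / 2)       # Bonferroni over m(m-1)/2 pairs
    results = {}
    for i, j in combinations(range(m), 2):
        # The decision depends only on the paired differences of algorithms i and j.
        p = stats.wilcoxon(acc[i], acc[j]).pvalue
        results[(i, j)] = (p, p < corrected_alpha)
    return results
```

Replacing stats.wilcoxon with a sign test (e.g., a binomial test on the win counts) gives the more robust but less powerful alternative discussed above.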

6 Software

The MATLAB scripts of the above examples can be downloaded from ipg.idsia.ch/software/meanRanks/matlab.zip

7 Conclusions

The mean-ranks post-hoc test is a widely used test for multiple pairwise comparisons. We have discussed a number of drawbacks of this test, which we recommend avoiding. We instead recommend adopting the sign test or the Wilcoxon signed-rank test, whose decisions do not depend on the pool of classifiers included in the original experiment.

We moreover bring to the attention of the reader the Bayesian counterparts of these tests, which overcome the many drawbacks (kruschke2010bayesian, Chap.11) of null-hypothesis significance testing.

References

Table of accuracies used in Example 3

accuracy.csv

Table 2: Accuracy of classifiers on different data sets.