Finite-sample properties of robust location and scale estimators

08/01/2019, by Chanseok Park, et al.

When an experimental data set is contaminated, we usually employ robust alternatives to the common location and scale estimators, such as the sample median and Hodges-Lehmann estimators for location and the sample median absolute deviation and Shamos estimators for scale. It is well known that these estimators have high positive asymptotic breakdown points and are consistent under the normal distribution as the sample size tends to infinity. To our knowledge, however, the finite-sample properties of these estimators as functions of the sample size have not been well studied in the literature. In this paper, we fill this gap by providing their closed-form finite-sample breakdown points and by calculating the unbiasing factors and relative efficiencies of the robust estimators through extensive Monte Carlo simulations for sample sizes up to 100. The numerical study shows that the unbiasing factor improves the finite-sample performance significantly. In addition, we provide predicted values for the unbiasing factors, obtained by the least squares method, which can be used when the sample size is more than 100.


1 Introduction

Estimation of the location and scale parameters of a distribution, such as the mean and standard deviation of a normal population, is a common and important problem in various branches of engineering, including biomedical, chemical, materials, mechanical, and industrial engineering. The quality of the data plays an important role in estimating these parameters, yet in the engineering sciences experimental data are often contaminated by measurement errors, volatile operating conditions, and the like. Thus, robust estimators are advocated as alternatives to the commonly used location and scale estimators (e.g., the sample mean and sample standard deviation) for estimating the population parameters. For example, when some of the observations are contaminated by outliers, we usually adopt the sample median and Hodges-Lehmann

(Hodges and Lehmann, 1963) estimators for the location parameter, and the sample median absolute deviation (MAD) (Hampel, 1974) and Shamos (Shamos, 1976) estimators for the scale parameter, because these estimators have high breakdown points and thus perform well both in the presence and in the absence of outliers.

The breakdown point is a common criterion for measuring the robustness of an estimator: the larger the breakdown point, the more robust the estimator. The finite-sample breakdown point (Donoho and Huber, 1983) is defined as the maximum proportion of incorrect or arbitrarily large observations that an estimator can handle without producing an egregiously incorrect value. For example, the asymptotic breakdown points of the sample mean and the sample median are 0 and $1/2$, respectively. In general, the breakdown point can be written as a function of the sample size. In this paper, we provide the finite-sample breakdown points for the various location and scale estimators mentioned above. We show that when the sample sizes are small, they are noticeably different from the asymptotic breakdown point, which is the limit of the finite-sample breakdown point as the sample size approaches infinity.

It deserves mentioning that, for robust scale estimation, the MAD and the Shamos estimators not only have positive asymptotic breakdown points, but are also consistent under the normal distribution as the sample size goes to infinity. However, when the sample size is small, they have serious biases and provide inappropriate estimates of the scale parameter. Bias-correction techniques are therefore commonly adopted to improve the finite-sample performance of these estimators. For instance, Williams (2011) studied finite-sample correction factors through computer simulations for several simple robust estimators of the standard deviation of a normal population, including the MAD, the interquartile range, the shortest half interval, and the median moving range. Later, Hayes (2014) obtained finite-sample bias-correction factors for the MAD. These studies show that finite-sample correction factors can significantly reduce the systematic biases of these robust estimators, especially when the sample sizes are small.

To our knowledge, the finite-sample properties of the MAD and Shamos estimators have received little attention in the literature, apart from a few references covering the topic for small sample sizes. This observation motivates us to carry out extensive Monte Carlo simulations to obtain the empirical biases of these estimators. Given that the variance of an estimator is another important metric for its evaluation, we also obtain the finite-sample variances of the median, Hodges-Lehmann, MAD, and Shamos estimators under the standard normal distribution, which are not fully available in the statistics literature. Numerical results show that the unbiasing factor improves the finite-sample performance of an estimator significantly. In addition, we provide predicted values for the unbiasing factors, obtained by the least squares method, which can be used when the sample size is more than 100.

The remainder of this paper is organized as follows. In Section 2, we derive the finite-sample breakdown points of the robust location and scale estimators. Using extensive Monte Carlo simulations, we calculate the empirical biases of the MAD and Shamos estimators in Section 3 and the finite-sample variances of the median, HL, MAD, and Shamos estimators in Section 4. Some concluding remarks are provided in Section 5.

2 Finite-sample breakdown point

In this section, we derive the finite-sample breakdown points of the robust location estimators (the sample median and the Hodges-Lehmann (HL) estimator) in Subsection 2.1 and of the robust scale estimators (the MAD and the Shamos estimator) in Subsection 2.2.

2.1 Robust location estimators

It is well known that the asymptotic breakdown points of the sample median and the HL estimator are $1/2$ and $1 - 1/\sqrt{2} \approx 0.293$, respectively. Note that these estimators are available in a closed form and are location-equivariant in the sense that $T(X_1 + c, \ldots, X_n + c) = T(X_1, \ldots, X_n) + c$ for any constant $c$. However, in many cases the finite-sample breakdown point can be noticeably different from the asymptotic breakdown point, especially when the sample size is small. For instance, when $n = 5$, we observe from equation (1) below that the finite-sample breakdown point of the median is $0.4$, which is different from its asymptotic breakdown point of $0.5$.

Suppose that we have a sample of size $n$, denoted $X_1, X_2, \ldots, X_n$. Then we can make up to $\lceil n/2 \rceil - 1$ of the sample observations arbitrarily large without making the median arbitrarily large. Let $\lfloor \cdot \rfloor$ be the floor function ($\lfloor x \rfloor$ is the largest integer not exceeding $x$). The finite-sample breakdown point of the median is then given by

$$\varepsilon_n(\mathrm{median}) = \frac{1}{n} \left\lfloor \frac{n-1}{2} \right\rfloor. \qquad (1)$$

Using the fact that $\lfloor (n-1)/2 \rfloor$ can be rewritten as $(n-1)/2$ for odd $n$ and $(n-2)/2$ for even $n$, we have

$$\varepsilon_n(\mathrm{median}) = \begin{cases} \dfrac{1}{2} - \dfrac{1}{2n} & \text{for odd } n, \\[1ex] \dfrac{1}{2} - \dfrac{1}{n} & \text{for even } n. \end{cases}$$

The asymptotic breakdown point of the median is obtained by taking the limit of the finite-sample breakdown point as $n \to \infty$, which provides $\lim_{n\to\infty} \varepsilon_n(\mathrm{median}) = 1/2$.
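As a quick numerical check of (1), the following R sketch (illustrative only; breakdown_median is a hypothetical helper written for this note, not part of any published package) contaminates $m$ observations with an enormous value and finds the largest $m$ for which the median stays bounded.

## Sketch: verify the finite-sample breakdown point of the median in (1).
## For each m, replace m observations by a huge value and check whether
## the median remains bounded; the largest safe m over n matches eq. (1).
breakdown_median <- function(n, big = 1e12) {
  x <- rnorm(n)
  m_star <- 0
  for (m in seq_len(n)) {
    x_bad <- c(x[seq_len(n - m)], rep(big, m))
    if (abs(median(x_bad)) < big / 4) m_star <- m else break
  }
  m_star / n
}

for (n in 2:10) cat(n, breakdown_median(n), floor((n - 1) / 2) / n, "\n")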

The HL estimator is defined as the median of the pairwise averages of the sample observations and is given by

$$\widehat{\mathrm{HL}} = \mathop{\mathrm{median}} \left( \frac{X_i + X_j}{2} \right).$$

Note that the median of the pairwise averages can be calculated over $i < j$, $i \le j$, or all $(i,j)$. We denote these three versions as

$$\widehat{\mathrm{HL1}} = \mathop{\mathrm{median}}_{i<j} \left( \frac{X_i + X_j}{2} \right), \quad \widehat{\mathrm{HL2}} = \mathop{\mathrm{median}}_{i \le j} \left( \frac{X_i + X_j}{2} \right), \quad \widehat{\mathrm{HL3}} = \mathop{\mathrm{median}}_{\forall (i,j)} \left( \frac{X_i + X_j}{2} \right),$$

respectively. In what follows, we first derive the breakdown point of the HL3 estimator and then use a similar approach to derive the breakdown points of the HL1 and HL2 estimators.

Suppose that we make $m$ of the $n$ observations arbitrarily large, with $0 \le m \le n$. Notice that there are $n^2$ paired average terms (so-called Walsh averages) in the HL3 estimator: $(X_i + X_j)/2$ for $i = 1, \ldots, n$ and $j = 1, \ldots, n$. Because the HL3 estimator is the median of these $n^2$ values, the number of arbitrarily large Walsh averages it can tolerate cannot be greater than $\lfloor (n^2 - 1)/2 \rfloor$, due to equation (1). If we make $m$ of the observations arbitrarily large, then the number of arbitrarily large Walsh averages becomes $n^2 - (n-m)^2$. These two facts provide the following relationship

$$n^2 - (n-m)^2 \le \left\lfloor \frac{n^2 - 1}{2} \right\rfloor,$$

which is equivalent to $(n-m)^2 > n^2/2$. The finite-sample breakdown point of the HL3 is then given by $m^*/n$, where

$$m^* = \max\left\{ m : (n-m)^2 > \frac{n^2}{2} \right\}. \qquad (2)$$

To obtain an explicit formula for (2), we let $g(m) = (n-m)^2 - n^2/2$. Since $g'(m) = -2(n-m) < 0$ for $m < n$, $g(m)$ is decreasing for $0 \le m < n$. The roots of $g(m) = 0$ are given by $m = n - n/\sqrt{2}$ and $m = n + n/\sqrt{2}$. Since $m^*$ is an integer and $m^* < n - n/\sqrt{2}$, we have $m^* = \lceil n - n/\sqrt{2} \rceil - 1$, that is, $m^* = n - \lfloor n/\sqrt{2} \rfloor - 1$. Then we have the closed-form finite-sample breakdown point of the HL3:

$$\varepsilon_n(\mathrm{HL3}) = \frac{1}{n} \left( n - 1 - \left\lfloor \frac{n}{\sqrt{2}} \right\rfloor \right). \qquad (3)$$

The asymptotic breakdown point of the HL3 is given by $1 - 1/\sqrt{2}$. Using $\lfloor x \rfloor = x - \{x\}$, where $\{x\}$ denotes the fractional part of $x$, we can rewrite (3) as

$$\varepsilon_n(\mathrm{HL3}) = 1 - \frac{1}{\sqrt{2}} - \frac{1 - \{n/\sqrt{2}\}}{n},$$

where $0 \le \{n/\sqrt{2}\} < 1$. Thus, we have $\lim_{n\to\infty} \varepsilon_n(\mathrm{HL3}) = 1 - 1/\sqrt{2} \approx 0.293$.

In the case of the HL1 estimator, there are $N_1 = n(n-1)/2$ Walsh averages. Since the HL1 estimator is the median of these $N_1$ Walsh averages, the number of arbitrarily large Walsh averages it can tolerate cannot be greater than $\lfloor (N_1 - 1)/2 \rfloor$, due to equation (1) again. If we make $m$ observations arbitrarily large, with $0 \le m \le n$, then there are $N_1 - (n-m)(n-m-1)/2$ arbitrarily large Walsh averages. Thus, the following inequality holds

$$\frac{n(n-1)}{2} - \frac{(n-m)(n-m-1)}{2} \le \left\lfloor \frac{N_1 - 1}{2} \right\rfloor,$$

which is equivalent to $(n-m)(n-m-1) > n(n-1)/2$. In a similar way as done for equation (2), we let $m^*$ be the largest integer satisfying the above, with $0 \le m^* < n$. For convenience, we let $g(m) = (n-m)(n-m-1) - n(n-1)/2$. Then $g(m)$ is decreasing for $0 \le m < n-1$ due to $g'(m) = -2(n-m) + 1 < 0$, and the roots of $g(m) = 0$ are given by $m = n - \big(1 \pm \sqrt{2n^2 - 2n + 1}\big)/2$. Thus, using the similar argument to that used for the HL3 case, we have $m^* = n - 1 - \big\lfloor \big(1 + \sqrt{2n^2 - 2n + 1}\big)/2 \big\rfloor$. Then we have the closed-form finite-sample breakdown point of the HL1:

$$\varepsilon_n(\mathrm{HL1}) = \frac{1}{n} \left( n - 1 - \left\lfloor \frac{1 + \sqrt{2n^2 - 2n + 1}}{2} \right\rfloor \right). \qquad (4)$$

It should be noted that we also have $\lim_{n\to\infty} \varepsilon_n(\mathrm{HL1}) = 1 - 1/\sqrt{2}$.

Similar to the case of the HL1, we obtain the closed-form finite-sample breakdown point of the HL2 estimator, which has $n(n+1)/2$ Walsh averages:

$$\varepsilon_n(\mathrm{HL2}) = \frac{1}{n} \left( n - 1 - \left\lfloor \frac{\sqrt{2n^2 + 2n + 1} - 1}{2} \right\rfloor \right). \qquad (5)$$
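As a sanity check on (3), (4), and (5), the following R sketch (the helper names bp_hl and bp_hl_direct are ours, written for this illustration) compares the closed forms against a direct search over $m$ using the contaminated-Walsh-average counts from the derivations above.

## Sketch: closed-form breakdown points (3)-(5) versus a direct search
## over m based on how many Walsh averages the contamination corrupts.
bp_hl <- function(n, type = c("HL1", "HL2", "HL3")) {
  type <- match.arg(type)
  m_star <- switch(type,
    HL1 = n - 1 - floor((1 + sqrt(2 * n^2 - 2 * n + 1)) / 2),
    HL2 = n - 1 - floor((sqrt(2 * n^2 + 2 * n + 1) - 1) / 2),
    HL3 = n - 1 - floor(n / sqrt(2)))
  m_star / n
}

bp_hl_direct <- function(n, type = c("HL1", "HL2", "HL3")) {
  type <- match.arg(type)
  N <- switch(type, HL1 = n * (n - 1) / 2, HL2 = n * (n + 1) / 2, HL3 = n^2)
  clean <- function(m) switch(type,      # Walsh averages free of outliers
    HL1 = (n - m) * (n - m - 1) / 2,
    HL2 = (n - m) * (n - m + 1) / 2,
    HL3 = (n - m)^2)
  m <- 0
  while (m + 1 < n && N - clean(m + 1) <= floor((N - 1) / 2)) m <- m + 1
  m / n
}

n <- 10
sapply(c("HL1", "HL2", "HL3"),
       function(t) c(closed = bp_hl(n, t), direct = bp_hl_direct(n, t)))

For $n = 10$ this returns 0.2, 0.3, and 0.2 for the HL1, HL2, and HL3, matching the corresponding row of Table 1.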

2.2 Robust scale estimators

For robust scale estimation, we consider the MAD (Hampel, 1974) and the Shamos (Shamos, 1976) estimators. The MAD is given by

$$\widehat{\mathrm{MAD}} = b \cdot \mathop{\mathrm{median}}_{i} \big| X_i - \mathop{\mathrm{median}}_{j}(X_j) \big|, \qquad (6)$$

where $b = 1/\Phi^{-1}(3/4) \approx 1.4826$ is needed to make this estimator consistent for the standard deviation under the normal distribution (Rousseeuw and Croux, 1993). The MAD resembles the median, and its finite-sample breakdown point is the same as that of the median in (1). The Shamos estimator is given by

$$\widehat{\mathrm{Shamos}} = c \cdot \mathop{\mathrm{median}}_{i<j} \big( |X_i - X_j| \big), \qquad (7)$$

where $c = 1/\big(\sqrt{2}\,\Phi^{-1}(3/4)\big) \approx 1.0483$ is needed to make this estimator consistent for the standard deviation under the normal distribution (Lévy-Leduc et al., 2011).

Of particular note is that the Shamos estimator resembles the HL1 estimator, with the Walsh averages replaced by pairwise differences. Thus, its finite-sample breakdown point is the same as that of the HL1 estimator in (4). In the case of the HL estimator, the median can be calculated over $i < j$, $i \le j$, or all $(i,j)$, but the median in the Shamos estimator is calculated only over $i < j$ because $|X_i - X_j| = 0$ for $i = j$. Note that the MAD and Shamos estimators are available in a closed form and are scale-equivariant in the sense that $T(aX_1, \ldots, aX_n) = |a| \, T(X_1, \ldots, X_n)$. In Table 1, we provide the finite-sample breakdown points of the estimators considered in this paper. We also plot these values in Figure 1.

Figure 1: The finite-sample breakdown points under consideration.
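For concreteness, here is a minimal R sketch of (6) and (7). The helper names mad_est and shamos_est are ours, and the constants are those stated above; note that base R's built-in mad() already implements (6) with the same default constant 1.4826.

## Sketch of the MAD in (6) and the Shamos estimator in (7).
mad_est <- function(x) {
  1.4826 * median(abs(x - median(x)))   # b = 1/qnorm(3/4)
}
shamos_est <- function(x) {
  d <- as.vector(dist(x))               # |X_i - X_j| over i < j
  1.0483 * median(d)                    # c = 1/(sqrt(2) * qnorm(3/4))
}

set.seed(1)
x <- rnorm(20, mean = 5, sd = 2)
c(mad_est(x), shamos_est(x))            # both estimate sd = 2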

3 Empirical biases

As mentioned above, the MAD in (6) and the Shamos estimator in (7) are consistent under the normal distribution; that is, as the sample size goes to infinity, they converge to the standard deviation $\sigma$. However, when the sample size is small, they have serious biases. In this section, we obtain the unbiasing factors for the MAD and Shamos estimators through extensive Monte Carlo simulations. It deserves mentioning that location estimators such as the median and the Hodges-Lehmann estimator are unbiased under the normal distribution.

For this simulation, we generated a sample of size $n$ from the standard normal distribution, $N(0,1)$, and calculated the MAD and Shamos estimates. We repeated this simulation ten million times ($10^7$ replications) to obtain the empirical biases of these two estimators. In Table 2, we provide the empirical biases for $n = 1, 2, \ldots, 100$, and we plot them in Figure 2. Using these biases, we can easily obtain the unbiasing factors as follows. For convenience, let $A_n$ be the empirical bias of the MAD, so that $E(\widehat{\mathrm{MAD}}) = (1 + A_n)\,\sigma$. Then $1/(1 + A_n)$ is the unbiasing factor, and an empirically unbiased MAD is given by

$$\frac{\widehat{\mathrm{MAD}}}{1 + A_n}.$$

Similarly, an empirically unbiased Shamos estimator is given by

$$\frac{\widehat{\mathrm{Shamos}}}{1 + B_n},$$

where $B_n$ is the empirical bias of the Shamos estimator.

Figure 2: Empirical biases of the MAD and Shamos estimators.
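The bias simulation is easy to reproduce at a smaller replication count. The sketch below (our illustrative empirical_bias, reusing mad_est and shamos_est from the previous sketch, with $10^4$ rather than $10^7$ replications) estimates $A_n$ and $B_n$ for a given $n$.

## Sketch: empirical biases A_n and B_n under N(0,1), as in Table 2.
empirical_bias <- function(n, reps = 1e4) {
  est <- replicate(reps, {
    x <- rnorm(n)
    c(MAD = mad_est(x), Shamos = shamos_est(x))
  })
  rowMeans(est) - 1                    # true sigma is 1
}

bias10 <- empirical_bias(10)           # estimates of A_10 and B_10
## Empirically unbiased versions divide by (1 + bias):
## mad_est(x) / (1 + bias10["MAD"]); shamos_est(x) / (1 + bias10["Shamos"])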

For the case when $n > 100$, we suggest estimating the biases as follows. Since the MAD in (6) and the Shamos in (7) are consistent under the normal distribution, $A_n$ and $B_n$ converge to zero as $n$ goes to infinity. For a large value of $n$, we suggest using the model forms proposed by Hayes (2014) and Williams (2011) for $A_n$, and estimating $B_n$ analogously. To estimate the model coefficients, we obtained additional empirical biases in Table 3 for selected values of $n$ from 109 to 500. Using the values for the case of $n > 100$, we obtain the least squares estimates of the coefficients; for the method of Williams (2011), the fit is carried out after a logarithmic transformation. Note that Hayes (2014) and Williams (2011) estimated the correction factors for the cases of odd and even values of $n$ separately. However, for a large value of $n$, the gain in precision may not be noticeable: Figure 2 shows no noticeable difference between the odd and even values of $n$.

We can likewise obtain the least squares estimates of $B_n$ for large $n$ based on Hayes (2014) and on Williams (2011). In Table 3, we provide the estimated biases of the MAD and Shamos estimators alongside the empirical values. These results show that the estimated biases are accurate up to the fourth decimal place, and there is no noticeable difference between the two estimates based on Hayes (2014) and Williams (2011).
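A sketch of such a fit is below. The model forms are assumptions made for this illustration (a Hayes-type rational model $A_n \approx \alpha/n + \beta/n^2$ and a Williams-type power law $A_n \approx \alpha n^{\beta}$ fitted on the log scale; the paper's fitted coefficients are not reproduced here), and empirical_bias is the illustrative helper from the sketch above.

## Sketch: least squares extrapolation of the MAD bias beyond n = 100.
ns <- c(110, 130, 150, 170, 190)
A  <- sapply(ns, function(n) empirical_bias(n)["MAD"])

fit_hayes    <- lm(A ~ 0 + I(1/ns) + I(1/ns^2))   # assumed a/n + b/n^2
fit_williams <- lm(log(abs(A)) ~ log(ns))         # assumed a * n^b, log scale

## Predicted bias at n = 300 under each model:
ch <- coef(fit_hayes)
pred_hayes    <- ch[1] / 300 + ch[2] / 300^2
cw <- coef(fit_williams)
pred_williams <- sign(A[1]) * exp(cw[1]) * 300^cw[2]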

It is well known that the sample standard deviation $S$ is not unbiased for $\sigma$ under the normal distribution. To make it unbiased, the unbiasing factor $c_4(n)$ is widely used, so that $S/c_4(n)$ is unbiased. By analogy, we suggest the notations $c_5(n)$ and $c_6(n)$ for the unbiasing factors of the MAD and Shamos estimators, respectively. Then we can obtain the unbiased MAD and Shamos estimators for any value of $n$ as

$$\frac{\widehat{\mathrm{MAD}}}{c_5(n)} \quad\text{and}\quad \frac{\widehat{\mathrm{Shamos}}}{c_6(n)}, \qquad (8)$$

where $c_5(n) = 1 + A_n$ and $c_6(n) = 1 + B_n$.

4 Empirical variances

In this section, through extensive Monte Carlo simulations, we calculate the finite-sample variances of the median, HL, MAD, and Shamos estimators under the standard normal distribution. We generated a sample of size $n$ from the standard normal distribution and calculated the empirical variance of each estimator for a given value of $n$. We repeated this simulation ten million times ($10^7$ replications) to obtain the empirical variance for each value of $n$.

It should be noted that the values of the asymptotic relative efficiency (ARE) of various estimators are known. Here the ARE of an estimator $\hat{\theta}_2$ relative to $\hat{\theta}_1$ is defined as

$$\mathrm{ARE}(\hat{\theta}_2; \hat{\theta}_1) = \frac{\mathrm{AVar}(\hat{\theta}_1)}{\mathrm{AVar}(\hat{\theta}_2)}, \qquad (9)$$

where

$$\mathrm{AVar}(\hat{\theta}) = \lim_{n\to\infty} n \cdot \mathrm{Var}(\hat{\theta}), \qquad (10)$$

and $\hat{\theta}_1$ is often a reference or baseline estimator. For example, under the normal distribution we have $\mathrm{ARE}(\mathrm{median}; \bar{X}) = 2/\pi \approx 0.637$, $\mathrm{ARE}(\mathrm{HL}; \bar{X}) = 3/\pi \approx 0.955$, $\mathrm{ARE}(\mathrm{MAD}; S) \approx 0.368$, and $\mathrm{ARE}(\mathrm{Shamos}; S) \approx 0.864$, where $\bar{X}$ is the sample mean and $S$ is the sample standard deviation. For more details, see Serfling (2011) and Lévy-Leduc et al. (2011).

Note that with a random sample of size $n$ from the standard normal distribution, we have $\mathrm{Var}(\bar{X}) = 1/n$ and $\mathrm{Var}(S) \approx 1/(2n)$ for large $n$. Thus, for a large value of $n$, we have $\mathrm{Var}(\mathrm{median})/\mathrm{Var}(\bar{X}) \to \pi/2 \approx 1.571$, $\mathrm{Var}(\mathrm{HL})/\mathrm{Var}(\bar{X}) \to \pi/3 \approx 1.047$, $\mathrm{Var}(\mathrm{MAD})/\mathrm{Var}(S) \to 1/0.368 \approx 2.72$, and $\mathrm{Var}(\mathrm{Shamos})/\mathrm{Var}(S) \to 1/0.864 \approx 1.16$. We provide these variance ratios for each value of $n$ in Tables 4 and 5, and we plot them in Figure 3.
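The following R sketch (our illustrative var_ratios, reusing mad_est and shamos_est from Section 2.2's sketch, at a reduced replication count) computes these variance ratios for one value of $n$; the output can be compared against the corresponding rows of Tables 4 and 5.

## Sketch: empirical variance ratios as in Tables 4 and 5.
var_ratios <- function(n, reps = 1e4) {
  est <- replicate(reps, {
    x <- rnorm(n)
    w <- outer(x, x, "+") / 2                      # all pairwise averages
    c(mean = mean(x), median = median(x),
      HL1 = median(w[lower.tri(w)]),               # i < j
      HL2 = median(w[lower.tri(w, diag = TRUE)]),  # i <= j
      HL3 = median(w),                             # all (i, j)
      S = sd(x), MAD = mad_est(x), Shamos = shamos_est(x))
  })
  v <- apply(est, 1, var)
  c(v[c("median", "HL1", "HL2", "HL3")] / v["mean"],  # location vs mean
    v[c("MAD", "Shamos")] / v["S"])                   # scale vs sd
}

var_ratios(10)   # compare with the n = 10 row of Table 4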

For the case when $n > 100$, we suggest estimating these values based on Hayes (2014) or Williams (2011), as we did for the biases in the previous section, by fitting suitable regression models in $n$. One can use either method; for brevity, we used the method based on Hayes (2014). To estimate the model coefficients, we obtained the empirical variance ratios in Table 6 for selected values of $n$ from 109 to 500. Notice that Figure 3 indicates that it is reasonable to fit the models for the median and the MAD separately for odd and even values of $n$. Using the simulation results in Tables 5 and 6 for large $n$, we obtain the least squares estimates of the coefficients based on the method of Hayes (2014).

In Tables 7 and 8, we also calculated the relative efficiencies (REs) of the aforementioned estimators for $n \le 100$ using the above empirical variances; for $n > 100$, one can easily obtain the REs using the estimated variances. It should be noted that the REs of the median and HL estimators are one for $n \le 2$: when $n \le 2$, the median and the HL are essentially the same as the sample mean. Note also that the HL1 is not available for $n = 1$.

Another noticeable result is that the RE of the HL1 is exactly one when $n = 4$. When $n = 4$, the HL1 is the median of the six pairwise averages $(X_i + X_j)/2$ with $i < j$, which is the same as the median of $(X_{(1)}+X_{(2)})/2$, $(X_{(1)}+X_{(3)})/2$, $(X_{(1)}+X_{(4)})/2$, $(X_{(2)}+X_{(3)})/2$, $(X_{(2)}+X_{(4)})/2$, and $(X_{(3)}+X_{(4)})/2$, where $X_{(1)} \le X_{(2)} \le X_{(3)} \le X_{(4)}$ are the order statistics. Because $(X_{(1)}+X_{(4)})/2$ and $(X_{(2)}+X_{(3)})/2$ are the two middle values among these six, we have

$$\widehat{\mathrm{HL1}} = \frac{1}{2} \left( \frac{X_{(1)}+X_{(4)}}{2} + \frac{X_{(2)}+X_{(3)}}{2} \right) = \bar{X}.$$

Thus, the RE of the HL1 should be one. In this case, as expected, the finite-sample breakdown point is zero, as provided in Table 1.
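This identity is easy to confirm numerically:

## Quick check: for n = 4, the HL1 coincides with the sample mean.
set.seed(42)
x <- rnorm(4)
w <- outer(x, x, "+") / 2
hl1 <- median(w[lower.tri(w)])   # median of the six pairwise averages
all.equal(hl1, mean(x))          # TRUE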

It should be noted that $\widehat{\mathrm{MAD}}/c_5(n)$ and $\widehat{\mathrm{Shamos}}/c_6(n)$ are unbiased for $\sigma$ under the normal distribution, but their squared values are not unbiased for $\sigma^2$. Using the empirical and estimated variances, we can obtain unbiased versions as follows. For convenience, we denote $v_5(n) = \mathrm{Var}(\widehat{\mathrm{MAD}})$ and $v_6(n) = \mathrm{Var}(\widehat{\mathrm{Shamos}})$, where the variances are obtained using a sample of size $n$ from the standard normal distribution, as mentioned earlier. Since the MAD and Shamos estimators are scale-equivariant, we have $\mathrm{Var}(\widehat{\mathrm{MAD}}) = v_5(n)\,\sigma^2$ and $\mathrm{Var}(\widehat{\mathrm{Shamos}}) = v_6(n)\,\sigma^2$ for a sample from a normal distribution with standard deviation $\sigma$. It is immediate from (8) that $E(\widehat{\mathrm{MAD}}) = c_5(n)\,\sigma$ and $E(\widehat{\mathrm{Shamos}}) = c_6(n)\,\sigma$. Considering $E(T^2) = \mathrm{Var}(T) + \{E(T)\}^2$, we have $E(\widehat{\mathrm{MAD}}^2) = \{v_5(n) + c_5(n)^2\}\,\sigma^2$ and $E(\widehat{\mathrm{Shamos}}^2) = \{v_6(n) + c_6(n)^2\}\,\sigma^2$. Thus, the following estimators are unbiased for $\sigma^2$ under the normal distribution:

$$\frac{\widehat{\mathrm{MAD}}^2}{v_5(n) + c_5(n)^2} \quad\text{and}\quad \frac{\widehat{\mathrm{Shamos}}^2}{v_6(n) + c_6(n)^2}.$$
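A small sketch of this correction follows (mad_sq_factor is our hypothetical helper); since the factor $v_5(n) + c_5(n)^2$ is just $E(\widehat{\mathrm{MAD}}^2)$ under $N(0,1)$, it can be estimated directly by simulation.

## Sketch: unbiased estimation of sigma^2 via MAD^2 / (v5 + c5^2).
mad_sq_factor <- function(n, reps = 1e4) {
  m <- replicate(reps, mad_est(rnorm(n)))
  var(m) + mean(m)^2                 # v5(n) + c5(n)^2 = E[MAD^2] at sigma = 1
}

n <- 15
f <- mad_sq_factor(n)
x <- rnorm(n, sd = 3)
mad_est(x)^2 / f                     # approximately unbiased for sigma^2 = 9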

5 Concluding remarks

In this paper, we studied the finite-sample properties of the sample median and Hodges-Lehmann estimators for location and of the sample median absolute deviation and Shamos estimators for scale. We first obtained closed-form finite-sample breakdown points for these robust location and scale estimators. We then calculated the unbiasing factors and relative efficiencies of the MAD and Shamos estimators through extensive Monte Carlo simulations for sample sizes up to 100. The numerical study showed that the unbiasing factor significantly improves the finite-sample performance. In addition, we provided predicted values for the unbiasing factors, obtained by the least squares method, which can be used when the sample size is more than 100. To facilitate the implementation of the proposed methods, we have developed an R package, which will be made available on the authors' personal web page.

Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. NRF-2017R1A2B4004169).

References

  • Donoho and Huber (1983) Donoho, D. and Huber, P. J. (1983). The notion of breakdown point. In A Festschrift for Erich L. Lehmann, Wadsworth Statist./Probab. Ser., pages 157–184. Wadsworth, Belmont, CA.
  • Hampel (1974) Hampel, F. R. (1974). The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69:383–393.
  • Hayes (2014) Hayes, K. (2014). Finite-sample bias-correction factors for the median absolute deviation. Communications in Statistics: Simulation and Computation, 43:2205–2212.
  • Hodges and Lehmann (1963) Hodges, J. L. and Lehmann, E. L. (1963). Estimates of location based on rank tests. Annals of Mathematical Statistics, 34:598–611.
  • Lévy-Leduc et al. (2011) Lévy-Leduc, C., Boistard, H., Moulines, E., Taqqu, M. S., and Reisen, V. A. (2011). Large sample behaviour of some well-known robust estimators under long-range dependence. Statistics, 45:59–71.
  • Rousseeuw and Croux (1993) Rousseeuw, P. and Croux, C. (1993). Alternatives to the median absolute deviation. Journal of the American Statistical Association, 88:1273–1283.
  • Serfling (2011) Serfling, R. J. (2011). Asymptotic relative efficiency in estimation. In Lovric, M., editor, Encyclopedia of Statistical Science, Part I, pages 68–82. Springer-Verlag, Berlin.
  • Shamos (1976) Shamos, M. I. (1976). Geometry and statistics: Problems at the interface. In Traub, J. F., editor, Algorithms and Complexity: New Directions and Recent Results, pages 251–280. Academic Press, New York.
  • Williams (2011) Williams, D. C. (2011). Finite sample correction factors for several simple robust estimators of normal standard deviation. Journal of Statistical Computation and Simulation, 81:1697–1702.

Appendix: Tables and Figures

n median and MAD HL1 and Shamos HL2 HL3
2 0.0000000 0.0000000 0.0000000 0.0000000
3 0.3333333 0.0000000 0.0000000 0.0000000
4 0.2500000 0.0000000 0.2500000 0.2500000
5 0.4000000 0.2000000 0.2000000 0.2000000
6 0.3333333 0.1666667 0.1666667 0.1666667
7 0.4285714 0.1428571 0.2857143 0.2857143
8 0.3750000 0.2500000 0.2500000 0.2500000
9 0.4444444 0.2222222 0.2222222 0.2222222
10 0.4000000 0.2000000 0.3000000 0.2000000
11 0.4545455 0.2727273 0.2727273 0.2727273
12 0.4166667 0.2500000 0.2500000 0.2500000
13 0.4615385 0.2307692 0.2307692 0.2307692
14 0.4285714 0.2142857 0.2857143 0.2857143
15 0.4666667 0.2666667 0.2666667 0.2666667
16 0.4375000 0.2500000 0.2500000 0.2500000
17 0.4705882 0.2352941 0.2941176 0.2352941
18 0.4444444 0.2777778 0.2777778 0.2777778
19 0.4736842 0.2631579 0.2631579 0.2631579
20 0.4500000 0.2500000 0.2500000 0.2500000
21 0.4761905 0.2380952 0.2857143 0.2857143
22 0.4545455 0.2727273 0.2727273 0.2727273
23 0.4782609 0.2608696 0.2608696 0.2608696
24 0.4583333 0.2500000 0.2916667 0.2916667
25 0.4800000 0.2800000 0.2800000 0.2800000
26 0.4615385 0.2692308 0.2692308 0.2692308
27 0.4814815 0.2592593 0.2962963 0.2592593
28 0.4642857 0.2857143 0.2857143 0.2857143
29 0.4827586 0.2758621 0.2758621 0.2758621
30 0.4666667 0.2666667 0.2666667 0.2666667
31 0.4838710 0.2580645 0.2903226 0.2903226
32 0.4687500 0.2812500 0.2812500 0.2812500
33 0.4848485 0.2727273 0.2727273 0.2727273
34 0.4705882 0.2647059 0.2941176 0.2647059
35 0.4857143 0.2857143 0.2857143 0.2857143
36 0.4722222 0.2777778 0.2777778 0.2777778
37 0.4864865 0.2702703 0.2702703 0.2702703
38 0.4736842 0.2631579 0.2894737 0.2894737
39 0.4871795 0.2820513 0.2820513 0.2820513
40 0.4750000 0.2750000 0.2750000 0.2750000
41 0.4878049 0.2682927 0.2926829 0.2926829
42 0.4761905 0.2857143 0.2857143 0.2857143
43 0.4883721 0.2790698 0.2790698 0.2790698
44 0.4772727 0.2727273 0.2954545 0.2727273
45 0.4888889 0.2888889 0.2888889 0.2888889
46 0.4782609 0.2826087 0.2826087 0.2826087
47 0.4893617 0.2765957 0.2765957 0.2765957
48 0.4791667 0.2708333 0.2916667 0.2916667
49 0.4897959 0.2857143 0.2857143 0.2857143
50 0.4800000 0.2800000 0.2800000 0.2800000
Table 1: Finite-sample breakdown points.
Table 2: Empirical biases of the MAD and Shamos estimators for $n = 1, \ldots, 100$ ($10^7$ replications). [The numeric entries of this table were lost in extraction and are omitted here; note that the MAD and Shamos are undefined (NA) at $n = 1$.]
n MAD (Hayes) (Williams) Shamos (Hayes) (Williams)
109 0.0038374 0.0038377 0.0038425
110 0.0037996 0.0038025 0.0038073
119 0.0034984 0.0035124 0.0035170
120 0.0034691 0.0034828 0.0034875
129 0.0032441 0.0032379 0.0032422
130 0.0032241 0.0032127 0.0032170
139 0.0029854 0.0030031 0.0030070
140 0.0029548 0.0029815 0.0029854
149 0.0028230 0.0028002 0.0028036
150 0.0028080 0.0027814 0.0027847
159 0.0026355 0.0026229 0.0026258
160 0.0026154 0.0026064 0.0026093
169 0.0024503 0.0024667 0.0024692
170 0.0024402 0.0024521 0.0024545
179 0.0023257 0.0023281 0.0023301
180 0.0023122 0.0023151 0.0023170
189 0.0021780 0.0022042 0.0022058
190 0.0021673 0.0021925 0.0021941
199 0.0020904 0.0020928 0.0020940
200 0.0020786 0.0020823 0.0020835
249 0.0016628 0.0016708 0.0016704
250 0.0016562 0.0016641 0.0016636
299 0.0013875 0.0013904 0.0013889
300 0.0013822 0.0013858 0.0013842
349 0.0012033 0.0011906 0.0011884
350 0.0012013 0.0011872 0.0011849
399 0.0010331 0.0010410 0.0010383
400 0.0010287 0.0010384 0.0010357
449 0.0009196 0.0009248 0.0009217
450 0.0009170 0.0009227 0.0009197
499 0.0008413 0.0008319 0.0008286
500 0.0008389 0.0008303 0.0008270
Table 3: Empirical biases of the MAD and Shamos estimators along with their least squares estimates based on Hayes (2014) and Williams (2011), for selected $n$ from 109 to 500 ($10^7$ replications).
n median HL1 HL2 HL3 MAD Shamos
1 1.0000 NA 1.0000 1.0000 NA NA
2 1.0000 1.0000 1.0000 1.0000 1.1000 2.2001
3 1.3463 1.0871 1.0221 1.0871 1.4372 2.3812
4 1.1930 1.0000 1.0949 1.0949 1.1680 1.6996
5 1.4339 1.0617 1.0754 1.0754 1.9809 1.8573
6 1.2882 1.0619 1.0759 1.0602 1.6859 1.7883
7 1.4736 1.0630 1.0814 1.0756 2.2125 1.6180
8 1.3459 1.0628 1.0728 1.0705 1.9486 1.5824
9 1.4957 1.0588 1.0756 1.0678 2.3326 1.5109
10 1.3833 1.0608 1.0743 1.0641 2.1072 1.4855
11 1.5088 1.0602 1.0693 1.0649 2.4082 1.4643
12 1.4087 1.0560 1.0670 1.0614 2.2112 1.4234
13 1.5195 1.0567 1.0685 1.0629 2.4570 1.4008
14 1.4298 1.0565 1.0663 1.0603 2.2848 1.3905
15 1.5249 1.0562 1.0645 1.0603 2.4952 1.3719
16 1.4457 1.0547 1.0637 1.0590 2.3412 1.3554
17 1.5302 1.0541 1.0633 1.0587 2.5217 1.3434
18 1.4585 1.0540 1.0621 1.0574 2.3846 1.3355
19 1.5333 1.0532 1.0605 1.0567 2.5447 1.3249
20 1.4702 1.0545 1.0620 1.0581 2.4185 1.3146
21 1.5383 1.0536 1.0611 1.0573 2.5611 1.3079
22 1.4770 1.0527 1.0596 1.0557 2.4475 1.3015
23 1.5420 1.0532 1.0597 1.0564 2.5758 1.2953
24 1.4850 1.0529 1.0594 1.0560 2.4699 1.2883
25 1.5438 1.0521 1.0586 1.0553 2.5873 1.2825
26 1.4896 1.0518 1.0578 1.0545 2.4886 1.2776
27 1.5462 1.0526 1.0582 1.0553 2.5960 1.2731
28 1.4954 1.0511 1.0567 1.0538 2.5030 1.2676
29 1.5476 1.0525 1.0581 1.0552 2.6070 1.2650
30 1.5005 1.0518 1.0571 1.0543 2.5199 1.2616
31 1.5482 1.0514 1.0564 1.0538 2.6132 1.2586
32 1.5057 1.0517 1.0567 1.0541 2.5335 1.2552
33 1.5516 1.0521 1.0571 1.0545 2.6208 1.2519
34 1.5091 1.0512 1.0560 1.0534 2.5442 1.2493
35 1.5515 1.0508 1.0554 1.0530 2.6285 1.2466
36 1.5123 1.0512 1.0557 1.0534 2.5545 1.2433
37 1.5531 1.0512 1.0556 1.0534 2.6332 1.2415
38 1.5148 1.0502 1.0545 1.0522 2.5637 1.2393
39 1.5550 1.0513 1.0555 1.0533 2.6344 1.2364
40 1.5173 1.0507 1.0547 1.0526 2.5720 1.2357
41 1.5532 1.0498 1.0539 1.0518 2.6403 1.2325
42 1.5206 1.0510 1.0549 1.0528 2.5780 1.2315
43 1.5552 1.0504 1.0541 1.0522 2.6436 1.2287
44 1.5224 1.0500 1.0537 1.0518 2.5869 1.2284
45 1.5568 1.0504 1.0541 1.0522 2.6477 1.2260
46 1.5240 1.0496 1.0533 1.0514 2.5904 1.2248
47 1.5570 1.0504 1.0539 1.0521 2.6511 1.2232
48 1.5249 1.0493 1.0528 1.0510 2.5960 1.2214
49 1.5562 1.0495 1.0529 1.0512 2.6537 1.2199
50 1.5267 1.0499 1.0532 1.0514 2.6014 1.2184
Table 4: The values of $\mathrm{Var}(\cdot)/\mathrm{Var}(\bar{X})$ for the median and the Hodges-Lehmann estimators, and of $\mathrm{Var}(\cdot)/\mathrm{Var}(S)$ for the MAD and Shamos estimators, $n = 1, \ldots, 50$ ($10^7$ replications).
n median HL1 HL2 HL3 MAD Shamos
51 1.5583 1.0502 1.0534 1.0517 2.6577 1.2199
52 1.5298 1.0499 1.0532 1.0515 2.6053 1.2174
53 1.5592 1.0501 1.0533 1.0517 2.6568 1.2160
54 1.5298 1.0489 1.0519 1.0503 2.6125 1.2156
55 1.5584 1.0493 1.0523 1.0508 2.6631 1.2144
56 1.5330 1.0497 1.0527 1.0512 2.6139 1.2132
57 1.5589 1.0496 1.0526 1.0510 2.6649 1.2126
58 1.5337 1.0495 1.0524 1.0509 2.6161 1.2098
59 1.5598 1.0501 1.0530 1.0515 2.6671 1.2095
60 1.5349 1.0489 1.0517 1.0503 2.6219 1.2095
61 1.5594 1.0492 1.0519 1.0505 2.6667 1.2073
62 1.5361 1.0492 1.0520 1.0505 2.6235 1.2071
63 1.5594 1.0485 1.0512 1.0498 2.6695 1.2064
64 1.5373 1.0494 1.0521 1.0507 2.6260 1.2050
65 1.5598 1.0488 1.0514 1.0500 2.6731 1.2067
66 1.5380 1.0496 1.0521 1.0508 2.6297 1.2036
67 1.5606 1.0494 1.0519 1.0506 2.6722 1.2034
68 1.5389 1.0491 1.0516 1.0503 2.6341 1.2030
69 1.5607 1.0479 1.0504 1.0491 2.6748 1.2025
70 1.5399 1.0490 1.0514 1.0502 2.6351 1.2016
71 1.5595 1.0482 1.0506 1.0494 2.6738 1.2005
72 1.5410 1.0491 1.0515 1.0503 2.6351 1.1993
73 1.5622 1.0492 1.0515 1.0503 2.6754 1.1993
74 1.5426 1.0498 1.0521 1.0510 2.6395 1.1990
75 1.5619 1.0489 1.0512 1.0500 2.6763 1.1985
76 1.5415 1.0486 1.0509 1.0497 2.6411 1.1975
77 1.5616 1.0485 1.0508 1.0496 2.6780 1.1975
78 1.5434 1.0494 1.0516 1.0505 2.6453 1.1971
79 1.5639 1.0493 1.0515 1.0504 2.6794 1.1968
80 1.5445 1.0497 1.0519 1.0508 2.6453 1.1958
81 1.5612 1.0486 1.0507 1.0496 2.6815 1.1960
82 1.5444 1.0494 1.0515 1.0504 2.6472 1.1947
83 1.5626 1.0484 1.0505 1.0494 2.6815 1.1947
84 1.5449 1.0490 1.0511 1.0500 2.6475 1.1939
85 1.5630 1.0484 1.0504 1.0494 2.6831 1.1938
86 1.5441 1.0479 1.0499 1.0489 2.6505 1.1931
87 1.5643 1.0495 1.0514 1.0504 2.6830 1.1923
88 1.5448 1.0478 1.0497 1.0487 2.6535 1.1929
89 1.5640 1.0487 1.0506 1.0496 2.6857 1.1931
90 1.5463 1.0483 1.0503 1.0493 2.6562 1.1920
91 1.5634 1.0486 1.0505 1.0495 2.6853 1.1914
92 1.5477 1.0491 1.0509 1.0500 2.6567 1.1913
93 1.5631 1.0481 1.0500 1.0490 2.6859 1.1906
94 1.5482 1.0488 1.0507 1.0497 2.6584 1.1907
95 1.5629 1.0481 1.0499 1.0490 2.6878 1.1905
96 1.5466 1.0477 1.0495 1.0486 2.6576 1.1894
97 1.5636 1.0480 1.0498 1.0489 2.6881 1.1895
98 1.5477 1.0477 1.0495 1.0486 2.6613 1.1899
99 1.5642 1.0483 1.0501 1.0492 2.6888 1.1887
100 1.5484 1.0481 1.0498 1.0489 2.6604 1.1874
Table 5: The values of $\mathrm{Var}(\cdot)/\mathrm{Var}(\bar{X})$ for the median and the Hodges-Lehmann estimators, and of $\mathrm{Var}(\cdot)/\mathrm{Var}(S)$ for the MAD and Shamos estimators, $n = 51, \ldots, 100$ ($10^7$ replications).
n median HL1 HL2 HL3 MAD Shamos
109 1.5655 1.0490 1.0506 1.0498 2.6889 1.1857
110 1.5508 1.0484 1.0500 1.0492 2.6657 1.1856
119 1.5651 1.0478 1.0492 1.0485 2.6936 1.1830
120 1.5526 1.0478 1.0493 1.0486 2.6717 1.1836
129 1.5661 1.0477 1.0490 1.0483 2.6953 1.1809
130 1.5541 1.0478 1.0492 1.0485 2.6727 1.1804
139 1.5671 1.0491 1.0503 1.0497 2.6963 1.1792
140 1.5567 1.0495 1.0508 1.0502 2.6770 1.1794
149 1.5666 1.0484 1.0496 1.0490 2.7008 1.1789
150 1.5566 1.0484 1.0496 1.0490 2.6815 1.1788
159 1.5673 1.0484 1.0495 1.0490 2.7006 1.1768
160 1.5584 1.0485 1.0495 1.0490 2.6827 1.1765
169 1.5661 1.0474 1.0485 1.0479 2.7012 1.1757
170 1.5578 1.0476 1.0486 1.0481 2.6861 1.1755
179 1.5676 1.0477 1.0487 1.0482 2.7038 1.1750
180 1.5590 1.0480 1.0490 1.0485 2.6889 1.1750
189 1.5663 1.0473 1.0483 1.0478 2.7043 1.1743
190 1.5584 1.0473 1.0482 1.0478 2.6903 1.1741
199 1.5681 1.0481 1.0490 1.0486 2.7049 1.1741
200 1.5608 1.0482 1.0491 1.0486 2.6904 1.1732
249 1.5679 1.0472 1.0479 1.0476 2.7083 1.1705
250 1.5623 1.0479 1.0486 1.0483 2.6977 1.1709
299 1.5689 1.0477 1.0483 1.0480 2.7084 1.1673
300 1.5642 1.0479 1.0485 1.0482 2.6986 1.1670
349 1.5700 1.0479 1.0484 1.0481 2.7131 1.1673
350 1.5654 1.0479 1.0484 1.0481 2.7049 1.1675
399 1.5691 1.0475 1.0479 1.0477 2.7126 1.1650
400 1.5646 1.0469 1.0474 1.0472 2.7072 1.1651
449 1.5694 1.0474 1.0478 1.0476 2.7125 1.1639
450 1.5659 1.0475 1.0479 1.0477 2.7056 1.1645
499 1.5701 1.0475 1.0479 1.0477 2.7147 1.1637
500 1.5674 1.0482 1.0486 1.0484 2.7101 1.1646
Table 6: The values of $\mathrm{Var}(\cdot)/\mathrm{Var}(\bar{X})$ for the median and the Hodges-Lehmann estimators, and of $\mathrm{Var}(\cdot)/\mathrm{Var}(S)$ for the MAD and Shamos estimators, for selected $n$ from 109 to 500 ($10^7$ replications).
n median HL1 HL2 HL3 MAD Shamos
1 1.0000 NA 1.0000 1.0000 NA NA
2 1.0000 1.0000 1.0000 1.0000 0.9091 0.4545
3 0.7427 0.9199 0.9784 0.9199 0.6958 0.4199
4 0.8382 1.0000 0.9133 0.9133 0.8562 0.5884
5 0.6974 0.9419 0.9299 0.9299 0.5048 0.5384
6 0.7763 0.9417 0.9295 0.9432 0.5932 0.5592
7 0.6786 0.9407 0.9248 0.9297 0.4520 0.6180
8 0.7430 0.9409 0.9322 0.9342 0.5132 0.6320
9 0.6686 0.9445 0.9297 0.9365 0.4287 0.6618
10 0.7229 0.9426 0.9308 0.9398 0.4746 0.6732
11 0.6628 0.9432 0.9352 0.9391 0.4153 0.6829
12 0.7098 0.9470 0.9372 0.9422 0.4522 0.7026
13 0.6581 0.9464 0.9359 0.9408 0.4070 0.7139
14 0.6994 0.9465 0.9378 0.9432 0.4377 0.7192
15 0.6558 0.9468 0.9394 0.9432 0.4008 0.7289
16 0.6917 0.9482 0.9402 0.9443 0.4271 0.7378
17 0.6535 0.9486 0.9405 0.9445 0.3966 0.7444
18 0.6856 0.9487 0.9415 0.9457 0.4194 0.7488
19 0.6522 0.9495 0.9430 0.9463 0.3930 0.7547
20 0.6802 0.9483 0.9416 0.9451 0.4135 0.7607
21 0.6501 0.9491 0.9424 0.9458 0.3905 0.7646
22 0.6770 0.9499 0.9437 0.9472 0.4086 0.7684
23 0.6485 0.9495 0.9437 0.9466 0.3882 0.7720
24 0.6734 0.9498 0.9440 0.9470 0.4049 0.7762
25 0.6478 0.9504 0.9447 0.9476 0.3865 0.7798
26 0.6713 0.9507 0.9454 0.9483 0.4018 0.7827
27 0.6468 0.9501 0.9450 0.9476 0.3852 0.7855
28 0.6687 0.9514 0.9463 0.9490 0.3995 0.7889
29 0.6462 0.9501 0.9451 0.9477 0.3836 0.7905
30 0.6664 0.9507 0.9459 0.9485 0.3968 0.7926
31 0.6459 0.9511 0.9466 0.9489 0.3827 0.7945
32 0.6641 0.9508 0.9463 0.9486 0.3947 0.7967
33 0.6445 0.9505 0.9460 0.9483 0.3816 0.7988
34 0.6627 0.9513 0.9470 0.9493 0.3931 0.8004
35 0.6445 0.9516 0.9475 0.9496 0.3804 0.8022
36 0.6612 0.9513 0.9472 0.9493 0.3915 0.8043
37 0.6439 0.9513 0.9473 0.9493 0.3798 0.8055
38 0.6601 0.9522 0.9483 0.9504 0.3901 0.8069
39 0.6431 0.9512 0.9475 0.9494 0.3796 0.8088
40 0.6591 0.9518 0.9481 0.9500 0.3888 0.8093
41 0.6438 0.9525 0.9489 0.9507 0.3787 0.8113
42 0.6576 0.9515 0.9479 0.9498 0.3879 0.8120
43 0.6430 0.9521 0.9486 0.9504 0.3783 0.8138
44 0.6569 0.9524 0.9490 0.9508 0.3866 0.8141
45 0.6423 0.9520 0.9487 0.9504 0.3777 0.8157
46 0.6562 0.9527 0.9494 0.9511 0.3860 0.8164
47 0.6422 0.9520 0.9489 0.9505 0.3772 0.8175
48 0.6558 0.9530 0.9499 0.9515 0.3852 0.8187
49 0.6426 0.9529 0.9497 0.9513 0.3768 0.8197
50 0.6550 0.9525 0.9495 0.9511 0.3844 0.8208
Table 7: Relative efficiencies of the median and Hodges-Lehmann estimators relative to the sample mean, and of the MAD and Shamos estimators relative to the sample standard deviation, under the normal distribution, $n = 1, \ldots, 50$ ($10^7$ replications).
n median HL1 HL2 HL3 MAD Shamos
51 0.6417 0.9522 0.9493 0.9508 0.3763 0.8197
52 0.6537 0.9525 0.9495 0.9510 0.3838 0.8214
53 0.6414 0.9523 0.9494 0.9509 0.3764 0.8223
54 0.6537 0.9534 0.9506 0.9521 0.3828 0.8226
55 0.6417 0.9530 0.9503 0.9517 0.3755 0.8234
56 0.6523 0.9527 0.9499 0.9513 0.3826 0.8243
57 0.6415 0.9528 0.9501 0.9514 0.3752 0.8247
58 0.6520 0.9528 0.9502 0.9515 0.3822 0.8266
59 0.6411 0.9523 0.9497 0.9510 0.3749 0.8268
60 0.6515 0.9534 0.9508 0.9521 0.3814 0.8268
61 0.6413 0.9531 0.9506 0.9519 0.3750 0.8283
62 0.6510 0.9531 0.9506 0.9519 0.3812 0.8284
63 0.6413 0.9538 0.9513 0.9526 0.3746 0.8289
64 0.6505 0.9529 0.9505 0.9517 0.3808 0.8299
65 0.6411 0.9535 0.9511 0.9523 0.3741 0.8287
66 0.6502 0.9528 0.9505 0.9516 0.3803 0.8308
67 0.6408 0.9529 0.9507 0.9518 0.3742 0.8310
68 0.6498 0.9532 0.9509 0.9521 0.3796 0.8312
69 0.6408 0.9543 0.9520 0.9532 0.3739 0.8316
70 0.6494 0.9533 0.9511 0.9522 0.3795 0.8323
71 0.6412 0.9540 0.9518 0.9529 0.3740 0.8330
72 0.6489 0.9532 0.9510 0.9521 0.3795 0.8338
73 0.6401 0.9532 0.9510 0.9521 0.3738 0.8338
74 0.6483 0.9525 0.9504 0.9515 0.3789 0.8341
75