The problem of combining many unbiased estimates $X_1,\dots,X_n$ of a parameter $\theta$ to form a single estimate arises throughout statistics. A common way to perform aggregation when the distribution of the $X_i$ is unknown is via the sample mean, $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$. However, this can often lead to poor performance if the underlying distribution is very skewed or heavy-tailed.
For example, the importance sampling estimate of an integral $\theta = \int f(x)p(x)\,dx$ is formed by taking samples $X_1,\dots,X_n$ from a proposal density $q$, letting $Z_i = f(X_i)p(X_i)/q(X_i)$ and combining the estimates using the sample mean, $\hat\theta_{IS} = \frac{1}{n}\sum_{i=1}^n Z_i$. The estimator is unbiased and guaranteed to converge almost surely to $\theta$, as long as $\mathbb{E}[Z_i]$ exists. Still, in many cases this ratio estimator has a complicated distribution with extremely high or infinite variance, making the estimates unreliable. Similar situations arise in various modern statistical procedures such as cross-validation, bagging and random forests.
If the underlying distribution of the $X_i$ were known, aggregation could be performed using maximum likelihood, which enjoys great theoretical properties. In general, both its bias and variance decrease as $n$ grows, with the bias of order $O(1/n)$ and the variance of order $O(1/n)$, so some amount of bias is introduced at the expense of variance reduction. However, since mean squared error can be decomposed as the sum of bias squared and variance, the bias component, of order $O(1/n^2)$, is asymptotically negligible compared to the variance. In the problem at hand, the distribution of the $X_i$ is not known, but one can still look for a non-parametric estimator that similarly introduces a small, asymptotically negligible bias, to obtain a significant variance reduction relative to the sample mean.
Given many unbiased estimators $X_1,\dots,X_n$, consider the following procedure:
draw $\mathbf{w}^{(b)} = (w_1^{(b)},\dots,w_n^{(b)}) \sim \mathrm{Dir}(\alpha,\dots,\alpha)$ for $b = 1,\dots,B$;
compute $S_b = \sum_{i=1}^n w_i^{(b)} X_i$, for $b = 1,\dots,B$;
estimate $\hat\theta_{BMM} = \mathrm{median}(S_1,\dots,S_B)$;
where $\mathrm{Dir}(\alpha,\dots,\alpha)$ denotes the Dirichlet distribution with an $n$-dimensional parameter vector.
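A direct implementation of the three steps above can be sketched as follows; the function name and default values are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def bayesian_median_of_means(x, alpha=1.0, n_draws=1000, rng=None):
    """Bayesian median of means: median of B Dirichlet-weighted averages.

    Illustrative helper; `alpha` is the Dirichlet concentration and
    `n_draws` the number of weight vectors B."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Step 1: draw B weight vectors w^(b) ~ Dirichlet(alpha, ..., alpha).
    w = rng.dirichlet(np.full(n, alpha), size=n_draws)   # shape (B, n)
    # Step 2: form the weighted averages S_b = sum_i w_i^(b) x_i.
    s = w @ x
    # Step 3: aggregate with the median for robustness.
    return np.median(s)
```

For a symmetric sample the estimator should land close to the sample mean, since the weighted averages are then approximately symmetric around it.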
This procedure, referred to as the Bayesian median of means, is detailed in Section 4. Since the median and mean of a symmetric distribution coincide, one can think of this estimator as symmetrizing via averaging before applying the median for added robustness. Section 4.2 studies the theoretical properties of this estimator, in particular proving that the squared bias introduced decays as $O(1/n^2)$ while the variance decays as $O(1/n)$, so asymptotically the amount of bias introduced is insignificant; finite-sample guarantees are also provided, as well as consistency results. Section 5 investigates the empirical performance of this estimator in many different settings, showing that in general the trade-off is well worth it.
The procedure only has one hyperparameter, the Dirichlet concentration level $\alpha$. When $\alpha \to 0$ and $B$ is large the Bayesian median of means approximates the sample median, whereas when $\alpha \to \infty$ it approximates the sample mean. Thus, $\alpha$ can be thought of as controlling where the procedure stands between mean and median. In Proposition 22, it is shown that the Bayesian median of means is first-order equivalent to a skewness-corrected sample mean, with the correction controlled by $\alpha$, suggesting an approximate, non-randomized version of the procedure above, dubbed the approximate Bayesian median of means:
$$\hat\theta_{n,\alpha} = \bar X_n - \frac{\hat\sigma_n \hat\gamma_n}{3(n\alpha + 2)},$$
where $\hat\sigma_n^2$ is the sample variance of $X_1,\dots,X_n$ and $\hat\gamma_n$ is the sample skewness. Section 4.3 discusses choices of $\alpha$ that give a good compromise between robustness and efficiency.
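The approximate estimator needs no resampling at all; a minimal sketch is below. The correction constant $1/(3(n\alpha+2))$ used here is the one implied by a median $\approx$ mean $- \hat\gamma\hat\sigma/6$ expansion applied to the Dirichlet-weighted mean, and is an assumption of this sketch.

```python
import numpy as np

def approx_bmm(x, alpha=1.0):
    """Approximate (non-randomized) Bayesian median of means:
    the sample mean minus a term proportional to sample sd times
    sample skewness.  The constant 1 / (3 * (n * alpha + 2)) is an
    assumed concrete choice (see lead-in)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    sigma = x.std()                      # sample sd (1/n convention)
    mu3 = np.mean((x - xbar) ** 3)       # third central moment
    gamma = mu3 / sigma ** 3 if sigma > 0 else 0.0
    return xbar - sigma * gamma / (3 * (n * alpha + 2))
```

For a symmetric sample the skewness vanishes and the estimator reduces exactly to the sample mean; for a right-skewed sample it is pulled below the mean.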
In many of the results below, $X_1,\dots,X_n$ are regarded as fixed, and the statistical procedures are analyzed conditionally. This is intended to reproduce the common setting in which the unbiased estimators have been obtained before the Bayesian median of means is used. It also implies some of the analyses hold even if the $X_i$ are not independent, which is the case for many important applications, such as cross-validation. Regarding the estimates as fixed also bypasses the issue of computation, since in some cases the computational effort of drawing Dirichlet samples could be directed to obtaining further data points, as in Example 1 below. When obtaining more points is relatively cheap, the approximate Bayesian median of means remains an attractive alternative that does not require further sampling.
Example 1 (Importance sampling).
As a first example, consider the importance sampling setting discussed above. Let $X_1,\dots,X_n$ be independent and identically distributed Exponential random variables with rate $1$, to be used in estimating the mean of an Exponential distribution with rate $\lambda$, that is, $\theta = 1/\lambda$. Given a sample $X_1,\dots,X_n$, one forms the importance sampling terms $Z_i = \lambda X_i e^{(1-\lambda)X_i}$, and estimates $\hat\theta_{IS} = \frac{1}{n}\sum_{i=1}^n Z_i$. This sample mean will be compared to the Bayesian median of means estimator, $\hat\theta_{BMM}$, and its approximation, $\hat\theta_{n,\alpha}$, formed using the procedures outlined above.
Figure 1 shows the mean squared error (MSE) and mean absolute deviation (MAD) of both procedures, for 30 equispaced values of $\lambda$, ranging from easy settings ($\lambda$ close to $1$) to complicated ones ($\lambda$ close to $0$). In particular, when $\lambda \le 1/2$ the variance of the $Z_i$ becomes infinite. Here, $n$ denotes the number of importance sampling draws and $B$ the number of Dirichlet draws. For each $\lambda$ the estimator is computed repeatedly to estimate the errors. Note the Bayesian median of means requires sampling $B$ additional Dirichlet random vectors, and this extra computational allowance could alternatively have been used to generate more points $Z_i$; the approximate algorithm, on the other hand, requires no extra sampling and exhibits similar performance.
In this simulation, the MSE of the Bayesian median of means and its approximation were smaller than the importance sampling estimator's 30 out of 30 times. The same was true for mean absolute deviation (MAD). For values of $\lambda$ in the easier settings both estimators perform comparably. As the setting becomes more difficult, it is easier to see the stability brought about by the Bayesian median of means makes it significantly better. In all 30 instances, the usual importance sampling estimator had larger standard deviation but smaller bias than both procedures, displaying the bias-variance trade-off involved. While considering values of $\lambda$ closer to $1$ should favor the usual estimator, taking 30 equispaced values of $\lambda$ over a range closer to $1$ still sees better MSE and MAD performance by the Bayesian median of means 29 and 28 times; the approximate version is still better 30 and 29 times. Section 5.3 considers more extensive simulations and real data examples, reaching similar conclusions. ∎
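One replication of this experiment can be sketched as follows. The values of $n$, $B$ and $\lambda$ below are assumed for illustration, with $\lambda$ chosen in the finite-variance regime so that a single run is stable; a full study would average the errors over many replications.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1000, 0.8            # proposal Exp(1), target Exp(lam); assumed values

# Importance sampling terms Z_i = X_i * lam * exp((1 - lam) * X_i),
# unbiased for the target mean theta = 1 / lam.
x = rng.exponential(1.0, size=n)
z = x * lam * np.exp((1.0 - lam) * x)

is_estimate = z.mean()        # plain importance sampling estimate

# Bayesian median of means over the same terms, with B = 500 draws.
w = rng.dirichlet(np.ones(n), size=500)
bmm_estimate = np.median(w @ z)
```

Repeating this across a grid of $\lambda$ values and replications reproduces the MSE/MAD comparison described above.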
Paper structure. Section 2 reviews the literature and similar attempts at using the median to add robustness to mean estimates. Section 3 introduces the general idea of symmetrizing estimates before using the median to estimate location parameters, and investigates to what degree that can help decrease mean squared error. Section 4 settles on a particular distribution for symmetrizing estimates, giving rise to the Bayesian median of means. It also analyzes its theoretical properties and gives both asymptotic and finite-sample guarantees, and presents the (non-randomized) approximate Bayesian median of means. Finally, it also considers different ways of setting $\alpha$, the only hyperparameter in the procedure. Section 5 looks at the performance of these estimators in a myriad of settings, including both real and simulated data, in particular comparing them to the sample mean. Finally, Section 6 concludes by giving further research directions.
2 Related Work
The idea of combining mean and median estimators has been visited several times in the statistical robustness literature, particularly for the estimation of location parameters in symmetric distributions. For instance, [Lai et al., 1983] propose an adaptive estimator that picks either the sample mean or median to estimate the center of a symmetric distribution, while [Damilano and Puig, 2004] and [Chan and He, 1994] investigate using linear combinations of the mean and median, with weights picked according to asymptotic criteria.
More recently, there has been intense work on the so-called median of means estimator (see [Alon et al., 1999], [Jerrum et al., 1986], [Nemirovsky and Yudin, 1983]). Given independent and identically distributed random variables $X_1,\dots,X_n$, the median of means estimator for $\theta = \mathbb{E}[X_i]$ is given by dividing the data into $k$ blocks $B_1,\dots,B_k$, each with $m = \lfloor n/k \rfloor$ elements, and estimating
$$\hat\theta_{MoM} = \mathrm{median}\big(\bar X_{B_1},\dots,\bar X_{B_k}\big), \qquad \bar X_{B_j} = \frac{1}{m}\sum_{i \in B_j} X_i,$$
with minor adjustments if $n/k$ is not an integer. For instance, [Devroye et al., 2016] discuss the mean estimation problem from a non-asymptotic perspective and show that the median of means estimator, among others, can outperform the sample mean in terms of concentration bounds: with a suitable choice of the number of blocks, it concentrates around $\theta$ at sub-Gaussian-like rates while requiring only finite variance.
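The block construction is simple to implement; the sketch below drops trailing points that do not fit evenly into the blocks, one possible form of the "minor adjustments" mentioned above.

```python
import numpy as np

def median_of_means(x, k):
    """Classical median of means: split the data into k blocks,
    average within each block, then take the median of the block
    means.  Points beyond m * k are dropped."""
    x = np.asarray(x, dtype=float)
    m = len(x) // k                      # block size
    blocks = x[: m * k].reshape(k, m)    # k blocks of m points each
    return np.median(blocks.mean(axis=1))
```

For example, splitting the points $0,\dots,11$ into three blocks gives block means $1.5$, $5.5$ and $9.5$, whose median is $5.5$.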
Such concentration cannot be achieved by the sample mean in general, unless stronger hypotheses, such as sub-Gaussianity of the $X_i$, are assumed. Variants of this estimator are further analyzed in the heavy-tailed settings of [Brownlees et al., 2015], [Hsu and Sabato, 2016] and [Bubeck et al., 2013]. The estimator was also used to combine Bayesian posterior updates in split datasets in [Minsker et al., 2014].
Unfortunately, however, there are practical challenges in using the median of means estimator. First, there is little guidance on how to pick the number of groups, which essentially amounts to the estimator's willingness to trade off bias for variance. Furthermore, in spite of its theoretical properties, the median of means underutilizes the data available by only using each datapoint once. This guarantees independence between blocks, but limits the number of means one can obtain in a given dataset. This requirement can be relaxed to a degree, but not completely (see [Joly and Lugosi, 2016]). The estimator considered here has no such restrictions, and can be viewed as a smoothed version of the median of means. Besides, the randomness introduced in sampling the blocks allows for probabilistic analyses and parameter choices conditional on the realized datapoints. A further benefit is that, unlike the median of means, its smoothed counterpart does not depend on the order of the datapoints while still being computationally tractable.
In fact, the median of means itself can be cast as a computational compromise on the celebrated Hodges-Lehmann estimator proposed in [Hodges Jr and Lehmann, 1963]:
$$\hat\theta_{HL} = \mathrm{median}\left(\frac{X_i + X_j}{2} : 1 \le i \le j \le n\right).$$
Many theoretical properties are known about it; for instance, it has an asymptotic breakdown point of $1 - 2^{-1/2} \approx 0.29$, meaning a contamination of up to $29\%$ of the data taking arbitrary values still leaves the estimator bounded (unlike the sample mean, which has an asymptotic breakdown point of $0$). However, generalizing it to $k$-averages in a straightforward way, with $k > 2$, is harder both in theoretical and computational terms.
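A brute-force computation of the Hodges-Lehmann estimator enumerates all pairwise averages; the sketch below uses the convention that includes the diagonal pairs $i = j$, matching the display above (faster $O(n \log n)$ algorithms exist, but are not needed for illustration).

```python
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann estimator: median of all pairwise averages
    (x_i + x_j) / 2 over i <= j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))       # all index pairs with i <= j
    return np.median((x[i] + x[j]) / 2.0)
```

For the sample $\{1, 2, 3\}$ the pairwise averages are $1, 1.5, 2, 2, 2.5, 3$, whose median is $2$.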
Closer in spirit to the present work is the suggestion in [Bühlmann, 2003] to compute sample means using bootstrap samples and then aggregate them via the median. This is similar to bagging, but using the median instead of the mean in assembling the averages. In particular, [Bühlmann, 2003] empirically observed that the median helped make the final estimators better in terms of MSE in non-convex problems. This work generalizes [Bühlmann, 2003] by resampling the weights used in the averaging using a Dirichlet distribution and by providing a more extensive theoretical analysis, in particular deriving a non-randomized approximation. This leads to a smoother estimator, particularly relevant in the heavy-tailed or skewed setting, and also allows for theoretical results and hyperparameter recommendations based on the literature on Dirichlet means (see [Cifarelli and Regazzini, 1990], [Cifarelli and Melilli, 2000], [Regazzini et al., 2002] and [Diaconis and Kemperman, 1996]) and, more generally, on $P$-means ([Pitman, 2018]).
3 Median of Weighted Means
Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, let $X_i : \Omega \to \mathbb{R}$, $i = 1,\dots,n$, be a collection of independent and identically distributed random variables, with $\mathbb{E}[X_i] = \theta$ and $\mathrm{Var}(X_i) = \sigma^2 < \infty$. This will be denoted below by $X_i \overset{iid}{\sim} (\theta, \sigma^2)$. Consider the problem of obtaining an estimator $\hat\theta$ for $\theta$ given observations $X_1,\dots,X_n$.
A common aggregation procedure is the sample mean, $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$. Besides retaining unbiasedness, it also possesses many satisfying theoretical properties; for example, the sample mean is the best linear unbiased estimator for the population mean, minimizes the maximum expected mean squared error among populations with bounded variance (see [Bickel and Lehmann, 1981]), is an admissible procedure in one dimension ([Joshi, 1968]), and is the maximum likelihood estimate in exponential families under independent sampling. For these reasons, as well as computational simplicity, the sample mean is a widely adopted non-parametric procedure for aggregating one-dimensional estimators.
However, in many settings the underlying distribution of the $X_i$ is very skewed or heavy-tailed, in which case the sample mean becomes highly unstable. A robust solution to the aggregation problem is to use the sample median. If the distribution is symmetric, then mean and median coincide, the sample median is still an unbiased estimator, and the median can have better asymptotic mean squared error than the mean, as in Example 3.
Example 3 (Mean and median).
Let $X_i \overset{iid}{\sim} N(\theta, \sigma^2)$, and consider the estimators $\bar X_n$ and $M_n = \mathrm{median}(X_1,\dots,X_n)$, which are unbiased for estimating $\theta$. In this case, the asymptotic mean squared error is given by the asymptotic variance. Since $n\,\mathrm{Var}(\bar X_n) \to \sigma^2$ as $n \to \infty$, and also $n\,\mathrm{Var}(M_n) \to \frac{\pi}{2}\sigma^2$ (see Proposition 11 below), it holds that, asymptotically, $\mathrm{MSE}(M_n)/\mathrm{MSE}(\bar X_n) \to \pi/2 \approx 1.57$, so the sample mean exhibits better asymptotic performance.
On the other hand, consider $X_i \overset{iid}{\sim} \mathrm{Laplace}(\theta, b)$. As before, both $\bar X_n$ and $M_n$ are unbiased for estimating $\theta$. In particular, note $M_n$ is the maximum likelihood estimator. When $n \to \infty$, $n\,\mathrm{Var}(\bar X_n) \to 2b^2$ while $n\,\mathrm{Var}(M_n) \to b^2$, so $\mathrm{MSE}(M_n)/\mathrm{MSE}(\bar X_n) \to 1/2$, and the median is asymptotically better than the sample mean. In more extreme cases, say when the underlying distribution has infinite variance, the median can be asymptotically infinitely better than the sample mean.
One can try to use a compromise between the two estimators, for instance the Hodges-Lehmann estimator, $\hat\theta_{HL}$, defined in (2), which is still unbiased for symmetric distributions. Asymptotically, with a Normal sample, $\mathrm{MSE}(\hat\theta_{HL})/\mathrm{MSE}(\bar X_n) \to \pi/3 \approx 1.05$, and with a Laplace sample, $\mathrm{MSE}(\hat\theta_{HL})/\mathrm{MSE}(\bar X_n) \to 2/3$. In fact, if $\mathcal{F}_\theta$ is the set of symmetric distributions centered at $\theta$, it can be shown that asymptotically $\sup_{F \in \mathcal{F}_\theta} \mathrm{MSE}_F(\hat\theta_{HL})/\mathrm{MSE}_F(\bar X_n) \le 125/108 \approx 1.16$ (see [Hodges et al., 1956]), so $\hat\theta_{HL}$ never fares much worse than $\bar X_n$ but can sometimes do much better. ∎
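The mean-versus-median efficiencies in this example are easy to check by simulation; the sketch below (sample size and replication count are arbitrary choices) estimates the ratio of mean squared errors for Normal and Laplace samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 4000

# Both populations are symmetric around 0, so mean and median are
# both unbiased and MSE is essentially variance.
normal = rng.normal(size=(reps, n))
laplace = rng.laplace(size=(reps, n))

# MSE(median) / MSE(mean); asymptotic values are pi/2 ~ 1.57 for the
# Normal and 1/2 for the Laplace.
ratio_normal = (np.mean(np.median(normal, axis=1) ** 2)
                / np.mean(normal.mean(axis=1) ** 2))
ratio_laplace = (np.mean(np.median(laplace, axis=1) ** 2)
                 / np.mean(laplace.mean(axis=1) ** 2))
```

The empirical ratios land near the asymptotic values quoted in the example, above $1$ for the Normal and below $1$ for the Laplace.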
In general, however, the median will be a biased estimator. For instance, if the distribution of the $X_i$ is highly skewed, the population median might be very different from $\theta$. One way to soften the bias is to take the median of weighted averages of $X_1,\dots,X_n$, which are more symmetric around $\theta$ by the Central Limit Theorem. Hence, the application of the median becomes less costly in terms of bias, while still guaranteeing increased robustness.
A general scheme for aggregating many unbiased estimators $X_1,\dots,X_n$ using vectors of probabilities $\mathbf{w}^{(b)} = (w_1^{(b)},\dots,w_n^{(b)})$, with $w_i^{(b)} \ge 0$, $\sum_{i=1}^n w_i^{(b)} = 1$, is as follows:
$$\hat\theta_w = \mathrm{median}\left(\sum_{i=1}^n w_i^{(1)} X_i,\;\dots,\;\sum_{i=1}^n w_i^{(B)} X_i\right), \qquad \mathbf{w}^{(b)} \overset{iid}{\sim} P_w,$$
where $P_w$ is a probability measure on the simplex such that $\mathbb{E}[w_i^{(b)}] = 1/n$.
Consider the following choices for $P_w$:
if $P_w$ sets $w_I = 1$ for $I$ chosen uniformly at random in $\{1,\dots,n\}$ and $B$ is sufficiently large, then $\hat\theta_w$ is essentially the sample median;
if $P_w$ is a delta mass at $(1/n,\dots,1/n)$, then $\hat\theta_w$ is the sample mean;
if $P_w$ selects sets of size $n/k$ in a uniformly chosen partition of $\{1,\dots,n\}$ and sets $w_i = k/n$ for $i$ in the selected set and $w_i = 0$ otherwise, then $\hat\theta_w$ is a randomly-partitioned median of means estimator;
if $P_w$ is $\mathrm{Dir}(\alpha,\dots,\alpha)$, then the resulting estimator is the Bayesian median of means, $\hat\theta_{BMM}$.
Hence, estimators with randomized weights give a way to interpolate between the sample mean, with low bias and possibly high variance, and the sample median, with low variance but possibly high bias. See Figure 2. The two extreme choices of $P_w$ leading to the sample mean and sample median come from degenerate Dirichlet distributions placing all mass at the centroid of the simplex ($\alpha \to \infty$), or splitting the mass equally among the vertices of the simplex ($\alpha \to 0$). More general distributions over the simplex are possible, and many of the results in this paper can readily be extended to that case (see [Newton and Raftery, 1994] and [Pitman, 2018]). In particular, distributions can be picked to encode any prior information available about the sample, such as skewness or symmetry.
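This interpolation can be observed empirically. In the sketch below (sample, concentration values and draw counts are arbitrary choices), a large concentration pushes the Dirichlet mass toward the centroid of the simplex, recovering the sample mean, while a small concentration pushes it toward the vertices, pulling the estimator down toward the sample median of a right-skewed sample.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(size=200)      # a right-skewed sample (illustrative)

def bmm(x, alpha, n_draws=4000, seed=1):
    # Median of Dirichlet-weighted averages with concentration alpha.
    w = np.random.default_rng(seed).dirichlet(np.full(len(x), alpha), n_draws)
    return np.median(w @ x)

# Large alpha: weights concentrate at (1/n, ..., 1/n) -> sample mean.
est_large = bmm(x, alpha=1000.0)
# Small alpha: weights concentrate near the simplex vertices, so the
# estimator moves toward the (smaller) sample median.
est_small = bmm(x, alpha=0.02)
```

Since the lognormal sample is right-skewed, its median lies below its mean, and the small-concentration estimate falls below the large-concentration one.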
The reduction in variance achieved by the median of weighted means can have a drastic effect on mean squared error when the $X_i$ come from a distribution with high variance. This can be understood through the following proposition.
Let $X_1,\dots,X_n$ be iid, unbiased estimates of a parameter $\theta$, with finite variance. Let $\bar X_n$ be the sample mean, and let $\hat\theta_w$ be any median of weighted means estimator (4). Then the mean squared error of $\hat\theta_w$ satisfies
$$\mathbb{E}\big[(\hat\theta_w - \theta)^2\big] = \mathbb{E}\big[(\hat\theta_w - \bar X_n)^2\big] + \mathrm{Var}(\bar X_n) + 2\rho\,\sqrt{\mathrm{Var}(\hat\theta_w - \bar X_n)\,\mathrm{Var}(\bar X_n)},$$
where the expectation is over both the data and the weights $\mathbf{w}$, and $\rho$ is the correlation between $\hat\theta_w - \bar X_n$ and $\bar X_n$.
Decompose the expectation as
$$\mathbb{E}\big[(\hat\theta_w - \theta)^2\big] = \mathbb{E}\big[(\hat\theta_w - \bar X_n)^2\big] + 2\,\mathbb{E}\big[(\hat\theta_w - \bar X_n)(\bar X_n - \theta)\big] + \mathbb{E}\big[(\bar X_n - \theta)^2\big],$$
and note the cross-term can be written
$$\mathbb{E}\big[(\hat\theta_w - \bar X_n)(\bar X_n - \theta)\big] = \mathrm{Cov}(\hat\theta_w - \bar X_n,\, \bar X_n) = \rho\,\sqrt{\mathrm{Var}(\hat\theta_w - \bar X_n)\,\mathrm{Var}(\bar X_n)},$$
where $\rho$ is the correlation between $\hat\theta_w - \bar X_n$ and $\bar X_n$, and the first equality uses $\mathbb{E}[\bar X_n - \theta] = 0$. Putting it together, and since $\bar X_n$ is unbiased, $\mathbb{E}[(\bar X_n - \theta)^2] = \mathrm{Var}(\bar X_n)$, so
$$\mathbb{E}\big[(\hat\theta_w - \theta)^2\big] = \mathbb{E}\big[(\hat\theta_w - \bar X_n)^2\big] + \mathrm{Var}(\bar X_n) + 2\rho\,\sqrt{\mathrm{Var}(\hat\theta_w - \bar X_n)\,\mathrm{Var}(\bar X_n)}.$$
Thus, the mean squared error of $\hat\theta_w$ is given by a term measuring the discrepancy between $\hat\theta_w$ and $\bar X_n$, plus the variance of $\bar X_n$, plus a correlation term measuring the variance reduction achieved by $\hat\theta_w$ with respect to $\bar X_n$. If the $X_i$ come from a distribution with high variance, then generally $\rho < 0$, and the last term on the right-hand side is negative and large in magnitude. If, on the other hand, the $X_i$ come from a distribution with low variance, then $\hat\theta_w \approx \bar X_n$, and there should be no significant differences between the mean squared errors of the two estimators.
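Since the decomposition is an algebraic identity, it can be confirmed numerically term by term. The sketch below uses Dirichlet$(1,\dots,1)$ weights as an assumed concrete choice of $P_w$ and an Exponential population; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.0, 50, 500    # true mean, sample size, replications

that = np.empty(reps)            # median of weighted means per replication
xbar = np.empty(reps)            # sample mean per replication
for r in range(reps):
    x = rng.exponential(theta, size=n)          # right-skewed sample
    xbar[r] = x.mean()
    w = rng.dirichlet(np.ones(n), size=200)     # weight draws
    that[r] = np.median(w @ x)

mse_w = np.mean((that - theta) ** 2)
# discrepancy term + cross (correlation) term + Var(xbar) term
decomp = (np.mean((that - xbar) ** 2)
          + 2 * np.mean((that - xbar) * (xbar - theta))
          + np.mean((xbar - theta) ** 2))
```

The two quantities agree up to floating-point error, since the decomposition holds exactly for every replication.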
4 Bayesian Median of Means
Now consider the median of weighted means estimator obtained by sampling the probabilities in (3) from a $\mathrm{Dir}(\alpha,\dots,\alpha)$ distribution. Recall this scheme is broad enough to interpolate between median and mean, while still being analytically tractable, and is given by:
draw $\mathbf{w}^{(b)} \sim \mathrm{Dir}(\alpha,\dots,\alpha)$ for $b = 1,\dots,B$;
compute $S_b = \sum_{i=1}^n w_i^{(b)} X_i$, for $b = 1,\dots,B$;
estimate $\hat\theta_{BMM} = \mathrm{median}(S_1,\dots,S_B)$.
This estimator is ‘Bayesian’ in the sense that the probabilities are being generated according to the Bayesian bootstrap. Indeed, consider weights $\mathbf{p} = (p_1,\dots,p_n)$ with $p_i \ge 0$ and $\sum_{i=1}^n p_i = 1$, and assume the underlying model in which each observation takes the value $x_i$ with probability $p_i$, so the likelihood of the sample is $\prod_{i=1}^n p_i$ and the posterior distribution is
$$\pi(\mathbf{p} \mid x_1,\dots,x_n) \propto \pi(\mathbf{p}) \prod_{i=1}^n p_i.$$
With a non-informative prior, $\pi(\mathbf{p}) \propto \prod_{i=1}^n p_i^{-1}$, the posterior distribution becomes $\mathrm{Dir}(1,\dots,1)$, which amounts to step 1 above with $\alpha = 1$. In particular, it is the uniform distribution over the simplex.
This gives a posterior on the sums $S = \sum_{i=1}^n p_i x_i$ for a randomly sampled probability vector $\mathbf{p}$ and fixed $x_1,\dots,x_n$. Summarizing the posterior distribution by minimizing the absolute loss, for robustness, yields the Bayesian median of means.
Note the usual bootstrap would sample $n\mathbf{w} \sim \mathrm{Multinomial}(n, (1/n,\dots,1/n))$. This has the same mean as $\mathrm{Dir}(1,\dots,1)$ weights, and nearly the same variance. The main reason for using the Dirichlet distribution is that it confers additional smoothness to the estimator, which is important in establishing theoretical results, in particular asymptotic expansions (see Section 4.2.3).
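The claim about matching means and nearly matching variances is elementary to verify: both weight schemes have $\mathbb{E}[w_i] = 1/n$, and their per-coordinate variances differ only by a factor $(n+1)/n$. A quick sketch (with an arbitrary $n$):

```python
import numpy as np

n = 20
# Classical bootstrap weights: n*w ~ Multinomial(n, (1/n, ..., 1/n)),
# so Var(w_i) = (n - 1) / n^3.
var_boot = (n - 1) / n**3
# Bayesian bootstrap weights: w ~ Dirichlet(1, ..., 1), so
# Var(w_i) = (n - 1) / (n^2 * (n + 1)).
var_dir = (n - 1) / (n**2 * (n + 1))

# The two variances agree up to a factor (n + 1) / n.
ratio = var_boot / var_dir

# Quick Monte Carlo confirmation for the Dirichlet case.
w = np.random.default_rng(0).dirichlet(np.ones(n), size=100_000)
var_dir_mc = w[:, 0].var()
```

As $n$ grows the ratio tends to $1$, so the two resampling schemes are second-order indistinguishable coordinate-wise; the difference lies in the smoothness of the Dirichlet weights.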
How can this estimator improve on the sample mean? Proposition 22 below shows that, under some regularity assumptions and assuming the number of Dirichlet draws $B$ is large, one can approximate
$$\hat\theta_{BMM} \approx \bar X_n - \frac{\hat\sigma_n \hat\gamma_n}{3(n\alpha + 2)},$$
where $\hat\sigma_n$ is the sample standard deviation and $\hat\gamma_n$ is the sample skewness of $X_1,\dots,X_n$, so to first order the Bayesian median of means is a corrected sample mean, with the correction proportional to the sample standard deviation and skewness. This is reminiscent of a shrinkage-type estimator.
Indeed, if the $X_i$ are coming from a symmetric distribution, then $\hat\gamma_n \approx 0$ and the correction vanishes, so consider the case where the underlying distribution is very right-skewed, as in the first plot in Figure 3. Assume two samples of points are obtained, as represented in red and blue in the figure. Note the blue sample happens to include a very large, but unlikely, sample point.
The second plot in Figure 3 shows the result of applying both the sample mean and the Bayesian median of means to the red and blue samples. Since the red sample is virtually symmetric, $\hat\gamma_n \approx 0$ and so $\hat\theta_{BMM} \approx \bar X_n$. For the blue sample, the large sample point means the sample mean overestimates $\theta$ by a lot; in this case, both $\hat\sigma_n$ and $\hat\gamma_n$ are large and positive, so $\hat\theta_{BMM} < \bar X_n$, as represented in Figure 3.
This schematic example explains the mechanics behind the Bayesian median of means: it introduces some bias for samples with large sample variance and skewness, and in doing so greatly reduces variance. When the underlying distribution of the $X_i$ has small variance or is symmetric, $\hat\theta_{BMM}$ and $\bar X_n$ are virtually indistinguishable; however, when the underlying distribution has heavy tails or is very skewed, the estimators disagree (this phenomenon was also observed in Example 1). Hence, the Bayesian median of means matches the sample mean's performance for samples coming from approximately Normal distributions, but incurs corrections once this is no longer the case.
To establish the theoretical properties of this estimator, Section 4.1 first looks at the distribution of the resampled averages $S_b$, $b = 1,\dots,B$. Section 4.2 establishes finite-sample guarantees for the Bayesian median of means, as well as asymptotic approximations. Finally, Section 4.3 considers the issue of picking a value for the hyperparameter $\alpha$.
4.1 Conditional Moments and Density for $S_b$
To understand the behavior of the Bayesian median of means, it is first necessary to study the conditional distribution of $S_b = \sum_{i=1}^n w_i^{(b)} X_i$ given the data. In particular, it will be important to characterize the moments and distribution of $S_b$ (the subscript $b$ will be dropped when the meaning is clear). Calculating the first two conditional moments of $S$ is straightforward:
$$\mathbb{E}[S \mid X_1,\dots,X_n] = \bar X_n, \qquad \mathrm{Var}(S \mid X_1,\dots,X_n) = \frac{\hat\sigma_n^2}{n\alpha + 1},$$
where $\hat\sigma_n^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X_n)^2$, so conditionally $S$ concentrates around the sample mean as $\alpha$ grows.
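These two conditional moments are easy to confirm by simulation; the following sketch (with an arbitrary fixed set of estimates and an arbitrary $\alpha$) checks them against Dirichlet draws.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=30)     # a fixed set of estimates (illustrative)
n, alpha = len(x), 2.0

# Resampled weighted averages S = w @ x with w ~ Dirichlet(alpha, ..., alpha).
w = rng.dirichlet(np.full(n, alpha), size=200_000)
s = w @ x

mean_theory = x.mean()                    # E[S | x] = sample mean
var_theory = x.var() / (n * alpha + 1)    # Var(S | x) = sigma_hat^2 / (n*alpha + 1)
```

The Monte Carlo mean and variance of the draws match the closed forms to within simulation error.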
Higher conditional moments can be found in a recursive fashion.
Let $\mathbf{w} \sim \mathrm{Dir}(\alpha,\dots,\alpha)$ and let $x_1,\dots,x_n$ be a set of fixed estimates. The linear combination of Dirichlet components $S = \sum_{i=1}^n w_i x_i$ has moments given by the recursion
$$m_k = \frac{\alpha}{(A)_k} \sum_{r=0}^{k-1} \binom{k-1}{r}\, r!\; p_{r+1}\, (A)_{k-1-r}\, m_{k-1-r}, \qquad m_0 = 1,$$
where $m_k = \mathbb{E}[S^k \mid x_1,\dots,x_n]$, $A = n\alpha$, $p_m = \sum_{i=1}^n x_i^m$, and $(A)_k = A(A+1)\cdots(A+k-1)$ denotes the rising factorial.
First, recall that if $\mathbf{w} \sim \mathrm{Dir}(\alpha_1,\dots,\alpha_n)$ with $A = \sum_{j=1}^n \alpha_j$, then, for any integrable $g$,
$$\mathbb{E}[w_i\, g(\mathbf{w})] = \frac{\alpha_i}{A}\,\mathbb{E}_{+e_i}[g(\mathbf{w})], \tag{12}$$
where $\mathbb{E}_{+e_i}$ denotes expectation under $\mathrm{Dir}(\alpha_1,\dots,\alpha_i+1,\dots,\alpha_n)$, and that under this updated distribution $\mathbf{w} \overset{d}{=} (1-V)\mathbf{w}' + V e_i$, with $\mathbf{w}' \sim \mathrm{Dir}(\alpha_1,\dots,\alpha_n)$ and $V \sim \mathrm{Beta}(1, A)$ independent. Using $\mathbb{E}_x$ below to denote the expectation conditional on $x_1,\dots,x_n$, write
$$\mathbb{E}_x[S^k] = \mathbb{E}_x\Big[\Big(\sum_{i=1}^n w_i x_i\Big) S^{k-1}\Big] = \sum_{i=1}^n x_i\, \mathbb{E}_x[w_i S^{k-1}].$$
Using (12), this becomes
$$\mathbb{E}_x[S^k] = \frac{\alpha}{A} \sum_{i=1}^n x_i\, \mathbb{E}_x\big[\big((1-V)S + V x_i\big)^{k-1}\big] = \frac{\alpha}{A} \sum_{i=1}^n x_i \sum_{r=0}^{k-1} \binom{k-1}{r}\, x_i^r\, \mathbb{E}\big[V^r(1-V)^{k-1-r}\big]\, m_{k-1-r},$$
where the binomial expansion uses the independence of $V$ and $S$. Since $V \sim \mathrm{Beta}(1, A)$, the expectation in the second sum is
$$\mathbb{E}\big[V^r(1-V)^{k-1-r}\big] = \frac{r!\, A\, (A)_{k-1-r}}{(A)_k},$$
and summing over $i$ collects the power sums $\sum_{i=1}^n x_i^{r+1} = p_{r+1}$. Putting it all together,
$$m_k = \frac{\alpha}{(A)_k} \sum_{r=0}^{k-1} \binom{k-1}{r}\, r!\; p_{r+1}\, (A)_{k-1-r}\, m_{k-1-r}.$$
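In code, a recursion of this type can be implemented and sanity-checked against the closed-form low-order moments: the first moment must equal the sample mean, the second must reproduce the conditional variance $\hat\sigma_n^2/(n\alpha+1)$, and the third central moment must equal $2\hat\mu_3/((A+1)(A+2))$. The exact recursion below is derived from the Gamma representation of the Dirichlet and is an assumption of this sketch.

```python
import numpy as np
from math import comb, factorial

def dirichlet_mean_moments(x, alpha, kmax):
    """Raw moments E[S^k], k = 0, ..., kmax, of S = sum_i w_i x_i with
    w ~ Dirichlet(alpha, ..., alpha), via the recursion

        E[S^k] = (alpha / (A)_k) * sum_{r=0}^{k-1} C(k-1, r) * r!
                 * p_{r+1} * (A)_{k-1-r} * E[S^{k-1-r}],

    with A = n * alpha, power sums p_m = sum_i x_i^m, and rising
    factorial (A)_k = A (A+1) ... (A+k-1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    A = n * alpha
    p = [float(np.sum(x ** m)) for m in range(kmax + 1)]  # power sums p_0..p_kmax
    rising = [1.0]                                        # (A)_0, (A)_1, ...
    for k in range(1, kmax + 1):
        rising.append(rising[-1] * (A + k - 1))
    m = [1.0]                                             # m_0 = 1
    for k in range(1, kmax + 1):
        s = sum(comb(k - 1, r) * factorial(r) * p[r + 1]
                * rising[k - 1 - r] * m[k - 1 - r]
                for r in range(k))
        m.append(alpha * s / rising[k])
    return m
```

The checks below are exact (no Monte Carlo), since the low-order moments have closed forms.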