Combining posterior samples from multiple subsets
With the rapidly growing scale of statistical problems, subset-based communication-free parallel MCMC methods are a promising direction for large-scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full-data posterior samples by combining the posterior draws from independent subset MCMC chains, and thus enjoys higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choosing their values. Simulation studies show that the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated from subsets, including averaging and kernel smoothing.
The explosion in the collection of, and interest in, big data in recent years has brought new challenges to modern statistics. Bayesian analysis, which has benefited from the ease and generality of sampling-based inference, now suffers from the huge computational demand of posterior sampling algorithms, such as Markov chain Monte Carlo (MCMC), in large-scale settings. MCMC faces several bottlenecks in big data problems: the increasing expense of likelihood calculations, the need to update latent variables specific to each sampling unit, and worsening mixing as the posterior becomes more concentrated with large sample sizes.
To accelerate computation, efforts have focused in three main directions. The first is to parallelize computation of the likelihood at each MCMC iteration (Agarwal and Duchi, 2012; Smola and Narayanamurthy, 2010). Under conditional independence, calculations can be conducted separately for mini-batches of data stored on different machines, with results fed back to a central processor. This approach requires communication within each iteration, which limits overall speed. A second strategy accelerates expensive gradient calculations in Langevin and Hamiltonian Monte Carlo (Neal, 2010) using stochastic approximation based on random mini-batches of data (Welling and Teh, 2011; Ahn et al., 2012). A third direction is motivated by the independent product equation (1) from Bailer-Jones and Smith (2011). The data are partitioned into mini-batches, MCMC is run independently for each batch without communication, and the chains are then combined to mimic draws from the full-data posterior distribution. By running independent MCMC chains, this approach bypasses communication costs until the combining step and increases the MCMC mixing rate, as the subset posteriors are based on smaller sample sizes and hence effectively annealed. The main open question for this approach is how to combine the chains to obtain an accurate approximation.
Neiswanger et al. (2013) instead make use of kernel density estimation to approximate the subset posterior densities, and then approximate the true posterior density following (1). Based on our experiments, these methods have adequate performance only in specialized settings, and exhibit poor performance in many other cases, such as when the parameter dimensionality increases and large-sample Gaussian approximations are inadequate. We propose a new combining method based on the independent product equation (1), which attempts to address these problems.
In Section 2, we first describe problems that arise in using (1) naively, briefly state the motivation for our method, and provide theoretical justification for the approximation error. In Section 3 we then describe specific algorithms and discuss the choice of tuning parameters. Section 4 assesses the method using extensive examples. Section 5 contains a discussion, and proofs are included in the appendix.
The fundamental idea for the new method is as follows. For a parametric model with parameter θ, assume the data X contain conditionally independent observations, which are partitioned into m non-overlapping subsets X_1, …, X_m. The following relationship holds between the posterior distribution for the full data set and the posterior distributions for each subset,

f(θ | X) ∝ π(θ) ∏_{i=1}^m p(X_i | θ),    f_i(θ | X_i) ∝ π_i(θ) p(X_i | θ),

where π(θ) is the prior distribution for the full data set and π_i(θ) is that for subset i. As we will only be obtaining draws from the subset posteriors to approximate draws from the full-data posterior, we have flexibility in the choice of π_i. If we further require ∏_{i=1}^m π_i(θ) = π(θ), the above equation can be reformulated as

f(θ | X) ∝ ∏_{i=1}^m f_i(θ | X_i),    (1)
which we refer to as the independent product equation. This equation indicates that, under the independence assumption, the posterior density of the full data can be represented by the product of subset posterior densities if the subsets together form a partition of the original data set. However, despite this concise relationship, sampling from a product of densities remains difficult. Scott et al. (2013) use a convolution product to approximate this equation, resulting in an averaging method. This approximation is adequate when the subset posteriors are close to Gaussian, as is expected to hold in many parametric models for sufficiently large subset sample sizes due to the Bernstein-von Mises theorem (Van der Vaart, 1998).
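The product relationship that underlies these combining strategies can be checked concretely in a conjugate Gaussian model, where every quantity is available in closed form. The sketch below is ours, for illustration only: x_j ~ N(theta, sigma2) with a N(0, tau2) prior, and each subset prior chosen as pi(theta)^(1/m) so that the subset priors multiply back to the full prior.

```python
import random

# Toy check of the independent product equation in a conjugate Gaussian
# model: x_j ~ N(theta, sigma2) with a N(0, tau2) prior on theta. Each of
# the m subsets receives the prior pi(theta)^(1/m) = N(0, m * tau2).
# (Illustrative sketch; the helper names are ours.)
def posterior(data, sigma2, prior_var):
    prec = 1.0 / prior_var + len(data) / sigma2   # posterior precision
    return (sum(data) / sigma2) / prec, prec      # posterior mean, precision

def product_of_gaussians(params):
    # product of Gaussian densities: precisions add, means combine
    # precision-weighted
    prec = sum(p for _, p in params)
    mean = sum(mu * p for mu, p in params) / prec
    return mean, prec

random.seed(1)
sigma2, tau2, m = 1.0, 10.0, 5
data = [random.gauss(0.5, 1.0) for _ in range(200)]
subsets = [data[i::m] for i in range(m)]

full = posterior(data, sigma2, tau2)
combined = product_of_gaussians(
    [posterior(x, sigma2, m * tau2) for x in subsets])
# combined matches the full-data posterior exactly in this conjugate case
```

In this Gaussian case, the precision-weighted combination coincides with inverse-variance weighted averaging of subset draws, which illustrates why averaging-based combiners are accurate when the subset posteriors are close to Gaussian.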
Another intuitive way to apply (1) is to use kernel based non-parametric density estimation (kernel smoothing). Using kernel smoothing, one obtains a closed form approximation to each subset posterior, with these approximations multiplied together to approximate the full data posterior. This idea has been implemented recently by Neiswanger et al. (2013), but suffers from several drawbacks.
Curse of dimensionality in the number of parameters p. It is well known that kernel density estimation breaks down as p increases, with the sample size (the number of posterior samples in this case) needing to increase exponentially in p to maintain the same level of approximation accuracy.
Subset posterior disconnection. Because of the product form, the performance of the approximation to the full-data posterior depends on the overlapping region of the subset posteriors, which is most likely to lie in the tail area of a subset posterior distribution. Hence, slight deviations in light-tailed approximations to the different subset posteriors can lead to a poor approximation of the full-data posterior (see the right panel of Fig. 2).
Mode misspecification. For a multimodal posterior, averaging (noting that the component mean of the kernel smoothing method is the average of subset draws) can collapse different modes, leading to unreliable estimates.
To ameliorate these problems, we propose a different method for parallelizing MCMC. This new method, designated the Weierstrass sampler, is motivated by the Weierstrass transform, which is related to kernel smoothing but from a different perspective. Our approach has good performance even in cases in which large-sample normality of the posterior does not hold. In the rest of the article, we will use f to denote a general posterior density and f_i for subset posteriors, in order to match the typical notation used with the Weierstrass transform.
The key to our Weierstrass sampler lies in the Weierstrass transform,

(W_h f)(θ) = ∫ (2πh²)^(-1/2) exp{-(θ - t)²/(2h²)} f(t) dt,

which was initially introduced by Weierstrass (1885). The original proof shows that W_h f converges to f pointwise as h tends to zero. Our method approximates the subset densities via the Weierstrass transformation. We avoid inheriting the problems of the kernel smoothing method by directly modifying the targeted sampling distribution instead of estimating it from the subset posterior draws. In particular, we attempt to sample approximate draws directly from the transformed densities instead of the original subset posterior distributions. Applying the Weierstrass transform with bandwidth h_i to the i-th subset posterior f_i, the full-data posterior can be approximated as

f(θ | X) ∝ ∏_{i=1}^m f_i(θ) ≈ ∏_{i=1}^m (W_{h_i} f_i)(θ) = ∫ ⋯ ∫ ∏_{i=1}^m K_{h_i}(θ - t_i) f_i(t_i) dt_1 ⋯ dt_m,    (2)

where K_{h_i} denotes the Gaussian kernel with bandwidth h_i. The right-hand side can be viewed as the marginal distribution of the random variable θ, derived from its joint distribution with the random variables t_1, …, t_m, with joint density

g(θ, t_1, …, t_m) ∝ ∏_{i=1}^m K_{h_i}(θ - t_i) f_i(t_i).
The original subset posteriors appear as components of this joint distribution, enabling subset-based posterior sampling. Moreover, the conditional distribution of the target random variable given the subset random variables is simply Gaussian.
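To make the transform itself concrete, the sketch below evaluates W_h f numerically for a standard normal f and shows the pointwise convergence as h shrinks. The grid limits and step counts are arbitrary choices of ours.

```python
import math

# Numerical illustration of the Weierstrass transform
#   (W_h f)(theta) = ∫ (2*pi*h^2)^(-1/2) exp(-(theta - t)^2 / (2 h^2)) f(t) dt,
# approximated by a midpoint rule on [-10, 10].
def weierstrass_transform(f, theta, h, lo=-10.0, hi=10.0, steps=4000):
    dx = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        t = lo + (k + 0.5) * dx
        kern = math.exp(-(theta - t) ** 2 / (2.0 * h * h))
        kern /= math.sqrt(2.0 * math.pi * h * h)
        total += kern * f(t) * dx
    return total

def f(x):  # target density: standard normal
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

approx_wide = weierstrass_transform(f, 0.0, h=1.0)    # heavy smoothing
approx_fine = weierstrass_transform(f, 0.0, h=0.05)   # h close to zero
# approx_fine is much closer to f(0) than approx_wide, consistent with
# pointwise convergence of W_h f to f as h -> 0
```

For a Gaussian target, the transform has a closed form (W_h smooths N(0,1) into N(0, 1 + h²)), which the numerical values above reproduce.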
The original Weierstrass transformation was stated in terms of the Gaussian kernel. For flexibility, we relax this restriction by broadening the choice of kernel functions, justifying the generalization through Lemma 1 (see appendix). Following the previous section, the posterior density can be approximated as

f(θ | X) ∝ ∏_{i=1}^m f_i(θ) ≈ ∫ ⋯ ∫ ∏_{i=1}^m K_{h_i}(θ - t_i) f_i(t_i) dt_1 ⋯ dt_m,    (3)

where K_{h_i} is now an arbitrary kernel density with bandwidth h_i, and the last term can again be viewed as the marginal distribution of the random variable θ, derived from its joint distribution with the random variables t_1, …, t_m and joint density g(θ, t_1, …, t_m) ∝ ∏_{i=1}^m K_{h_i}(θ - t_i) f_i(t_i). If this joint density is proper (as established in Theorem 1), one may sample from this distribution, utilizing the subset samplers as components. The details will be discussed in the next section.
This section focuses on quantifying the approximation error of (3). The results are stated for both one-dimensional and multivariate models. The detailed derivation is provided only for the one-dimensional case in the appendix, as the multivariate derivation proceeds along identical lines.
A Hölder smooth density function (for the definition see Lemma 1) is always bounded on the real line. Let M_i denote the maximum value of the i-th subset posterior density. We have the following result for the one-dimensional case.
If the posterior densities and the kernel functions satisfy the conditions in Lemma 1, i.e., the posterior density is at least second-order differentiable and the kernel function has finite variance, then the distribution defined in (3) is proper, and there exists a positive constant such that, for all sufficiently small bandwidths h_i, the total variation distance between the posterior distribution and the approximation is bounded by a term of order ∑_{i=1}^m h_i², up to normalizing constants and smoothness constants defined in the appendix.
For multivariate distributions, the kernel variance should be substituted by the kernel covariance and we have a similar result.
(Multivariate case) If the p-variate posterior densities and the kernel functions satisfy the conditions in Lemma 2, then for sufficiently small kernel covariances the total variation distance between the posterior and the approximated density obeys an analogous bound, where the normalizing constants and the remaining quantities are defined as in Theorem 1.
The constants in the error bound of Theorem 1 do not vary much even as the sample size increases to infinity; in fact, they converge to fixed constants in probability (see the discussion in the appendix after the proof of Theorem 1). As a result, the choice of the tuning parameters is independent of the sample size, which is a desirable property.
Since Theorem 1 guarantees that the approximation error remains at a reasonable level, the approximation (3) can be used for sampling. In this section, a refinement sampler is proposed. For convenience we focus on the Gaussian kernel unless specified otherwise, but modifications to other kernels are straightforward. The algorithm is referred to as a Weierstrass refinement sampler, because samples from an initial rough approximation to the posterior (obtained via Laplace, variational approximation or other methods) are refined using information from parallel draws from the different subset posteriors within a Weierstrass approximation. Typically, the initial rough approximation can be obtained using parallel computing algorithms; for example, Laplace and variational algorithms are parallelizable.
Equation (3) can be used to obtain a Gibbs sampler. For the Gaussian kernel, the density in (3) can be reformulated into (2), which can then be used to construct a Gibbs sampler as follows (univariate case):
This approach takes the parameters in each subset as latent variables, and updates θ via the average of all latent variables. The Gibbs updating is used as a refinement tool; that is, the parameters are initially drawn from a rough approximation (Laplace, variational) and then plugged into the Gibbs sampler for a one-step update, known as a refinement step. Theorem 2 shows that refinement leads to a geometric improvement.
Assume the initial draw has density q0, which is an initial approximation to the true posterior f. By doing one step of Gibbs updating as described in (4) and (5) (with a general kernel), we obtain a new draw with density q1. Using the notation in Theorem 1, if the kernel density function is fully supported, then for any given tolerance there exists a measurable set, of probability arbitrarily close to one, on which the error of q1 is a fixed fraction of the error of q0, where the fraction is a positive value depending on the kernel but independent of q0. Furthermore, if the kernel function satisfies an additional tail condition for given constants, then the total variation distance between q1 and the target contracts by a factor strictly less than one that is independent of q0.
The proof of Theorem 2 is provided in the appendix. Though the theorem is stated for univariate distributions, the result is applicable to multivariate distributions as well. The concrete refinement sampling algorithm is described below as Algorithm 1.
There are a number of advantages to this refinement sampler. First, the method addresses the dimensionality curse (issue 1 in Section 2, which appears as inefficiency of the Gibbs sampler (4) and (5) when the bandwidth is small), with the large-sample approximation only used as an initial rough starting point that is then refined. We find in our experiments that we obtain good performance even when the true posterior is high-dimensional, multimodal and very non-Gaussian. Second, as can be seen from (5), the subset posterior densities are multiplied by a common conditional component, which brings them close to each other and limits the problem of subset posterior disconnection (issue 2 of Section 2). In addition, steps 7-10 can be fully parallelized, as each draw is an independent operation. There may be advantages to an iterative version of the algorithm, which runs more than one step of the Gibbs update on each of the initial draws (repeating Algorithm 1 several times).
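As an illustration of a single refinement step, the sketch below works out the tractable special case in which every subset posterior is Gaussian N(mu_i, s_i^2) and the kernel is Gaussian with a common bandwidth h, so both conditionals of the Gibbs update have closed forms. In actual use, the draw of each t_i would come from a short MCMC run on subset i; the Gaussian shortcut and all names here are our own.

```python
import math
import random

# One Weierstrass refinement (Gibbs) step in the all-Gaussian special case.
def refine_once(theta, mus, sds, h, rng):
    ts = []
    for mu, s in zip(mus, sds):
        # t_i | theta is Gaussian: combine subset posterior N(mu, s^2)
        # with the kernel factor K_h(theta - t_i)
        prec = 1.0 / (s * s) + 1.0 / (h * h)
        mean = (mu / (s * s) + theta / (h * h)) / prec
        ts.append(rng.gauss(mean, math.sqrt(1.0 / prec)))
    m = len(ts)
    # theta | t_1..t_m is Gaussian with precision m / h^2 and mean the
    # average of the latent variables
    return rng.gauss(sum(ts) / m, math.sqrt(h * h / m))

rng = random.Random(0)
theta = 10.0                      # deliberately poor initial approximation
for _ in range(30):
    theta = refine_once(theta, mus=[0.0] * 5, sds=[1.0] * 5, h=0.5, rng=rng)
# theta is pulled close to the product posterior centered at 0
```

Iterating the step drags a deliberately bad starting draw toward the product posterior, illustrating the kind of geometric contraction described in Theorem 2.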
The choice of tuning parameters and their relationship with the number of subsets is an important issue. On the one hand, the bandwidth h controls the approximation error for each subset posterior, so h has to be reduced as the number of subsets increases. On the other hand, h also determines the efficiency of the Gibbs sampler (how fast the Gibbs sampler can evolve the initial approximation towards the true posterior), so h might need to be chosen adequately large for efficient refinement. This argument suggests changing h during the refinement sampling process when the refinement is repeated multiple times.
According to Fukunaga's approach to choosing the bandwidth (Fukunaga, 1972), if we attempt to apply kernel density estimation directly to the posterior distribution obtained from the full data set, the optimal choice will be

h_opt² = {4 / (p + 2)}^{2/(p+4)} N^{-2/(p+4)} Σ,

where p is the number of parameters, N is the total number of posterior samples and Σ is the sample covariance of the posterior distribution (to be approximated by the inverse of the Hessian matrix at the mode). Based on this result, the starting value of the kernel variance could be chosen with magnitude comparable to the covariance of each subset posterior distribution, admitting efficient refinement in the beginning. As the refinement proceeds, the ending value should be around h_opt², indicating an accurate approximation. The tuning parameters in the middle of the process should be chosen in between. For example, for a 10-step refinement procedure, one could use the largest value for the first three steps, an intermediate value for the next five steps and the smallest value for the last two steps.
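The stepwise schedule in the example above can be written as a tiny helper. The numeric levels below are placeholders of ours rather than values computed from the Fukunaga-style rule.

```python
# Hypothetical helper for a stepwise-decreasing bandwidth schedule: hold h
# at a few decreasing levels across the refinement steps, e.g. three levels
# over a 10-step procedure as suggested in the text.
def bandwidth_schedule(levels, counts):
    # levels: decreasing bandwidth values; counts: steps spent at each level
    return [h for h, c in zip(levels, counts) for _ in range(c)]

schedule = bandwidth_schedule(levels=[1.0, 0.5, 0.1], counts=[3, 5, 2])
```

Here `schedule` yields ten bandwidths: the largest for the first three steps, an intermediate value for the next five, and the smallest for the final two.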
Algorithm 1 requires an initial approximation to the target distribution. To avoid this initialization, we propose an alternative self-adaptive algorithm, which is designated as Weierstrass rejection sampler. Because of the self-adapting feature, the algorithm suffers from certain drawbacks, which will be discussed along with possible solutions in the latter part of this section. We begin with a description of the algorithm.
The formula shown in (2) immediately suggests a rejection sampler for sampling from the joint distribution: since the Gaussian kernel terms are uniformly bounded, a joint draw of (θ, t_1, …, t_m) can be accepted with probability proportional to the product of the kernel weights. Such an approach makes use of the average of the draws from the subsets to generate further posterior draws, which is similar to the kernel smoothing method proposed by Neiswanger et al. (2013). However, with a slight modification, the Weierstrass rejection sampler can use the subset posterior draws directly as approximate samples, without averaging, and thus avoid the mode misspecification issue. The result is stated below for the univariate case.
If the subset posterior densities satisfy all the conditions stated in Theorem 1, with posterior draws available from each subset, and the second-order kernel function (which is a density function) is bounded by some positive constant, then a selected subset draw is accepted by the rejection sampler with a probability determined by the kernel weights of the remaining subsets' draws. Denoting the density of the accepted draws by f̃, the total variation distance between f̃ and the true posterior obeys a bound of the same order as in Theorem 1, with the constants defined there.
The following corollary is the multivariate version.
(Multivariate case) If the p-variate posterior densities satisfy the conditions in Corollary 3.1, with posterior draws available from each subset, and the second-order kernels are bounded by a positive constant, then the rejection sampler accepts a selected subset draw with the analogous acceptance probability, and, denoting the density of the accepted draws by f̃, the approximation error obeys the corresponding multivariate bound, with the constants defined in Theorem 1.
With the above theorem, it is easy to construct a rejection sampler: at each iteration we randomly select one draw from the pool (or select one according to some reasonable weights), and perform rejection sampling according to Theorem 3. We repeat the procedure for all iterations and then gather the accepted draws. The rejection operation is conducted on only one draw within the same iteration to avoid introducing undesirable correlation between the accepted draws.
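The iteration just described can be sketched as follows. The acceptance probability here, a product of Gaussian kernel weights normalized by the kernel maximum, is our reading of the scheme rather than a verbatim transcription of Theorem 3.

```python
import math
import random

# One iteration of a (univariate) Weierstrass-style rejection step as we
# read it: pick one subset's draw at random and accept it with probability
# given by the product of kernel weights against the other subsets' draws
# at the same iteration, so draws in the common overlap region survive.
def kernel_weight(u, h):
    return math.exp(-u * u / (2.0 * h * h))   # Gaussian kernel / its maximum

def weierstrass_reject(draws_this_iter, h, rng):
    j = rng.randrange(len(draws_this_iter))   # candidate subset draw
    t_j = draws_this_iter[j]
    prob = 1.0
    for i, t_i in enumerate(draws_this_iter):
        if i != j:
            prob *= kernel_weight(t_i - t_j, h)   # each factor <= 1
    return t_j if rng.random() < prob else None   # None means rejected
```

When the subset draws coincide, the candidate is always accepted; when they are far apart relative to h, the acceptance probability collapses, which is exactly the overlap-seeking behavior described above.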
The effectiveness of a rejection sampler is determined by its acceptance rate. For a p-variate model and m subsets, with all bandwidths equal to h, the acceptance rate of the Weierstrass rejection sampler is of order h^{p(m-1)} for adequately small h. Clearly, the acceptance rate suffers from the curse of dimensionality in both p and m, so the number of posterior samples has to increase exponentially with p and m. This is the same problem as with the kernel smoothing method discussed before. To ameliorate the dimensionality curse, we provide the following solutions.
The number of subsets is easier to tackle. It is straightforward to reduce the exponent by a pairwise combining strategy: we first combine the m subsets in a pairwise manner to obtain posterior draws on m/2 merged subsets. This process is repeated to obtain m/4 subsets and so on, until the complete data set is obtained. The whole procedure takes about log₂ m steps, and thus brings the power down from p(m - 1) to p.
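The pairwise strategy can be sketched generically; `combine_pair` below stands in for any two-subset combiner (for instance the rejection sampler), and the names are ours.

```python
# Pairwise combining: merge subset chains two at a time, halving their
# number each round, so each combining step involves only two subsets at
# the cost of about log2(m) rounds.
def pairwise_combine(subset_chains, combine_pair):
    rounds = 0
    while len(subset_chains) > 1:
        merged = []
        for k in range(0, len(subset_chains) - 1, 2):
            merged.append(combine_pair(subset_chains[k], subset_chains[k + 1]))
        if len(subset_chains) % 2 == 1:   # an odd leftover chain carries over
            merged.append(subset_chains[-1])
        subset_chains = merged
        rounds += 1
    return subset_chains[0], rounds

# with list concatenation as a stand-in combiner, 8 subsets take 3 rounds
combined, rounds = pairwise_combine([[i] for i in range(8)],
                                    lambda a, b: a + b)
```

With m = 8 the procedure finishes in three rounds, matching the log₂ m count above.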
The curse of dimensionality in the number of parameters p is less straightforward to address. If the p-variate posterior distribution has mutually independent parameters, then for each component θ_j,

f(θ_j | X) ∝ ∏_{i=1}^m f_i(θ_j | X_i),    (9)

where f(θ_j | X) and f_i(θ_j | X_i) are the posterior marginal densities of the j-th parameter on the full data set and on subset i, respectively. Equation (9) indicates that the posterior marginal distributions satisfy the independent product equation as well. Therefore, one can obtain the joint posterior distribution from the marginal posterior distributions, which are obtained by combining the subset marginal posterior distributions via Weierstrass rejection sampling. This approach avoids the dimensionality issue, since the dimension of each combining step is always equal to 1.
In general, the posterior marginal distributions do not satisfy equation (9): the product decomposes into the independent product equation plus a term arising from parameter dependence. However, inspired by this marginal combining procedure, we propose a (parameter-wise) sequential rejection sampling scheme that decomposes the whole sampling procedure into p steps, where each step is a one-dimensional conditional combining. The intuition is as follows. We first sample the first parameter from its subset marginal distributions, and combine the draws via the Weierstrass rejection sampler to obtain a combined draw θ₁*. Next, we plug θ₁* into each subset likelihood, sample the second parameter from its subset conditional distributions given θ₁*, and combine these draws to obtain θ₂*, with the associated normalizing constants depending on the value of θ₁*. We continue the procedure through the p-th parameter, obtaining one posterior draw whose density is given in (10).
The numerator of (10) is exactly the target formula (1), while the denominator serves as the importance weight. The advantage of this new scheme is that it eliminates the dimensionality curse while keeping the number of required sequential steps low (p steps). Moreover, the importance weights can be calculated easily and accurately.
Notice that the functions to be integrated are all one-dimensional. Thus, a kernel-based density estimate, combined with numerical integration, is adequate to provide an accurate evaluation of the weights. Alternatively, the weights can be estimated as a ratio of density estimates, where the numerator is obtained from kernel density estimation on the combined draws (which requires sampling more than one draw at each iteration) and the denominator is obtained similarly from the subset draws. The whole scheme is described in Algorithm 2.
Algorithm 2 produces only one simulated posterior draw; in order to obtain a given number of posterior draws, the algorithm needs to be executed in parallel on multiple machines, with each machine running sub-threads to carry out the whole procedure. It is worth noting that this scheme is also applicable to the kernel smoothing method proposed by Neiswanger et al. (2013) for overcoming the dimensionality curse: one simply substitutes step (11) in Algorithm 2 with the corresponding kernel method.
The new algorithm still involves a sequential updating structure, but the number of steps is bounded by the number of parameters p, which is different from the usual MCMC updating. A brief explanation for the effectiveness of the new algorithm is that it changes how error is accumulated: the original p-dimensional kernel approximation accumulates error multiplicatively across dimensions, while the decomposed p steps accumulate error only linearly in p, which reduces the dimensionality curse.
In this section, we illustrate the performance of the Weierstrass samplers in various setups and compare them to other partition-based methods, such as subset averaging and kernel smoothing. More specifically, we compare the performance of the following methods.
Single chain MCMC: running a single Markov chain Monte Carlo on the total data set.
Simple averaging: running independent MCMC on each subset, and directly averaging all subset posterior draws within the same iteration.
Inverse variance weighted averaging: running MCMC on all subsets, and carrying out a weighted average of all subset posterior draws within the same iteration. The weights follow w_i ∝ σ_i^{-2}, where σ_i² is the posterior variance for subset i.
Weierstrass sampler: the detailed algorithms are provided in the previous section. For the Weierstrass rejection sampler, we do not specify the value of the tuning parameter h directly; instead, we specify the acceptance rate (i.e., we first determine the acceptance rate and then calculate the corresponding h).
Non-parametric density estimation (kernel smoothing): using kernel smoothing to approximate the subset posteriors and then taking their product. Because this method is very sensitive to the choice of the bandwidth and the covariance of the kernel function when the dimension of the model is relatively high, in most cases only the procedure of combining marginal subset densities will be considered in this section.
Laplacian approximation: A Gaussian distribution with the posterior mode as the mean and the inverse of the Hessian matrix evaluated at the mode as the variance.
The models adopted in this section include logistic regression, a binomial model and a Gaussian mixture model. The performance of the methods will be evaluated in terms of approximation accuracy, computational efficiency, and some other specific measures described later. We made use of the R package "BayesLogit" (Polson et al., 2013) to fit the logistic model and wrote our own JAGS code for the Gaussian mixture model.
We assess the performance of the refinement sampler in this section. The first part will be an evaluation of the refinement property as claimed in Theorem 2 and in the second part we will compare the performance for various methods within the logistic regression framework.
We evaluate the refinement property under both a bi-modal posterior distribution and the real data. For the bi-modal posterior distribution, the two subset posterior densities are
Then according to (1), the posterior on the full data set will be roughly (omitting a tiny portion of probability mass)
The initial approximation adopted for the refinement sampling is a normal approximation matching the target's mean and variance. We trace the change of the refined densities across iterations and illustrate them in Fig 1 (left). Because the initial approximation is very different from the true target, the tuning parameter starts at a large value and then declines at a fixed rate. As shown in Fig 1 (left), the approximation has evolved to a bi-modal shape after the first iteration, and 10 steps appear adequate to obtain a reasonably good result.
Fig 1 (right) shows part of the refinement results for the real data set (described in more detail in the real data section), in which a logit model was fitted to 200,000 observations partitioned into 20 subsets. We set the initial approximation apart from the posterior mode and monitor the evolution of the refinement sampler (the tuning parameters are chosen according to (7), with the covariance chosen as the inverse of the Hessian matrix at the mode). The refinement sampler moves from the initial approximation to the truth in just 10 steps. (No posterior distribution from a full MCMC is plotted, as the sample size is too large to run a full MCMC.)
We adopt the logistic model for assessing approximation performance. Logistic regression is broadly adopted in many scientific fields for modeling categorical data and conducting classification,

P(y = 1 | x) = g(θ₀ + xᵀθ),

where g is the corresponding (logistic) link function and θ₀ is the intercept. The predictors follow a multivariate normal distribution, with a covariance matrix governed by a correlation parameter ρ that is assigned two different values (0 and 0.3) to produce two correlation levels (independent and correlated).
In this study, the model contains a fixed number of predictors and one of two sample sizes, with the observations in each case partitioned into subsets. Each coefficient is the product of a Bernoulli indicator and a continuous component, so that some coefficients are exactly zero. The reason for specifying the coefficients in this way is to demonstrate the performance of all methods in different situations (both easy and challenging): for logistic regression, the closer a coefficient is to zero, the larger the effective sample size and the better the performance of a Gaussian approximation (see Fig 2).
Because kernel density estimation is very sensitive to the choice of the kernel covariance when the dimension of the model is moderately high, we only demonstrate the performance of kernel smoothing for marginally combined posterior distributions (9) in this section. The Weierstrass rejection sampler is carried out in a similar way. These two methods will be compared more formally in the next section.
Fifty synthetic data sets were simulated for each setting. For posterior inference, we drew 20,000 samples (thinned to 2,000) after 50,000 burn-in iterations for the single MCMC chain and for each subset MCMC. For Weierstrass refinement sampling, the Laplacian approximation was adopted as the initial approximation, and the kernel variance was chosen according to (7). We conducted 10 refinement steps to obtain 2,000 refined draws (within each refinement step, we ran 100 MCMC iterations on each subset to obtain an updated draw). The results are summarized in Figs 3, 4, 5 and 6, and Table 1.
The posterior distributions of two selected parameters (one zero and one nonzero) for the different settings are illustrated in the figures. The nonzero parameter is plotted on two different scales in order to incorporate multiple densities in one plot. The numerical comparisons include the difference in the marginal distribution of each parameter, the difference in the joint posterior, and the estimation error of the parameters. We evaluate the difference in marginal distributions by the average total variation distance (upper bounded by 1) between the approximated marginal densities and the true posterior densities; the results are reported separately for nonzero and zero coefficients. Evaluating the difference between joint distributions is difficult for multivariate distributions, as one needs to accurately estimate a distance between a true joint distribution and a set of samples from an approximation. Therefore, for reference we adopt the approximate Kullback-Leibler divergence between two densities, obtained by approximating both densities by Gaussians,

KL(N(μ₀, Σ₀) ‖ N(μ₁, Σ₁)) = ½ { tr(Σ₁⁻¹ Σ₀) + (μ₁ - μ₀)ᵀ Σ₁⁻¹ (μ₁ - μ₀) - p + log(det Σ₁ / det Σ₀) },

where μ₀ and Σ₀ are the sample mean and sample covariance of the true posterior, while μ₁ and Σ₁ are those of the approximation. Finally, the error of parameter estimation is reported as the ratio between the estimation error of the approximating method and that of the true posterior mean. The average results are shown in Table 1.
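The Gaussian-approximation Kullback-Leibler divergence used here has a standard closed form; the sketch below implements the univariate case (the helper name is ours).

```python
import math

# Closed-form KL divergence between two univariate Gaussians,
#   KL( N(mu0, var0) || N(mu1, var1) ),
# used as a summary discrepancy after approximating each posterior by a
# Gaussian with its sample mean and sample variance.
def gaussian_kl(mu0, var0, mu1, var1):
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0
                  + math.log(var1 / var0))

same = gaussian_kl(0.0, 1.0, 0.0, 1.0)      # identical Gaussians give 0
shifted = gaussian_kl(0.0, 1.0, 1.0, 1.0)   # a unit mean shift gives 0.5
```

The divergence is asymmetric in its arguments, so the ordering (true posterior first, approximation second) matters when reporting results.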
Table 1 compares: W. Refinement, Laplacian, Simple Averaging, Weighted Averaging, Kernel Smoothing, W. Rejection.
In this section, we evaluate the performance of the Weierstrass rejection sampler. Theorem 3 indicates that the rejection sampler may enjoy an advantage in posterior mode searching, as it uses draws from the subsets directly, rather than any form of averaging (averaging can merge distinct modes). This property is investigated in the first part via a mixture model. In the second part, we compare the approximation accuracy of the rejection sampler and other methods through a simple binomial model.
Finite mixture models are widely used for clustering and approximation, but face well-known challenges due to non-identifiability and multi-modality. The Gibbs sampler for normal mixture models, as pointed out in Jasra et al. (2005), suffers from inefficient mode exploration, which has motivated methods such as split-merge and parallel tempering (Earl and Deem, 2005; Jasra et al., 2005; Neiswanger et al., 2013). In this section, we implement the problematic Gibbs sampler for a normal mixture model without label-switching moves, to mimic more general situations in which we do not have a specific solution for handling multiple modes (label switching is a method specific to mixture models). We then parallelize this Gibbs sampler via different subset-based methods and examine their ability to explore posterior modes.
The mixture distribution follows,
We simulate 10,000 data points from this model, divided into 10 subsets, which will be analyzed via single chain MCMC as well as parallelized algorithms. Since the model is multidimensional, we drew 2000 samples via the sequential rejection sampling described in Algorithm 2. For other samplers, we obtained 20,000 posterior draws after 20,000 iterations burn-in on each subset. The results are plotted in Fig 7.
It can be seen from Fig 7 that the Weierstrass (sequential) rejection sampling correctly recognized the posterior modes. There is one false mode, but it is also picked up by the single-chain MCMC.
In this section, the Beta-Bernoulli model is employed to test the performance of the Weierstrass rejection sampler. The dimension of this model is low and the true posterior distribution is available analytically, which makes the model appropriate for comparing rejection sampling with the kernel smoothing method. The success probability is assigned two values in this section: one corresponding to a common scenario and one corresponding to the rare-event case. We simulated 10,000 samples, which were then partitioned into 20 subsets. To obtain a conjugate posterior, we assign a beta prior and draw posterior samples for further analysis. The posterior densities for the two parameter values are shown in Fig 8.
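The rare-event behavior is easy to reproduce in a sketch (illustrative, assumed values: success probability 0.002, a Beta(1, 1) prior kept in full on every subset, and simple averaging as the combination rule). With very few successes per subset, each skewed subset posterior is pulled up by the prior, and averaging inherits that bias while the exact full-data posterior does not:

```python
import numpy as np

rng = np.random.default_rng(2)
p_true, n, m = 0.002, 10_000, 20          # rare-event case (assumed value)
x = rng.random(n) < p_true                # Bernoulli data
subsets = np.array_split(x, m)

# exact full-data posterior under a Beta(1, 1) prior
a_full, b_full = 1 + x.sum(), 1 + n - x.sum()
full_draws = rng.beta(a_full, b_full, 5000)

# subset posteriors (each subset keeps the full prior -- an assumption
# of this sketch) combined by simple averaging of aligned draws
subset_draws = np.stack([
    rng.beta(1 + s.sum(), 1 + len(s) - s.sum(), 5000) for s in subsets
])
averaged = subset_draws.mean(axis=0)

print(f"full posterior mean {full_draws.mean():.4f}, "
      f"averaged mean {averaged.mean():.4f}")   # averaging is biased upward
```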
It can be seen from Fig 8 that when the data set contains a moderate amount of information, all methods work fine. However, in the inadequately informed (rare-event) case, kernel smoothing and Weierstrass rejection sampling are the only methods that perform appropriately. To match the scale of the posterior density, the rejection sampler must adopt a low acceptance rate. Kernel smoothing likewise requires a small bandwidth to achieve the same level of accuracy (see the result for a larger bandwidth in Fig 8).
Computational efficiency is a primary motivation for parallelization, and reductions in sample size lead to reductions in computational time in most cases. We demonstrate the effects of the total sample size and the number of parameters (or number of mixture components) for logistic regression and the mixture model in Fig 9 and Fig 10. The number of subsets is fixed at 20. For each subset and for the full data set, we drew 10,000 samples after 10,000 burn-in iterations. For the Weierstrass refinement sampler, we repeated the refinement procedure 10 times with 100 iterations within each step.
The real data set used in this section contains weighted census data extracted from the 1994 and 1995 Current Population Surveys conducted by the U.S. Census Bureau (Bache and Lichman, 2013). The purpose is to predict whether an individual's annual gross income exceeds 50,000 USD (i.e., whether the label is 50,000+ or 50,000-). The whole set contains around 200,000 observations and 40 attributes, which were turned into 176 predictors after re-coding the categorical variables. Because the sample size is too large for fitting a logistic model via usual MCMC software such as JAGS or Rstan, we only illustrate the posterior inference for the parallelized algorithms and the Laplacian approximation. In addition, because of the high dimension of the model and the sensitivity of the kernel smoothing method, we only consider the marginal kernel combining algorithm and the marginal Weierstrass rejection sampler. The latter is not listed in the results, as its performance is very similar to that of the marginal kernel method.
For posterior inference, we partition the data into 20 subsets and drew 20,000 samples on each subset after 50,000 burn-in iterations. For the Weierstrass refinement sampler, the initial distribution is chosen to be slightly away from the Laplacian approximation (see Fig 1, right panel) to avoid a potential unfair advantage. We conduct 10 refinement steps on 2,000 initial draws, with 50 iterations within each step for proposing refined draws. The tuning parameter is chosen according to (7). Test Set 1 consists of 5,000 positive (50,000+) and 5,000 negative (50,000-) cases, while Test Set 2 contains 500 positive and 5,000 negative cases. Since positive cases are rare (8%) in the training set, we should expect different behaviors on the two test sets. For prediction assessment, the logistic model predicts positive if the posterior predictive probability exceeds 0.5, and negative otherwise. The posterior distributions of selected parameters for the different methods are plotted in Fig 11, and the prediction results for the different categories and the two test data sets are listed in Table 2.
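The prediction rule can be sketched as follows (hypothetical `beta_draws` and `X_test` stand in for the stored posterior draws and the test design matrix):

```python
import numpy as np

def posterior_predict(beta_draws, X_test, threshold=0.5):
    """Label a case positive when the posterior-averaged success
    probability of the logistic model exceeds the threshold."""
    logits = X_test @ beta_draws.T              # (n_test, n_draws)
    probs = 1.0 / (1.0 + np.exp(-logits))       # sigmoid, draw by draw
    return probs.mean(axis=1) > threshold

rng = np.random.default_rng(3)
beta_draws = rng.normal([2.0, -1.0], 0.1, size=(1000, 2))
X_test = np.array([[1.0, 0.0],    # logit ~ +2  -> predicted positive
                   [0.0, 3.0]])   # logit ~ -3  -> predicted negative
print(posterior_predict(beta_draws, X_test))
```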
| Correctness on | 50,000- (%) | 50,000+ (%) | Test Set 1 (%) | Test Set 2 (%) |
Because the positive category (annual income greater than 50,000 USD) is rare in the training set, the Laplacian approximation is likely to overfit and the posterior may be far from Gaussian, leading to low accuracy in predicting the positive class for most methods. In contrast, all methods perform well on Test Set 2 (which mimics the class ratio of the training set).
In this article, we proposed a new, flexible and efficient Weierstrass sampler for parallelizing MCMC. The Weierstrass sampler comprises two different algorithms, each designed for different situations. Extensive numerical evidence shows that, compared with other methods in the same direction, the Weierstrass sampler enjoys better approximation accuracy, a better chain mixing rate and potentially faster speed. Facing the same difficult issues, such as the curse of dimensionality, the Weierstrass sampler attempts to strike a balance between accuracy and computational efficiency. As illustrated in the numerical study, the rejection sampler not only approximates the original MCMC well, but also improves its performance in posterior mode exploration: in the simulation, the sampler correctly identified all mixture components, removing a failure mode of the original Gibbs sampler.
Future work on Weierstrass samplers may lie in the following directions. First, investigating the asymptotic justification for the marginal combining strategy, which could help eliminate the dimensionality concern for both the kernel density estimation method and Weierstrass rejection sampling. Second, investigating potential applications in parallel tempering. In parallel tempering, a temperature parameter controls both the approximation accuracy and the chain's exploration ability. Here, the tuning parameter plays the same role: a small value entails high accuracy, while a large value ensures better exploration. Therefore, one could design a set of different tuning parameter values for the sampling procedure, providing a 'parallel' way of doing parallel tempering, which may potentially improve the performance of the original method.
Assume a real-valued function $f$ is Hölder differentiable with exponent $\alpha > 0$, i.e., for $l = \lfloor \alpha \rfloor$, the $l$-th derivative of $f$ follows
$$|f^{(l)}(x) - f^{(l)}(y)| \le M |x - y|^{\alpha - l}$$
for some positive constant $M$. Let $K$ be an $l$-th order kernel function satisfying $\int K(t)\,dt = 1$, $\int t^{j} K(t)\,dt = 0$ for $1 \le j \le l$, and $c_0 = \int |t|^{\alpha} |K(t)|\,dt < \infty$. Defining the Weierstrass transform of $f$ as
$$W_h f(x) = \int \frac{1}{h} K\!\left(\frac{x - t}{h}\right) f(t)\,dt,$$
we have
$$\sup_x \big| W_h f(x) - f(x) \big| \le C M h^{\alpha},$$
where $h > 0$ is the bandwidth, $C = c_0 / l!$, and $0! = 1$ by convention.
The proof relies on a higher-order Taylor expansion of the function. If $l \ge 1$, we have
$$f(x - hu) = f(x) + \sum_{j=1}^{l-1} \frac{f^{(j)}(x)}{j!} (-hu)^{j} + \frac{f^{(l)}(\xi)}{l!} (-hu)^{l},$$
where $t = x - hu$ and $\xi$ lies between $x$ and $t$. Because $\int u^{j} K(u)\,du = 0$ for $1 \le j \le l$, we thus have
$$W_h f(x) - f(x) = \int K(u)\, \frac{f^{(l)}(\xi) - f^{(l)}(x)}{l!}\, (-hu)^{l}\, du,$$
and because $|f^{(l)}(\xi) - f^{(l)}(x)| \le M |\xi - x|^{\alpha - l} \le M |hu|^{\alpha - l}$,
$$\big| W_h f(x) - f(x) \big| \le \frac{M h^{\alpha}}{l!} \int |u|^{\alpha} |K(u)|\,du = C M h^{\alpha}.$$
The case $l = 0$ follows the same argument. Just notice that the Taylor expansion now becomes
$$f(x - hu) = f(x) + \big( f(x - hu) - f(x) \big),$$
which entails that
$$\big| W_h f(x) - f(x) \big| \le \int |K(u)|\, \big| f(x - hu) - f(x) \big|\, du \le M h^{\alpha} \int |u|^{\alpha} |K(u)|\,du,$$
where $|f(x - hu) - f(x)| \le M |hu|^{\alpha}$, and thus completes the proof. ∎
In this article, we only focus on the case in which $K$ is chosen to be a density function (a second-order kernel), and thus the approximation error is of order $h^{2}$. The result stated in Lemma 1 can be naturally generalized to the multivariate case.
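As a numerical sanity check of Lemma 1 (an illustrative sketch: Gaussian kernel applied to a standard normal density, with the integral approximated by a Riemann sum on a fine grid), the bias |W_h f(x) - f(x)| should shrink at rate h^2, i.e., halving the bandwidth should cut the bias by roughly a factor of four:

```python
import numpy as np

def weierstrass_transform(f, x, h, grid):
    """W_h f(x) = int (1/h) K((x - t)/h) f(t) dt with a Gaussian
    kernel K, approximated by a Riemann sum on a fine fixed grid."""
    kern = np.exp(-0.5 * ((x - grid) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return np.sum(kern * f(grid)) * (grid[1] - grid[0])

f = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)  # N(0, 1) density
grid = np.linspace(-10.0, 10.0, 20001)
for h in (0.4, 0.2, 0.1):
    bias = abs(weierstrass_transform(f, 0.0, h, grid) - f(0.0))
    print(f"h={h}: bias {bias:.2e}")   # halving h cuts the bias ~4x
```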