Minimax Optimal Bayesian Aggregation

03/06/2014 · by Yun Yang, et al.

It is generally believed that ensemble approaches, which combine multiple algorithms or models, can outperform any single algorithm at machine learning tasks such as prediction. In this paper, we propose Bayesian convex and linear aggregation approaches motivated by regression applications. We show that the proposed approach is minimax optimal when the true data-generating model is a convex or linear combination of models in the list. Moreover, the method can adapt to sparsity structure in which certain models should receive zero weights, and, unlike competitors, it is free of tuning parameters. More generally, under an M-open view, when the truth falls outside the space of all convex/linear combinations, our theory suggests that the posterior measure tends to concentrate around the best approximation of the truth at the minimax rate. We illustrate the method through simulation studies and several applications.

1 Introduction

In many applications, it is not at all clear how to pick the most suitable method out of a list of possible models or learning algorithms. Each model or algorithm has its own set of implicit or explicit assumptions under which that approach attains, or comes close to, optimal performance. However, in practice, verifying which, if any, of these assumptions hold for a real application is problematic. Hence, it is of substantial practical importance to have an aggregating mechanism that can automatically combine the estimators obtained from the different approaches, with the aggregated estimator potentially better than any single one.

Towards this goal, three main aggregation strategies have received the most attention in the literature: model selection aggregation (MSA), convex aggregation (CA) and linear aggregation (LA), as first formulated by Nemirovski (2000). MSA aims at selecting the optimal single estimator from the list; CA searches for the optimal convex combination of the estimators; and LA focuses on selecting the optimal linear combination. Although there is an extensive literature on aggregation (Juditsky and Nemirovski, 2000; Tsybakov, 2003; Wegkamp, 2003; Yang, 2000, 2001, 2004; Bunea and Nobel, 2008; Bunea and Tsybakov, 2007; Guedj and Alquier, 2013; van der Laan et al., 2007), there has been limited consideration of Bayesian approaches.

In this paper, we study Bayesian aggregation procedures and their performance in regression. Consider the regression model

(1.1)    $Y_i = f(X_i) + \epsilon_i, \qquad \epsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2), \qquad i = 1, \ldots, n,$

where $Y_i$ is the response variable, $f: \mathcal{X} \to \mathbb{R}$ is an unknown regression function, $\mathcal{X}$ is the feature space, the $X_i$'s are fixed- or randomly-designed elements of $\mathcal{X}$, and the errors $\epsilon_i$ are iid Gaussian.

Aggregation procedures typically start by randomly dividing the sample into a training set for constructing the individual estimators and a learning set for constructing the aggregated estimator. Our primary interest is in the aggregation step, so we adopt the convention (Bunea and Tsybakov, 2007) of fixing the training set and treating the estimators as fixed functions $f_1, \ldots, f_M$. Our results can also be translated to the context where the fixed functions are considered as a functional basis (Juditsky and Nemirovski, 2000), either orthonormal or overcomplete, or as "weak learners" (van der Laan et al., 2007). For example, high-dimensional linear regression is a special case of LA where $f_j$ maps a $p$-dimensional predictor vector into its $j$th component.

Bayesian model averaging (BMA) (Hoeting et al., 1999) provides an approach for aggregation, placing a prior over the ensemble and then updating using the available data to obtain posterior model probabilities. For BMA, the aggregated estimator can be constructed as a convex combination of the estimates obtained under each model, with weights corresponding to the posterior model probabilities. If the true data-generating model is one of the models in the pre-specified list (the "M-closed" view), then as the sample size increases the weight on that model will typically converge to one. With a uniform prior over the models, in the regression setting with Gaussian noise, BMA coincides with the exponentially weighted aggregates (Tsybakov, 2003). However, BMA relies on the assumption that the list contains the true model. If this assumption is violated (the "M-open" view), then BMA tends to converge to the single model in the list that is closest to the true model in Kullback-Leibler (KL) divergence. For example, when the truth $f^*$ is a weighted average of $f_1$ and $f_2$, under our regression setting BMA will converge to the single $f_j$ that minimizes $\|f_j - f^*\|_n$ under fixed design or $\|f_j - f^*\|_{L^2(P_X)}$ under random design, where $\|g\|_n^2 = n^{-1} \sum_{i=1}^n g^2(X_i)$ and $P_X$ denotes the distribution of $X$. Henceforth, we use the notation $\|\cdot\|$ to denote $\|\cdot\|_n$ or $\|\cdot\|_{L^2(P_X)}$ depending on the context.

In this paper, we primarily focus on Bayesian procedures for CA and LA. Let $\mathcal{F}_\Lambda$ be the space of all aggregated estimators $f_\lambda = \sum_{j=1}^M \lambda_j f_j$ for $\lambda$ in an index set $\Lambda$. For CA, $\Lambda$ is the probability simplex, giving the class $\mathcal{F}^C$ of convex combinations; for LA, $\Lambda$ is $\mathbb{R}^M$ (possibly restricted to weights with finite, but unknown, $\ell_1$ norm), giving the class $\mathcal{F}^L$ of linear combinations. In addition, for both CA and LA we consider sparse aggregation, in which an extra sparsity structure $\|\lambda\|_0 \le s$ is imposed on the weight vector $\lambda$. Here, for a vector $\lambda$, we use $\|\lambda\|_q$ to denote its $\ell_q$-norm for $q \ge 0$; in particular, $\|\lambda\|_0$ is the number of nonzero components of $\lambda$. The sparsity level $s$ is allowed to be unknown and is expected to be learned from the data. In the sequel, we use the notation $f^*_\Lambda$ to denote the best $L_2$-approximation of the truth $f^*$ in $\mathcal{F}_\Lambda$. Note that if $f^* \in \mathcal{F}_\Lambda$, then $f^*_\Lambda = f^*$.
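To fix notation for later sections, these classes can be written out explicitly as follows; this display is a restatement of the definitions just given, in symbols of our choosing:

$\mathcal{F}^C = \Big\{ f_\lambda = \sum_{j=1}^M \lambda_j f_j \; : \; \lambda_j \ge 0, \ \sum_{j=1}^M \lambda_j = 1 \Big\}, \qquad \mathcal{F}^L = \Big\{ f_\lambda = \sum_{j=1}^M \lambda_j f_j \; : \; \lambda \in \mathbb{R}^M \Big\},$

with the $s$-sparse versions $\mathcal{F}^C_s$ and $\mathcal{F}^L_s$ obtained by additionally requiring $\|\lambda\|_0 \le s$.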

One primary contribution of this work is to propose a new class of priors, called Dirichlet aggregation (DA) priors, for Bayesian aggregation. Bayesian approaches with DA priors are shown to lead to the minimax optimal posterior convergence rate over $\mathcal{F}^C$ and $\mathcal{F}^L$ for CA and LA, respectively. More interestingly, DA is able to achieve the minimax rate of sparse aggregation (see Section 1.1), which improves on the minimax rate of aggregation by utilizing the extra sparsity structure on the weight vector. This suggests that DA is able to automatically adapt to an unknown sparsity structure when it exists, while also having optimal performance in the absence of sparsity. Such sparsity-adaptive properties have also been observed in Bunea and Tsybakov (2007) for penalized optimization methods. However, in order to achieve minimax optimality there, the penalty term, which depends on either the true sparsity level or a related quantity, needs to be tuned properly. In contrast, DA does not require any prior knowledge of the sparsity level and is tuning free.

Secondly, we also consider an "M-open" view for CA and LA, where the truth can fall not only outside the list $f_1, \ldots, f_M$, but also outside the space of all convex/linear combinations of the models in the list. Under the "M-open" view, our theory suggests that the posterior measure tends to put all its mass into a ball around the best approximation of the truth, with a radius proportional to the minimax rate; the metric that defines this ball will be made clear later. This is practically important because the true model in reality is seldom correctly specified, and convergence to the best approximation is the best one can hope for. Bayesian asymptotic theory for misspecified models is underdeveloped, with most existing results assuming that the true model class is either known or is an element of a known list. One key step is to construct appropriate statistical tests discriminating the best approximation from other elements in the aggregation class. Our tests borrow some results from Kleijn and van der Vaart (2006) and rely on concentration inequalities.

The proposed prior on the weights induces a novel shrinkage structure, which is of independent interest. There is a rich literature on theoretically optimal models based on discrete (point mass mixture) priors (Ishwaran and Rao, 2005; Castillo and van der Vaart, 2012) that are supported on a combinatorial model space, leading to a heavy computational burden. Continuous shrinkage priors, in contrast, avoid stochastic search variable selection algorithms (George and McCulloch, 1997) for sampling from the combinatorial model space and can potentially improve computational efficiency. Furthermore, our results include a rigorous investigation of $M$-dimensional symmetric Dirichlet distributions $\mathrm{Diri}(\rho_M)$ in the regime where $M \to \infty$ and $\rho_M \to 0$. Here $\mathrm{Diri}(\rho)$ denotes a Dirichlet distribution with all concentration parameters equal to $\rho$. In machine learning, Dirichlet distributions with small concentration parameters are widely used as priors for latent class probabilities (Blei et al., 2003). However, little rigorous theory has been developed for the relationship between the concentration property and the hyperparameter $\rho$. Rousseau and Mengersen (2011) consider a related problem of overfitted mixture models and show that generally the posterior distribution effectively empties the extra components. However, our emphasis is to study the prediction performance instead of model selection. Moreover, in Rousseau and Mengersen (2011) the number of components is assumed to be fixed as the sample size $n$ increases, while in our setting we allow $M$ to grow with $n$. In this large-$M$ situation, the general prior considered in Rousseau and Mengersen (2011) is unable to empty the extra components and we need to impose sparsity. In this paper, we show that if the concentration parameter $\rho_M$ is chosen to decay to zero at a suitable rate in $M$, then $\mathrm{Diri}(\rho_M)$ can lead to the optimal concentration rate for sparse weights (Section 2.1). Moreover, such concentration is shown to be adaptive to the sparsity level.

The rest of the paper is organized as follows. In Section 1.1, we review minimax results for aggregation. In Section 2, we describe the new class of priors for CA and LA based on symmetric Dirichlet distributions. In Section 3, we study the asymptotic properties of the proposed Bayesian methods. In Section 4, we present simulation studies and applications. The proofs of the main theorems appear in Section 5 and some technical proofs are deferred to Section 6. We provide details of the MCMC implementation of our Bayesian aggregation methods in the appendix.

1.1 A brief review of minimax risks for aggregation

It is known (Tsybakov, 2003) that for CA, the minimax risk for estimating the best convex combination within $\mathcal{F}^C$ is

(1.2)    $\inf_{\hat{f}} \sup_{f \in \mathcal{F}^C} \mathbb{E} \|\hat{f} - f\|^2 \asymp \psi^C_n(M), \qquad \psi^C_n(M) = \begin{cases} M/n, & \text{if } M \le \sqrt{n}, \\ \sqrt{\tfrac{1}{n} \log\big(\tfrac{M}{\sqrt{n}} + 1\big)}, & \text{if } M > \sqrt{n}, \end{cases}$

where the infimum ranges over all possible estimators $\hat{f}$ based on $n$ observations. Here, for any two positive sequences $a_n$ and $b_n$, $a_n \asymp b_n$ means that there exists a constant $C \ge 1$, such that $C^{-1} b_n \le a_n \le C b_n$ for any $n$. The norm $\|\cdot\|$ is understood as the $L^2(P_X)$-norm for random design and the empirical $\ell_2$-norm for fixed design. If we have the additional information that the truth also possesses a sparse structure, then we would expect a faster convergence rate for estimating it. For example, in the "M-closed" case, $f^* = f_{\lambda^*}$ for some $\lambda^*$ in the simplex with $\|\lambda^*\|_0 \le s$. Let $\mathcal{F}^C_s$ be the space of all $s$-sparse convex aggregations of $f_1, \ldots, f_M$. By extending the results in Tsybakov (2003), it can be shown that when the sparsity level satisfies $s \le \sqrt{n}$, the minimax risk of estimating an element in $\mathcal{F}^C_s$ is given by

(1.3)    $\psi^C_n(M, s) \asymp \dfrac{s}{n} \log\Big(\dfrac{eM}{s}\Big).$

From the preceding results, $\sqrt{n}$ serves as the sparsity/non-sparsity boundary for the weight vector, as there is no gain in estimation efficiency if $s > \sqrt{n}$.

From Tsybakov (2003), the minimax risk for LA with $\lambda \in \mathbb{R}^M$ is of order $M/n$. As a result, general LA is only meaningful when $M \le n$, since the rate $M/n$ fails to converge to zero otherwise. Similarly, the above minimax risk can be extended to $s$-sparse LA with $\|\lambda\|_0 \le s$.

Note that for sparse LA, the sparsity level $s$ can be arbitrary. A simple explanation is that for CA the simplex constraint $\|\lambda\|_1 = 1$ ensures that every element in $\mathcal{F}^C$ can already be approximated, with error of order at most $1/\sqrt{s}$, by some $s$-sparse element in $\mathcal{F}^C$ (see Lemma 6.1), so that sparsity beyond the $\sqrt{n}$ boundary brings no further gain, whereas no such constraint is present in LA. However, if we further assume that $\|\lambda\|_1$ is bounded and restrict the weights accordingly, then by extending Tsybakov (2003) it can be shown that the minimax risks of LA are the same as those of convex aggregation, under a non-sparse structure as in (1.2) and a sparse structure as in (1.3).

2 Bayesian approaches for aggregation

2.1 Concentration properties of high dimensional symmetric Dirichlet distributions

Consider an $M$-dimensional symmetric Dirichlet distribution $\mathrm{Diri}(\rho)$ indexed by a concentration parameter $\rho > 0$, whose pdf at $\lambda = (\lambda_1, \ldots, \lambda_M)$ in the simplex is given by $\frac{\Gamma(M\rho)}{\Gamma(\rho)^M} \prod_{j=1}^M \lambda_j^{\rho - 1}$, where $\Gamma$ is the Gamma function. $M$-dimensional Dirichlet distributions are commonly used in Bayesian procedures as priors over the $(M-1)$-dimensional simplex. For example, Dirichlet distributions can be used as priors for probability vectors for latent class allocation. In this subsection, we investigate the concentration properties of $\mathrm{Diri}(\rho_M)$ when $M \to \infty$ and $\rho_M \to 0$. Fig. 1 displays typical patterns for $M$-dimensional Dirichlet distributions $\mathrm{Diri}(\rho)$ with $\rho$ changing from moderate to small. As can be seen, the Dirichlet distribution tends to concentrate on the boundaries of the simplex for small $\rho$, which is suitable for capturing sparsity structures.

Figure 1: Symmetric Dirichlet distributions with the concentration parameter decreasing from moderate (panel (a)) to small (panel (c)). Each plot displays 100 independent draws from $\mathrm{Diri}(\rho)$.
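The qualitative pattern in Figure 1 can be reproduced with a few lines of code; the sketch below is ours, and the dimension and concentration values are illustrative rather than those used for the figure.

import numpy as np

rng = np.random.default_rng(0)
M = 100                                    # number of components (illustrative)
for rho in [1.0, 0.1, 0.01]:               # concentration parameter, moderate to small
    draws = rng.dirichlet(np.full(M, rho), size=100)
    # For each draw, count how many coordinates are needed to hold 99% of the total mass.
    sorted_desc = -np.sort(-draws, axis=1)
    cum = np.cumsum(sorted_desc, axis=1)
    eff_support = (cum < 0.99).sum(axis=1) + 1
    print(f"rho = {rho:5.2f}: mean coordinates holding 99% of the mass = {eff_support.mean():.1f}")

As rho decreases, almost all of the mass of each draw collapses onto a handful of coordinates, which is the boundary-concentration behavior exploited below.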

To study the concentration of $\mathrm{Diri}(\rho_M)$, we need to characterize the space of sparse weight vectors. Since Dirichlet distributions are absolutely continuous, the probability of generating an exactly $s$-sparse vector is zero for any $s < M$. Therefore, we need to relax the definition of $s$-sparsity. Consider the set of weight vectors, indexed by a tolerance level $\epsilon$ and a sparsity level $s$, whose ordered components $\lambda_{(1)} \ge \lambda_{(2)} \ge \cdots \ge \lambda_{(M)}$ satisfy $\sum_{j > s} \lambda_{(j)} \le \epsilon$; this set consists of all vectors in the simplex that can be approximated by $s$-sparse vectors with $\ell_1$-error at most $\epsilon$. The following theorem shows the concentration property of the symmetric Dirichlet distribution $\mathrm{Diri}(\rho_M)$ with $\rho_M \to 0$. This theorem is an easy consequence of Lemma 5.1 and Lemma 5.4 in Section 5.

Theorem 2.1.

Assume that $\lambda \sim \mathrm{Diri}(\rho_M)$ with $M \to \infty$ and $\rho_M \to 0$. Let $\lambda^0$ be any $s$-sparse vector in the $(M-1)$-dimensional simplex. Then for any $\epsilon \in (0, 1)$ and some constant $C > 0$,

(2.1)
(2.2)

The proof of (2.2) utilizes the stick-breaking representation of Dirichlet processes (Sethuraman, 1994) and the fact that a $\mathrm{Diri}(\rho_M)$ random vector can be viewed as the vector of atom masses $(G(\{1\}), \ldots, G(\{M\}))$, where $G$ follows a Dirichlet process DP with total concentration $M\rho_M$ and the uniform distribution on $\{1, \ldots, M\}$ as its base measure. The condition on $\rho_M$ in Theorem 2.1 reflects the fact that the concentration parameter should decrease to zero as $M \to \infty$ in order for the DP to favor sparsity. (2.2) validates our observations in Fig. 1, and (2.1) suggests that the prior mass around every sparse vector is uniformly large, since the total number of $s$-sparse patterns (locations of nonzero components) in $\{1, \ldots, M\}$ is of order $M^s$. In fact, both (2.1) and (2.2) play crucial roles in the proofs in Section 5.1 characterizing the posterior convergence rate of the Bayesian method below for CA (and of more general Bayesian methods), where the rate $\epsilon_n$ is a sequence satisfying $\epsilon_n \to 0$ and $n\epsilon_n^2 \to \infty$. Assume the best approximation of the truth to be $s$-sparse. (2.2) implies that the posterior distribution of the weights tends to put almost all its mass on nearly $s$-sparse vectors, and (2.1) is required for the posterior distribution to be able to concentrate around the best approximation at the desired minimax rate given by (1.2).
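A rough Monte Carlo illustration of the two concentration phenomena behind Theorem 2.1 is sketched below; the constants (dimension, sparsity level, concentration parameter, tolerance) are our illustrative choices, and the exact bounds in (2.1) and (2.2) are not reproduced.

import numpy as np

rng = np.random.default_rng(1)
M, s, rho = 50, 1, 0.02                    # dimension, sparsity level, concentration (illustrative)
lam0 = np.zeros(M)
lam0[:s] = 1.0 / s                         # a fixed s-sparse vector in the simplex
draws = rng.dirichlet(np.full(M, rho), size=100000)

# Flavor of (2.1): prior mass of an l1-ball of radius eps around the sparse vector lam0.
eps = 0.2
mass_near_lam0 = np.mean(np.abs(draws - lam0).sum(axis=1) <= eps)

# Flavor of (2.2): fraction of draws within l1-distance eps of SOME s-sparse vector,
# obtained by keeping the s largest coordinates and measuring the discarded mass.
tail_mass = np.sort(draws, axis=1)[:, :-s].sum(axis=1)
frac_nearly_sparse = np.mean(tail_mass <= eps)

print(f"P(||lam - lam0||_1 <= {eps}) ~ {mass_near_lam0:.4f}")
print(f"P(lam is within {eps} of an {s}-sparse vector) ~ {frac_nearly_sparse:.4f}")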

2.2 Using Dirichlet priors for Convex Aggregation

In this subsection, we assume $X$ to be random with distribution $P_X$ and $f^* \in L^2(P_X)$. Here, for a probability measure $\mu$ on a space $\mathcal{X}$, we use the notation $\|\cdot\|_{L^2(\mu)}$ to denote the norm associated with the space $L^2(\mu)$ of square-integrable functions. We assume the random design for theoretical convenience; the procedure and theory for CA can also be generalized to fixed-design problems. Assume the functions $f_1, \ldots, f_M$ also belong to $L^2(P_X)$. Consider combining these functions into an aggregated estimator $\hat{f}$, which tries to estimate $f^*$ by elements in the space $\mathcal{F}^C$ of all convex combinations of $f_1, \ldots, f_M$. The assumption that $f_1, \ldots, f_M$ are fixed is reasonable as long as different subsets of samples are used for producing them and for aggregation. For example, we can divide the data into two parts and use the first part for estimating $f_1, \ldots, f_M$ and the second part for aggregation.

We propose the following Dirichlet aggregation (DA) prior:

(DA)    $f = \sum_{j=1}^M \lambda_j f_j, \qquad (\lambda_1, \ldots, \lambda_M) \sim \mathrm{Diri}\big(\alpha / M^{\gamma}, \ldots, \alpha / M^{\gamma}\big),$

where $\alpha$ and $\gamma$ are two positive hyperparameters. As Theorem 2.1 and the results in Section 5 suggest, such a symmetric Dirichlet distribution is favorable, since $\mathrm{Diri}(\rho)$ with an equally small parameter $\rho$ for every component has nice concentration properties under both sparse and nonsparse conditions, leading to a near minimax optimal posterior contraction rate under both scenarios.

We also mention a related paper (Bhattacharya et al., 2013) that uses Dirichlet distributions in high-dimensional shrinkage priors for normal mean estimation problems. They proposed a new class of Dirichlet-Laplace priors for sparse problems, with the Dirichlet placed on scaling parameters of Laplace priors for the normal means. Our prior is fundamentally different in using the Dirichlet directly for the weights $\lambda$, including a power of $M$ in the concentration parameter. This is natural for aggregation problems, and we show that the proposed prior is simultaneously minimax optimal under both sparse and nonsparse conditions on the weight vector as long as the power $\gamma$ is sufficiently large.
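To make the use of the DA prior concrete, the following is a minimal, self-contained sketch of posterior sampling for Bayesian CA; it is not the authors' implementation (their MCMC algorithm is described in the appendix). It uses the standard Gamma representation of the Dirichlet, lambda_j = g_j / sum_k g_k with g_j ~ Gamma(rho, 1) and rho = alpha / M^gamma, and runs a symmetric random-walk Metropolis sampler on theta_j = log g_j; the step size and hyperparameter values are illustrative.

import numpy as np

def log_post(theta, F, y, sigma, rho):
    # lambda = normalized exp(theta); subtract the max for numerical stability.
    w = np.exp(theta - theta.max())
    lam = w / w.sum()
    resid = y - F @ lam                                # F[i, j] = f_j(X_i), shape (n, M)
    loglik = -0.5 * np.sum(resid ** 2) / sigma ** 2
    # Gamma(rho, 1) prior on g_j = exp(theta_j), written on the log scale
    # (Jacobian of the log transform included; additive constants dropped).
    logprior = np.sum(rho * theta - np.exp(theta))
    return loglik + logprior

def bayes_convex_aggregate(F, y, sigma=1.0, alpha=1.0, gamma=1.0,
                           n_iter=5000, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n, M = F.shape
    rho = alpha / M ** gamma
    theta = np.zeros(M)
    cur = log_post(theta, F, y, sigma, rho)
    lams = np.empty((n_iter, M))
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(M)   # symmetric proposal: plain Metropolis
        cand = log_post(prop, F, y, sigma, rho)
        if cand - cur >= np.log(rng.uniform()):
            theta, cur = prop, cand
        w = np.exp(theta - theta.max())
        lams[t] = w / w.sum()
    return lams   # posterior draws of the convex weights; average the second half as a point estimate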

2.3 Using Dirichlet priors for Linear Aggregation

For LA, we consider a fixed design for $X_1, \ldots, X_n$ and write (1.1) in vector form as $Y = F_0 + \epsilon$, $\epsilon \sim N(0, \sigma^2 I_n)$, where $Y$ is the response vector, $F_0$ is the vector representing the expectation of $Y$, and $I_n$ is the $n \times n$ identity matrix. Let $F$ be the $n \times M$ prediction matrix, where the $j$th column of $F$ consists of the values of $f_j$ evaluated at the training predictors $X_1, \ldots, X_n$. LA estimates $F_0$ as $F\beta$, with $\beta \in \mathbb{R}^M$ the coefficient vector. We use the notation $F_j$ to denote the $j$th column of $F$ and $F_{(i)}$ the $i$th row. Notice that this framework of linear aggregation includes (high-dimensional) linear models as a special case, in which $F$ is the design matrix and $f_j$ maps a predictor vector into its $j$th component.

Let $\lambda = \|\beta\|_1$ and $w = \beta / \lambda$, with $\|w\|_1 = 1$. This new parametrization is identifiable and uniquely determines $\beta = \lambda w$. Therefore, there exists a one-to-one correspondence between the prior on $(\lambda, w)$ and the prior on $\beta$. Under this parametrization, the geometric properties of $w$ transfer to those of $\beta$. For example, a prior on $w$ that induces sparsity will produce a sparse prior for $\beta$. With this in mind, we propose the following double Dirichlet Gamma (DDG) prior for $(\lambda, w)$ or, equivalently, $\beta$:

(DDG1)

Since the vector of magnitudes $(|w_1|, \ldots, |w_M|)$ follows a Dirichlet distribution, it can be equivalently represented through independent Gamma random variables divided by their sum. By marginalizing out the normalization, the prior for $\beta$ can be equivalently represented as

(2.3)

where DG denotes the double Gamma distribution, i.e. a Gamma distribution with the indicated shape and rate parameters, reflected symmetrically about zero by an independent random sign. More generally, we call a distribution the double Dirichlet distribution, denoted by DD, if it can be represented as in (2.3) with independent DG components. Then, the DDG prior for $\beta$ has the alternative form

(DDG2)

We will use the form (DDG2) for studying the theoretical properties of the DDG prior and focus on the form (DDG1) for posterior computation.
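As an illustration of the structure of this prior, a draw of beta can be simulated by combining Dirichlet magnitudes, independent random signs, and a positive overall scale. This is a generative sketch only: the Gamma choice for the scale and the hyperparameter values below are assumptions for illustration, not necessarily the exact (DDG1) specification.

import numpy as np

def sample_ddg_like_prior(M, alpha=1.0, gamma=1.0, scale_shape=1.0, scale_rate=1.0, seed=None):
    rng = np.random.default_rng(seed)
    rho = alpha / M ** gamma
    magnitudes = rng.dirichlet(np.full(M, rho))       # |w|: a nearly sparse point on the simplex
    signs = rng.choice([-1.0, 1.0], size=M)           # independent random signs
    lam = rng.gamma(scale_shape, 1.0 / scale_rate)    # overall scale lambda = ||beta||_1 (assumed Gamma)
    return lam * signs * magnitudes                   # beta = lambda * w with ||w||_1 = 1

beta = sample_ddg_like_prior(M=20, seed=0)
print("||beta||_1 =", float(np.abs(beta).sum()))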

3 Theoretical properties

In this section, we study the prediction efficiency of the proposed Bayesian aggregation procedures for CA and LA in terms of the posterior convergence rate for prediction.

We say that a Bayesian model, with a prior distribution over the parameter space $\Theta$, has a posterior convergence rate at least $\epsilon_n$ if

(3.1)    $\Pi\big( \theta \in \Theta : d(\theta, \theta^*) \ge D\,\epsilon_n \,\big|\, \text{data} \big) \to 0$

in probability as $n \to \infty$, where $d$ is a metric on $\Theta$, $\theta^*$ is the limit point described below, and $D$ is a sufficiently large positive constant. For example, to characterize prediction accuracy, we use the $L^2(P_X)$ distance between regression functions for CA and the empirical $\ell_2$ distance between prediction vectors for LA. Let $P_0$ be the truth under which the iid observations are generated. If the truth belongs to the model, then the model is well-specified and, under mild conditions, the limit $\theta^*$ corresponds to the truth. If not, then the limit $\theta^*$ is usually the point in $\Theta$ whose induced distribution has the minimal Kullback-Leibler (KL) divergence to $P_0$. (3.1) suggests that the posterior probability measure puts almost all its mass over a sequence of $d$-balls whose radii shrink towards zero at rate $\epsilon_n$. In the following, we make the assumption that the noise variance $\sigma^2$ is known, which is a standard assumption adopted in Bayesian asymptotic proofs to avoid long and tedious arguments. de Jonge and van Zanten (2013) study the asymptotic behavior of the error standard deviation in regression when a prior is specified for it; their proofs can also be used to justify our setup when $\sigma$ is unknown. In the rest of the paper, we will frequently use $C$ to denote a constant whose meaning might change from line to line.

3.1 Posterior convergence rate of Bayesian convex aggregation

Let $\Sigma = \mathbb{E}\big[ f(X) f(X)^T \big]$ be the second-order moment matrix of $f(X) = (f_1(X), \ldots, f_M(X))^T$, where $X \sim P_X$. Let $f_{\lambda^*}$ be the best $L^2(P_X)$-approximation of $f^*$ in the space of all convex combinations of $f_1, \ldots, f_M$, i.e. $\lambda^*$ minimizes $\|f_\lambda - f^*\|_{L^2(P_X)}$ over the simplex. This misspecified framework also includes the well-specified situation as a special case, in which $f_{\lambda^*} = f^*$. Denote the $j$th column of $\Sigma$ by $\Sigma_j$.

We make the following assumptions:

(A1)

There exists a constant $C_1 > 0$ such that $\mathbb{E}[f_j^2(X)] \le C_1$ for all $j = 1, \ldots, M$.

(A2)

(Sparsity) There exists an integer $s$, such that $\|\lambda^*\|_0 \le s$.

(A3)

There exists a constant $C_2 > 0$ such that $\sup_{x \in \mathcal{X}} |f_j(x)| \le C_2$ for all $j = 1, \ldots, M$.

  • If $\mathbb{E}[f_j(X)] = 0$ for each $j$, then $\Sigma$ is the variance-covariance matrix of $f(X)$. (A1) assumes the second moment of each $f_j(X)$ to be uniformly bounded. By the Cauchy-Schwarz inequality, the off-diagonal elements of $\Sigma$ can also be uniformly bounded by the same constant $C_1$.

  • (A3) implies (A1). This uniform boundedness condition is only used in Lemma 5.5 part a. As illustrated by Birgé (2004), such a condition is necessary for studying the $L^2$ loss of Gaussian regression with random design, since under this condition the Hellinger distance between two Gaussian regression models is equivalent to the $L^2$ distance between their mean functions.

  • Since $\lambda^*$ lies in the simplex, its $\ell_1$ norm is always equal to one, which means that $\lambda^*$ is always $\ell_1$-summable. (A2) imposes an additional sparse structure on $\lambda^*$. We will study separately the convergence rates with and without (A2). It turns out that the additional sparse structure improves the rate if and only if $s \le \sqrt{n}$.

The following theorem suggests that the posterior of $f_\lambda$ concentrates on an $L^2(P_X)$-ball around the best approximation $f_{\lambda^*}$ with a radius proportional to the minimax rate of CA. In the special case when $f^* \in \mathcal{F}^C$, the theorem implies that the proposed Bayesian procedure is minimax optimal.

Theorem 3.1.

Assume (A3). Let $(X_i, Y_i)$, $i = 1, \ldots, n$, be iid copies of $(X, Y)$ sampled from model (1.1) with $X \sim P_X$. If $\lambda^*$ is the minimizer of $\|f_\lambda - f^*\|_{L^2(P_X)}$ over the simplex, then under the prior (DA), for some constant $D > 0$, the posterior probability that $\|f_\lambda - f_{\lambda^*}\|_{L^2(P_X)}$ exceeds $D$ times the non-sparse minimax rate in (1.2) converges to zero as $n \to \infty$.

Moreover, if (A2) is also satisfied, then as $n \to \infty$, the same conclusion holds with the sparse minimax rate in (1.3).

3.2 Posterior convergence rate of Bayesian linear aggregation

Let $\beta^*$ be the coefficient vector such that $F\beta^*$ best approximates $F_0$ in the empirical norm, i.e. $\beta^*$ minimizes $\|F\beta - F_0\|_2$ over $\mathbb{R}^M$. Similar to the CA case, such a misspecified framework also includes the well-specified situation as a special case, in which $F\beta^* = F_0$. It is possible that there exists more than one such minimizer, in which case we choose a $\beta^*$ with the minimal number of nonzero components. This non-uniqueness will not affect our theorem quantifying the prediction performance of LA, since any minimizer of $\|F\beta - F_0\|_2$ gives the same prediction $F\beta^*$. Our choice of $\beta^*$, which minimizes $\|\beta\|_0$ among all minimizers, can lead to the fastest posterior convergence rate.

We make the following assumptions:

(B1)

There exists a constant $C_1 > 0$ such that $\|F_j\|_2^2 / n \le C_1$ for all $j = 1, \ldots, M$.

(B2a)

(Sparsity) There exists an integer $s$, such that $\|\beta^*\|_0 \le s$.

(B2b)

($\ell_1$-summability) There exists a constant $C_2 > 0$, such that $\|\beta^*\|_1 \le C_2$.

(B3)

For a sparsity level $\bar{s}$ specified in Section 5, there exists a constant $\kappa > 0$ such that $\|F\beta\|_2^2 / n \ge \kappa \|\beta\|_2^2$ for all $\beta$ with $\|\beta\|_0 \le \bar{s}$.

  • (B1) is the column normalization condition for the design matrix. This assumption is mild since the predictors can always be rescaled to satisfy it. This condition can also be considered as the empirical version of (A1), in which the matrix $\Sigma$ is replaced by its empirical estimator $F^T F / n$.

  • (B2a) is a counterpart of the sparsity condition (A2) of the aggregation problem. This assumption is commonly made in the high-dimensional linear regression literature. (B2b) is assumed by Bühlmann (2006) in studying consistency of boosting for high-dimensional linear regression. This condition includes the sparsity condition (B2a) as a special case, while also covering the case in which many components of $\beta^*$ are nonzero but small in magnitude. Similar to the aggregation problem, under (B2b) the additional sparsity structure yields a gain only when the sparsity level is below $\sqrt{n}$. (B2a) also implies a sparsity constraint on the normalized weight $w = \beta^* / \|\beta^*\|_1$, which always satisfies $\|w\|_0 \le s$.

  • (B3) is the same in spirit as the sparse eigenvalue condition made in Raskutti et al. (2011), which provides identifiability for sparse vectors. This assumption is only made for the $\ell_1$-summable case, in which any $\ell_1$-summable $\beta$ can be approximated by a sparse vector with small error (Lemma 5.3 part b). Under this assumption, we show that the posterior probability of vectors that are not approximately sparse converges to zero as $n \to \infty$, and therefore, with high posterior probability, $\beta$ can be approximated by a sparse vector with small error (see the numerical sketch following this list).
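A small numerical illustration of these design conditions is sketched below (our own code with illustrative sizes; the precise form of (B3) in the paper may differ from the plain sparse-eigenvalue quantity computed here).

import numpy as np

rng = np.random.default_rng(0)
n, M, s_bar = 100, 40, 3                     # illustrative sizes
F = rng.standard_normal((n, M))              # a stand-in prediction matrix

# (B1): the normalized column norms ||F_j||_2^2 / n should stay bounded.
col_norms = (F ** 2).sum(axis=0) / n
print(f"max_j ||F_j||_2^2 / n = {col_norms.max():.3f}")

# A (B3)-flavored quantity: the smallest value of ||F beta||_2^2 / (n ||beta||_2^2) over
# s_bar-sparse beta. Exhaustive search is expensive, so scan random supports and take the
# minimum eigenvalue of each s_bar x s_bar submatrix F_S^T F_S / n.
min_eig = np.inf
for _ in range(500):
    S = rng.choice(M, size=s_bar, replace=False)
    sub = F[:, S]
    min_eig = min(min_eig, np.linalg.eigvalsh(sub.T @ sub / n).min())
print(f"random-support sparse-eigenvalue lower bound ~ {min_eig:.3f}")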

The following theorem is a counterpart of Theorem 3.1 for LA.

Theorem 3.2.

Assume (B1). Let $Y$ be an $n$-dimensional response vector sampled from the Gaussian model above with mean vector $F_0$. Let $\beta^*$ be any one of the minimizers of $\|F\beta - F_0\|_2$ over $\mathbb{R}^M$. If (B2b) and (B3) are true, then under the prior (DDG2), for some constant $D > 0$, the posterior probability that the empirical prediction distance $\|F\beta - F\beta^*\|_2 / \sqrt{n}$ exceeds $D$ times the corresponding minimax rate converges to zero as $n \to \infty$.

If (B2a) is true, then as $n \to \infty$, the same conclusion holds with the sparse minimax rate.

Theorem 3.2 suggests that in order to obtain the fastest posterior convergence rate for prediction, we can choose the $\beta^*$ having the minimal $\|\beta^*\|_0$ among all minimizers of $\|F\beta - F_0\|_2$. This suggests that the posterior measure tends to concentrate on the sparsest $\beta$ that achieves the same prediction accuracy, which explains the sparsity adaptivity. The non-uniqueness happens when the columns of $F$ are linearly dependent, for example when $M > n$.

4 Experiments

As suggested by Yang (2001), the aggregated estimator depends on the order of the observations through the random split, and one can randomly permute the order a number of times and average the corresponding estimators. In addition, one can add a third step of re-estimating the base learners with the full dataset and setting the final estimator to be the aggregate of these refitted learners under the learned weights. We adopt this strategy, and our splitting and aggregation scheme can be summarized as follows. First, we randomly divide the entire sample into two subsets, a training subset and an aggregation subset, and use the training subset to obtain the base learners. Second, we apply the above MCMC algorithms to aggregate these learners based on the aggregation subset. Finally, we use the whole dataset to retrain the base learners, and the final estimator combines the retrained learners with the learned weights. Therefore, one basic requirement on the base learners is that they should be stable, in the sense that the learner fitted on the training subset cannot be dramatically different from the one refitted on the full data (e.g. CART might not be a suitable choice for the base learner).
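The three-step scheme above can be written out roughly as follows. This is a sketch only: the scikit-learn base learners, the even split, and the reuse of the bayes_convex_aggregate sampler sketched in Section 2.2 are our illustrative choices, not the exact configuration used in the experiments below.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

def aggregate_predict(X, y, X_new, base_learners=None, seed=0):
    if base_learners is None:
        base_learners = [Lasso(alpha=0.1), Ridge(alpha=1.0),
                         RandomForestRegressor(n_estimators=200, random_state=seed), SVR()]
    # Step 1: split the data and fit the base learners on the first half.
    X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=seed)
    for m in base_learners:
        m.fit(X1, y1)
    # Step 2: aggregate on the held-out half via the Bayesian CA sampler.
    F2 = np.column_stack([m.predict(X2) for m in base_learners])
    lam_draws = bayes_convex_aggregate(F2, y2)              # from the Section 2.2 sketch
    lam_hat = lam_draws[len(lam_draws) // 2:].mean(axis=0)  # posterior mean after burn-in
    # Step 3: refit the base learners on the full data and combine with the learned weights.
    for m in base_learners:
        m.fit(X, y)
    F_new = np.column_stack([m.predict(X_new) for m in base_learners])
    return F_new @ lam_hat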

4.1 Bayesian linear aggregation

In this subsection, we apply the Bayesian LA methods to linear regression with Gaussian noise. Since every linear aggregation problem can be reformulated as a linear regression problem, this is a simple canonical setting for testing our approach. We consider two scenarios: 1. the sparse case, where the number of nonzero components in the regression coefficient $\beta$ is small relative to the dimension $p$ and the sample size $n$; 2. the non-sparse case, where $\beta$ can have many nonzero components but its norm remains constant as $p$ changes. We vary the model dimensionality by letting $p = 5$, $20$, $100$, and $500$.

We compare the Bayesian LA methods with the lasso, ridge regression, and the horseshoe. The lasso (Tibshirani, 1996) is widely used for linear models, especially when $\beta$ is believed to be sparse. In addition, due to the use of the $\ell_1$ penalty, the lasso is also minimax optimal when $\beta$ is $\ell_1$-summable (Raskutti et al., 2011). Ridge regression (Hoerl and Kennard, 1970) is a well-known shrinkage estimator for non-sparse settings. The horseshoe (Carvalho et al., 2010) is a Bayesian continuous shrinkage prior for sparse regression from the family of global-local mixtures of Gaussians (Polson and Scott, 2010). The horseshoe is well known for its robustness and excellent empirical performance in sparse regression, but there is a lack of theoretical justification. Training samples are used to fit the models and testing samples are used to calculate the prediction root mean squared error (RMSE) $\big( n_{\mathrm{test}}^{-1} \sum_i (\hat{Y}_i - Y_i)^2 \big)^{1/2}$, where $\hat{Y}_i$ denotes the prediction of $Y_i$.

The MCMC algorithm for the Bayesian LA method is run for 2,000 iterations, with the first 1,000 iterations as the burn-in period. The hyperparameters are fixed throughout, and the tuning parameters in the MH steps are chosen so that the acceptance rates are at a standard level. The lasso is implemented by the glmnet package in R, ridge by the lm.ridge function in R, and the horseshoe by the monomvn package in R. The number of iterations for the horseshoe is set to the default 1,000. The regularization parameters in the lasso and ridge are selected via cross-validation.

4.1.1 Sparse case

In the sparse case, we fix a small number of non-zero coefficients. The simulation data are generated from a sparse linear model (S) with covariates i.i.d. $N(0, 1)$. The training and testing sizes are fixed across the following simulations. As a result, (S) with $p = 5$ and $p = 20$ can be considered moderate dimensional, while $p = 100$ and $p = 500$ are relatively high dimensional.
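For completeness, a hypothetical data-generating sketch in the spirit of (S) is given below; the number of nonzero coefficients, their values, the noise level, and the sample sizes are placeholders, since the exact specification of (S) is not reproduced here.

import numpy as np

def make_sparse_linear_data(n, p, n_nonzero=5, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[:n_nonzero] = rng.uniform(0.5, 2.0, size=n_nonzero)   # placeholder signal strengths
    X = rng.standard_normal((n, p))                            # covariates i.i.d. N(0, 1)
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta

def rmse(y_pred, y_true):
    return np.sqrt(np.mean((y_pred - y_true) ** 2))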

            p = 5     p = 20    p = 100   p = 500
LA          .511      .513      .529      .576
            (0.016)   (0.016)   (0.020)   (0.023)
Lasso       .514      .536      .574      .613
            (0.017)   (0.020)   (0.039)   (0.042)
Ridge       .514      .565      1.23      2.23
            (0.017)   (0.019)   (0.139)   (0.146)
Horseshoe   .512      .519      .525      .590
            (0.016)   (0.014)   (0.019)   (0.022)
Table 1: RMSE for the sparse linear model (S). The numbers in parentheses indicate the standard deviations. All results are based on 100 replicates.

From Table 1, all the methods are comparable when there are no nuisance predictors ($p = 5$). However, as more nuisance predictors are included, the Bayesian LA method and the horseshoe have noticeably better performance than the other two methods. For example, for $p = 500$, the Bayesian LA method improves over the lasso and, far more substantially, over ridge. In addition, as expected, ridge deteriorates more dramatically than the other methods as $p$ grows. Bayesian LA also appears to be more computationally efficient than the horseshoe: it takes the horseshoe about 50 seconds to draw 1,000 iterations, but only about 1 second for LA to draw 2,000 iterations.


Figure 2: Traceplots for a non-zero regression coefficient and a zero coefficient.
Figure 3: 95% posterior credible intervals for the regression coefficients in sparse regression. The solid dots are the corresponding posterior medians.
Figure 4: 95% posterior credible intervals for the regression coefficients in non-sparse regression. The solid dots are the corresponding posterior medians.

Fig. 2 displays the traceplots after burn-in for a typical non-zero and a typical zero regression coefficient, respectively. The non-zero coefficient mixes well according to its traceplot. Although the traceplot of the zero coefficient exhibits some small fluctuations, their magnitudes are negligible compared to the non-zero ones. We observe that fluctuating traceplots like Fig. 2(b) only occur for those coefficients whose posterior magnitudes are extremely small. The posterior means of the coefficients in LA that correspond to unimportant predictors are typically of very small order, and their posterior medians are smaller still (see Fig. 3). This suggests that although these coefficients are not exactly zero, the estimated regression coefficients with zero true values remain negligible compared to the estimates of the nonzero coefficients. In addition, for LA the posterior median appears to be a better and more robust estimator for sparse regression than the posterior mean.
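The posterior summaries discussed above (means, medians, and 95% credible intervals) can be computed from the retained MCMC draws along the following lines; this is a generic sketch in which draws is assumed to be an array of shape (number of retained iterations, p).

import numpy as np

def posterior_summaries(draws, level=0.95):
    lower_q = (1.0 - level) / 2.0 * 100.0
    upper_q = (1.0 + level) / 2.0 * 100.0
    lo, hi = np.percentile(draws, [lower_q, upper_q], axis=0)
    return {
        "mean": draws.mean(axis=0),
        "median": np.median(draws, axis=0),   # typically more robust for near-zero coefficients
        "ci_lower": lo,
        "ci_upper": hi,
    }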

4.1.2 Non-sparse case

In the non-sparse case, we use two models, (NS1) and (NS2), as the truth, with covariates i.i.d. $N(0, 1)$. In (NS1), all the predictors affect the response and the impact of predictor $j$ decreases quadratically in $j$. Moreover, the coefficient vector satisfies $\ell_1$-summability since $\sum_j j^{-2} < \infty$. In (NS2), half of the predictors have the same small influence on the response, scaled so that the overall size of the coefficient vector remains constant as $p$ changes. The training size and testing size are fixed in the following simulations.
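Analogously, a hypothetical sketch of the two non-sparse designs is given below; the exact constants, noise level, and sample sizes in (NS1) and (NS2) are not reproduced, and the values here are placeholders consistent with the qualitative description above.

import numpy as np

def make_nonsparse_data(n, p, design="NS1", sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    if design == "NS1":
        beta = 1.0 / np.arange(1, p + 1) ** 2    # impact decays quadratically in the index
    else:                                        # "NS2": half the predictors share one small effect
        beta = np.zeros(p)
        beta[: p // 2] = 1.0 / (p // 2)          # scaled so the overall size stays constant in p
    X = rng.standard_normal((n, p))              # covariates i.i.d. N(0, 1)
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta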

               p = 5     p = 20    p = 100   p = 500
NS1  LA         .101      .112      .116      .129
                (0.002)   (0.003)   (0.005)   (0.007)
     Lasso      .105      .110      .116      .155
                (0.006)   (0.005)   (0.005)   (0.006)
     Ridge      .102      .107      .146      2.42
                (0.003)   (0.004)   (0.008)   (0.053)
     Horseshoe  .102      .111      .114      .136
                (0.003)   (0.003)   (0.004)   (0.005)
NS2  LA         .101      .104      .121      .326
                (0.002)   (0.003)   (0.005)   (0.008)
     Lasso      .111      .106      .131      .323
                (0.006)   (0.003)   (0.007)   (0.008)
     Ridge      .103      .107      .140      .274
                (0.003)   (0.003)   (0.008)   (0.010)
     Horseshoe  .102      .104      .124      .308
                (0.003)   (0.003)   (0.004)   (0.007)
Table 2: RMSE for the non-sparse linear models (NS1) and (NS2). All results are based on 100 replicates.

From Table 2, all the methods have comparable performance when $p$ is moderate (i.e. $p = 5$ or $20$) in both non-sparse settings. In the non-sparse settings, the horseshoe also exhibits excellent prediction performance, and in most cases LA, the lasso, and the horseshoe perform similarly. As $p$ increases to an order comparable to the sample size, LA and the horseshoe tend to be more robust than the lasso and ridge. As $p$ becomes much greater than the sample size, LA, the lasso, and the horseshoe remain good in (NS1) while breaking down in (NS2); ridge breaks down in (NS1) while becoming the best in (NS2). This might be because in (NS1), although all coefficients are nonzero, the first several predictors still dominate the impact on the response, whereas in (NS2) half of the coefficients are nonzero and equally small. Fig. 4 plots 95% posterior credible intervals for the coefficients of (NS2) in one of these settings. According to Section 1.1, the sparse/non-sparse boundary for the sparsity level is of order $\sqrt{n}$; since half of the coefficients are nonzero here, the results displayed in Fig. 4 fall into the non-sparse regime. A simple variable selection based on these credible intervals correctly identifies all nonzero components.

4.1.3 Robustness against the hyperparameters

Since changing the hyperparameter $\alpha$ in the Dirichlet prior is equivalent to changing the hyperparameter $\gamma$ (only the value $\alpha / M^{\gamma}$ matters), we perform a sensitivity analysis for $\gamma$ in the above regression settings.

Figure 5: Robustness of the Bayesian LA methods against the hyperparameter $\gamma$. The results are based on 100 replicates.

From Figure 5, the Bayesian LA method tends to be robust against changes in $\gamma$ over a wide range. As expected, the Bayesian LA method starts to deteriorate as $\gamma$ becomes too small. In particular, when $\gamma$ is zero, the Dirichlet prior no longer favors sparse weights and the RMSE becomes large (especially for the sparse model) in all three settings. However, the Bayesian LA method tends to be robust against increases in $\gamma$. As a result, we would recommend choosing a moderately large $\gamma$ in practice.

4.2 Bayesian convex aggregation

In this subsection, we conduct experiments for the Bayesian convex aggregation method.

4.2.1 Simulations

The following regression model, denoted (4.1), is used as the truth in our simulations, with covariates i.i.d. $N(0, 1)$. The training size and testing size are fixed in the following simulations.

In the first simulation, we choose six base learners: CART, random forest (RF), lasso, SVM, ridge regression (Ridge) and neural network (NN). The Bayesian aggregation (BA) is compared with the super learner (SL). SL is implemented by the SuperLearner package in R. The implementations of the base learners are described in Table 3. The MCMC algorithm for the Bayesian CA method is run for 2,000 iterations, with the first 1,000 iterations as the burn-in period. The hyperparameters are fixed throughout. The simulation results are summarized in Table 4, where square roots of mean squared errors (RMSE) of prediction based on 100 replicates are reported.

Base learner   CART    RF             Lasso
R package      rpart   randomForest   glmnet

Base learner   SVM     Ridge   NN   GAM
R package      e1071