
On Theory for BART

10/01/2018
by   Veronika Rockova, et al.

Ensemble learning is a statistical paradigm built on the premise that many weak learners can perform exceptionally well when deployed collectively. The BART method of Chipman et al. (2010) is a prominent example of Bayesian ensemble learning, where each learner is a tree. Due to its impressive performance, BART has received a lot of attention from practitioners. Despite its wide popularity, however, theoretical studies of BART have begun emerging only very recently. Laying the foundations for the theoretical analysis of Bayesian forests, Rockova and van der Pas (2017) showed optimal posterior concentration under conditionally uniform tree priors. These priors deviate from the actual priors implemented in BART. Here, we study the exact BART prior and propose a simple modification so that it also enjoys optimality properties. To this end, we dive into branching process theory. We obtain tail bounds for the distribution of total progeny under heterogeneous Galton-Watson (GW) processes exploiting their connection to random walks. We conclude with a result stating the optimal rate of posterior convergence for BART.


1 Bayesian Machine Learning

Bayesian Machine Learning and Bayesian Non-parametrics share the same objective: increasing the flexibility necessary to address very complex problems using a Bayesian approach with minimal subjective input. While the two fields can be, to some extent, regarded as synonymous, their emphasis is quite different. Bayesian non-parametrics has evolved into a largely theoretical field, studying frequentist properties of posterior objects in infinite-dimensional parameter spaces. Bayesian machine learning, on the other hand, has been primarily concerned with developing scalable tools for computing such posterior objects. In this work, we bridge these two fields by providing theoretical insights into one of the workhorses of Bayesian machine learning, the BART method.

Bayesian Additive Regression Trees (BART) are one of the more widely used Bayesian prediction tools and their popularity continues to grow. Compared to its competitors (e.g. Gaussian processes, random forests or neural networks), BART requires considerably less tuning, while maintaining robust and relatively scalable performance (BART R package of McCulloch (2017), bartMachine R package of Bleich et al. (2014), top-down particle filtering of Lakshminarayanan et al. (2013)). BART has been successfully deployed in many prediction tasks, often outperforming its competitors (see predictive comparisons on data sets in Chipman et al. (2010)). More recently, its flexibility and stellar prediction have been capitalized on in causal inference tasks for heterogeneous/average treatment effect estimation (Hill (2011), Hahn et al. (2017) and references therein). BART has also served as a springboard for various incarnations and extensions including: Monotone BART (Chipman et al. (2016)), Heteroscedastic BART (Pratola et al. (2017)), treed Gaussian processes (Gramacy and Lee (2008)) and dynamic trees (Taddy et al. (2011)), to list a few. Related non-parametric constructions based on recursive partitioning have proliferated in the Bayesian machine learning community for modeling relational data (the Mondrian process of Roy and Teh (2008), Mondrian forests of Lakshminarayanan et al. (2014)). In short, BART continues to have a decided impact on the field of Bayesian non-parametrics/machine learning.

Despite its widespread popularity, however, the theory has not caught up with its applications; the first theoretical results were obtained only very recently. As a precursor to these developments, Coram and Lalley (2006) obtained a consistency result for Bayesian histograms in binary regression with a single predictor. van der Pas and Rockova (2017) provided a posterior concentration result for Bayesian regression histograms in Gaussian non-parametric regression, also with one predictor. Rockova and van der Pas (2017) (further referred to as RP17) then extended their study to trees and forests in a high-dimensional setup where variable selection uncertainty is present. They obtained the first theoretical results for Bayesian CART, showing optimal posterior concentration (up to a log factor) around a Hölder-continuous regression function. Going further, they also showed optimal performance for Bayesian forests, both in additive and non-additive regression. Linero and Yang (2017) obtained similar results for Bayesian ensembles, but for fractional posteriors (raised to a power). The proof of RP17, on the other hand, relies on a careful construction of sieves and applies to regular posteriors. In addition, Linero and Yang (2017) do not study step functions (the essence of Bayesian CART and BART) but aggregated smooth kernels, which allows for higher smoothness levels. Building on RP17, Liu et al. (2018) obtained model selection consistency results (for variable and regularity selection) for Bayesian forests.

Albeit related, the tree priors studied in RP17 are not the actual priors deployed in BART. Here, we develop new tools for the analysis of the actual BART prior and obtain results parallel to those in RP17. To begin, we dive into branching process theory to characterize aspects of the distribution of the total progeny under heterogeneous Galton-Watson processes. Revisiting several useful facts about Galton-Watson processes, including their connection to random walks, we derive a new prior tail bound for the tree size under the BART prior. Under our proof strategy, the actual prior of Chipman et al. (2010) does not appear to penalize large trees aggressively enough. We suggest a very simple modification of the prior by altering the splitting probability. With this minor change, the prior is shown to induce the right amount of regularization and an optimal speed of posterior convergence.

The paper is structured as follows. Section 2 revisits trees and forests in the context of non-parametric regression and discusses the BART prior. Section 3 reviews the notion of posterior concentration. Section 4 discusses Galton-Watson processes and their connection to Bayesian CART. Section 5 is concerned with tail bounds on the total progeny. Sections 6 and 7 describe prior and posterior concentration properties of BART. Section 8 wraps up with a discussion.

2 The Appeal of Trees/Forests

The data setup under consideration consists of a set of low-dimensional outputs and a set of high-dimensional inputs. Our statistical framework is non-parametric regression, which characterizes the input-output relationship through

where the regression function is unknown. A regression tree can be used to reconstruct this function via a mapping which is essentially a step function

(1)

underpinned by a tree-shaped partition and a vector of step heights. The step heights represent quantitative guesses of the average outcome inside each cell. Each partition consists of rectangles obtained by recursively applying a splitting rule (an axis-parallel bisection of the predictor space). We focus on binary tree partitions, where each internal node (box) is split into two children (formal definition below).

Definition 2.1.

(A Binary Tree Partition) A binary tree partition consists of rectangular cells obtained with successive recursive binary splits of the form $\{x: x_j \le c\}$ versus $\{x: x_j > c\}$ for some direction $j$, where the splitting value $c$ is chosen from the observed values of the $j$-th predictor.

Partitioning is intended to increase within-node homogeneity of outcomes. In the traditional CART method (Breiman et al. (1984)), the tree is obtained by “greedy growing” (i.e. sequential optimization of some impurity criterion) until homogeneity cannot be substantially improved. The tree growing process is often followed by “optimal pruning” to increase generalizability. Prediction is then determined by the terminal nodes of the pruned tree and takes the form either of a class level in classification problems, or the average of the response variable in least squares regression problems (Breiman et al. (1984)).

In tree ensemble learning, each constituent is designed to be a weak learner, addressing a slightly different aspect of the prediction problem. These trees are intended to be shallow and are woven into a forest mapping

(2)

where each component is a step function of the form (1), aggregated over an ensemble of trees together with a collection of jump sizes, one vector per tree. Random forests obtain each tree learner from a bootstrapped version of the data. Here, we consider a Bayesian variant, the BART method of Chipman et al. (2010), which relies on the posterior distribution over tree ensembles to reconstruct the unknown regression function.
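To fix ideas, here is a minimal Python sketch (purely illustrative; the names and data structures are hypothetical, not the API of the BART or bartMachine packages) of a tree acting as a step function as in (1) and an ensemble prediction obtained as the sum of its trees as in (2):

```python
import numpy as np

def tree_predict(x, cells, heights):
    """Evaluate one step function (1): return the height of the cell containing x."""
    for (lower, upper), beta in zip(cells, heights):
        if np.all(x >= lower) and np.all(x < upper):
            return beta
    return 0.0  # x outside the partition (cannot happen for a full partition of the space)

def forest_predict(x, ensemble):
    """Sum-of-trees prediction (2): add up the step-function outputs of all trees."""
    return sum(tree_predict(x, cells, heights) for cells, heights in ensemble)

# Two toy trees on [0, 1], each defined by one split; the heights are the "jump sizes".
tree1 = ([(np.array([0.0]), np.array([0.5])), (np.array([0.5]), np.array([1.0]))], [-0.2, 0.3])
tree2 = ([(np.array([0.0]), np.array([0.25])), (np.array([0.25]), np.array([1.0]))], [0.1, -0.1])

print(forest_predict(np.array([0.6]), [tree1, tree2]))  # roughly 0.3 + (-0.1) = 0.2
```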

2.1 Bayesian Trees and Forests

Bayesian CART was introduced as a Bayesian alternative to CART, where regularization/stabilization is obtained with a prior rather than with pruning (Chipman et al. (1998), Denison et al. (1998)). The prior distribution is assigned over a class of step functions in a hierarchical manner.

The BART prior of Chipman et al. (2010) assumes that the number of trees is fixed. The authors recommend a default choice of 200 trees, which was seen to provide good results. Next, the tree components are a priori independent of each other in the sense that

(3)

where the first factor is the prior probability of a tree partition and the second factor is the prior distribution over the jump sizes.

2.1.1 Prior on Partitions

In BART and Bayesian CART of Chipman et al. (1998), the prior over trees is specified implicitly as a tree generating stochastic process, described as follows:

  1. Start with a single leaf (the root node).

  2. Split a terminal node, say $\eta$, with a probability

    $p_{\mathrm{split}}(\eta) = \alpha(1 + d(\eta))^{-\beta}$    (4)

    for some $\alpha \in (0,1)$ and $\beta \ge 0$, where $d(\eta)$ is the depth of the node $\eta$ in the tree architecture.

  3. If the node splits, assign a splitting rule and create left and right children nodes. The splitting rule consists of picking a split variable uniformly from the available directions and picking a split point uniformly from the available data values. Non-uniform priors can also be used to favor splitting values that are thought to be more important. For example, splitting values can be given more weight towards the center and less weight towards the edges. (A simulation sketch of this tree-generating process is given below.)
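The following Monte Carlo sketch makes the tree-generating process concrete (a toy illustration, not an excerpt from any BART implementation), using the depth-dependent split probability $\alpha(1+d)^{-\beta}$ of (4) with the commonly used defaults $\alpha=0.95$ and $\beta=2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_tree(alpha=0.95, beta=2.0, max_depth=30, rng=rng):
    """Simulate one tree-shaped partition from the depth-dependent branching prior.

    A node at depth d splits with probability alpha * (1 + d)**(-beta), the form in (4);
    alpha=0.95 and beta=2 are the defaults commonly used with BART.
    Returns the number of leaves (bottom nodes) of the generated tree.
    """
    leaves, frontier = 0, [0]          # frontier holds depths of unresolved nodes
    while frontier:
        d = frontier.pop()
        p_split = alpha * (1.0 + d) ** (-beta) if d < max_depth else 0.0
        if rng.random() < p_split:
            frontier.extend([d + 1, d + 1])   # the node splits into two children
        else:
            leaves += 1                        # the node becomes a leaf
    return leaves

sizes = [grow_tree() for _ in range(10_000)]
print(np.mean(sizes), np.max(sizes))   # trees drawn from this prior are typically small
```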

2.1.2 Prior on Step Heights

Given a tree partition, we consider iid Gaussian jumps, i.e. each step height is assigned an independent Gaussian prior with a common mean and variance. Chipman et al. (2010) recommend first shifting and rescaling the responses so that the observed transformed values range from -0.5 to 0.5. Then they assign a conjugate normal prior to each jump, with the prior variance determined by a suitable scaling constant. This is to ensure that the prior assigns substantial probability to the range of the transformed responses.
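The sketch below illustrates this calibration numerically, assuming the commonly cited default in which the jump standard deviation is taken to be 0.5/(k·sqrt(T)) for T trees and k around 2; treat the exact constants as our assumption rather than a quotation of the paper:

```python
import numpy as np

def rescale_response(y):
    """Shift and rescale y so the transformed values range from -0.5 to 0.5."""
    y = np.asarray(y, dtype=float)
    return (y - y.min()) / (y.max() - y.min()) - 0.5

def draw_jumps(n_leaves, n_trees, k=2.0, rng=np.random.default_rng(0)):
    """Draw iid Gaussian step heights for one tree.

    The standard deviation 0.5 / (k * sqrt(n_trees)) is the calibration commonly
    attributed to Chipman et al. (2010): with k ~ 2, the sum of n_trees such jumps
    stays, with high prior probability, inside the rescaled response range (-0.5, 0.5).
    """
    sigma_mu = 0.5 / (k * np.sqrt(n_trees))
    return rng.normal(0.0, sigma_mu, size=n_leaves)

y_scaled = rescale_response([3.1, 5.7, 4.2, 8.0])
print(y_scaled, draw_jumps(n_leaves=4, n_trees=200))
```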

The BART prior also involves an inverse chi-squared distribution on the residual variance, with hyper-parameters chosen so that a prescribed quantile of the prior is located at some sample-based variance estimate. While the case of random variance can be incorporated in our analysis (de Jonge and van Zanten (2013)), we will for simplicity assume that the residual variance is fixed.

Existing theoretical work for Bayesian forests (RP17) is available for a different prior on tree partitions. Their analysis assumes a hierarchical prior consisting of (a) a prior on the size of a tree and (b) a uniform prior over all trees of that size. This prior is egalitarian in the sense that trees with the same number of leaves are a priori equally likely regardless of their topology. RP17 also imposed a diversification restriction in their prior, focusing on valid ensembles (their Definition 5.3) which consist of trees that do not overlap too much. The prior on the number of leaves is a very important ingredient for regularization. We will study aspects of its distribution under the actual BART prior in later sections.

3 Bayesian Non-parametrics Lens

One way of assessing the quality of a Bayesian procedure is by studying the learning rate of its posterior, i.e. the speed at which the posterior distribution shrinks around the truth as the sample size grows. These statements are ultimately framed in a frequentist way, describing the typical behavior of the posterior under the true generative model. Posterior concentration rate results have been valuable for the proposal and calibration of priors. In infinite-dimensional parameter spaces, such as the one considered here, seemingly innocuous priors can lead to inconsistencies (Cox (1993), Diaconis and Freedman (1986)) and far more care has to be exercised to come up with well-behaved priors.

The Bayesian approach requires placing a prior measure on a set of candidate regression functions, i.e. qualitative guesses of the truth. Given the observed data, inference about the regression function is then carried out via the posterior distribution, defined with respect to a σ-field on this set and built from the likelihood of the outputs.

In Bayesian non-parametrics, one of the usual goals is determining how fast the posterior probability measure concentrates around the true regression function as the sample size grows. This speed can be assessed by inspecting the size of the smallest neighborhoods around the truth that contain most of the posterior probability (Ghosal and van Der Vaart (2007)). For a given diameter, we consider the neighborhood of all regression functions within that distance of the truth. We say that the posterior distribution concentrates at a given rate, i.e. a sequence of diameters shrinking to zero, when

(5)

for any arbitrarily slowly diverging sequence of multiplying constants. Posterior consistency statements are a bit weaker: the shrinking neighborhood in (5) is replaced with a fixed one. We will position our results relative to the near-minimax rate for estimating a p-dimensional α-smooth function (see the display below). We will also assume that the true regression function is Hölder continuous, i.e. α-Hölder smooth with α ≤ 1. The limitation α ≤ 1 is an unavoidable consequence of using step functions to approximate smooth functions and can be avoided with smooth kernel methods (Linero and Yang (2017)).
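For reference, the near-minimax rate in question takes the standard form used in this literature (the exact power of the logarithmic factor may differ slightly from the paper's display, which did not survive extraction):

```latex
\varepsilon_n \;=\; n^{-\alpha/(2\alpha+p)}\,\sqrt{\log n}\,, \qquad 0<\alpha\le 1,
```

i.e. the minimax rate $n^{-\alpha/(2\alpha+p)}$ for estimating an $\alpha$-Hölder function of $p$ variables, inflated by a logarithmic factor.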

The statement (5) can be proved by verifying the following three conditions (suitably adapted from Theorem 4 of Ghosal and van Der Vaart (2007)):

(6)
(7)
(8)

for suitable constants. In (6), the covering number of a set under a semimetric is the minimal number of balls of a given radius needed to cover the set. A few remarks are in order. Condition (8) ensures that the prior zooms in on smaller, and thus more manageable, sets of models by assigning only a small probability outside these sets. Condition (6) is known as “the entropy condition” and controls the combinatorial richness of the approximating sets. Finally, condition (7) requires that the prior charges a neighborhood of the true function. Results of type (5) quantify not only the typical distance between a point estimator (posterior mean/median) and the truth, but also the typical spread of the posterior around the truth. These results are typically the first step towards further uncertainty quantification statements.
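Since the displays for (6)-(8) were lost in extraction, we restate them in their standard Ghosal and van Der Vaart (2007) form, with generic constants that may differ from the paper's (here $\mathcal{F}_n$ denotes the sieve, $\Pi$ the prior, $f_0$ the truth, $\|\cdot\|_n$ the empirical norm and $\varepsilon_n$ the target rate):

```latex
\log N\!\big(c_1\,\varepsilon_n,\ \mathcal{F}_n,\ \|\cdot\|_n\big)\ \lesssim\ n\,\varepsilon_n^{2},
\qquad
\Pi\big(f:\ \|f-f_0\|_n\le \varepsilon_n\big)\ \ge\ e^{-d\,n\varepsilon_n^{2}},
\qquad
\Pi\big(\mathcal{F}\setminus\mathcal{F}_n\big)\ \le\ e^{-(d+2)\,n\varepsilon_n^{2}}.
```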

4 The Galton-Watson Process Prior

The Galton-Watson (GW) process provides a mathematical representation of an evolving population of individuals who reproduce and die subject to laws of chance. Binary tree partitions under the prior (4) can be thought of as realizations of such a branching process. Below, we review some terminology of branching processes and link them to Bayesian CART.

We denote with $Z_t$ the population size at time $t$ (i.e. the number of nodes in the $t$-th layer of the tree). The process starts at time $t=0$ with a single individual, i.e. $Z_0=1$. At each time $t$, every member of the population splits, independently of the others, into a random number of offspring, whose distribution is described by a probability generating function. A binary tree is obtained when each node has either zero or two offspring, as characterized by

(9)

A homogeneous GW process is obtained when all offspring counts are iid. A heterogeneous GW process is a generalization where the offspring distribution is allowed to vary across generations, i.e. the offspring counts are independent but not identically distributed. The Bayesian CART prior of Chipman et al. (1998) can be framed as a heterogeneous GW process, where the probability of splitting a node (generating offspring) depends on the depth of the node in the tree. In particular, using (4) one obtains the depth-dependent offspring distribution in

(10)
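The displays (9) and (10) were stripped in extraction; written out in the standard branching-process notation suggested by the surrounding text (with $p_d$ the probability that a node at depth $d$ splits, cf. (4)), they take the form

```latex
f_d(s) \;=\; (1-p_d) \;+\; p_d\, s^{2},
\qquad\qquad
p_d \;=\; \alpha\,(1+d)^{-\beta},
```

so that a node at depth $d$ has two offspring with probability $p_d$ and none otherwise, the mean number of offspring is $2p_d$, and the expected generation sizes are $\mathbb{E}\,Z_t=\prod_{d=0}^{t-1} 2p_d$ under this heterogeneous branching mechanism.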

The expectation of the population size $Z_t$ can be written as the product of the mean offspring numbers of the first $t$ generations. Since, under (10), the mean number of offspring per node eventually drops below one, the process is subcritical and thereby dies out with probability one. This means that the random sequence $(Z_t)$ consists of zeros for all but a finite number of generations. The overall number of nodes in the tree (all ancestors in the family pedigree)

(11)

is thus finite with probability one. The number of leaves (bottom nodes) can be related to the total progeny through

(12)

and satisfies

(13)

where the time of extinction appears. In (13), we have used the fact that the extinction time is the depth of the tree, where the lower bound is obtained with asymmetric trees in which only one node is split at each level and the upper bound is obtained with symmetric full binary trees (all nodes are split at each level).

Regularization is an essential remedy against overfitting and Bayesian procedures have a natural way of achieving it through a prior. In the context of trees, the key regularization element is the prior on the number of bottom leaves, which is completely characterized by the distribution of the total progeny via (12). Using this connection, in the next section we study the tail bounds of this distribution implied by the Galton-Watson process.
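Since every internal node of a binary tree partition has exactly two children, the relation behind (12) between the number of leaves $K$ and the total progeny $Y$ is the standard identity

```latex
Y \;=\; \underbrace{(K-1)}_{\text{internal nodes}} \;+\; \underbrace{K}_{\text{leaves}} \;=\; 2K-1,
\qquad\text{equivalently}\qquad
K \;=\; \frac{Y+1}{2}.
```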

5 Bayesian Tree Regularization

If the smoothness of the truth were known, the optimal (rate-minimax) choice of the number of tree leaves would be known as well (RP17). When the smoothness is unknown, one can do almost as well (sacrificing only a log factor in the convergence rate) using a suitable prior on the number of leaves. As noted by Coram and Lalley (2006), the tail behavior of this prior is critical for controlling the vulnerability/resilience to overfitting. The anticipation is that, with a smooth truth, more rapid posterior concentration takes place when the prior has a heavier tail. However, too heavy a tail makes it easier to overfit when the true function is less smooth. To achieve an equilibrium, Denison et al. (1998) suggest the Poisson distribution (constrained to positive values), which satisfies

(14)

Under this prior, one can show that, in probability, the posterior on the number of leaves does not exceed a constant multiple of the oracle choice (RP17). The posterior thus does not overshoot the oracle too much.
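To see why the (truncated) Poisson prior has the desired tail behavior, a standard Stirling-type bound (a routine calculation, not a quotation from the paper) gives, for $K\sim\mathrm{Poisson}(\lambda)$ and $k$ large,

```latex
\pi(k)\;=\;e^{-\lambda}\,\frac{\lambda^{k}}{k!}
\;\le\;\Bigl(\frac{e\lambda}{k}\Bigr)^{k}
\;=\;\exp\bigl(-k(\log k-\log\lambda-1)\bigr)
\;\le\;e^{-\frac12\,k\log k}
\qquad\text{for }k\ge (e\lambda)^{2},
```

using $k!\ge (k/e)^k$; this is a decay of the order $e^{-cK\log K}$, the kind of tail typically required of the prior on the number of leaves in this literature.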

In the BART prior, the distribution of the number of leaves is implicitly defined through the GW process rather than directly through (14). In order to see whether BART induces a sufficient amount of regularization, we first need to obtain a tail bound on the tree size under the GW process and show that it decays fast enough. One seemingly simple remedy would be to keep the split probability constant across depths (which coincides with the homogeneous GW case) and let it decay with the sample size. Standard branching process theory then yields a tail that is more aggressive than (14). Moreover, letting the split probability decay with the sample size is counterintuitive. Keeping the split probability a fixed constant, on the other hand, yields a tail that is not aggressive enough.

While homogeneous GW processes have been studied quite extensively, the literature on tail bounds for heterogeneous GW processes (where the offspring distribution varies across generations) has been relatively sparse. We first review one interesting approach in the next section and then come up with a new bound in Section 5.2.

5.1 Tail Bounds à la Agresti

Agresti (1975) obtained bounds for the extinction time distribution of branching processes in varying environments, i.e. with independent, non-identically distributed offspring distributions.

Theorem 5.1.

(Agresti, 1975) Consider a heterogeneous Galton-Watson branching process whose offspring p.g.f.'s satisfy the regularity assumptions of Agresti (1975). Then the extinction time satisfies

(15)

Using this result, we can obtain a tail bound on the extinction time under the Bayesian CART prior.

Corollary 5.1.

For the heterogeneous Galton-Watson branching process with offspring p.g.f.'s (9) and split probabilities (10), we have

(16)

for a positive constant that depends on $\alpha$ and $\beta$.

Proof.

We have and for

Thus we have and for . Then we can write

(17)

and

Using (17) and the fact that , we can upper-bound the right hand side of (15) with . ∎

Remark 5.1.

A simpler bound on the extinction time can be obtained using Markov’s inequality as follows:

Using the upper bound in (13) we immediately conclude that

This decay, however, is not fast enough, as we would ideally like to show (14). We try a slightly different approach in the next section.

5.2 Trees as Random Walks

There is a curious connection between branching processes and random walks (see e.g. Dwass (1969)). Suppose that a binary tree is revealed in the following node-by-node exploration process: one exhausts all nodes in a given generation before revealing nodes in the next one. Namely, nodes are implicitly numbered (and explored) according to their priority, in a top-down manner across layers and a left-to-right manner within each layer (i.e. the root node comes first and, if it splits, its left and right children come next, etc.).

Nodes waiting to be explored can be organized in a queue. We say that a node is active at time $t$ if it resides in the queue. Starting with one active node at time zero (the root node), at each step we deactivate (remove from the queue) the node with the highest priority (lowest index) and add its children to the queue. The number of active nodes then evolves like a random walk: at each step it decreases by one (the explored node) and increases by that node's number of offspring, sampled from the offspring distribution. For the homogeneous GW process, this is an actual random walk with iid increments governed by the probability generating function (9). For the heterogeneous GW process, it is not strictly a random walk in the sense that the increments are not iid. Nevertheless, using this construction one can see that the total population size equals the first time the queue becomes empty (see the simulation sketch below).
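The following small Monte Carlo sketch of the queue-based exploration (hypothetical helper names, and the same illustrative split probability $\alpha(1+d)^{-\beta}$ as before) returns the total progeny, i.e. the step at which the queue of active nodes first empties:

```python
import numpy as np

rng = np.random.default_rng(1)

def total_progeny(alpha=0.95, beta=2.0, max_nodes=100_000, rng=rng):
    """Explore a Galton-Watson tree node by node via a queue and return its size.

    Each explored node at depth d produces two children with probability
    alpha * (1 + d)**(-beta) and none otherwise (illustrative choice).
    """
    queue = [0]          # depths of active (not yet explored) nodes; root at depth 0
    explored = 0
    while queue and explored < max_nodes:
        d = queue.pop(0)             # deactivate the highest-priority node
        explored += 1
        if rng.random() < alpha * (1.0 + d) ** (-beta):
            queue += [d + 1, d + 1]  # add its two children to the queue
    return explored                  # total progeny (number of nodes), unless truncated

draws = [total_progeny() for _ in range(5_000)]
print(np.mean(draws), np.quantile(draws, 0.99))
```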

Linking Galton-Watson trees to random walk excursions in this way, one can obtain a useful tail bound for the distribution of the total population size. While perhaps not surprising, we believe that this bound is new, as we could not find any equivalent in the literature.

Lemma 5.1.

Denote by the total population size (11) arising from the heterogeneous Galton-Watson process. Then we have for any

(18)

where and , where nodes are ordered in a top-down left-to-right fashion.

Proof.

For , we can write

where is the number of all nodes (internal and external) in the tree and has a two-point distribution characterized by . Using the Chernoff bound, one deduces that for any

where . ∎

The goal throughout this section has been to understand whether the Bayesian CART prior of Chipman et al. (1998) yields (14). The prior assumes the split probability (4). Choosing the free parameter in (18) suitably, the right-hand side will be smaller than the desired bound provided the split probabilities decay fast enough with depth.

Because the split probability (4) decreases only polynomially in the depth of a node, this is not enough to ensure the desired decay. The optimal decay, however, will be guaranteed if we instead choose

(19)

To conclude, from our considerations it is not clear that the Bayesian CART prior of Chipman et al. (1998) has the optimal tail-bound decay. The following Corollary certifies that the optimal tail behavior can be obtained with a suitable modification of the split probability.

Corollary 5.2.

Under the Bayesian CART prior of Chipman et al. (1998) with (19), we obtain (14).

Proof.

Follows from the considerations above and from (12). ∎
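As a numerical illustration of the difference between the two regimes, one can compare the empirical tail of the tree size under a polynomially decaying split probability and an exponentially decaying one; the exponential form below is one natural way to obtain the faster decay discussed above and is a sketch, not necessarily the exact specification in (19):

```python
import numpy as np

rng = np.random.default_rng(2)

def n_leaves(split_prob, max_depth=40, rng=rng):
    """Number of leaves of a tree grown with a depth-dependent split probability."""
    leaves, frontier = 0, [0]
    while frontier:
        d = frontier.pop()
        if d < max_depth and rng.random() < split_prob(d):
            frontier += [d + 1, d + 1]
        else:
            leaves += 1
    return leaves

poly = lambda d: 0.95 * (1.0 + d) ** (-2.0)   # polynomial decay, the default form of (4)
expo = lambda d: 0.95 * 0.5 ** d              # illustrative exponentially decaying alternative

for name, p in [("polynomial", poly), ("exponential", expo)]:
    draws = np.array([n_leaves(p) for _ in range(20_000)])
    print(name, "P(K >= 10) ~", np.mean(draws >= 10))
```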

Figure 1: k-d tree partitions in two dimensions at various resolution levels.

6 Prior Concentration for BART

One of the prerequisites for optimal posterior concentration (5) is optimal prior concentration (Condition (7)). This condition ensures that there is enough prior support around the truth. It can be verified by constructing one approximating tree and showing that it has enough prior mass. RP17 use the k-d approximating tree (their Remark 3.1), which is a balanced full binary tree that partitions the predictor space into nearly identical rectangles (in sufficiently regular designs). This tree can be regarded as the most regular partition that can be obtained by splitting at observed values. A formal definition of the k-d tree is below and a few two-dimensional examples (at various resolution levels) are in Figure 1 (source: https://salzis.wordpress.com/2014/06/28/kd-tree-and-nearest-neighbor-nn-search-2d-case/).

Definition 6.1.

(k-d tree partition) The k-d tree partition is constructed by cycling over the coordinate directions, where all nodes at the same level are split along the same axis. For a given direction, each internal node is split at a median of the point set inside its cell (along that axis). Each split thus roughly halves the number of points inside the cell.

After a given number of rounds of splits over each variable, all terminal nodes contain at least a prescribed number of observations. The k-d tree partitions are thus balanced in light of Definition 2.4 of Rockova and van der Pas (2017) (i.e. cells contain roughly the same number of observations). The k-d tree construction is instrumental in establishing optimal prior/posterior concentration. Lemma 3.2 of RP17 shows that there exists a step function supported on a k-d partition that approximates the truth with an error smaller than a constant multiple of the minimax rate. The number of steps of this approximating k-d tree partition is of the order needed to attain the near-minimax rate (as shown in Section 8.3 of RP17 and detailed in the proof of Theorem 7.1).
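The k-d construction is easy to mimic in code; the following sketch (illustrative names, not RP17's notation) builds such a partition by recursive median splits, cycling over the coordinates:

```python
import numpy as np

def kd_partition(points, depth=0, max_depth=4):
    """Recursively split a point set at the median of one coordinate,
    cycling over coordinate directions with depth (a k-d-tree-style partition).

    Returns a list of leaf point sets; each split roughly halves the cell counts.
    """
    n, p = points.shape
    if depth == max_depth or n <= 1:
        return [points]
    axis = depth % p                              # cycle over coordinate directions
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:         # degenerate split; stop here
        return [points]
    return (kd_partition(left, depth + 1, max_depth)
            + kd_partition(right, depth + 1, max_depth))

rng = np.random.default_rng(3)
cells = kd_partition(rng.uniform(size=(200, 2)), max_depth=4)
print(len(cells), [len(c) for c in cells])        # ~16 cells with ~12-13 points each
```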

In order to complete the proof of posterior concentration for Bayesian CART under the Galton-Watson process prior, we need to show that the prior assigns sufficient mass to this approximating partition, of the order required by condition (7). This is verified in the next lemma.

Lemma 6.1.

Consider the k-d tree partition described above and assume the heterogeneous Galton-Watson process tree prior with suitably chosen split probabilities. Then we have, for some suitable constant,

Proof.

By construction, the k-d tree has its leaves arranged over layers whose number is determined by the resolution and the number of predictors. In addition, the k-d tree is complete and balanced (i.e. every layer, including the last one, has the maximal number of nodes). Since the number of internal nodes is one fewer than the number of leaves and each internal node chooses its splitting rule among the available candidates, we have

Since and we can lower-bound the above with for some . ∎

For the actual BART method (similarly as in Theorem 5.1 of RP17), one needs to find an approximating tree ensemble and show that it has enough prior support. The approximating ensemble can be found in Lemma 10.1 of RP17 and consists of tree partitions obtained by chopping off branches of the approximating k-d tree. The number of trees is fixed and, under a mild condition, the trees will not overlap much; the default BART choice safely satisfies this condition. The little trees have few leaves each (depending on the choice of the number of trees). Using Lemma 6.1, the fact that the trees are independent a priori (from (3)), and the fact that the number of trees is fixed, we then obtain

for some constant. The BART prior thus concentrates enough mass around the truth. Condition (7) also requires verifying that the prior on jump sizes concentrates around the jumps of the approximating forest. This follows directly from Section 9.2 of RP17. We detail the steps in the proof of Theorem 7.1.

7 Posterior Concentration for BART

We now have all the ingredients needed to state the posterior concentration result for BART. The result is different from Theorem 5.1 of RP17 because here we (a) assume that the number of trees is fixed, (b) assume the branching process prior on tree partitions, and (c) do not have subset selection uncertainty. We will treat the design as fixed and regular according to Definition 3.3 of RP17. Moreover, the BART prior support will be restricted to valid ensembles in the sense of RP17.

Theorem 7.1.

(Posterior Concentration for BART) Assume that the true regression function is α-Hölder continuous with smoothness α ≤ 1. Assume a regular design. Assume the BART prior with a fixed number of trees and with split probabilities as in (19). Then, with the near-minimax rate of Section 3, we have

in probability under the true model, as the sample size grows, for any arbitrarily slowly diverging sequence of multiplying constants.

Proof.

Section 9. ∎

Theorem 7.1 has very important implications. It provides a frequentist theoretical justification for BART, showing that the posterior is wrapped around the truth and that its learning rate is near-optimal. As a by-product, one also obtains a statement which supports the empirical observation that BART is resilient to overfitting.

Corollary 7.1.

Under the assumptions of Theorem 7.1 we have

in probability under the true model, as the sample size grows, for a suitable constant.

Proof.

The proof follows from the proof of Theorem 7.1 and Lemma 1 of Ghosal and van Der Vaart (2007). ∎

In other words, the posterior distribution rewards ensembles that consist of small trees whose size does not overshoot the optimal number of steps by much. In this way, the posterior is fully adaptive to the unknown smoothness and does not overfit in the sense of split overuse.

8 Discussion

In this work, we have built on the results of Rockova and van der Pas (2017) to show a near-optimal posterior convergence rate for the BART method. We have proposed a minor modification of the prior that guarantees this optimal performance. Similar results have been obtained for other Bayesian non-parametric constructions such as Pólya trees (Castillo (2017)), Gaussian processes (van der Vaart and van Zanten (2008), Castillo (2008)) and deep ReLU neural networks (Polson and Rockova, 2018). Up to now, the increasing popularity of BART has relied on its practical performance across a wide variety of problems. The goal of this and future theoretical developments is to establish BART as a rigorous statistical tool with solid theoretical guarantees. Similar guarantees have been obtained for variants of traditional forests/trees by multiple authors including Gordon and Olshen (1980, 1984); Donoho (1997); Biau et al. (2008); Scornet et al. (2015); Wager and Guenther (2015). Our posterior concentration results pave the way towards establishing other theoretical properties such as Bernstein-von Mises theorems (semi- and non-parametric) and/or uncertainty quantification statements.

9 Proof of Theorem 7.1

The proof follows from Lemma 6.1, Lemma 5.1 and a modification of the proof of Theorem 5.1 of RP17. Below, we outline the backbone of the proof and highlight those places where the proof of RP17 had to be modified. Our approach consists of establishing conditions (6), (7) and (8). The first step requires constructing the sieve. For a suitably large integer cutoff on the tree sizes (chosen later), we define the sieve as follows:

(20)

where the sieve consists of all functions of the form (2) that are supported on a valid ensemble whose trees each have at most the prescribed number of leaves. The sieve (20) is different from the one in the proof of Theorem 5.1 of RP17: their sieve consisted of all ensembles whose total number of leaves was below the cutoff, whereas here we allow each tree individually to have up to that many leaves.

Regarding Condition (6), RP17 in Section 9.1 obtain an upper bound on the covering number of the sieve as well as on its cardinality, which together yield (for some constant)

(21)

With a cutoff proportional to the target tree size (for a large enough constant), a fixed number of trees and a mild restriction on the dimension, Condition (6) will be met.

Next, we wish to show that the prior assigns enough mass around the truth in the sense that

(22)

for some large enough constant. We establish this condition by finding a lower bound on the prior probability in (22), using only step functions supported on a single ensemble. According to Lemma 10.1 of RP17, there exists a valid tree ensemble that approximates the truth well in the sense that

for some , where is the Hölder norm and where for some . Next, we find the smallest such that . This value will be denoted by and it satisfies

(23)

Under the assumption we have . Denote by the approximating ensemble described in Section 6. Next, we denote with the vector of tree sizes, where . Then we can lower-bound the left-hand side of (22) with

(24)

where consists of all additive tree functions supported on . In Section 6 we show that . Moreover, RP17 in Section 10.2 show that, for some ,

where and where are the steps of the approximating additive trees from Lemma 10.1 of RP17. This can be further lower-bounded with

(25)

Under the assumption , this term is larger than for some . Since , there exists such that .

Lastly, Condition (8) entails showing that for deployed in the previous paragraph. It suffices to show that

Under the independent Galton-Watson prior on each tree partition, Corollary 5.2 implies that the probability above can be upper-bounded appropriately. With the chosen cutoff and a fixed number of trees, the required bound holds for large enough sample sizes.

References

  • Agresti (1975) Agresti, A. (1975). On the extinction times of varying and random environment branching processes. Journal of Applied Probability 12(1), 39–46.
  • Biau et al. (2008) Biau, G., L. Devroye, and G. Lugosi (2008). Consistency of random forests and other averaging classifiers. The Journal of Machine Learning Research 9, 2015–2033.
  • Bleich et al. (2014) Bleich, J., A. Kapelner, E. George, and S. Jensen (2014). Variable selection for BART: an application to gene regulation. The Annals of Applied Statistics 4(3), 1750–1781.
  • Breiman et al. (1984) Breiman, L., J. Friedman, C. Stone, and R. A. Olshen (1984). Classification and Regression Trees (Wadsworth Statistics/Probability). Chapman and Hall/CRC.
  • Castillo (2008) Castillo, I. (2008). Lower bounds for posterior rates with Gaussian process priors. Electronic Journal of Statistics 2, 1281–1299.
  • Castillo (2017) Castillo, I. (2017). Pólya tree posterior distributions on densities. In Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, Volume 53, pp. 2074–2102. Institut Henri Poincaré.
  • Chipman et al. (1998) Chipman, H., E. George, and R. McCulloch (1998). Bayesian CART model search. Journal of the American Statistical Association 93(443), 935–948.
  • Chipman et al. (2010) Chipman, H., E. George, and R. McCulloch (2010). BART: Bayesian additive regression trees. The Annals of Applied Statistics 4(1), 266–298.
  • Chipman et al. (2016) Chipman, H., E. George, R. McCulloch, and T. Shively (2016). High-dimensional nonparametric monotone function estimation using BART. arXiv preprint arXiv:1612.01619.
  • Coram and Lalley (2006) Coram, M. and S. Lalley (2006). Consistency of Bayes estimators of a binary regression function. The Annals of Statistics 34(3), 1233–1269.
  • Cox (1993) Cox, D. (1993). An analysis of Bayesian inference for nonparametric regression. The Annals of Statistics, 903–923.
  • de Jonge and van Zanten (2013) de Jonge, R. and J. van Zanten (2013). Semiparametric Bernstein–von Mises for the error standard deviation. Electronic Journal of Statistics 7(1), 217–243.
  • Denison et al. (1998) Denison, D., B. Mallick, and A. Smith (1998). A Bayesian CART algorithm. Biometrika 85(2), 363–377.
  • Diaconis and Freedman (1986) Diaconis, P. and D. Freedman (1986). On the consistency of Bayes estimates. The Annals of Statistics 14(1), 1–26.
  • Donoho (1997) Donoho, D. (1997). CART and best-ortho-basis: a connection. Annals of Statistics 25, 1870–1911.
  • Dwass (1969) Dwass, M. (1969). The total progeny in a branching process and a related random walk. Journal of Applied Probability 6(3), 682–686.
  • Ghosal and van Der Vaart (2007) Ghosal, S. and A. van Der Vaart (2007). Convergence rates of posterior distributions for noniid observations. The Annals of Statistics 35(1), 192–223.
  • Gordon and Olshen (1980) Gordon, L. and R. Olshen (1980). Consistent nonparametric regression from recursive partitioning schemes. Journal of Multivariate Analysis 10, 611–627.
  • Gordon and Olshen (1984) Gordon, L. and R. Olshen (1984). Almost sure consistent nonparametric regression from recursive partitioning schemes. Journal of Multivariate Analysis 15, 147–163.
  • Gramacy and Lee (2008) Gramacy, R. and H. Lee (2008). Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association 103(483), 1119–1130.
  • Hahn et al. (2017) Hahn, P., J. Murray, and C. Carvalho (2017). Bayesian regression tree models for causal inference: regularization, confounding, and heterogeneous effects.
  • Hill (2011) Hill, J. (2011). Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics 20(1), 217–240.
  • Lakshminarayanan et al. (2013) Lakshminarayanan, B., D. Roy, and Y. Teh (2013). Top-down particle filtering for Bayesian decision trees. In International Conference on Machine Learning.
  • Lakshminarayanan et al. (2014) Lakshminarayanan, B., D. Roy, and Y. Teh (2014). Mondrian forests: Efficient online random forests. In Advances in Neural Information Processing Systems (NIPS).
  • Linero and Yang (2017) Linero, A. and Y. Yang (2017). Bayesian regression tree ensembles that adapt to smoothness and sparsity. arXiv preprint arXiv:1707.09461.
  • Liu et al. (2018) Liu, Y., V. Rockova, and Y. Wang (2018). ABC variable selection with Bayesian forests. arXiv preprint arXiv:1806.02304.
  • Polson and Rockova (2018) Polson, N. and V. Rockova (2018). Posterior concentration for sparse deep learning. Advances in Neural Information Processing Systems (NIPS).
  • Pratola et al. (2017) Pratola, M., H. Chipman, E. George, and R. McCulloch (2017). Heteroscedastic BART using multiplicative regression trees. arXiv preprint arXiv:1709.07542.
  • Rockova and van der Pas (2017) Rockova, V. and S. van der Pas (2017). Posterior concentration for Bayesian regression trees and their ensembles. arXiv preprint arXiv:1708.08734.
  • Roy and Teh (2008) Roy, D. and Y. Teh (2008). The Mondrian process. In Advances in Neural Information Processing Systems (NIPS).
  • Scornet et al. (2015) Scornet, E., G. Biau, and J. Vert (2015). Consistency of random forests. Annals of Statistics 43, 1716–1741.
  • Taddy et al. (2011) Taddy, M., R. B. Gramacy, and N. Polson (2011). Dynamic trees for learning and design. Journal of the American Statistical Association 106(493), 109–123.
  • van der Pas and Rockova (2017) van der Pas, S. and V. Rockova (2017). Bayesian dyadic trees and histograms for regression. Advances in Neural Information Processing Systems (NIPS).
  • van der Vaart and van Zanten (2008) van der Vaart, A. and J. van Zanten (2008). Rates of contraction of posterior distributions based on Gaussian process priors. The Annals of Statistics 36(3), 1435–1463.
  • Wager and Guenther (2015) Wager, S. and W. Guenther (2015). Adaptive concentration of regression trees with application to random forests. Manuscript.