Best-scored Random Forest Density Estimation

05/09/2019 · by Hanyuan Hang, et al.

This paper presents a new nonparametric density estimation strategy, the best-scored random forest density estimation, whose effectiveness is supported by both solid theoretical analysis and significant experimental performance. The terminology best-scored refers to selecting the density tree with the best estimation performance out of a certain number of purely random density tree candidates; we call the selected one the best-scored random density tree. In this manner, the ensemble of these selected trees, the best-scored random density forest, can achieve even better estimation results than simply integrating trees without selection. From the theoretical perspective, by decomposing the error term into two parts, we carry out the following analysis: first, we establish the consistency of the best-scored random density trees under the L_1-norm; second, we provide their convergence rates under the L_1-norm with respect to three different tail assumptions; third, we present the convergence rates under the L_∞-norm; last but not least, we extend the above convergence analysis to the best-scored random density forest. Comparative experiments with other state-of-the-art density estimation approaches on both synthetic and real data sets show that our algorithm has not only significant advantages in estimation accuracy over other methods, but also stronger resistance to the curse of dimensionality.






1 Introduction

Owing to the rapid development of computation and the consequent emergence of various types of data, effective tools for data analysis are in great demand. Among those tools, density estimation, which aims at estimating the underlying density of an unknown distribution from observations drawn independently from that distribution, has been of paramount importance in many fields of science and technology Fraley and Raftery (2002). This broad attention is a direct result of the fact that density estimation is not learned for its own sake, but rather facilitates solving higher level tasks, such as assessing the multimodality, skewness, or any other structure in the distribution of the data Scott (2015); Silverman (1986), summarizing Bayesian posteriors, classification and discriminant analysis Simonoff (1996), and Monte Carlo computational methods such as the bootstrap and particle filtering Doucet et al. (2001). Other applications, especially in the computer vision community, include image detection Ma et al. (2015); Liu et al. (2016); Wang et al. (2018), gesture recognition Chang (2016), image reconstruction Ihsani and Farncombe (2016), deformable 3D shape matching Vestner et al. (2017), image defogging Jiang et al. (2017), and hyperspectral unmixing Zhou et al. (2018), just to name a few.

Decades have witnessed vast literature on finding different appropriate methods to solve density estimation problems and nonparametric density estimations have become the focus of attention since weaker assumptions are applied to the underlying probability distribution

Hwang et al. (1994); Härdle et al. (2012). Histogram density estimation, being a simple and convenient estimation method, has been extensively studied as the most basic form of density estimation Freedman and Diaconis (1981); Lugosi and Nobel (1996). Although consistency Glick (1973); Gordon and Olshen (1978, 1980) and a strong universal consistency Devroye and Györfi (1983)

of the histograms are established, they are suboptimal for not being smooth. Moreover, the non-smoothness of histogram and therefore insufficient accuracy brings great obstacles to practical applications. Taking the smoothness into consideration, the machine learning society turns to another popular strategy called the kernel density estimation (KDE), which is also termed as

Parzen-Rosenblatt estimation Parzen (1962); Rosenblatt (1956). This method gains its prevalence when dealing with cases where the density is assumed to be smooth and the optimal convergence rates can be achieved with kernel and bandwidth chosen appropriately Hang et al. (2018)

. However, these optimal rates depend on the order of smoothness of the density function on the entire input space while the actual cases may be that the smoothness of density function varies from areas to areas. In other words, KDE lacks local adaptivity, and this often leads to a large sensitivity to outliers, the presence of spurious bumps, and a tendency to flatten the peaks and valleys of the density

Terrell and Scott (1992); Botev et al. (2010). Nevertheless, this method undergoes a high computational complexity since the computation time grows linearly with the number of samples increasing. Other density estimation strategies published so far include estimators based on wavelet Doukhan and León (1990); Kerkyacharian and Picard (1992), mixtures of models Roeder and Wasserman (1997); Ghosal (2001); Escobar and West (1995), just to name a few. It is worth noting that the above mentioned methods can barely escape from the curse of dimensionality for their unsatisfying performance for moderate to large dimension. To the best of our knowledge, it is a challenge for an algorithm to have the theoretical availability for both local and global analysis, the experimental advantages of achieving efficient and accurate prediction results on real data, and stronger resistance to the curse of dimensionality compared to the existing common algorithms.

Committed to conquering this challenge, we propose a random-forest-based density estimation method named the best-scored random forest density estimation. By taking full advantage of the purely random splitting criterion and the ensemble nature of a forest consisting of purely random trees, we construct an algorithm that not only achieves fast convergence rates, but also enjoys a desirable asymptotic smoothness beneficial for prediction accuracy. Moreover, since the local and global analyses of random forests are in essence the same, so are those of our algorithm. The algorithm starts by partitioning the feature space into non-overlapping cells following the purely random splitting criterion, where at each step the to-be-split cell and its corresponding cut point are chosen uniformly at random. The inherent randomness of the partitions then allows us to build different random density trees and pick out the one with the best empirical performance as a best-scored tree in the forest. We name this selection mechanism the best-scored method. Last but not least, by integrating trees generated by the above procedure, we obtain a density forest with satisfying asymptotic smoothness.

The contributions of this paper are both theoretical and experimental: (i) From a theoretical perspective, our best-scored random forest density estimation achieves fast convergence rates under mild conditions. Different from the commonly utilized L_2-distance for measuring the difference between a nonparametric density estimator and the underlying density function, we regard the L_1-distance as a more reasonable choice because of its invariance under monotone transformations, its being well-defined as a metric on the space of density functions, and its better visualization of closeness to the ground-truth density function. Besides, we also carry out analysis under the L_∞-distance for its ability to measure the worst-case goodness-of-fit of the estimator. Based on these two types of distance, we establish consistency and fast convergence rates for the best-scored random density trees and forest. In the analysis, the error term is decomposed into a data-free and a data-dependent error term, which are handled with techniques from approximation theory and empirical process theory, respectively. Our theoretical advantages are essentially twofold: first, the underlying density function is only assumed to be Hölder continuous, a weak and natural assumption for nonparametric density estimation in the literature; second, fast convergence rates are established under certain common tail assumptions on the distribution, rigorously calculated step by step according to our purely random splitting mechanism. (ii) Experimental improvements of the algorithm architecture are made for better numerical performance, and their effectiveness is verified by both synthetic and real data analysis. First of all, we adopt an adaptive random partition method instead of the original purely random splitting criterion, making the cell selection process data-driven.
To be concrete, at each step we pick a certain number of sample points from the entire training set uniformly at random and choose as the to-be-split cell the one into which most of these samples fall. In this manner, sample-dense areas are more likely to be split, whereas sample-sparse areas tend to be split less, which not only increases the effective number of splits, but also helps obtain cells with evenly distributed sample sizes. Secondly, the partitions studied theoretically are axis-parallel and may be inaccurate when the underlying concept is a polygonal space partitioning, since they must approximate the correct model with a staircase-like structure. Therefore, we propose a best-scored random forest density estimation induced by oblique purely random partitions, which does improve prediction accuracy. Thirdly, in real data analysis our algorithm is predominant in accuracy, for it has many more tunable free parameters; it trains faster than other classical machine learning methods when the sample volume is large, can be further sped up since it inherits the parallelism of random forests, and is significantly more resistant to the curse of dimensionality than any other method in comparison. As a result, the noteworthy advantages in experimental accuracy and training time further demonstrate the effectiveness and efficiency of our algorithm.

This paper is organized as follows. In Section 2, we lay out some required fundamental notations and definitions concerning the best-scored random density forest. Main results on the consistency and convergence rates of the estimators under the L_1-norm and the L_∞-norm are provided in Section 3, together with some related comments and discussions. Section 4 is devoted to the main analysis on bounding the error terms. Numerical experiments comparing different density estimation methods on both synthetic and real data sets are provided in Section 5. For the sake of clarity, all the proofs of Sections 3 and 4 are placed in Section 6. We close this paper in Section 7 with several concluding remarks.

2 Methodology

We dedicate this section to the methodology of our best-scored random forest density estimation. To this end, we begin by introducing some notations that will be used throughout. Then, we give an explicit description of the purely random partitions on which our density trees, and thus the forest, are based. The architectures of our best-scored random density trees and forest are presented in Sections 2.3 and 2.4, respectively.

2.1 Notations

Let be a subset, be the Lebesgue measure with , and be a probability measure with support which is absolutely continuous with respect to with density . We denote as the centered hypercube of with side length , that is

and write for the complement of . Throughout this paper, we use the notation to denote that there exists a positive constant such that , for all .

2.2 Purely Random Partitions

In this subsection, we introduce the purely random partition which is the foundation of our best-scored random density trees and, in turn, the forest. This partition follows the idea put forward by Breiman (2000)

of the construction of purely random forest. To give a better understanding of one possible general building procedure of the random partition, a random vector

is set up to describe the splitting mechanism at the th step of the partition. For definiteness, let in the triplet denote the to-be-split cell at the th step, chosen uniformly at random from the candidates, which are defined to be all the cells present at the

th step. In this way, the cell choosing procedure follows a recursive manner. The second random variable

in the triplet denotes the dimension chosen to be split for cell ; the are independent and identically multinomially distributed, with each dimension having equal probability of being chosen. The random variable serves as a proportional factor representing the ratio between the length of the newly generated cell in the th dimension after the th split and the length of the being-cut cell in the th dimension. That is to say, the length of the newly generated cell in the th dimension is the product of the length of in the th dimension and the proportional factor . Note that the are independent and identically distributed, drawn from .

The above statements mathematically formulate the splitting process of the purely random tree. However, a simple example may provide a clearer understanding of the whole procedure. To be specific, we assume that the partition is carried out on , . First of all, we randomly select one dimension out of the candidates and split uniformly at random along that dimension. The split is a

-dimensional hyperplane parallel to the axis so that

is split into two cells, and , respectively. Then, a cell is chosen uniformly at random, say , and we conduct a random split on it with dimension and cut point chosen randomly, which leads to a partition of consisting of . Next, we randomly pick one cell from the three cells formed in the last step, say , and split it as before, which leads to a partition consisting of . The building process continues in this manner until the number of splits meets our requirement. Moreover, the above procedure leads to a so-called partition variable taking values in the space . We denote by the probability measure of .

It is worth pointing out that any specific partition variable can be treated as a splitting criterion. The collection of non-overlapping cells formed by following for splits on is denoted by , further abbreviated as , and we define . We also note that if we focus on a certain sample point , then the corresponding cell into which that point falls is denoted by .
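As an illustration, the recursive splitting procedure described above can be sketched in a few lines of Python; the representation of a cell as a list of per-dimension intervals is our own choice for the sketch, not the paper's notation.

```python
import random

def purely_random_partition(d, p, seed=None):
    """Sketch of a p-split purely random partition of [0, 1]^d.

    A cell is a list of d (low, high) intervals. At every step the
    to-be-split cell, the split dimension, and the cut proportion are
    all drawn uniformly at random, independently of any data.
    """
    rng = random.Random(seed)
    cells = [[(0.0, 1.0)] * d]            # start from the unit cube
    for _ in range(p):
        i = rng.randrange(len(cells))     # to-be-split cell, uniform
        j = rng.randrange(d)              # split dimension, uniform
        s = rng.random()                  # proportional factor in (0, 1)
        cell = cells.pop(i)
        lo, hi = cell[j]
        cut = lo + s * (hi - lo)
        left, right = list(cell), list(cell)
        left[j], right[j] = (lo, cut), (cut, hi)
        cells.extend([left, right])
    return cells
```

Each split removes one cell and adds two, so a p-split partition always consists of p + 1 cells whose volumes sum to that of the unit cube.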

2.3 Best-scored Random Density Trees

In this subsection, we formulate the best-scored random density tree (BRDT) based on the above mentioned random partitions . We first introduce how to build a density tree based on a purely random partition, and then incorporate the best-scored method into the construction of trees, which leads to our best-scored random density trees.

2.3.1 Purely Random Density Tree

In order to characterize the purely random density tree estimators, we propose the following definition formalizing the general form of random partition.

[Random Partition] For a fixed , let be a random splitting criterion of . The collection of non-overlapping sets derived by partitioning following for splits is called a -split random partition, and each element of is called a cell of the random partition.

Now, we introduce the random density tree with respect to a certain probability measure. There is no loss of generality in assuming that for all , the Lebesgue measure , since the density estimate at is set to be if . From now on, this assumption will be maintained without further mention.

[Random Density Tree of a Measure] Let be a probability measure on . For a fixed , let be a -split random partition of . Then, the function defined by


is called a random density tree of .
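The display defining the tree did not survive extraction; a plausible reconstruction, standard for histogram-type estimators (the notation A_Z(x) for the cell of the partition containing x is our assumption), is

```latex
f_{Q,Z}(x) \;:=\; \sum_{A \in \mathcal{A}_Z} \frac{Q(A)}{\mu(A)}\,\mathbf{1}_A(x)
\;=\; \frac{Q(A_Z(x))}{\mu(A_Z(x))},
```

so that on each cell the tree takes the constant value Q(A)/μ(A), consistent with the remark below that the estimator equals the average density on each cell.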

In the following, we write instead of for brevity. Here, we demonstrate that defines the density of a probability measure on , since is measurable and


Moreover, for , , we have

which also holds for .

Recalling that is a probability measure on with corresponding density function , by taking with , we have for ,


In other words, in is the average density on . Furthermore, for , there exists exactly one number such that . In the following, we write . Then, for with ,


Specifically, when is the empirical measure , then is the expectation of with respect to , which is

For , the random density tree in this study can be expressed as


where can also be written as . The summation on the right-hand side of (5) counts the number of observations falling in . From now on, for notational simplicity, we will suppress the subscript of and denote , e.g., . The map from the training data to is called a random density tree rule with random partition .
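The empirical version above, which on each cell divides the fraction of observations falling in that cell by the cell's volume, can be sketched as follows (the axis-parallel cell representation and function names are ours):

```python
import math

def cell_contains(cell, x):
    """Membership test for an axis-parallel cell given as a list of
    (low, high) intervals, one per dimension."""
    return all(lo <= xi < hi for (lo, hi), xi in zip(cell, x))

def density_tree(cells, data):
    """Sketch of the empirical random density tree: on each cell, the
    estimate is the fraction of observations in the cell divided by
    the cell's Lebesgue volume."""
    n = len(data)
    counts = [sum(cell_contains(c, x) for x in data) for c in cells]
    vols = [math.prod(hi - lo for lo, hi in c) for c in cells]

    def estimate(x):
        for c, count, vol in zip(cells, counts, vols):
            if cell_contains(c, x):
                return count / (n * vol) if vol > 0 else 0.0
        return 0.0  # outside the partitioned region
    return estimate
```

For instance, with two cells splitting [0, 1) at 0.5 and three one-dimensional observations, two of which fall in the left cell, the estimate on the left cell is 2 / (3 · 0.5) = 4/3.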

2.3.2 The Best-scored Method

We should attach great importance to the fact that the prediction performance of density trees induced by purely random partitions might not be satisfying, since the partitions make no use of the sample information; therefore, the prediction results of their ensemble forest may not be accurate enough. Committed to improving prediction accuracy, we provide a selection process for the partition of each tree. Concretely speaking, the partition chosen for tree construction is the one with the best density prediction performance, in terms of the Average Negative Log-Likelihood (ANLL) (introduced later in Section 5.4), among partition candidates. This process is named the best-scored method and the resulting trees are called the best-scored random density trees.
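The selection step can be sketched directly: score each candidate tree by its ANLL on held-out points and keep the minimizer. The log-floor `eps` is our own guard for empty cells, not part of the paper's definition.

```python
import math

def anll(estimate, validation):
    """Average negative log-likelihood of a density estimate on
    held-out points; a small floor guards against log(0)."""
    eps = 1e-12
    return -sum(math.log(max(estimate(x), eps)) for x in validation) / len(validation)

def best_scored_tree(candidates, validation):
    """Best-scored method: among candidate density tree estimators,
    return the one with the smallest ANLL on validation data."""
    return min(candidates, key=lambda est: anll(est, validation))
```

A lower ANLL means the candidate assigns higher likelihood to the held-out data, so the minimizer is the best-scored tree.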

2.4 Best-scored Random Density Forest

In this subsection, we formulate the best-scored random density forest. Ensembles combining different estimators are widely recognized as an effective technique for significant performance improvements over a single estimator, which inspires us to apply them to our best-scored density trees. In our case, we first train several best-scored density trees based on different random partitions separately; once this is accomplished, the outputs of the individual estimators are combined to give the ensemble output for new data points. Here, we use the simplest possible combination mechanism, a uniformly weighted average.

Let , be the best-scored random density tree estimators generated by the splitting criteria respectively, which is defined by

where is a random partition of . Therefore, with , the best-scored random density forest can be presented as


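Since the display giving the forest estimator was lost in extraction, here is a minimal sketch of the uniform averaging it describes (the function name is ours):

```python
def density_forest(tree_estimators):
    """Best-scored random density forest: the uniformly weighted
    average of the individual best-scored tree estimates."""
    m = len(tree_estimators)

    def estimate(x):
        return sum(tree(x) for tree in tree_estimators) / m

    return estimate
```

Averaging piecewise-constant trees built on different random partitions is what yields the asymptotic smoothness discussed above.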
3 Main Results and Statements

In this section, we present main results on the consistency and convergence rates of our density estimators. To be precise, consistency and convergence rates of the best-scored random density trees under the L_1-norm are given in Sections 3.1 and 3.2, respectively. Convergence rates of the best-scored random density trees under the L_∞-norm are presented in Section 3.3. Based on the results for those base density estimators, the convergence rates of the best-scored random density forest under the L_1-norm and the L_∞-norm are established in Section 3.4. Finally, comments and discussions concerning the established main results are given in Section 3.5.

3.1 Results on Consistency

We establish results on the consistency property of the best-scored random density tree estimator in the sense of the L_1-norm. To clarify, an estimator is said to be consistent in the sense of the L_1-norm if converges to under the L_1-norm -almost surely.

For , let be a random partition with number of splits . If

then the best-scored random density tree estimator is consistent in the sense of the L_1-norm.

3.2 Results on Convergence Rates under L_1-Norm

In this subsection, we establish the convergence rates of the best-scored random density tree estimators under the L_1-norm with three different tail assumptions imposed on . In particular, the analysis is conducted in situations where the tail of the probability distribution has polynomial decay, has exponential decay, or vanishes, respectively.

For , let be a random partition of . Moreover, assume that the density is -Hölder continuous. We consider the following cases:

  • for some and for all ;

  • for some , and for all ;

  • for some .

For the above cases, if , and the sequences are of the following forms:

  • ;

  • ;

  • ;

then with probability at least , there holds

where the convergence rates

  • ;

  • ;

  • .

3.3 Results on Convergence Rates under L_∞-Norm

We now state our main results on the convergence rates of to under the L_∞-norm.

For , let be a random partition of . Moreover, assume that there exists a constant such that and the density function is -Hölder continuous with . Then for all , by choosing

with probability at least , there holds


where and .

3.4 Convergence Rates for Best-scored Random Density Forest

Based on the results for the base density estimators, we obtain the convergence rates of the best-scored random density forest estimators under the L_1- and L_∞-norms, respectively. Here, we still consider the three different tail assumptions of Theorem 3.2.

For , let , be random partitions of generated by the splitting policies respectively. Moreover, assume that the density is -Hölder continuous. We consider the following cases:

  • for some and for all ;

  • for some , and for all ;

  • for some .

For the above cases, if , and the sequences are of the following forms:

  • ;

  • ;

  • ;

where the number of splits is the same for each partition in . Then, with probability at least , there holds

where the convergence rates

  • ;

  • ;

  • .

In the following theorem, we obtain the convergence rates of the best-scored random density forest estimators with respect to the L_∞-norm.

For , let , be random partitions of generated by the splitting policies respectively. Moreover, assume that there exists a constant such that and the density function is -Hölder continuous with . Then for all , by choosing the same number of splits

for each partition in , with probability at least , there holds

where and .

3.5 Comments and Discussions

In this section, we present some comments on the obtained theoretical results concerning the consistency and convergence rates of the best-scored random density tree estimators and the best-scored random density forest estimators.

Trying to alleviate the disadvantages of traditional histogram density estimators, such as their heavy dependence on fixed bin widths and their inevitable discontinuity, we propose to establish the best-scored random density forest estimators based on random partitions and to integrate several base estimators into a smoothed density estimator. Since all the estimators presented in this paper are nonparametric density estimators, the criterion measuring their goodness-of-fit matters. Commonly used measures include the L_1-, L_2-, and L_∞-distances. In Devroye and Györfi (1985), the authors provide an especially lucid statement of the mathematical attractions of the L_1-distance: it is always well-defined as a metric on the space of density functions; it is invariant under monotone transformations; and it is proportional to the total variation metric. As for the L_∞-distance, if we regard the L_1-distance as a measure of overall performance, then the L_∞-distance measures the goodness-of-fit at each point of the feature space, and is thus stronger. We highlight that in our analysis, the convergence rates of the base estimators and the ensemble estimators have all been considered under both the L_1-norm and the L_∞-norm.

On the other hand, since these best-scored random density tree estimators are all based on random partitions, we need to combine the probability distribution of with the probability distribution of the partition space, which leads to the use of in the analysis of consistency and convergence rates. Owing to the randomness residing in the partitions, the effective number of splits is smaller than that of deterministic partitions. As a result, in order to obtain the consistency of a best-scored random density tree estimator, the number of splits should be larger so that the resulting cell sizes can be smaller. Moreover, we establish the convergence rates in the sense of the L_∞-norm of the best-scored random density forest estimators, namely , where and . It is noteworthy that the assumptions needed to establish the convergence rate under the L_∞-norm are not much stronger than those under the L_1-norm: we only need to add two mild assumptions, namely that the density is compactly supported and bounded.

As mentioned in the introduction, there is a flurry of studies in the literature addressing the density estimation problem, including other theoretical studies of histogram density estimation. For example, Lugosi and Nobel (1996) conduct histogram density estimation based on data-dependent partitions, where strong consistency in the sense of the L_1-norm is obtained under general sufficient conditions. Klemelä (2009) presents multivariate histograms based on data-dependent partitions obtained by minimizing a complexity-penalized error criterion. The convergence rates obtained in his study are of the type with respect to the -norm, under the assumption that the function belongs to an anisotropic Besov class. Moreover, his partition process can be regarded as a particular case of our proposal. For kernel density estimation, Jiang (2017) shows that under the assumption of Hölder continuity, the convergence rates obtained are of the type .

4 Error Analysis

4.1 Bounding the Approximation Error Term

For , let be a random partition of .

  • For any , there exists such that for any , we have

    with probability at least .

  • If is -Hölder continuous, then for all where is the constant of the -Hölder continuity, there holds

    with probability at least .

For , let be a random partition of . Then, for , we have

4.2 Bounding the Estimation Error Term

4.2.1 A Fundamental Lemma

The following lemma shows that both the L_1- and the L_∞-distance between and can be estimated by the quantities .

Let be a random partition of . Then the following equalities hold:

  • .

  • , where we denote .

4.2.2 Bounding the Capacity of the Function Set

[Covering Numbers] Let be a metric space, and . We call an -net of if for all there exists an such that . Moreover, the -covering number of is defined as

where denotes the closed ball in centered at with radius .

Let be fixed. Let be a partition of with number of splits and denote the collection of all partitions . Further, we define


Let be a class of subsets of and be a finite set. The trace of on is defined by . Its cardinality is denoted by . We say that shatters if , that is, if for every , there exists a such that . For , let

Then, the set is a Vapnik-Chervonenkis (VC) class if there exists such that , and the minimal such is called the VC index of , abbreviated as .

The VC index of can be upper bounded by .

Let be a class of subsets of , denote as the collection of the indicator functions of all , that is, . Moreover, as usual, for any probability measure , is denoted as the space with respect to equipped with the norm .

Let be defined as in (8). Then, for all , there exists a universal constant such that


holds for any probability measure .

4.2.3 Oracle Inequalities under L_1-Norm and L_∞-Norm

Let be a random partition of . Then, for all , , and , with probability at least , there holds

Let be a random partition of . Assume that there exists a constant such that and the density function satisfies . Then for all and , with probability at least , there holds


5 Numerical Experiments

In this section, we present the computational experiments that we have carried out. Aiming at more efficient partitions, we improve the purely random splitting criterion into a new one, named the adaptive random splitting criterion, in Section 5.1. Since the partitions discussed so far are all axis-parallel, and their induced density estimators may not be accurate enough in some cases, we extend them to oblique random partitions in Section 5.2. Based on the adaptive random partitions, we construct our best-scored random density forest in Section 5.3. We then compare our approach with the other proposals described in Section 5.4, on synthetic data in Section 5.5 and on real data in Section 5.6.

5.1 Improvement with Adaptive Method

It is worth pointing out the crucial fact that purely random trees may face the dilemma that the effective number of splits is relatively small. The reason for this phenomenon is that the purely random splitting criterion makes no use of the sample information. Therefore, we propose an adaptive splitting method that efficiently takes sample information into consideration. Since we have so far only discussed axis-parallel partitions, the corresponding new criterion is called the adaptive axis-parallel random splitting criterion.

Recall that for the axis-parallel purely random partition, in the random vector denotes the randomly chosen cell to be split at the th step of the tree construction. However, since this choice of does not make any use of the sample information, it may suffer from over-splitting in sample-sparse areas and under-splitting in sample-dense areas. Hence, we propose that when choosing a to-be-split cell, we first randomly select samples from the training data set and find the cell into which most of these samples fall. We then pick this cell as . This idea reflects the fact that when randomly picking sample points from the whole training data set, cells with more samples are more likely to be selected, while cells with fewer samples are less likely to be chosen. Note that, as a hyperparameter, can be tuned according to specific conditions.

We mention that the "adaptive" here can be understood from the perspective of the partition results: splits follow the sample distribution, so sample-dense areas are split more, while sample-sparse areas are split less. In this way, we develop an adaptive axis-parallel random partition.
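The adaptive cell choice just described can be sketched as follows; the sampling with replacement and the axis-parallel cell representation are our assumptions for the sketch.

```python
import random
from collections import Counter

def adaptive_cell_choice(cells, data, m, rng=None):
    """Sketch of the adaptive splitting step: draw m training points
    uniformly at random (with replacement) and return the index of the
    cell containing the most of them, so that sample-dense regions are
    split more often. m is the tunable hyperparameter from the text."""
    rng = rng or random.Random()
    votes = Counter()
    for _ in range(m):
        x = rng.choice(data)
        for i, cell in enumerate(cells):
            if all(lo <= xi < hi for (lo, hi), xi in zip(cell, x)):
                votes[i] += 1
                break
    return votes.most_common(1)[0][0]
```

The chosen index then plays the role of the to-be-split cell in the partition step, with the split itself still drawn at random.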

5.2 An Adaptive Oblique Partition

So far we have considered cases where each split in the partition process is conducted along only one dimension of the feature space, i.e., all splits are axis-parallel. However, in order to achieve better experimental results, we now extend our adaptive axis-parallel random splitting criterion of Section 5.1 to a more advanced criterion where splits can be oblique and their locations are data-driven. This new criterion is called the adaptive oblique random splitting criterion. Here, "adaptive" means that each step of the construction procedure is adaptive to the sample distribution. The competitiveness of this new splitting criterion is verified by the experimental analysis on real data.

Here, we illustrate a possible construction approach for one tree following the adaptive oblique random splitting criterion. Again, some randomizing variables are needed for a clear description. The oblique partition process at the th step can be represented by . When conducting the adaptive partition method, we first randomly select samples from the training data set; each of these sample points falls into one of the cells formed at the th step. We can therefore find the cell into which most of these samples fall and choose it as the to-be-split cell . Secondly, the coordinates of the samples falling into cell among the samples are recorded, so that in experiments the barycenter of cell can be replaced by the centroid of these samples. Thirdly, since we follow an oblique splitting rule, the split performed on is a part of the chosen hyperplane , . For experimental convenience, we set the normal vectors of the hyperplanes to be independent and identically distributed from and . This finishes the construction of the th step, and by following this procedure recursively, we are able to establish a random tree with oblique partitions.
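A single oblique split of this kind can be sketched as below; since the exact distribution of the normal vector is lost in the extracted text, the standard Gaussian draw here is our assumption for illustration.

```python
import math
import random

def oblique_split(points, rng=None):
    """Sketch of one adaptive oblique split: the separating hyperplane
    passes through the centroid of the sampled points in the chosen
    cell, with a random normal vector; points go to one of the two
    sides according to the sign of w . (x - c)."""
    rng = rng or random.Random()
    d = len(points[0])
    c = [sum(p[k] for p in points) / len(points) for k in range(d)]  # centroid
    w = [rng.gauss(0.0, 1.0) for _ in range(d)]                      # random normal vector
    norm = math.sqrt(sum(wk * wk for wk in w)) or 1.0
    w = [wk / norm for wk in w]                                      # unit normal
    side = lambda x: sum(wk * (xk - ck) for wk, xk, ck in zip(w, x, c)) >= 0.0
    left = [p for p in points if not side(p)]
    right = [p for p in points if side(p)]
    return w, c, left, right
```

Passing the hyperplane through the centroid keeps each split anchored to where the samples actually lie, which is what makes the oblique criterion adaptive.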

It can be clearly observed from the construction of one tree estimator (1) in the best-scored random density forest that, after obliquely partitioning the feature space into non-overlapping cells which are irregular polyhedrons, we need the volume of each cell. In general, computing the volume of an irregular polyhedron requires decomposing it into a number of solvable polyhedrons. However, this approach is not a wise choice because of its high computational complexity, especially when the dimension of the space in which the polyhedron is embedded is high. For the volume computation of a single polyhedron, for example, we would need the specific coordinates of each of its vertices in order to select an appropriate decomposition. Moreover, this approach yields an exact value of the polyhedral volume, which is not a must, since our density estimator is itself an approximation of the ground-truth density. In fact, good approximations of the polyhedron volumes suffice for our algorithm in experiments. Therefore, we employ another well-known volume estimation method, the Monte Carlo method, which relies on repeated random sampling to obtain numerical results. The specific procedure is as follows: First, we find the smallest axis-aligned hyperrectangle in the feature space that contains all the training data; its side length in each dimension is the difference between the maximum and minimum coordinates of the training samples in that dimension, so its volume is easily obtained. Secondly, we generate a fixed number of points by default according to the uniform distribution on this hyperrectangle and record the ratio of the number of points falling into each cell to the total number of points, called the frequency. Lastly, the volume of each cell is calculated as the volume of the hyperrectangle multiplied by the corresponding frequency. Since the Monte Carlo method is easy to operate and gives a good estimate of the volume of any irregular polyhedron in any dimension, we employ it for our algorithm under oblique partitions.
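The three steps of this Monte Carlo volume estimation can be sketched as follows. The names are assumptions (`cell_of` maps a point to its cell index), and the default point count is a placeholder rather than the paper's setting:

```python
import numpy as np

def monte_carlo_cell_volumes(X, cell_of, n_cells, n_points=10_000, rng=None):
    """Estimate cell volumes by Monte Carlo sampling.

    `X` is the training data, `cell_of` maps a point to its cell index,
    and `n_cells` is the number of cells in the partition.  The bounding
    box is the smallest axis-aligned box containing the data.
    """
    rng = rng or np.random.default_rng()
    lo, hi = X.min(axis=0), X.max(axis=0)
    box_volume = np.prod(hi - lo)          # volume of the bounding box

    # Sample uniformly in the box and count hits per cell (frequency).
    U = rng.uniform(lo, hi, size=(n_points, X.shape[1]))
    counts = np.zeros(n_cells)
    for u in U:
        counts[cell_of(u)] += 1

    # Cell volume = box volume * fraction of points landing in the cell.
    return box_volume * counts / n_points
```

Since the frequencies sum to one, the estimated cell volumes always sum exactly to the volume of the bounding box.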

5.3 Best-scored Density Estimators

Having demonstrated how to perform the adaptive splitting criterion for both axis-parallel and oblique partitions, we now discuss the details of the construction of one best-scored random tree. First, we generate a number of adaptive random partition candidates, and our main purpose is to select the one with the best density estimation performance by 10-fold cross-validation. Take the first round of the cross-validation as an example: for each partition candidate, the training set of that round is used to assign a weight to each cell of the partition according to (1), and the corresponding validation error of the partition is then computed on the validation set. After traversing all ten rounds, we obtain the average validation error for each partition candidate, and the one with the smallest average validation error is chosen as the partition for this tree. Based on the selected partition, we assign weights to the cells in accordance with (1) using the whole training set. By repeating the above establishment procedure, we obtain a best-scored random forest for density estimation containing the desired number of trees.
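The selection step can be sketched as follows, assuming the cell weights of (1) take the usual histogram form (cell count divided by sample size times cell volume) and using average negative log-likelihood as the validation error; the names and details are illustrative, not the authors' code:

```python
import numpy as np

def select_best_partition(X, partitions, n_folds=10, eps=np.spacing(1)):
    """Pick the partition with the smallest average validation error.

    Each partition is a pair (cell_of, volumes): a function mapping a
    point to a cell index and the array of cell volumes.  The density
    on a cell is (# training points in cell) / (n * volume); validation
    error is the average negative log-likelihood on the held-out fold.
    """
    folds = np.array_split(np.arange(len(X)), n_folds)
    scores = []
    for cell_of, volumes in partitions:
        errs = []
        for f in folds:
            mask = np.ones(len(X), dtype=bool)
            mask[f] = False
            train, val = X[mask], X[f]
            # Histogram-style weights on the training part of the fold.
            counts = np.zeros(len(volumes))
            for x in train:
                counts[cell_of(x)] += 1
            dens = counts / (len(train) * volumes)
            # Negative log-likelihood on the validation part.
            vals = np.array([dens[cell_of(x)] for x in val])
            errs.append(-np.mean(np.log(np.maximum(vals, eps))))
        scores.append(np.mean(errs))
    return int(np.argmin(scores))
```

A partition that matches the support of the data assigns higher density to the occupied cells and thus achieves a lower validation score than a coarser one.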

5.4 Experimental Setup

In our experiments, comparisons are conducted among our axis-parallel best-scored random density forest (BRDF-AP), our oblique best-scored random density forest (BRDF-OB), and the following effective density estimation methods:

  • KDE: the kernel density estimator Parzen (1962); Rosenblatt (1956), where we take the Gaussian kernel.

  • DHT: the discrete histogram transform López-Rubio (2014) is a nonparametric approach based on the integration of several multivariate histograms which are computed over affine transformations of the training data.

  • rNADE: the real-valued neural autoregressive distribution estimation models Uria et al. (2013, 2016), which are neural network architectures applied to the problem of unsupervised distribution and density estimation.

It is worth pointing out that for KDE we utilize the Python package SciPy with default settings; for DHT, López-Rubio (2014) provides code in Matlab; and for rNADE, Uria et al. (2016) provides code in Python. All the following experiments are performed on a computer equipped with a 2.7 GHz Intel Core i7 processor and 16 GB RAM.

In order to provide a quantitative comparison, we adopt the following mean absolute error (MAE):

$$\mathrm{MAE}(\hat{f}) := \frac{1}{m} \sum_{i=1}^{m} \bigl| \hat{f}(x_i) - f(x_i) \bigr|,$$

where $x_1, \ldots, x_m$ are the test samples, $f$ is the true density and $\hat{f}$ its estimate. The MAE is mainly used in cases where the true density function is known. Though it cannot be used on real data whose true density is unknown, it is especially suitable for synthetic data experiments.

Another effective measure of estimation accuracy, especially for real data, is the average negative log-likelihood (ANLL) over the test samples:

$$\mathrm{ANLL}(\hat{f}) := -\frac{1}{m} \sum_{i=1}^{m} \log \hat{f}(x_i),$$

where $\hat{f}(x_i)$ is the estimated probability density at the test sample $x_i$; the lower the ANLL, the better the estimation. We employ cross-validated ANLL as our test error. One important point is that whenever the estimator assigns zero density to some sample, the ANLL becomes infinite, which is undesirable. Therefore, we substitute every density estimate $\hat{f}(x_i)$ with $\max(\hat{f}(x_i), \varepsilon)$, where $\varepsilon$ is the smallest positive number recognizable by the computer, obtained by the function numpy.spacing(1) in Python or the constant eps in Matlab. Consequently, one bad sample point (with zero estimated probability) will not seriously harm an otherwise good ANLL.
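Both measures can be sketched in a few lines; the eps clipping mirrors the substitution just described:

```python
import numpy as np

def mae(est, true):
    """Mean absolute error between estimated and true densities at test points."""
    return np.mean(np.abs(est - true))

def anll(est, eps=np.spacing(1)):
    """Average negative log-likelihood; zero estimates are clipped to eps."""
    return -np.mean(np.log(np.maximum(est, eps)))
```

With the clipping, a single zero-density test point contributes a large but finite term instead of driving the ANLL to infinity.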

5.5 Synthetic Data

In this subsection, we apply our BRDF and the other density estimation methods mentioned above to several artificial examples. In order to give a more comprehensive understanding of the algorithm architecture illustrated in Section 2, we first consider BRDF with axis-parallel partitions (BRDF-AP). To be specific, we base the simulations on two different types of distribution constructions, with each type generating four toy examples of dimension d = 1, 2, 3, 5, respectively. Note that when constructing the data sets we assume the components of the random vector to be independent of each other with identical marginal distributions. Therefore, we only present the marginal density in the following descriptions.

  • Type I: ,

  • Type II: .

It can be seen that the density of Type I is piecewise constant, while that of Type II is based on mixture models of beta distributions and uniform distributions. We emphasize that the densities of both the Type I and Type II data sets are compactly supported and bounded. In order to give a clear description of the distributions, 3D plots of the two types of densities with dimension d = 2 are shown in Figure 1.

Figure 1: 3D plots of the true probability densities with d = 2 for the data sets of "Type I" (left) and "Type II" (right).

Table 1 summarizes the average MAE performance of our model, KDE, DHT and rNADE. All experiments are repeated, and we report the averaged results. It can be clearly observed from Table 1 that our BRDF-AP has the best performance w.r.t. MAE on almost all data sets, which further demonstrates the effectiveness of the algorithm.

Data set    d    BRDF-AP    KDE     DHT     rNADE
Type I      1    0.11       0.26    0.16    0.48
Type I      2    0.39       0.88    0.51    1.07
Type I      3    1.19       2.05    1.25    1.95
Type I      5    5.14       6.95    5.27    5.72
Type II     1    0.28       0.17    0.14    0.57
Type II     2    0.35       0.47    0.45    1.86
Type II     3    0.76       0.96    1.08    1.87
Type II     5    2.79       2.90    3.98    4.41

  • * The best results are marked in bold.

Table 1: Average MAE over the Two Types of Synthetic Data Sets

5.6 Real Data Analysis

In order to obtain better experimental performance for real data analysis, we adopt BRDF with both axis-parallel and oblique partitions. Empirical comparisons of ANLL and training time among BRDF-AP, BRDF-OB, KDE, DHT and rNADE are based on data sets from the UCI Repository of machine learning databases: the Wine quality data set, the Parkinsons telemonitoring data set and the Ionosphere data set. Since the Wine quality data set actually consists of two parts, Red wine and White wine, we treat them as two separate data sets.

Data set (n, d)         BRDF-AP          BRDF-OB          KDE              DHT              rNADE
                        ANLL    Time     ANLL    Time     ANLL    Time     ANLL    Time     ANLL    Time
Parkinsons (5875, 15)   0.51    3.25     -0.67   2.32     8.27    3.70     11.22   9.41     5.26    35.93
Ionosphere (351, 32)    0.06    19.61    4.28    18.81    24.36   0.13     26.20   0.47     9.48    52.43
Red wine (1599, 11)     -0.03   1.28     -0.19   1.13     11.57   0.31     11.63   0.90     10.66   41.02
White wine (4898, 11)   -0.61   2.40     -1.20   1.91     11.49   1.90     11.97   4.56     10.84   39.99

  • * The best results are marked in bold.

Table 2: Average ANLL and Training Time (in Seconds) over the Test Sets for Four UCI Data Sets

Some data preprocessing is needed for the following analysis. Following Tang et al. (2012), not only discrete-valued attributes but also one attribute from every pair with a Pearson correlation coefficient above the chosen threshold are eliminated. Moreover, all results are reported on normalized data, where each dimension of the data is centered by its training-subset sample mean and divided by its standard deviation.
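This preprocessing can be sketched as follows, with an assumed correlation cutoff (the exact threshold from Tang et al. (2012) is not reproduced here) and omitting the removal of discrete-valued attributes:

```python
import numpy as np

def preprocess(train, test, corr_threshold=0.98):
    """Drop one attribute from every highly correlated pair, then
    standardize with the training-subset mean and standard deviation.

    `corr_threshold` is a placeholder value, not the paper's cutoff;
    removal of discrete-valued attributes is not shown here.
    """
    corr = np.corrcoef(train, rowvar=False)
    keep = []
    for j in range(train.shape[1]):
        # Keep attribute j only if it is not too correlated with any
        # attribute already kept.
        if all(abs(corr[j, k]) <= corr_threshold for k in keep):
            keep.append(j)
    train, test = train[:, keep], test[:, keep]
    # Normalize using training-subset statistics only.
    mu, sd = train.mean(axis=0), train.std(axis=0)
    return (train - mu) / sd, (test - mu) / sd
```

Note that the test set is transformed with the training-subset statistics, matching the normalization described above.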

The quantitative results are given in Table 2, where the sample size is denoted by n and the data dimensionality by d. Careful observation shows that both BRDF-AP and BRDF-OB achieve significantly smaller ANLLs than any other standard method on all four data sets. This advantage in estimation accuracy may be attributed both to the general architecture of the random forest and to the BRDF's many tunable hyperparameters. In particular, BRDF-OB achieves even better estimation performance than BRDF-AP on most data sets, which demonstrates the effectiveness of employing oblique partitions for forest construction. Focusing on the Ionosphere data set, we find that BRDF-AP and BRDF-OB take the longest time to train there, even though Ionosphere has the smallest sample size among all data sets. This phenomenon comes from the fact that a larger number of splits is required to obtain good ANLLs on Ionosphere, while smaller values already provide satisfying results on the other data sets. We should be aware that a larger number of splits brings smoother density estimates and therefore better ANLL results, but also longer training time, which reflects a trade-off between estimation accuracy and training time. As is acknowledged, the computation time of KDE grows linearly with the sample size, whereas the table shows that the computation time of our BRDF grows much more slowly: on the largest data set, the training time of BRDF-OB is shorter than that of KDE. The training time can be further shortened by employing parallel computing. Moreover, our BRDF is much more resistant to the curse of dimensionality than any of the compared strategies.

From a holistic perspective, both our BRDF-AP and BRDF-OB have shown significant advantages over other standard density estimators in real data analysis.

6 Proofs

To prove Proposition 4.1, we need the following result which follows from Lemma 6.2 in Devroye (1986).

For a binary search tree with nodes, denote the saturation level as the number of full levels of nodes in the tree. Then for and , there holds

Proof of Proposition 4.1. (i) Since the space of continuous and compactly supported functions is dense in , we can find such that


Since has a compact support, there exists a such that and . Moreover, is uniformly continuous, since it is continuous and is compact. This implies that there exists a such that if , then we have


We define by


where is the cell which falls in of the specific partition . Then, for any , (13) implies that


If , then we have . Moreover, if , then

Otherwise if , then by the definition. Therefore, we obtain

For with , there holds

In the following proof, in order to describe the randomness of the partition, we first define the diameter of a cell by , where denotes the length of the -th dimension of a rectangular cell . Then, by Markov’s inequality, we obtain