Gradient Boosted Decision Trees (GBDT) are popular machine learning algorithms with implementations such as LightGBM and in popular machine learning toolkits like Scikit-Learn. Many implementations can only produce trees in an offline and greedy manner. We explore ways to convert existing GBDT implementations to known neural network architectures with minimal performance loss, in order to allow decision splits to be updated in an online manner, and provide extensions that allow split points to be altered as a neural architecture search problem. We also provide learning bounds for our neural network.
Gradient boosting decision tree (GBDT) Friedman2001 is a widely-used machine learning algorithm that has achieved state-of-the-art performance in many machine learning tasks. The recent rise of Deep Learning architectures opens the possibility of allowing all parameters to be updated simultaneously with gradient descent, rather than through greedy splitting procedures; furthermore, it promises to be scalable with mini-batch based learning and GPU acceleration with little effort.
In this paper, we present a neural network architecture which we call TreeGrad, based on Deep Neural Decision Forests dndf , which enables boosted decision trees to be trained in an online manner, both in updating decision split values and in the choice of split candidates. We demonstrate that TreeGrad achieves the learning bounds previously described by cortes17a and demonstrate the efficacy of the TreeGrad approach by presenting competitive benchmarks against leading GBDT implementations.
Deep Neural Decision Forests dndf demonstrates how neural decision tree forests can replace a fully connected layer using stochastic and differentiable decision trees, which assume the node split structure is fixed while the node split parameters are learned.
TreeGrad is a simple extension of Deep Neural Decision Forests which treats the node split structure as a neural network architecture search problem, whilst applying neural network compression approaches to render our decision trees more interpretable through creating axis-parallel splits.
Consider a binary classification problem with input and output spaces given by $\mathcal{X}$ and $\mathcal{Y}$, respectively. A decision tree is a tree-structured classifier consisting of decision nodes and prediction (or leaf) nodes. A decision stump is a machine learning model which consists of a single decision (or split) node and the prediction (or leaf) nodes corresponding to the split, and is used by decision nodes to determine how each sample is routed along the tree. A decision stump consists of a decision function $d(x; \Theta)$, parameterized by $\Theta$, which is responsible for routing the sample to the subsequent nodes.
In this paper we will consider only decision functions which are binary. Typically, in decision tree and tree ensemble models the routing is deterministic; in this paper we approximate deterministic routing through the use of the Concrete distribution (or the Gumbel-Softmax trick) gumbel_softmax1 gumbel_softmax2 , whereby the routing direction is the output of a Concrete distribution. That is, we consider the softmax map indexed by a temperature parameter $\tau$, given by
$\sigma_\tau(z)_i = \frac{\exp(z_i / \tau)}{\sum_j \exp(z_j / \tau)}.$
This approach differs from how decision functions are composed in "Deep Neural Decision Forests", which uses Bernoulli random variables and probabilistic routing for its decision functions dndf . Once a sample reaches a leaf node $\ell$, the related tree prediction is given by the leaf value $\pi_\ell$, which represents the output of a binary classification problem. In this case, as the routings are not purely deterministic, the leaf predictions will be weighted by the probability of reaching the leaf. The prediction from the decision stump is then parametrized as
$\hat{y}(x) = \sum_{\ell \in \{\ell^+, \ell^-\}} \pi_\ell \, p(\ell \mid x; \Theta),$
where $\mathcal{Y} = \{0, 1\}$, representing our binary classification problem, and $\{\ell^+, \ell^-\}$ represents the set of leaves corresponding to the binary class predictions.
In many implementations of decision trees, the decision node is determined using an axis-parallel split, whereby the split is determined based on a comparison with a single feature value murthy1994system . In this paper we are interested in both axis-parallel and oblique splits for the decision function. More specifically, we are interested in the creation of axis-parallel splits from oblique splits.
To create an oblique split, we assume that the decision function is a linear classifier, i.e. $d(x; \Theta) = \sigma(w^\top x + b)$, where $\Theta$ is parameterized by the linear coefficients $w$ and the intercept $b$, and $\sigma$ belongs to the class of logistic functions, which includes sigmoid and softmax variants. In a similar way, an axis-parallel split is created through a linear function with the additional constraint that the $\ell_0$ norm of $w$ (defined as $\|w\|_0 = \sum_i \mathbb{1}[w_i \neq 0]$, where $\mathbb{1}$ is the indicator function) is 1, i.e. $\|w\|_0 = 1$.
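As an illustration, the following NumPy sketch contrasts an oblique split with an axis-parallel projection of it. The max-magnitude selection rule used here is a hypothetical simplification for illustration, not the model selection procedure described below:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def oblique_split(x, w, b):
    # Oblique decision function: a linear classifier over all features.
    return sigmoid(w @ x + b)

def axis_parallel_split(x, w, b):
    # Axis-parallel projection: keep only the largest-magnitude
    # coefficient so that ||w||_0 = 1 (one feature per decision node).
    w_sparse = np.zeros_like(w)
    k = np.argmax(np.abs(w))
    w_sparse[k] = w[k]
    return sigmoid(w_sparse @ x + b)

x = np.array([0.5, -1.2, 3.0])
w = np.array([0.1, -0.05, 2.0])
p_oblique = oblique_split(x, w, 0.0)
p_axis = axis_parallel_split(x, w, 0.0)  # depends only on feature index 2
```

Both functions return a routing probability in $(0, 1)$; the axis-parallel version compares against a single feature only.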
If we interpret the decision function as a model selection process, then model selection approaches can be used to determine the ideal model, and hence the axis-parallel split, for the decision function. A simple approach is to use a stacking model. Stacking models are ensemble models of the form $f(x) = \sum_k a_k h_k(x)$, for a set of real weights $a_k$ over candidate models $h_k$ Wolpert1992 Breiman1996 .
From this formulation, we can either choose the best model and create an axis-parallel split, or keep the stacking model, which results in an oblique split. This demonstrates the ability of our algorithm to convert oblique splits to axis-parallel splits for our decision stumps in an automatically differentiable way, which allows non-greedy decision trees to be created. In the scenario where the best single model is preferred, approaches like the straight-through Gumbel-Softmax gumbel_softmax1
can be applied: for the forward pass, we sample a one-hot vector using the Gumbel-Max trick, while for the backward pass, we use Gumbel-Softmax to compute the gradient. This approach is analogous to neural network compression algorithms which aim to aggressively prune parameters below a particular threshold, where the threshold is chosen to ensure that each decision boundary contains only one single parameter, with all other parameters set to zero.
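The straight-through estimator above can be sketched as follows. This is a minimal NumPy illustration; in practice the backward-pass substitution (hard sample forward, soft gradient backward) is handled by an auto-differentiation framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    # Soft sample via the Gumbel-Softmax trick: perturb logits with
    # Gumbel(0, 1) noise, then apply a temperature-scaled softmax.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + g) / tau
    z = z - z.max()                       # numerical stability
    soft = np.exp(z) / np.exp(z).sum()
    return soft

def straight_through(logits, tau=1.0):
    # Forward pass: a one-hot vector (argmax of the perturbed logits,
    # i.e. the Gumbel-Max trick). Backward pass: an autodiff framework
    # would route gradients through the soft sample instead.
    soft = gumbel_softmax(logits, tau)
    hard = np.eye(len(logits))[np.argmax(soft)]
    return hard, soft

logits = np.array([2.0, 0.5, -1.0])
hard, soft = straight_through(logits, tau=0.5)
# hard is one-hot; soft sums to 1 and carries the gradient signal
```

Lowering the temperature `tau` makes the soft sample concentrate on a single coordinate, shrinking the gap between the forward (hard) and backward (soft) passes.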
Extending decision nodes to decision trees has been discussed by dndf . We denote the output of node $j$ to be $d_j(x)$, which is then routed along a pre-determined path to the subsequent nodes. When the sample reaches a leaf node $\ell$, the tree prediction will be given by a learned value of the leaf node. In some implementations this is the raw number of observations seen in each class (Scikit-Learn); in others it is the estimated log-odds (LightGBM, XGBoost). As the routings are probabilistic in nature, the leaf predictions can be averaged by the probability of reaching the leaf, as done in dndf , or combined through the Gumbel straight-through trick gumbel_softmax1 .
To provide an explicit form for routing within a decision tree, we observe that routes in a decision tree are fixed and pre-determined. We introduce a routing matrix $Q$, a binary matrix which describes the relationship between the nodes and the leaves. If there are $n$ nodes and $\ell$ leaves, then $Q \in \{0, 1\}^{\ell \times 2n}$, where row $Q_i$ represents the presence of each binary decision of the nodes for the corresponding leaf $i$.
We define the matrix containing the routing probabilities of all nodes to be $D(x)$. We construct this so that for each node $j$ we concatenate the decision stump's route probabilities, $D(x) = \bigoplus_j [d_j^+(x), d_j^-(x)]$, where $\bigoplus$ is the concatenation operation, and $d_j^+(x)$ and $d_j^-(x)$ indicate the probability of moving to the positive route and negative route of node $j$ respectively. We can now combine the matrices $Q$ and $D(x)$ to express the probability of routing to leaf $i$ as follows:
$\mu_i(x) = \prod_{j=1}^{2n} D(x)_j^{Q_{ij}},$
where $Q_i$ represents the binary routing vector for leaf $i$. This is interpreted as product pooling over the nodes used to route to a particular leaf $i$. Accordingly, the final prediction for sample $x$ from the tree with decision nodes parameterized by $\Theta$ is given by
$\hat{y}(x) = \sum_i \pi_i \, \mu_i(x),$
where $\pi$ represents the parameters denoting the leaf node values, and $\mu$ is the routing function which provides the probability that the sample $x$ will reach leaf $i$, i.e. $\sum_i \mu_i(x) = 1$ for all $x$. The matrix $Q$ is the routing matrix which describes which nodes are used for each leaf in the tree.
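A minimal sketch of the routing matrix and product pooling for a hypothetical depth-2 tree with 3 decision nodes and 4 leaves (the specific matrix below is an illustrative example, not taken from the paper's experiments):

```python
import numpy as np

# Routing matrix Q for a depth-2 tree: 3 decision nodes, 4 leaves.
# Columns are [n1+, n1-, n2+, n2-, n3+, n3-]; each row picks out the
# node decisions on the path from the root to one leaf.
Q = np.array([
    [1, 0, 1, 0, 0, 0],   # leaf 0: n1 positive, n2 positive
    [1, 0, 0, 1, 0, 0],   # leaf 1: n1 positive, n2 negative
    [0, 1, 0, 0, 1, 0],   # leaf 2: n1 negative, n3 positive
    [0, 1, 0, 0, 0, 1],   # leaf 3: n1 negative, n3 negative
])

def leaf_probabilities(d):
    # d: per-node probabilities of taking the positive route.
    # D(x) interleaves positive and negative routes: [d1, 1-d1, d2, ...]
    D = np.ravel(np.column_stack([d, 1.0 - d]))
    # Product pooling: mu_i = prod_j D_j ** Q[i, j]
    return np.prod(D ** Q, axis=1)

mu = leaf_probabilities(np.array([0.9, 0.3, 0.6]))
# mu = [0.9*0.3, 0.9*0.7, 0.1*0.6, 0.1*0.4] = [0.27, 0.63, 0.06, 0.04]
```

Because each row of Q selects exactly one route per node on the leaf's path, the leaf probabilities always sum to one, as required of a routing function.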
Next we demonstrate that a decision tree is a neural network with layers belonging to the family of artificial neural networks defined by cortes17a . The sizes of these layers are based on a predetermined number of nodes $n$ and a corresponding number of leaves $\ell$. Let the input space be $\mathcal{X}$ and, for any $x \in \mathcal{X}$, let $\Phi(x) \in \mathbb{R}^d$ denote the corresponding feature vector.
The first layer is the decision node layer. This is defined by trainable parameters $W \in \mathbb{R}^{d \times n}$ and $b \in \mathbb{R}^n$. Define $H^+(x) = \sigma_\tau(W^\top \Phi(x) + b)$ and $H^-(x) = 1 - H^+(x)$, which represent the positive and negative routes of each node. Then the output of the first layer is $h_1(x) = H^+(x) \oplus H^-(x)$.
The next layer is the probability routing layer, which is entirely untrainable and determined by the predetermined binary matrix $Q$ as defined in Section 3. We define the activation function to be the product pooling operation induced by $Q$, so that the output of the second layer is $h_2(x) = \exp(Q \log h_1(x))$. As this activation is a 1-Lipschitz bounded function in the domain and range of $h_1$, by extension $h_2$ is a 1-Lipschitz bounded function of $h_1$. As $Q$ is a binary matrix, the output of $h_2$ must also be in the range $[0, 1]$.
The final output layer is the leaf layer; this is a fully connected layer to the previous layer, defined by parameter $\pi \in \mathbb{R}^{\ell}$, where $\ell$ represents the number of leaves. The activation function is defined to be the softmax $\sigma$. The output of the last layer is defined to be $h_3(x) = \sigma(\pi^\top h_2(x))$. Since $h_2$ has range $[0, 1]$, $h_3$ is a 1-Lipschitz bounded function, as $\sigma$ is 1-Lipschitz bounded in the domain $[0, 1]$.
This formulation is equivalent to the formulation above, as the product pooling operator satisfies $\prod_j h_1(x)_j^{Q_{ij}} = \exp\big(\sum_j Q_{ij} \log h_1(x)_j\big)$. As each activation function is 1-Lipschitz, our decision tree neural network belongs to the same family of artificial neural networks defined by cortes17a , and thus our decision trees have the corresponding learning bounds related to AdaNet.
The number of trainable parameters in our decision tree implementation is $n(d+1) + \ell$ for oblique splits. If we include stacking weights in our model, the number of parameters increases accordingly; if we alter the network to axis-parallel splits, the parameters reduce to $2n + \ell$. More importantly, if our decision tree implementation is sparsified, then the number of trainable parameters does not depend on the number of features in the first layer; instead it depends only on the number of nodes in the model.
Our implementation of decision trees is straightforward and can be written using auto-differentiation frameworks in as few as ten lines of code. Our approach has been implemented using Autograd as a starting point and in principle can be moved to a GPU-enabled framework.
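To illustrate the compactness claim, the forward pass of the three-layer tree network can be sketched in NumPy as follows. The shapes, routing matrix, and leaf values below are hypothetical examples, and this sketch omits the softmax leaf activation, returning the routing-weighted leaf values directly:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_forward(x, W, b, Q, pi):
    """Forward pass of a soft decision tree as a three-layer network."""
    h1_pos = sigmoid(W @ x + b)                    # decision node layer
    h1 = np.ravel(np.column_stack([h1_pos, 1.0 - h1_pos]))
    mu = np.exp(Q @ np.log(h1 + 1e-12))            # routing: exp(Q log h1)
    return mu @ pi                                 # weighted leaf values

# Hypothetical depth-2 tree: 3 nodes, 4 leaves, 2 input features.
rng = np.random.default_rng(1)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
Q = np.array([[1, 0, 1, 0, 0, 0],
              [1, 0, 0, 1, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1]])
pi = np.array([0.1, 0.9, 0.4, 0.6])                # example leaf values
y = tree_forward(np.array([0.5, -0.3]), W, b, Q, pi)
```

Swapping `np` for an auto-differentiation drop-in such as Autograd's `autograd.numpy` would make `W`, `b`, and `pi` trainable by gradient descent.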
One method to seamlessly move between oblique splits and axis-parallel splits is to introduce the Gumbel trick into the model; one could then choose to keep the pruned parameters in the model rather than removing them. The inability to grow or prune nodes is a deficiency in our implementation compared with off-the-shelf decision tree models, which can do this readily. Growing or pruning decision trees would be an architecture selection problem and not necessarily a problem related to the training of weights.
The natural extension to building decision trees is boosting decision trees. To that end, the AdaNet algorithm cortes17a can be used to combine and boost multiple decision trees; another approach is to train the models in an offline manner, as Boosting algorithms are typically implemented when models cannot be updated online.
Our experiments explore three different components of growing boosted trees as neural networks: stumps, trees, and boosted trees. We examine these components over a range of benchmark datasets to demonstrate the level of agreement with competing implementations in Scikit-Learn and LightGBM.
We perform experiments on a combination of benchmark classification datasets from the UCI repository to compare our non-greedy decision tree ensemble using neural networks (TreeGrad) against other popular implementations in LightGBM (LGM) and Scikit-Learn Gradient Boosted Trees (GBT).
Our TreeGrad is based on a two-stage process: first, constructing a tree where the decision boundaries are oblique; next, sparsifying the neural network to axis-parallel boundaries and fine-tuning the decision tree.
In each application of TreeGrad, our models copy only the structure of the decision trees; we always reset the subsequent weights.
We consider the usage of an $L_0$ regularizer combined with an $L_1$ regularizer in the manner described by louizos2017learning . We found that sparsifying neural networks pre-emptively using the $L_0$ regularizer enabled minimal loss in performance after the neural networks were compressed to produce axis-parallel splits. All trees were grown with the same hyperparameters and the same maximum number of leaves. The base architecture chosen for the TreeGrad algorithm was determined by LightGBM, where the results shown below are for all weights re-initialised to random values.
| | TreeGrad | LGM | GBT |
| Number of wins | 4 | 1 | 2 |
| Mean Reciprocal Rank | 0.762 | 0.452 | 0.619 |
The models in which TreeGrad had low agreement on the feature importance metrics were the models in which TreeGrad performed best. This suggests that TreeGrad was able to find combinations of features and their interactions which were not recovered when the decision trees were grown in a greedy fashion. It would appear from these results that TreeGrad may have an advantage when training a single tree where there are constraints on the number of leaves or depth.
To compare the performance of boosted trees, we use 100 trees grown using Scikit-Learn and LightGBM, all with a maximum of 32 leaves.
As in the other experiments, we use the $L_0$ and $L_1$ regularizers to sparsify the node layer first, before applying the $L_1$ regularizer to the rest of the network. TreeGrad trees in this scenario all have the same structure, with the weights randomly initialised.
| | TreeGrad | LGM | GBT |
| Number of wins | 4 | 3 | 1 |
| Mean Reciprocal Rank | 0.762 | 0.714 | 0.429 |
Again, we observe that the datasets where the feature importances have low correlation are those where TreeGrad outperforms its counterparts, which suggests that the non-greedy approach can find relationships different from those of its greedy counterparts. In terms of the overall results, it is much less clear-cut on this sample of datasets, though TreeGrad does appear to be superior to the GBT models.
In the previous section, TreeGrad was trained in a sequential manner, not in an end-to-end fashion. Training the neural network end-to-end incurs a greater computational cost, as all parameters need to be updated simultaneously rather than only part of the network at a time. We repeat the experiment with identical networks: one trained end-to-end and the other trained sequentially only.
| Dataset | TreeGrad (Sequential) | TreeGrad (End to End) |
When we compare the results, we observe some difference in performance between training all trees concurrently and training them sequentially, though these differences would only marginally affect the mean reciprocal rank when comparing with the LGM and GBT models.
We have demonstrated approaches to unify boosted tree models and neural networks, allowing tree models to be transferred to neural network structures. We have provided an approach to rebuild trees in a non-greedy manner and recover decision splits in the scenario where weights are reset, and provided learning bounds for this approach. This approach is demonstrated to be competitive with current tree ensemble algorithms, and empirically better than the popular Scikit-Learn framework.
Deep Neural Decision Forests. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 4190-4194, 2016.
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference on Learning Representations, 2017.