Using dynamical quantization to perform split attempts in online tree regressors

A central aspect of online decision tree solutions is evaluating the incoming data and enabling model growth. To do so, trees must deal with different kinds of input features and partition them to learn from the data. Numerical features are no exception, and they pose additional challenges compared to other kinds of features, as there is no trivial strategy for choosing the best point at which to make a split decision. The problem is even more challenging in regression tasks because both the features and the target are continuous. Typical online solutions evaluate and store all the points monitored between split attempts, which conflicts with the constraints posed by real-time applications. In this paper, we introduce the Quantization Observer (QO), a simple yet effective hashing-based algorithm to monitor and evaluate split point candidates in numerical features for online tree regressors. QO can be easily integrated into incremental decision trees, such as Hoeffding Trees; it has a monitoring cost of O(1) per instance and a sub-linear cost to evaluate split candidates. Previous solutions had an O(log n) cost per insertion (in the best case) and a linear cost to evaluate split points. Our extensive experimental setup highlights QO's effectiveness in providing accurate split point suggestions while spending much less memory and processing time than its competitors.
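To illustrate the core idea, the sketch below shows how a hashed, fixed-radius quantization of a numeric feature can support O(1) monitoring and boundary-only split evaluation. This is a minimal illustration in the spirit of the abstract, not the authors' implementation: the class name, the `radius` parameter, and the variance-reduction scoring are assumptions made for the example.

```python
from collections import defaultdict

class QuantizationSketch:
    """Hashed, fixed-radius bins over one numeric feature.

    Illustrative sketch (not the paper's actual QO code): each
    incoming (x, y) pair is hashed to bin floor(x / radius) in
    O(1), and per-bin aggregates (count, sum of y, sum of y^2)
    suffice to score split candidates by variance reduction.
    """

    def __init__(self, radius=0.25):
        self.radius = radius
        # bin key -> [count, sum_y, sum_y_squared]
        self.bins = defaultdict(lambda: [0, 0.0, 0.0])

    def update(self, x, y):
        """O(1) per instance: one hash lookup, three additions."""
        b = self.bins[int(x // self.radius)]
        b[0] += 1
        b[1] += y
        b[2] += y * y

    @staticmethod
    def _variance(n, s, s2):
        return s2 / n - (s / n) ** 2 if n > 0 else 0.0

    def best_split(self):
        """Scan only bin boundaries (sub-linear in the number of
        observed points) and return (threshold, variance_reduction)."""
        keys = sorted(self.bins)
        n_tot = sum(b[0] for b in self.bins.values())
        s_tot = sum(b[1] for b in self.bins.values())
        s2_tot = sum(b[2] for b in self.bins.values())
        parent_var = self._variance(n_tot, s_tot, s2_tot)

        best = (None, 0.0)
        n_l = s_l = s2_l = 0.0
        for k in keys[:-1]:
            c, s, s2 = self.bins[k]
            n_l += c; s_l += s; s2_l += s2
            n_r, s_r, s2_r = n_tot - n_l, s_tot - s_l, s2_tot - s2_l
            # weighted child variance for a split at the bin's upper edge
            child = (n_l * self._variance(n_l, s_l, s2_l)
                     + n_r * self._variance(n_r, s_r, s2_r)) / n_tot
            gain = parent_var - child
            if gain > best[1]:
                best = ((k + 1) * self.radius, gain)
        return best
```

For example, feeding a step-shaped target (y = 0 for x < 50, y = 1 otherwise) with `radius=1.0` recovers the threshold 50.0 with the maximum possible variance reduction of 0.25. The memory cost depends only on the number of occupied bins, not on the number of monitored points, which mirrors the resource argument made in the abstract.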



Simple is better: Making Decision Trees faster using random sampling

In recent years, gradient boosted decision trees have become popular in ...

Online Multi-target regression trees with stacked leaf models

The amount of available data grows at a fast pace. Developing machine l...

End-to-end Learning of Deterministic Decision Trees

Conventional decision trees have a number of favorable properties, inclu...

An Eager Splitting Strategy for Online Decision Trees

We study the effectiveness of replacing the split strategy for the state...

Robust Decision Trees Against Adversarial Examples

Although adversarial examples and model robustness have been extensively...

Sparse learning with CART

Decision trees with binary splits are popularly constructed using Classi...