DART: Data Addition and Removal Trees

09/11/2020
by Jonathan Brophy, et al.

How can we update data for a machine learning model after it has already trained on that data? In this paper, we introduce DART, a variant of random forests that supports adding and removing training data with minimal retraining. Data updates in DART are exact, meaning that adding or removing examples from a DART model yields exactly the same model as retraining from scratch on updated data. DART uses two techniques to make updates efficient. The first is to cache data statistics at each node and training data at each leaf, so that only the necessary subtrees are retrained. The second is to choose the split variable randomly at the upper levels of each tree, so that the choice is completely independent of the data and never needs to change. At the lower levels, split variables are chosen to greedily maximize a split criterion such as Gini index or mutual information. By adjusting the number of random-split levels, DART can trade off between more accurate predictions and more efficient updates. In experiments on ten real-world datasets and one synthetic dataset, we find that DART is orders of magnitude faster than retraining from scratch while sacrificing very little in terms of predictive performance.
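The two techniques described above can be illustrated with a toy, single-tree sketch (binary features, Gini index, majority-vote leaves). This is our own simplified illustration, not the authors' implementation: names such as `d_rand`, `build`, and `delete` are hypothetical. The top `d_rand` levels split on a randomly chosen feature, so a deletion can never change them; greedy nodes below check whether their best split would change after the removal and retrain only the affected subtree, using the training data cached at the leaves.

```python
import random
from collections import Counter

class Node:
    """Tree node; leaves cache their training data so subtrees can be rebuilt."""
    def __init__(self):
        self.feature = None   # split feature index; None marks a leaf
        self.left = None
        self.right = None
        self.data = None      # leaf only: list of (x, y) training examples
        self.label = None     # leaf only: majority-class prediction

def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_feature(data, features):
    """Greedy choice: the binary feature minimizing the weighted Gini index."""
    def score(f):
        left = [y for x, y in data if x[f] == 0]
        right = [y for x, y in data if x[f] == 1]
        return (len(left) * gini(left) + len(right) * gini(right)) / len(data)
    return min(features, key=score)

def build(data, features, depth, d_rand, rng):
    node = Node()
    labels = [y for _, y in data]
    if not features or len(set(labels)) <= 1:
        node.data = list(data)
        node.label = Counter(labels).most_common(1)[0][0] if labels else None
        return node
    # Upper levels: random split, independent of the data, so it never changes.
    # Lower levels: greedy split that may need revisiting after an update.
    node.feature = rng.choice(features) if depth < d_rand else best_feature(data, features)
    rest = [f for f in features if f != node.feature]
    node.left = build([d for d in data if d[0][node.feature] == 0], rest, depth + 1, d_rand, rng)
    node.right = build([d for d in data if d[0][node.feature] == 1], rest, depth + 1, d_rand, rng)
    return node

def collect(node):
    """Gather the training data cached in the leaves under `node`."""
    if node.feature is None:
        return list(node.data)
    return collect(node.left) + collect(node.right)

def delete(node, x, y, features, depth, d_rand, rng):
    """Remove one copy of (x, y), retraining only subtrees whose split changes."""
    if node.feature is None:
        node.data.remove((x, y))
        labels = [lab for _, lab in node.data]
        node.label = Counter(labels).most_common(1)[0][0] if labels else None
        return node
    if depth >= d_rand:                      # greedy node: split may change
        remaining = collect(node)
        remaining.remove((x, y))
        if remaining and best_feature(remaining, features) != node.feature:
            return build(remaining, features, depth, d_rand, rng)  # retrain subtree
    rest = [f for f in features if f != node.feature]
    if x[node.feature] == 0:
        node.left = delete(node.left, x, y, rest, depth + 1, d_rand, rng)
    else:
        node.right = delete(node.right, x, y, rest, depth + 1, d_rand, rng)
    return node

def predict(node, x):
    while node.feature is not None:
        node = node.left if x[node.feature] == 0 else node.right
    return node.label
```

Because the random splits at the top are data-independent, a deletion leaves them untouched and descends a single root-to-leaf path; only greedy nodes on that path are candidates for retraining, which is where the speedup over retraining from scratch comes from.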
