Pruning Random Forests for Prediction on a Budget

06/16/2016
by Feng Nan, et al.

We propose to prune a random forest (RF) for resource-constrained prediction. We first construct an RF and then prune it to optimize expected feature cost and accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from a good RF initialization, conventional methods are top-down, acquiring features based on their utility value; such methods are generally intractable and require heuristics. Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms.
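To make the key claim concrete, below is a minimal sketch (not the authors' implementation) of the core idea on a single toy tree: pose pruning as a 0-1 program with one constraint per root-to-leaf path, then solve only the LP relaxation. Because the path-incidence matrix of a tree is an interval matrix, and hence totally unimodular, the LP optimum is already integral; the paper establishes the same property for its richer forest-wide constraint set with feature re-use. The tree shape and per-node costs here are made-up illustrative numbers.

```python
"""Sketch: tree pruning as a 0-1 program solved via its LP relaxation.

z[h] = 1 means node h becomes a leaf of the pruned tree.
Constraint: every root-to-leaf path of the original tree must
contain exactly one node with z[h] = 1.
"""
import numpy as np
from scipy.optimize import linprog

# Toy tree: node 0 is the root; 1, 2 its children; 3, 4 children of 1.
# children[h] = (left, right), or None if h is an original leaf.
children = {0: (1, 2), 1: (3, 4), 2: None, 3: None, 4: None}

# Hypothetical per-node objective: empirical error if node h is turned
# into a leaf, plus a lambda-weighted feature-cost term (made-up values).
cost = np.array([0.40, 0.15, 0.05, 0.10, 0.12])

def leaf_paths(h, prefix):
    """Enumerate root-to-leaf paths of the original tree."""
    prefix = prefix + [h]
    if children[h] is None:
        yield prefix
    else:
        for c in children[h]:
            yield from leaf_paths(c, prefix)

paths = list(leaf_paths(0, []))  # [[0, 1, 3], [0, 1, 4], [0, 2]]

# One equality row per original leaf path: exactly one node on the
# path is a leaf of the pruned tree.
A_eq = np.zeros((len(paths), len(cost)))
for i, p in enumerate(paths):
    A_eq[i, p] = 1.0
b_eq = np.ones(len(paths))

# Solve only the LP relaxation 0 <= z <= 1.  With left-to-right leaf
# ordering, each column of A_eq covers a consecutive run of rows, so
# A_eq is an interval matrix (totally unimodular) and the LP vertex
# returned by the solver is integral -- no rounding or branching needed.
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(cost))
z = np.round(res.x, 6)
print("z =", z)
print("nodes turned into leaves:", [h for h, v in enumerate(z) if v == 1.0])
```

With these costs the optimum is z = [0, 1, 1, 0, 0]: the subtree rooted at node 1 is collapsed into a leaf, a nontrivial pruning recovered from the relaxation alone. The actual paper couples such per-tree constraints across the forest with feature-usage variables, which is what makes a specialized primal-dual solver worthwhile at scale.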


