Feature-Budgeted Random Forest

02/20/2015
by Feng Nan et al.

We seek decision rules for prediction-time cost reduction, where complete data are available for training but each feature must be acquired at an additional cost at prediction time. We propose a novel random forest algorithm that minimizes prediction error under a user-specified average feature acquisition budget. While random forests yield strong generalization performance, they do not explicitly account for feature costs and, moreover, require low correlation among trees, which amplifies acquisition costs. Our random forest grows trees with low acquisition cost and high strength using greedy minimax cost-weighted-impurity splits. Theoretically, we establish near-optimal acquisition-cost guarantees for our algorithm. Empirically, on a number of benchmark datasets we demonstrate superior accuracy-cost curves against state-of-the-art prediction-time algorithms.
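To make the split criterion concrete, here is a minimal sketch of one plausible reading of a greedy minimax cost-weighted-impurity split: each candidate split is scored by the maximum of the feature's scaled acquisition cost and the weighted impurity of the resulting children, and the lowest such score wins. The function names, the trade-off weight `lam`, and the use of Gini impurity are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a minimax cost-weighted-impurity split criterion.
# lam, the Gini impurity, and all names here are illustrative assumptions.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_impurity(X, y, feature, threshold):
    """Weighted Gini impurity of the two children induced by a threshold split."""
    left = [yi for xi, yi in zip(X, y) if xi[feature] <= threshold]
    right = [yi for xi, yi in zip(X, y) if xi[feature] > threshold]
    n = len(y)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

def choose_split(X, y, costs, lam=0.5):
    """Greedily pick the (feature, threshold) minimizing
    max(lam * acquisition_cost, weighted child impurity)."""
    best, best_score = None, float("inf")
    for f, cost in enumerate(costs):
        for t in sorted({xi[f] for xi in X}):
            score = max(lam * cost, split_impurity(X, y, f, t))
            if score < best_score:
                best_score, best = score, (f, t)
    return best, best_score
```

With this criterion, a cheap feature that separates the classes as well as an expensive one is preferred, since the cost term dominates the expensive feature's score even when its impurity term is zero.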


Related research

- Neural Random Forest Imitation (11/25/2019)
- Multi-Stage Classifier Design (05/20/2012)
- Generalising Random Forest Parameter Optimisation to Include Stability and Cost (06/29/2017)
- Pruning Random Forests for Prediction on a Budget (06/16/2016)
- MetaRF: Differentiable Random Forest for Reaction Yield Prediction with a Few Trails (08/22/2022)
- Uncovering Feature Interdependencies in Complex Systems with Non-Greedy Random Forests (09/30/2020)
