Towards Robust Classification with Deep Generative Forests

07/11/2020
by   Alvaro H. C. Correia, et al.

Decision Trees and Random Forests are among the most widely used machine learning models, and often achieve state-of-the-art performance on tabular, domain-agnostic datasets. Nonetheless, being primarily discriminative models, they lack principled methods to manipulate the uncertainty of their predictions. In this paper, we exploit Generative Forests (GeFs), a recent class of deep probabilistic models that addresses these issues by extending Random Forests to generative models representing the full joint distribution over the feature space. We demonstrate that GeFs are uncertainty-aware classifiers, capable of measuring the robustness of each prediction as well as detecting out-of-distribution samples.
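The core idea in the abstract — a classifier that also represents the full joint density p(x), so that low-density inputs can be flagged as out-of-distribution — can be illustrated with a minimal sketch. The sketch below is not the GeF algorithm itself; it substitutes a simple per-class diagonal-Gaussian model (a naive-Bayes analogue) as a stand-in generative model, and all function names and data are hypothetical:

```python
import math
import random

def fit_class_gaussians(X, y):
    """Estimate per-class diagonal Gaussians (a stand-in generative model).

    Returns, for each class, feature means, feature variances, and the
    class prior (its empirical frequency).
    """
    params = {}
    for c in set(y):
        Xc = [x for x, yc in zip(X, y) if yc == c]
        n = len(Xc)
        means = [sum(col) / n for col in zip(*Xc)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-6)
                 for col, m in zip(zip(*Xc), means)]
        params[c] = (means, vars_, n / len(X))
    return params

def log_joint(x, means, vars_, prior):
    """log p(x, c) for one class under the diagonal-Gaussian model."""
    ll = math.log(prior)
    for v, m, s2 in zip(x, means, vars_):
        ll += -0.5 * (math.log(2 * math.pi * s2) + (v - m) ** 2 / s2)
    return ll

def logsumexp(vals):
    """Numerically stable log(sum(exp(v))) for very negative log-densities."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def predict_with_density(x, params):
    """Return (predicted class, log p(x)).

    The prediction is the argmax over joint densities p(x, c); the
    marginal log p(x) doubles as an OOD score: far-from-data inputs
    get a much lower density than in-distribution ones.
    """
    joints = {c: log_joint(x, *p) for c, p in params.items()}
    c_hat = max(joints, key=joints.get)
    log_px = logsumexp(list(joints.values()))
    return c_hat, log_px

random.seed(0)
# Two well-separated 2-D classes as toy training data.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)] + \
    [[random.gauss(5, 1), random.gauss(5, 1)] for _ in range(200)]
y = [0] * 200 + [1] * 200

model = fit_class_gaussians(X, y)
c_in, lp_in = predict_with_density([0.1, -0.2], model)      # in-distribution
c_out, lp_out = predict_with_density([50.0, -50.0], model)  # far from data
print(c_in, lp_in > lp_out)
```

A GeF replaces the naive per-class Gaussian with a probabilistic circuit learned from the forest structure, but the interface is the same: every prediction comes paired with a density under the model, which is what makes the classifier uncertainty-aware.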

06/25/2020

Joints in Random Forests

Decision Trees (DTs) and Random Forests (RFs) are powerful discriminativ...
09/25/2019

Manifold Forests: Closing the Gap on Neural Networks

Decision forests (DF), in particular random forests and gradient boostin...
02/07/2012

Information Forests

We describe Information Forests, an approach to classification that gene...
12/02/2021

RafterNet: Probabilistic predictions in multi-response regression

A fully nonparametric approach for making probabilistic predictions in m...
08/19/2019

SIRUS: making random forests interpretable

State-of-the-art learning algorithms, such as random forests or neural n...
11/11/2019

Simplifying Random Forests: On the Trade-off between Interpretability and Accuracy

We analyze the trade-off between model complexity and accuracy for rando...
08/19/2022

Demystifying Randomly Initialized Networks for Evaluating Generative Models

Evaluation of generative models is mostly based on the comparison betwee...