One Class Splitting Criteria for Random Forests

11/07/2016
by Nicolas Goix, et al.

Random Forests (RFs) are powerful machine learning tools for classification and regression. However, they remain supervised algorithms, and no extension of RFs to the one-class setting has been proposed, except for techniques based on second-class sampling. This work fills that gap with a natural methodology for extending standard splitting criteria to the one-class setting, structurally generalizing RFs to one-class classification. An extensive benchmark against seven state-of-the-art anomaly detection algorithms empirically demonstrates the relevance of our approach.
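To give a concrete feel for the idea, here is a minimal sketch of one plausible one-class splitting criterion. It is not the paper's exact formulation: it assumes outliers are distributed uniformly over each node, so a hypothetical "second class" mass proportional to the node's volume (scaled by an assumed parameter `gamma`) stands in for second-class sampling, and a Gini-style impurity is computed against that mass. The helpers `one_class_gini` and `best_split_1d` are illustrative names, not from the paper.

```python
import numpy as np

def one_class_gini(n_in, volume, gamma=1.0):
    """Gini impurity of a node: observed samples are class 1; an assumed
    uniform 'outlier' mass proportional to the node volume is class 2."""
    n_out = gamma * volume          # hypothetical outlier count (assumption)
    total = n_in + n_out
    if total == 0:
        return 0.0
    p_in = n_in / total
    return 1.0 - p_in**2 - (n_out / total)**2

def best_split_1d(x, lo, hi, gamma=1.0):
    """Scan candidate thresholds on one feature over the node [lo, hi];
    return the threshold minimizing the mass-weighted impurity of the
    two children, mimicking a supervised split without real labels."""
    x = np.sort(x)
    best_t, best_score = None, np.inf
    for t in (x[:-1] + x[1:]) / 2:  # midpoints between consecutive samples
        n_l = int(np.sum(x <= t))
        n_r = x.size - n_l
        vol_l, vol_r = t - lo, hi - t
        score = ((n_l + gamma * vol_l) * one_class_gini(n_l, vol_l, gamma)
                 + (n_r + gamma * vol_r) * one_class_gini(n_r, vol_r, gamma))
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

The key point this sketch illustrates is structural: once an adversarial uniform mass plays the role of the missing second class, the usual impurity-minimization machinery of RFs carries over unchanged, so a full one-class forest can reuse standard tree-growing code.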


