rFerns: An Implementation of the Random Ferns Method for General-Purpose Machine Learning

02/06/2012
by Miron B. Kursa, et al.

In this paper I present an extended implementation of the Random ferns algorithm, contained in the R package rFerns. It differs from the original in its ability to consume categorical and numerical attributes instead of only binary ones. Also, instead of using a simple attribute subspace ensemble, it employs bagging and thus produces an error approximation and a variable importance measure modelled after the Random forest algorithm. I also present benchmark results which show that, although the accuracy of Random ferns is mostly lower than that achieved by Random forest, its speed and the good quality of the importance measure it provides make rFerns a reasonable choice for specific applications.
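To make the ideas in the abstract concrete, here is a minimal, hedged Python sketch of a bagged random-ferns classifier with an out-of-bag error estimate. This is not the rFerns package's actual code: the class name `FernEnsemble` and the parameters `n_ferns` and `depth` are invented for illustration, and the handling of attributes (random thresholds on numeric features) is a simplification of what the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

class FernEnsemble:
    """Illustrative random-ferns classifier: each fern is a set of `depth`
    random (feature, threshold) tests; a sample's test outcomes form a
    bit index into a table of smoothed per-class log-probabilities.
    Ferns are grown on bootstrap samples (bagging), which yields an
    out-of-bag (OOB) error estimate, as in Random forest."""

    def __init__(self, n_ferns=50, depth=5):
        self.n_ferns, self.depth = n_ferns, depth

    def fit(self, X, y):
        n, p = X.shape
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        k = len(self.classes_)
        self.ferns_ = []
        oob_scores = np.zeros((n, k))
        for _ in range(self.n_ferns):
            boot = rng.integers(0, n, n)               # bootstrap sample
            oob = np.setdiff1d(np.arange(n), boot)     # out-of-bag rows
            feats = rng.integers(0, p, self.depth)
            # random thresholds drawn between each feature's min and max
            lo, hi = X[:, feats].min(0), X[:, feats].max(0)
            thr = rng.uniform(lo, hi)
            idx = self._leaf(X[boot], feats, thr)
            counts = np.ones((2 ** self.depth, k))     # add-one smoothing
            np.add.at(counts, (idx, y_idx[boot]), 1)
            logp = np.log(counts / counts.sum(1, keepdims=True))
            self.ferns_.append((feats, thr, logp))
            if oob.size:                               # accumulate OOB votes
                oob_scores[oob] += logp[self._leaf(X[oob], feats, thr)]
        pred = self.classes_[oob_scores.argmax(1)]
        self.oob_error_ = float(np.mean(pred != y))    # OOB error estimate
        return self

    def _leaf(self, X, feats, thr):
        # encode the depth binary test outcomes as an integer leaf index
        bits = (X[:, feats] > thr).astype(int)
        return bits @ (1 << np.arange(self.depth))

    def predict(self, X):
        # sum log-posteriors across ferns (naive-Bayes-style combination)
        scores = sum(logp[self._leaf(X, f, t)] for f, t, logp in self.ferns_)
        return self.classes_[scores.argmax(1)]
```

On two well-separated Gaussian clusters, such an ensemble separates the classes with high training accuracy, and `oob_error_` gives a rough generalisation estimate without a held-out set; the paper's variable importance measure could be obtained analogously by permuting each attribute within the OOB rows and measuring the drop in accuracy.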


