Vote-boosting ensembles

06/30/2016
by Maryam Sabzevari, et al.

Vote-boosting is a sequential ensemble learning method in which individual classifiers are built on differently weighted versions of the training data. To build a new classifier, the weight of each training instance is determined as a function of the disagreement rate of the current ensemble's predictions for that particular instance. Experiments with the symmetric beta distribution as the emphasis function and with different base learners illustrate the properties and analyze the performance of this type of ensemble. In classification problems with low or no class-label noise, when simple base learners are used, vote-boosting behaves as an interpolation between bagging and standard boosting (e.g., AdaBoost), depending on the value of the shape parameter of the beta distribution. In terms of predictive accuracy, the best results, comparable to or better than those of random forests, are obtained with vote-boosting ensembles of random trees.
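The weighting scheme described above is simple enough to sketch in code. The snippet below is a minimal, illustrative sketch rather than the authors' reference implementation, and it rests on several assumptions not taken from the abstract: binary labels in {0, 1}, decision stumps as the simple base learners, a weighted bootstrap sample as the way the emphasis reaches the learner, and scipy's Beta density as the emphasis function. The names vote_boosting and predict_majority, as well as all parameter defaults, are hypothetical.

```python
import numpy as np
from scipy.stats import beta
from sklearn.tree import DecisionTreeClassifier


def vote_boosting(X, y, n_rounds=100, shape=2.0, seed=0):
    """Toy vote-boosting loop for binary labels in {0, 1}.

    Each round evaluates a symmetric Beta(shape, shape) density at the
    ensemble's per-instance disagreement rate to obtain emphasis weights,
    then trains a new base learner on a weighted bootstrap sample.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    votes = np.zeros((n, 2))                  # running vote counts per class
    ensemble = []

    for t in range(n_rounds):
        if t == 0:
            weights = np.full(n, 1.0 / n)     # first round: uniform emphasis
        else:
            # Disagreement rate: fraction of votes cast for the minority class.
            disagreement = votes.min(axis=1) / votes.sum(axis=1)
            # Symmetric beta emphasis; clip so the density stays finite and nonzero.
            weights = beta.pdf(np.clip(disagreement, 1e-6, 0.5), shape, shape)
            weights /= weights.sum()

        # Pass the emphasis to the base learner via a weighted bootstrap sample.
        idx = rng.choice(n, size=n, replace=True, p=weights)
        clf = DecisionTreeClassifier(max_depth=1)      # decision stump
        clf.fit(X[idx], y[idx])
        ensemble.append(clf)

        pred = clf.predict(X)
        votes[np.arange(n), pred] += 1                 # update the vote tallies

    return ensemble


def predict_majority(ensemble, X):
    """Plain majority vote over the ensemble members."""
    preds = np.array([clf.predict(X) for clf in ensemble])
    return (preds.mean(axis=0) >= 0.5).astype(int)
```

With shape = 1 the Beta density is flat, so every instance keeps the same weight and each round reduces to bagging-style bootstrap sampling; larger shape values concentrate the emphasis on instances the ensemble is split on, which corresponds to the boosting-like end of the interpolation described in the abstract.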
