Boosting as a Product of Experts

02/14/2012
by Narayanan U. Edakunni, et al.

In this paper, we derive a novel probabilistic model of boosting as a Product of Experts. We re-derive the boosting algorithm as a greedy incremental model selection procedure which ensures that the addition of a new expert to the ensemble does not decrease the likelihood of the data. These learning rules lead to a generic boosting algorithm, POEBoost, which turns out to be similar to the AdaBoost algorithm under certain assumptions on the expert probabilities. The paper then extends the POEBoost algorithm to POEBoost.CS, which handles hypotheses that produce probabilistic predictions. This new algorithm is shown to have better generalization performance compared to other state-of-the-art algorithms.
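To make the combination concrete: in a standard Product of Experts (in the sense of Hinton's formulation; the paper's exact parameterization may differ), the ensemble distribution over a label y given an input x is the normalized product of the expert distributions,

P(y \mid x) = \frac{\prod_{t=1}^{T} p_t(y \mid x)}{\sum_{y'} \prod_{t=1}^{T} p_t(y' \mid x)},

and the greedy rule described in the abstract accepts a candidate expert p_{T+1} only if the training log-likelihood \sum_i \log P(y_i \mid x_i) does not decrease.

The short Python sketch below illustrates such a greedy accept/reject loop under those assumptions. It is not the paper's POEBoost pseudocode; the function names, the predict_proba interface, and the uniform starting product are illustrative choices.

```python
import numpy as np
from scipy.special import logsumexp

def log_likelihood(log_prod, y):
    """Normalize the running product over classes, then sum log P(y_i | x_i)."""
    log_norm = log_prod - logsumexp(log_prod, axis=1, keepdims=True)
    return log_norm[np.arange(len(y)), y].sum()

def poe_boost(candidates, X, y, n_classes):
    """Greedy Product-of-Experts ensemble construction (illustrative sketch).

    `candidates` is a list of fitted classifiers exposing
    predict_proba(X) -> (n_samples, n_classes). A candidate is kept only
    if adding it does not decrease the training log-likelihood, mirroring
    the greedy model-selection rule described in the abstract. This is an
    assumed interface, not the paper's exact POEBoost procedure.
    """
    log_prod = np.zeros((len(y), n_classes))   # uniform initial product
    ensemble = []
    best_ll = log_likelihood(log_prod, y)
    for expert in candidates:
        probs = np.clip(expert.predict_proba(X), 1e-12, 1.0)
        trial = log_prod + np.log(probs)       # multiply experts in log space
        ll = log_likelihood(trial, y)
        if ll >= best_ll:                      # accept: likelihood did not drop
            log_prod, best_ll = trial, ll
            ensemble.append(expert)
    return ensemble
```

Working in the log domain keeps the product of many small probabilities numerically stable, and the clip guards against experts that assign exactly zero probability to the true class.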
