Deep Incremental Boosting

08/11/2017
by Alan Mosca, et al.

This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost and specifically adapted to Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time of training each incremental Ensemble member. We present a set of experiments outlining preliminary results on common Deep Learning datasets, and discuss the potential improvements Deep Incremental Boosting brings to traditional Ensemble methods in Deep Learning.
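The core loop described in the abstract can be sketched as follows. This is a minimal illustration assuming a PyTorch environment; the MLP architecture, the layer-insertion point, the epoch counts, and the SAMME-style member weighting are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a Deep Incremental Boosting (DIB) style loop.
# Assumptions: small MLP members, weighted loss instead of resampling,
# one new hidden layer inserted before the output each round.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_base_net(n_in, n_hidden, n_classes):
    # Round-0 network; a small MLP stands in for the CNNs used in practice.
    return nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_classes),
    )


def add_hidden_layer(net, n_hidden):
    # DIB grows the copied network each round; here a fresh hidden layer is
    # inserted just before the output layer (the position is an assumption).
    layers = list(net.children())
    new_block = [nn.Linear(n_hidden, n_hidden), nn.ReLU()]
    return nn.Sequential(*layers[:-1], *new_block, layers[-1])


def train_member(net, X, y, sample_w, epochs, lr=1e-2):
    # Weighted cross-entropy on the full set; resampling from the boosting
    # distribution would be the equivalent-in-expectation alternative.
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = (F.cross_entropy(net(X), y, reduction="none") * sample_w).sum()
        loss.backward()
        opt.step()
    return net


def deep_incremental_boosting(X, y, n_classes, rounds=5,
                              first_epochs=50, later_epochs=10, n_hidden=32):
    n = X.shape[0]
    D = torch.full((n,), 1.0 / n)          # AdaBoost sample distribution
    members, alphas = [], []
    net = make_base_net(X.shape[1], n_hidden, n_classes)
    for t in range(rounds):
        # Transfer of learning: later members start from a copy of the
        # previous one, so they need far fewer epochs than the first.
        epochs = first_epochs if t == 0 else later_epochs
        net = train_member(net, X, y, D, epochs)
        with torch.no_grad():
            pred = net(X).argmax(dim=1)
        err = float(D[pred != y].sum())
        err = min(max(err, 1e-10), 1 - 1e-10)
        # SAMME-style multi-class member weight (assumed update rule).
        alpha = torch.log(torch.tensor((1 - err) / err)) \
            + torch.log(torch.tensor(n_classes - 1.0))
        D = D * torch.exp(alpha * (pred != y).float())
        D = D / D.sum()
        members.append(copy.deepcopy(net))
        alphas.append(alpha)
        # Grow a copy of the current member to seed the next round.
        net = add_hidden_layer(copy.deepcopy(net), n_hidden)
    return members, alphas


def ensemble_predict(members, alphas, X):
    # Weighted vote over the class scores of all boosted members.
    votes = sum(a * F.softmax(m(X), dim=1) for m, a in zip(members, alphas))
    return votes.argmax(dim=1)
```

In the paper the members are deep networks trained on data drawn from the boosting distribution, with new layers injected at a chosen position; the sketch above only aims to show the warm-start-and-grow structure of the incremental loop.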

Related research

- DILF-EN framework for Class-Incremental Learning (12/23/2021)
- SelfieBoost: A Boosting Algorithm for Deep Learning (11/13/2014)
- Eliminating Multicollinearity Issues in Neural Network Ensembles: Incremental, Negatively Correlated, Optimal Convex Blending (04/30/2021)
- Incremental Sequence Learning (11/09/2016)
- Boosting-like Deep Learning For Pedestrian Detection (05/26/2015)
- Boosting as a Product of Experts (02/14/2012)
- Incremental Feature Learning For Infinite Data (08/06/2021)
