Stacking and stability

01/26/2019
by Nino Arsov, et al.

Stacking is a general approach for combining multiple models toward greater predictive accuracy. It has found various applications across different domains, owing to its meta-learning nature. Nevertheless, our understanding of how and why stacking works remains intuitive and lacking in theoretical insight. In this paper, we use the stability of learning algorithms as an elemental analysis framework suitable for addressing the issue. To this end, we analyze the hypothesis stability of stacking, bag-stacking, and dag-stacking and establish a connection between bag-stacking and weighted bagging. We show that the hypothesis stability of stacking is a product of the hypothesis stability of each of the base models and the combiner. Moreover, in bag-stacking and dag-stacking, the hypothesis stability depends on the sampling strategy used to generate the training set replicates. Our findings suggest that 1) subsampling and bootstrap sampling improve the stability of stacking, and 2) stacking improves the stability of both subbagging and bagging.
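To make the setup concrete, here is a minimal sketch of bag-stacking for regression: base models are trained on bootstrap replicates of the level-0 data, and a combiner is then fit on their held-out predictions. The ridge base learner, the toy data, and all parameter choices below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise (hypothetical setup for illustration).
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# Split: level-0 data trains the base models; level-1 data trains the combiner.
X0, y0 = X[:100], y[:100]
X1, y1 = X[100:], y[100:]

def fit_ridge(X, y, lam=1e-3):
    """Ridge regression with intercept (an illustrative base learner)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return lambda Xn: np.hstack([Xn, np.ones((len(Xn), 1))]) @ w

# Level 0: train base models on bootstrap replicates (the "bag" in bag-stacking;
# subsampling without replacement would give the dag-stacking variant).
base_models = []
for _ in range(5):
    idx = rng.integers(0, len(X0), size=len(X0))  # bootstrap sample
    base_models.append(fit_ridge(X0[idx], y0[idx]))

# Level 1: the combiner is trained on the base models' held-out predictions.
Z1 = np.column_stack([m(X1) for m in base_models])
combiner = fit_ridge(Z1, y1)

def stacked_predict(Xn):
    """Stacked prediction: combiner applied to the base models' outputs."""
    Z = np.column_stack([m(Xn) for m in base_models])
    return combiner(Z)

pred = stacked_predict(np.array([[0.5]]))[0]  # true function gives 2 * 0.5 = 1.0
```

The key structural point, mirrored in the paper's analysis, is that the stacked hypothesis is a composition: perturbing one training point affects the ensemble only through each base model and then through the combiner, which is why the stability bound factors into a product of the two.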


