Stability revisited: new generalisation bounds for the Leave-one-Out

08/23/2016
by Alain Celisse, et al.

The present paper provides a new generic strategy leading to non-asymptotic theoretical guarantees for the Leave-one-Out procedure applied to a broad class of learning algorithms. This strategy relies on two main ingredients: the new notion of L^q stability, and the extensive use of moment inequalities. L^q stability extends the existing notion of hypothesis stability while remaining weaker than uniform stability. It leads to new PAC exponential generalisation bounds for the Leave-one-Out procedure under mild assumptions. In the literature, such bounds were previously available only for uniformly stable algorithms, under a boundedness assumption for instance. As a first step, our generic strategy is applied to the Ridge regression algorithm.
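To make the setting concrete, below is a minimal sketch of the Leave-one-Out procedure for Ridge regression, the algorithm the paper treats as its first application. This is not the paper's strategy or its bounds, only a standard illustration: for a fixed penalty lambda, the exact LOO residuals of Ridge admit the closed form e_i = (y_i - yhat_i) / (1 - H_ii), where H = X (X^T X + lambda*I)^{-1} X^T is the hat matrix, so no refitting is needed. Function names and the toy data are illustrative.

```python
import numpy as np

def ridge_loo_residuals(X, y, lam):
    """Exact leave-one-out residuals for ridge regression via the
    hat-matrix shortcut e_i = (y_i - yhat_i) / (1 - H_ii), with
    H = X (X^T X + lam*I)^{-1} X^T. No refitting needed."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return (y - H @ y) / (1.0 - np.diag(H))

def ridge_loo_residuals_naive(X, y, lam):
    """Reference implementation: refit ridge n times, each time
    holding out one sample and predicting it."""
    n, p = X.shape
    out = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta = np.linalg.solve(X[keep].T @ X[keep] + lam * np.eye(p),
                               X[keep].T @ y[keep])
        out[i] = y[i] - X[i] @ beta
    return out

# Toy check on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=50)
assert np.allclose(ridge_loo_residuals(X, y, 1.0),
                   ridge_loo_residuals_naive(X, y, 1.0))
```

The assertion checks the closed-form shortcut against the naive held-out refits; this exact identity is specific to linear smoothers such as Ridge, which is part of what makes Ridge a natural first test case for LOO guarantees.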


Related research

03/12/2019 · An Exponential Efron-Stein Inequality for Lq Stable Learning Rules
There is accumulating evidence in the literature that stability of learn...

06/08/2022 · Boosting the Confidence of Generalization for L_2-Stable Randomized Learning Algorithms
Exponential generalization bounds with near-tight rates have recently be...

02/28/2017 · Algorithmic stability and hypothesis complexity
We introduce a notion of algorithmic stability of learning algorithms---...

12/12/2012 · Almost-everywhere algorithmic stability and generalization error
We explore in some detail the notion of algorithmic stability as a viabl...

08/16/2011 · Stability Conditions for Online Learnability
Stability is a general notion that quantifies the sensitivity of a learn...

01/26/2019 · Stacking and stability
Stacking is a general approach for combining multiple models toward grea...

11/25/2021 · Multi-fidelity Stability for Graph Representation Learning
In the problem of structured prediction with graph representation learni...
