Algorithmic stability and hypothesis complexity

02/28/2017
by Tongliang Liu, et al.

We introduce a notion of algorithmic stability of learning algorithms, which we term argument stability, that captures the stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its argument stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to analyze the performance of learning algorithms based on empirical risk minimization and stochastic gradient descent.
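
The abstract compresses the central definition, so a short sketch may help. What follows is a hedged reconstruction of the kind of condition "argument stability" names; the paper's exact formulation, norm, and constants may differ. A learning algorithm $A$ maps a sample $S \in \mathcal{Z}^n$ to a hypothesis $A(S)$ in a Banach space $(\mathcal{H}, \|\cdot\|)$, and is uniformly argument stable with rate $\beta_n$ if, for every pair of samples $S, S'$ differing in a single example,

\[
\|A(S) - A(S')\| \le \beta_n .
\]

For comparison, classical uniform stability (Bousquet and Elisseeff, 2002) controls the loss difference, $\sup_{z} |\ell(A(S), z) - \ell(A(S'), z)| \le \beta_n$; when the loss is $L$-Lipschitz in the hypothesis, argument stability with rate $\beta_n$ implies uniform stability with rate $L \beta_n$.

To make the notion concrete, here is a minimal, self-contained Python sketch that empirically probes the argument stability of SGD on a regularized least-squares problem. The model, data, and hyperparameters are illustrative assumptions, not taken from the paper; for linear hypotheses $h_w(x) = \langle w, x \rangle$, the parameter distance stands in for the hypothesis-space distance.

import numpy as np

def sgd(X, y, lam=0.1, lr=0.01, epochs=20, seed=0):
    # Plain SGD on 0.5*(w @ x - y)^2 + 0.5*lam*||w||^2. Fixing the seed
    # keeps the visitation order identical across runs, so the only
    # difference between the two runs below is the one replaced example.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i] + lam * w
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

# Build S' by replacing a single training example of S.
X2, y2 = X.copy(), y.copy()
X2[0] = rng.standard_normal(d)
y2[0] = X2[0] @ w_star + 0.1 * rng.standard_normal()

w1, w2 = sgd(X, y), sgd(X2, y2)
print("hypothesis distance ||A(S) - A(S')|| =", np.linalg.norm(w1 - w2))

Shrinking the step size or strengthening the regularization should shrink the reported distance, in line with the general intuition that stability bounds for SGD and for regularized empirical risk minimization improve with smaller steps and heavier regularization.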

Related research

12/12/2012  Almost-everywhere algorithmic stability and generalization error
We explore in some detail the notion of algorithmic stability as a viabl...

03/24/2013  On Learnability, Complexity and Stability
We consider the fundamental question of learnability of a hypotheses cla...

02/07/2016  Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms
The question why deep learning algorithms generalize so well has attract...

02/25/2021  Machine Unlearning via Algorithmic Stability
We study the problem of machine unlearning and identify a notion of algo...

08/23/2016  Stability revisited: new generalisation bounds for the Leave-one-Out
The present paper provides a new generic strategy leading to non-asympto...

01/26/2019  Stacking and stability
Stacking is a general approach for combining multiple models toward grea...

05/07/2014  A Mathematical Theory of Learning
In this paper, a mathematical theory of learning is proposed that has ma...
