An Exponential Efron-Stein Inequality for L_q Stable Learning Rules

03/12/2019
by Karim Abou-Moustafa et al.

There is accumulating evidence in the literature that the stability of a learning algorithm is a key characteristic permitting it to generalize. Despite various insightful results in this direction, there seems to be an overlooked dichotomy in the types of stability-based generalization bounds available. On the one hand, the literature suggests that exponential generalization bounds for the estimated risk, which are optimal, can only be obtained through stringent, distribution-independent, and computationally intractable notions of stability such as uniform stability. On the other hand, weaker notions of stability such as hypothesis stability, although distribution-dependent and more amenable to computation, seem to yield only polynomial generalization bounds for the estimated risk, which are suboptimal. In this paper, we address the gap between these two regimes of results. In particular, the main question we address is whether it is possible to derive exponential generalization bounds for the estimated risk using a notion of stability that is computationally tractable and distribution-dependent, yet weaker than uniform stability. Using recent advances in concentration inequalities, and using a notion of stability that is weaker than uniform stability but distribution-dependent and amenable to computation, we derive an exponential tail bound for the concentration of the estimated risk of a hypothesis returned by a general learning rule, where the estimated risk is expressed in terms of either the resubstitution estimate (empirical error) or the deleted (leave-one-out) estimate. As an illustration, we derive exponential tail bounds for ridge regression with unbounded responses, a setting where the uniform stability results of Bousquet and Elisseeff (2002) are not applicable.
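
For orientation, the sketch below records the standard stability notions contrasted in the abstract, following Bousquet and Elisseeff (2002), together with schematic shapes of the two kinds of tail bounds. The L_q form is inferred from the paper's title, and the constants c_1, c_2 are placeholders, not the paper's actual constants.

    % Uniform stability (distribution-independent): worst case over samples
    % S and test points z of the change in loss when example i is removed.
    \[ \sup_{S,\,z} \bigl| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \bigr| \le \beta \]

    % Hypothesis stability (distribution-dependent): the same difference,
    % controlled only in expectation over S and z.
    \[ \mathbb{E}_{S,\,z} \bigl| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \bigr| \le \beta \]

    % An L_q stability condition (assumed form) interpolates between the two:
    \[ \Bigl( \mathbb{E}_{S,\,z} \bigl| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \bigr|^{q} \Bigr)^{1/q} \le \beta \]

    % Schematic tail shapes: exponential (optimal) vs. polynomial (suboptimal),
    % with placeholder constants c_1, c_2 and estimated risk \widehat{R}_n.
    \[ \Pr\bigl( |R(A_S) - \widehat{R}_n(A_S)| \ge \varepsilon \bigr) \le c_1 e^{-c_2 n \varepsilon^2}
       \quad \text{vs.} \quad
       \Pr\bigl( |R(A_S) - \widehat{R}_n(A_S)| \ge \varepsilon \bigr) \le \frac{c_1}{n\,\varepsilon^2} \]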

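To make the ridge regression illustration concrete, here is a minimal numerical sketch, assuming squared loss and using scikit-learn's Ridge as a stand-in for the paper's learning rule. It draws Gaussian data with unbounded responses, computes the deleted (leave-one-out) estimate, and forms a Monte-Carlo proxy for the q-th-moment loss differences that an L_q stability condition controls; the choice q = 2 and all variable names are illustrative, and this is not the paper's construction.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n, d, lam, q = 200, 5, 1.0, 2  # sample size, dimension, ridge penalty, moment order

    # Gaussian design with Gaussian (hence unbounded) responses: the setting in
    # which bounded-loss uniform stability arguments do not apply.
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = X @ w_true + rng.standard_normal(n)

    # A fixed fresh test point z = (z_x, z_y) for the stability proxy.
    z_x = rng.standard_normal(d)
    z_y = z_x @ w_true + rng.standard_normal()

    full = Ridge(alpha=lam).fit(X, y)

    deleted_losses, loss_diffs = [], []
    for i in range(n):
        mask = np.arange(n) != i
        loo = Ridge(alpha=lam).fit(X[mask], y[mask])
        # Deleted (leave-one-out) estimate: loss of the LOO model on the held-out pair.
        deleted_losses.append((loo.predict(X[i:i + 1])[0] - y[i]) ** 2)
        # |loss(A_S, z) - loss(A_{S \ i}, z)|: the difference whose q-th moment
        # an L_q stability condition would control.
        loss_full = (full.predict(z_x[None, :])[0] - z_y) ** 2
        loss_loo = (loo.predict(z_x[None, :])[0] - z_y) ** 2
        loss_diffs.append(abs(loss_full - loss_loo))

    print("deleted (leave-one-out) estimate:", np.mean(deleted_losses))
    print(f"empirical L_{q} stability proxy:", np.mean(np.array(loss_diffs) ** q) ** (1 / q))

Averaging over many independent samples and test points would estimate the actual expectation in the hypothesis/L_q stability definitions; a single sample and test point, as above, only gives a rough proxy.
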
Related research

06/08/2022 · Boosting the Confidence of Generalization for L_2-Stable Randomized Learning Algorithms
Exponential generalization bounds with near-tight rates have recently be...

06/19/2017 · An a Priori Exponential Tail Bound for k-Folds Cross-Validation
We consider a priori generalization bounds developed in terms of cross-v...

08/23/2016 · Stability revisited: new generalisation bounds for the Leave-one-Out
The present paper provides a new generic strategy leading to non-asympto...

08/22/2016 · Uniform Generalization, Concentration, and Adaptive Learning
One fundamental goal in any learning algorithm is to mitigate its risk f...

06/11/2021 · Towards Understanding Generalization via Decomposing Excess Risk Dynamics
Generalization is one of the critical issues in machine learning. Howeve...

05/25/2022 · Transportation-Inequalities, Lyapunov Stability and Sampling for Dynamical Systems on Continuous State Space
We study the concentration phenomenon for discrete-time random dynamical...

03/25/2022 · Generalization bounds for learning under graph-dependence: A survey
Traditional statistical learning theory relies on the assumption that da...
