Boosting the Confidence of Generalization for L_2-Stable Randomized Learning Algorithms

06/08/2022
by Xiao-Tong Yuan, et al.

Exponential generalization bounds with near-tight rates have recently been established for uniformly stable learning algorithms. The notion of uniform stability, however, is stringent in the sense that it is invariant to the data-generating distribution. Under the weaker, distribution-dependent notions of stability such as hypothesis stability and L_2-stability, the literature suggests that only polynomial generalization bounds are possible in general. The present paper addresses this long-standing tension between the two regimes of results and makes progress towards relaxing it within a classic framework of confidence boosting. To this end, we first establish an in-expectation first-moment generalization error bound for potentially randomized learning algorithms with L_2-stability, based on which we then show that a properly designed subbagging process leads to near-tight exponential generalization bounds over the randomness of both data and algorithm. We further instantiate these generic results for stochastic gradient descent (SGD) to derive improved high-probability generalization bounds for convex or non-convex optimization problems with natural time-decaying learning rates, which have not been possible to prove with the existing hypothesis-stability or uniform-stability based results.
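The confidence-boosting step described above rests on subbagging: the base algorithm is trained on several independent subsamples drawn without replacement, and the resulting predictors are aggregated. The snippet below is a minimal sketch of that idea with an SGD base learner using a 1/t learning-rate decay; the linear least-squares model, the function names, and the averaging rule are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def sgd_base_learner(X, y, epochs=5, seed=0):
    """Illustrative L_2-regression base learner trained with SGD and a 1/t
    decaying step size. The paper's analysis covers general L_2-stable
    randomized algorithms, not this specific model."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            lr = 1.0 / t                       # natural time-decaying learning rate
            grad = (X[i] @ w - y[i]) * X[i]    # gradient of 0.5 * (x.w - y)^2
            w -= lr * grad
    return w

def subbagging_predict(X_train, y_train, X_test, k=20, subsample_frac=0.5, seed=0):
    """Confidence boosting via subbagging (sketch): train k SGD models on
    independent subsamples drawn without replacement, then average their
    predictions on the test points."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    m = int(subsample_frac * n)
    preds = []
    for j in range(k):
        idx = rng.choice(n, size=m, replace=False)   # subsample without replacement
        w = sgd_base_learner(X_train[idx], y_train[idx], seed=j)
        preds.append(X_test @ w)
    return np.mean(preds, axis=0)                    # aggregated predictor
```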
