# Generalization Bounds for Uniformly Stable Algorithms

Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in [0,1], the generalization error of a γ-uniformly stable learning algorithm on n samples is known to be at most O((γ + 1/n)√(n log(1/δ))) with probability at least 1-δ. Unfortunately, this bound does not yield meaningful generalization guarantees in many common settings where γ ≥ 1/√n. At the same time, the bound is known to be tight only when γ = O(1/n). Here we prove substantially stronger generalization bounds for uniformly stable algorithms without any additional assumptions. First, we show that the generalization error in this setting is at most O(√((γ + 1/n) log(1/δ))) with probability at least 1-δ. In addition, we prove a tight bound of O(γ² + 1/n) on the second moment of the generalization error; the best previous bound on the second moment was O(γ + 1/n). Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
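To see why the improvement matters in the regime γ ≥ 1/√n mentioned above, the two high-probability bounds can be compared numerically. The sketch below (ignoring absolute constants, which the O(·) notation hides) evaluates the classical bound (γ + 1/n)√(n log(1/δ)) against the new bound √((γ + 1/n) log(1/δ)) at γ = 1/√n; the classical bound stays above 1 (vacuous for a loss in [0,1]) while the new one vanishes as n grows:

```python
import math

def classical_bound(gamma, n, delta):
    # Bousquet & Elisseeff (2002)-style bound, up to constants:
    # (gamma + 1/n) * sqrt(n * log(1/delta))
    return (gamma + 1.0 / n) * math.sqrt(n * math.log(1.0 / delta))

def improved_bound(gamma, n, delta):
    # Bound from this paper, up to constants:
    # sqrt((gamma + 1/n) * log(1/delta))
    return math.sqrt((gamma + 1.0 / n) * math.log(1.0 / delta))

delta = 0.01
for n in (10**2, 10**4, 10**6):
    gamma = 1.0 / math.sqrt(n)  # regime where the classical bound is vacuous
    print(n, round(classical_bound(gamma, n, delta), 3),
             round(improved_bound(gamma, n, delta), 3))
```

At γ = 1/√n the classical expression is roughly √(log(1/δ)) ≈ 2.1 for every n, whereas the improved one decays like (log(1/δ)/√n)^{1/2}.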
