A Tight Lower Bound for Uniformly Stable Algorithms

12/24/2020
by Qinghua Liu, et al.

Leveraging algorithmic stability to derive sharp generalization bounds is a classic and powerful approach in learning theory. Since Vapnik and Chervonenkis [1974] first formalized the idea for analyzing SVMs, it has been used to study many fundamental learning algorithms (e.g., k-nearest neighbors [Rogers and Wagner, 1978], the stochastic gradient method [Hardt et al., 2016], and linear regression [Maurer, 2017]). A recent line of influential work by Feldman and Vondrak [2018, 2019] and Bousquet et al. [2020b] proves a high-probability generalization upper bound of order 𝒪(γ + L/√n) for any uniformly γ-stable algorithm with an L-bounded loss function. Although much progress has been made on generalization upper bounds for stable algorithms, our knowledge of lower bounds is rather limited. In fact, to the best of our knowledge, no nontrivial lower bound has been known since the study of uniform stability began [Bousquet and Elisseeff, 2002]. In this paper we fill this gap by proving a tight generalization lower bound of order Ω(γ + L/√n), which matches the best known upper bound up to logarithmic factors.
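To make the scaling of the matched bounds concrete, here is a minimal sketch (not from the paper) that evaluates the rate γ + L/√n appearing in both the 𝒪(·) upper bound and the Ω(·) lower bound; the constant and logarithmic factors hidden by the asymptotic notation are deliberately omitted, and the parameter values below are illustrative assumptions.

```python
import math

def generalization_rate(gamma: float, L: float, n: int) -> float:
    """Rate gamma + L / sqrt(n) shared by the upper bound O(gamma + L/sqrt(n))
    and the matching lower bound Omega(gamma + L/sqrt(n)), ignoring constants
    and log factors hidden in the asymptotic notation."""
    return gamma + L / math.sqrt(n)

# Illustrative values (assumptions, not from the paper):
# a gamma = O(1/n)-stable algorithm with a 1-bounded loss.
n = 10_000
gamma = 1.0 / n        # uniform stability parameter
L = 1.0                # bound on the loss

rate = generalization_rate(gamma, L, n)
print(rate)  # dominated by L/sqrt(n) = 0.01 when gamma = 1/n
```

For γ = 𝒪(1/n) the L/√n term dominates, which is why the lower bound of the paper shows the √n dependence in the known upper bounds cannot be improved in general.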
