
Generalization Bounds for Uniformly Stable Algorithms
Uniform stability of a learning algorithm is a classical notion of algor...

On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier
We study the rates of convergence from empirical surrogate risk minimize...

Uniform Generalization, Concentration, and Adaptive Learning
One fundamental goal in any learning algorithm is to mitigate its risk f...

RATT: Leveraging Unlabeled Data to Guarantee Generalization
To assess generalization, machine learning scientists typically either (...

Asymptotic optimality and minimal complexity of classification by random projection
The generalization error of a classifier is related to the complexity of...

Convex Risk Minimization and Conditional Probability Estimation
This paper proves, in very general settings, that convex risk minimizati...

Uniform convergence may be unable to explain generalization in deep learning
We cast doubt on the power of uniform convergence-based generalization b...
In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) classifier that is coupled to ĥ and designed to trade empirical risk for control of generalization error. In the case where ĥ interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We show that replacing ĥ by its conditional distribution with respect to an arbitrary σ-field is a viable method to derandomize. We give an example, inspired by the work of Nagarajan and Kolter (2019), where the learned classifier ĥ interpolates the training data with high probability, has small risk, and, yet, does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of ĥ in terms of a surrogate that is constructed by conditioning and shown to belong to a nonrandom class with uniformly small generalization error.
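The bounding strategy described in the abstract can be sketched schematically. This is an illustrative decomposition under assumed notation, not a formula taken from the paper: write R for population risk, R̂_n for empirical risk on n samples, and let the surrogate g be obtained from ĥ by conditioning on a σ-field 𝒢, so that g lies in a nonrandom class 𝒢* with uniform control of generalization error:

```latex
% Schematic two-step bound (illustrative notation, not from the paper):
% (1) pass from the learned predictor \hat h to the coupled surrogate g,
% (2) apply uniform convergence to g over the nonrandom class \mathcal{G}^*.
\[
  R(\hat h)
  \;\le\;
  \underbrace{\bigl(R(\hat h) - R(g)\bigr)}_{\text{coupling cost}}
  \;+\;
  \underbrace{\hat R_n(g)}_{\text{empirical risk of surrogate}}
  \;+\;
  \underbrace{\sup_{g' \in \mathcal{G}^*} \bigl| R(g') - \hat R_n(g') \bigr|}_{\text{uniform bound over nonrandom class}}
\]
```

The point of the construction is that even when ĥ itself escapes any class with a tight two-sided uniform bound, the conditioned surrogate g need not, so the third term can be made uniformly small.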