In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors

12/09/2019
by   Jeffrey Negrea, et al.

We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) classifier that is coupled to ĥ and designed to trade empirical risk for control of generalization error. In the case where ĥ interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We show that replacing ĥ by its conditional distribution with respect to an arbitrary σ-field is a viable method to derandomize. We give an example, inspired by the work of Nagarajan and Kolter (2019), where the learned classifier ĥ interpolates the training data with high probability, has small risk, and, yet, does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of ĥ in terms of a surrogate that is constructed by conditioning and shown to belong to a nonrandom class with uniformly small generalization error.
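The surrogate argument sketched in the abstract can be summarized by a schematic risk decomposition (the notation below is ours, not necessarily the paper's): the risk of the learned predictor ĥ is passed through a coupled surrogate g belonging to a nonrandom class 𝒢 with uniformly small two-sided generalization error.

```latex
% Schematic surrogate decomposition (our notation, for illustration only).
% R(h) = population risk, \hat{R}_n(h) = empirical risk on n samples,
% g = surrogate coupled to \hat{h}, \mathcal{G} = nonrandom class with g \in \mathcal{G}.
\begin{align*}
R(\hat{h})
  &= \hat{R}_n(g)
   + \underbrace{\bigl(R(g) - \hat{R}_n(g)\bigr)}_{\text{generalization error of } g}
   + \underbrace{\bigl(R(\hat{h}) - R(g)\bigr)}_{\text{coupling cost}} \\
  &\le \hat{R}_n(g)
   + \sup_{g' \in \mathcal{G}} \bigl| R(g') - \hat{R}_n(g') \bigr|
   + \bigl(R(\hat{h}) - R(g)\bigr).
\end{align*}
```

Read this way, the surrogate may have nonzero empirical risk even when ĥ interpolates the data; that is the sense in which the surrogate "trades empirical risk for control of generalization error", with the middle term handled by uniform convergence over the nonrandom class containing the conditioned surrogate.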


