RATT: Leveraging Unlabeled Data to Guarantee Generalization

05/01/2021
by Saurabh Garg, et al.

To assess generalization, machine learning scientists typically either (i) bound the generalization gap and then (after training) plug in the empirical risk to obtain a bound on the true risk; or (ii) validate empirically on holdout data. However, (i) typically yields vacuous guarantees for overparameterized models. Furthermore, (ii) shrinks the training set, and its guarantee erodes with each re-use of the holdout set. In this paper, we introduce a method that leverages unlabeled data to produce generalization bounds. After augmenting our (labeled) training set with randomly labeled fresh examples, we train in the standard fashion. Whenever classifiers achieve low error on clean data and high error on noisy data, our bound provides a tight upper bound on the true risk. We prove that our bound is valid for 0-1 empirical risk minimization and for linear classifiers trained by gradient descent. Our approach is especially useful in conjunction with deep learning due to the early learning phenomenon, whereby networks fit true labels before noisy labels, though it requires one intuitive assumption. Empirically, on canonical computer vision and NLP tasks, our bound provides non-vacuous generalization guarantees that track actual performance closely. This work provides practitioners with an option for certifying the generalization of deep nets even when unseen labeled data is unavailable, and it provides theoretical insights into the relationship between random label noise and generalization.
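For intuition, the sketch below shows one way the quantities described in the abstract might be combined in the binary case: after training on the union of the clean set and the randomly labeled set, low error on the clean data together with near-chance error on the randomly labeled data yields a tight estimate, while memorizing the random labels pushes the estimate toward a vacuous value. The function name ratt_style_bound, the exact form of the bound, and the Hoeffding-style concentration term are illustrative assumptions for this sketch, not the theorems proved in the paper.

    import numpy as np

    def ratt_style_bound(clean_train_error, random_label_error, m, delta=0.05):
        """Illustrative RATT-style generalization estimate (binary classification).

        Hypothetical simplification: the paper's theorems include additional
        conditions and terms (e.g., ERM on 0-1 loss, or linear models trained
        by gradient descent); here we only sketch the headline quantity plus a
        Hoeffding-style term over the m randomly labeled points.

        clean_train_error  : empirical 0-1 error on the original labeled set
        random_label_error : empirical 0-1 error on the randomly labeled set
                             (both measured after joint training on their union)
        m                  : number of randomly labeled (formerly unlabeled) points
        delta              : failure probability for the concentration term
        """
        # If the network memorizes the random labels, random_label_error -> 0
        # and the estimate becomes vacuous; if it treats them like unseen data,
        # random_label_error -> 1/2 and the estimate tracks the clean train error.
        concentration = np.sqrt(np.log(2.0 / delta) / (2.0 * m))
        return clean_train_error + max(0.0, 1.0 - 2.0 * random_label_error) + concentration

    if __name__ == "__main__":
        # Low error on clean data, near-chance error on random labels
        # (the "early learning" regime) gives a tight, non-vacuous estimate.
        print(ratt_style_bound(clean_train_error=0.02, random_label_error=0.46, m=5000))
        # Heavy memorization of the random labels drives the estimate toward 1.
        print(ratt_style_bound(clean_train_error=0.00, random_label_error=0.05, m=5000))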


Related research

11/18/2022: Why pseudo label based algorithm is effective? –from the perspective of pseudo labeled data
Recently, pseudo label based semi-supervised learning has achieved great...

12/08/2022: Leveraging Unlabeled Data to Track Memorization
Deep neural networks may easily memorize noisy labels present in real-wo...

02/26/2021: On the Generalization of Stochastic Gradient Descent with Momentum
While momentum-based methods, in conjunction with stochastic gradient de...

07/17/2023: Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data
Label-noise or curated unlabeled data is used to compensate for the assu...

12/09/2019: In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
We propose to study the generalization error of a learned predictor ĥ in...

06/03/2019: Adversarially Robust Generalization Just Requires More Unlabeled Data
Neural network robustness has recently been highlighted by the existence...

05/27/2021: Towards Understanding Knowledge Distillation
Knowledge distillation, i.e., one classifier being trained on the output...
