Generalization by Recognizing Confusion

06/13/2020
by Daniel Chiu, et al.

A recently proposed technique called self-adaptive training augments modern neural networks by allowing them to adjust training labels on the fly, avoiding overfitting to samples that may be mislabeled or otherwise non-representative. By combining the self-adaptive objective with mixup, we further improve the accuracy of self-adaptive models for image recognition; the resulting classifier obtains state-of-the-art accuracy on datasets corrupted with label noise. Robustness to label noise implies a lower generalization gap; thus, our approach also leads to improved generalization. We find evidence that the Rademacher complexity of these algorithms is low, suggesting a new path towards provable generalization for this type of deep learning model. Finally, we highlight a novel connection between the difficulty of accounting for rare classes and robustness under noise, as rare classes are, in a sense, indistinguishable from label noise. Our code can be found at https://github.com/Tuxianeer/generalizationconfusion.
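To make the training recipe concrete, below is a minimal PyTorch sketch of the combination the abstract describes: exponential-moving-average (EMA) soft-target updates in the style of self-adaptive training, plus mixup on inputs and targets. This is not the authors' implementation (see the repository above for that); the data loader yielding sample indices, and the hyperparameters `alpha_ema` and `mixup_alpha`, are illustrative assumptions rather than values from the paper.

```python
# Illustrative sketch only: self-adaptive soft targets + mixup.
# Assumes a classifier `model` and a loader yielding (index, image, one-hot label).
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, soft_targets,
                alpha_ema=0.9, mixup_alpha=1.0, device="cpu"):
    model.train()
    for idx, x, y_onehot in loader:
        x = x.to(device)

        # Self-adaptive step: update each sample's stored soft target with an
        # EMA of the model's current prediction, so confidently mispredicted
        # (possibly mislabeled) samples drift away from their given labels.
        with torch.no_grad():
            probs = F.softmax(model(x), dim=1)
            soft_targets[idx] = (alpha_ema * soft_targets[idx].to(device)
                                 + (1 - alpha_ema) * probs).cpu()
        t = soft_targets[idx].to(device)

        # Mixup step: train on convex combinations of inputs and soft targets.
        lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample().item()
        perm = torch.randperm(x.size(0), device=device)
        x_mix = lam * x + (1 - lam) * x[perm]
        t_mix = lam * t + (1 - lam) * t[perm]

        # Cross-entropy against the mixed soft targets.
        log_probs = F.log_softmax(model(x_mix), dim=1)
        loss = -(t_mix * log_probs).sum(dim=1).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In this sketch, `soft_targets` would be a CPU tensor of shape (num_samples, num_classes), initialized to the (possibly noisy) one-hot training labels and updated in place each epoch; the EMA update is what lets the model revise labels on the fly, which is the mechanism the abstract credits for robustness to label noise.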
