
Learning Not to Learn in the Presence of Noisy Labels

by Liu Ziyin, et al.

Learning in the presence of label noise is a challenging yet important task: it is crucial to design models that remain robust when trained on mislabeled datasets. In this paper, we discover that a class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption. We show that training with this loss function encourages the model to "abstain" from learning on data points with noisy labels, yielding a simple and effective method to improve robustness and generalization. In addition, we propose two practical extensions of the method: 1) an analytical early stopping criterion that approximately halts training before the memorization of noisy labels, and 2) a heuristic for setting hyperparameters that does not require knowledge of the noise corruption rate. We demonstrate the effectiveness of our method with strong results on three image and text classification tasks as compared to existing baselines.
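The gambler's loss described in the abstract augments an m-class classifier with one extra "abstain" output; the loss rewards routing probability mass to that output instead of fitting a suspect label. A minimal NumPy sketch, assuming the formulation from the authors' earlier "Deep Gamblers" work (loss = -log(p_y + p_abstain / o), with payoff hyperparameter o in (1, m]; the function name and shapes here are illustrative, not the paper's code):

```python
import numpy as np

def gamblers_loss(logits, labels, payoff):
    """Per-example gambler's loss (sketch).

    logits: (n, m + 1) array; the last column is the abstention output.
    labels: (n,) integer class labels in [0, m).
    payoff: scalar o in (1, m]; larger o makes abstaining costlier.
    """
    # Softmax over all m + 1 outputs (classes plus abstention),
    # shifted by the row max for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    p_true = p[np.arange(len(labels)), labels]  # prob. of the given label
    p_abstain = p[:, -1]                        # prob. of abstaining

    # -log(p_y + p_abstain / o): abstaining bounds the loss even when
    # the provided label is wrong, which is what discourages memorizing noise.
    return -np.log(p_true + p_abstain / payoff)
```

With a mislabeled example, a model that abstains (high last logit) incurs a much smaller loss than one that confidently fits the wrong class, which is the mechanism behind the robustness claim.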

