
Label Noise-Robust Learning using a Confidence-Based Sieving Strategy

by Reihaneh Torkzadehmahani et al.

In learning tasks with label noise, a pivotal challenge is keeping the model robust against overfitting, because the model eventually memorizes all labels, including the noisy ones. Identifying the samples with corrupted labels and preventing the model from learning them is a promising approach to this challenge. Per-sample training loss is a previously studied metric that treats small-loss samples as clean ones on which the model should be trained. In this work, we first demonstrate the ineffectiveness of this small-loss trick. We then propose a novel discriminator metric called confidence error and a sieving strategy called CONFES to effectively differentiate between clean and noisy samples. We experimentally demonstrate the superior performance of our approach compared to recent studies in various settings, including synthetic and real-world label noise.
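To make the idea concrete, below is a minimal sketch of what a confidence-based sieve could look like. The function names, the exact definition of the confidence error (confidence of the model's predicted class minus the confidence assigned to the given label), and the threshold value are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def confidence_error(probs, given_labels):
    """Illustrative per-sample confidence error: the probability the model
    assigns to its own predicted class minus the probability it assigns to
    the given (possibly noisy) label. When the model's top prediction agrees
    with the provided label, this quantity is zero.

    probs:        (N, C) array of softmax probabilities
    given_labels: (N,) array of integer labels from the dataset
    """
    pred = probs.argmax(axis=1)                 # model's predicted class
    rows = np.arange(len(given_labels))
    return probs[rows, pred] - probs[rows, given_labels]

def sieve(probs, given_labels, threshold=0.1):
    """Keep samples whose confidence error falls below a threshold,
    treating them as (likely) clean; the threshold is a placeholder."""
    return confidence_error(probs, given_labels) < threshold
```

A sample whose given label matches the model's confident prediction gets a confidence error near zero and passes the sieve, while a sample whose label disagrees with a confident prediction gets a large confidence error and is filtered out before the model can memorize it.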
