
Do We Need to Penalize Variance of Losses for Learning with Label Noise?

by Yexiong Lin, et al.

Algorithms that minimize the average loss have been widely designed for dealing with noisy labels. Intuitively, when only a finite training sample is available, penalizing the variance of losses should improve the stability and generalization of such algorithms. Interestingly, we found that the variance should instead be increased for the problem of learning with noisy labels: increasing the variance boosts the memorization effects and reduces the harmfulness of incorrect labels. By exploiting the label noise transition matrix, regularizers can be easily designed to increase the variance of losses and plugged into many existing algorithms. Empirically, the proposed method of increasing the variance of losses significantly improves the generalization ability of baselines on both synthetic and real-world datasets.
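To make the idea concrete, here is a minimal sketch of a variance-regularized training objective. Subtracting a variance term from the mean loss rewards a larger spread of per-sample losses, in the spirit of the abstract's finding; the coefficient `lam` and this exact form are illustrative assumptions, not the paper's transition-matrix-based regularizer.

```python
from statistics import fmean, pvariance

def variance_regularized_loss(losses, lam=0.1):
    # Mean of per-sample losses minus lam times their (population)
    # variance.  Minimizing this objective *encourages* higher loss
    # variance; flipping the sign of the second term would instead
    # penalize variance, the conventional intuition the paper questions.
    return fmean(losses) - lam * pvariance(losses)
```

For example, a batch with uniform losses `[2, 2, 2]` yields a higher objective than a batch `[1, 2, 3]` with the same mean, so the optimizer prefers the higher-variance batch.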


