
Do We Need to Penalize Variance of Losses for Learning with Label Noise?

01/30/2022
by   Yexiong Lin, et al.

Algorithms that minimize the average loss have been widely designed for dealing with noisy labels. Intuitively, when only a finite training sample is available, penalizing the variance of losses should improve the stability and generalization of such algorithms. Interestingly, we found that the variance should instead be increased for the problem of learning with noisy labels. Specifically, increasing the variance boosts the memorization effect on clean labels and reduces the harmfulness of incorrect labels. By exploiting the label noise transition matrix, regularizers that increase the variance of losses can be easily designed and plugged into many existing algorithms. Empirically, the proposed method of increasing the variance of losses significantly improves the generalization ability of baselines on both synthetic and real-world datasets.
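The abstract describes modifying a standard training objective so that the spread of per-sample losses grows rather than shrinks. A minimal sketch of that idea, using plain NumPy: subtract (rather than add) a weighted variance term from the mean loss. The weight `lam` and the function name are hypothetical, and the sketch omits the paper's actual regularizer, which is constructed from the label-noise transition matrix.

```python
import numpy as np

def variance_encouraged_loss(per_sample_losses, lam=0.1):
    """Mean loss minus a weighted variance term.

    Subtracting the variance encourages the spread of per-sample
    losses to grow, which (per the abstract) boosts memorization of
    clean labels and dampens the influence of incorrectly labeled
    examples. `lam` is an illustrative weight, not a value from the
    paper; the paper's regularizer additionally uses the label-noise
    transition matrix, which is omitted here.
    """
    losses = np.asarray(per_sample_losses, dtype=float)
    return losses.mean() - lam * losses.var()

# Usage: a batch where one sample (plausibly mislabeled) has a large loss.
batch = [0.2, 0.3, 0.25, 2.5]
print(variance_encouraged_loss(batch))  # smaller than the plain mean loss
```

Because the variance term enters with a negative sign, minimizing this objective trades some average-loss reduction for a wider loss distribution, the opposite of a conventional variance penalty.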


Related research:

12/08/2022 · Logit Clipping for Robust Learning against Label Noise
In the presence of noisy labels, designing robust loss functions is crit...

06/01/2021 · Sample Selection with Uncertainty of Losses for Learning with Noisy Labels
In learning with noisy labels, the sample selection approach is very pop...

11/08/2021 · Learning to Rectify for Robust Learning with Noisy Labels
Label noise significantly degrades the generalization ability of deep mo...

01/25/2022 · GMM Discriminant Analysis with Noisy Label for Each Class
Real world datasets often contain noisy labels, and learning from such d...

05/02/2022 · From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model
Noisy labels are inevitable yet problematic in machine learning society....

10/16/2018 · Sharp Analysis of Learning with Discrete Losses
The problem of devising learning strategies for discrete losses (e.g., m...

04/26/2021 · An Exploration into why Output Regularization Mitigates Label Noise
Label noise presents a real challenge for supervised learning algorithms...