Regularly Truncated M-estimators for Learning with Noisy Labels

09/02/2023
by   Xiaobo Xia, et al.

The sample selection approach is very popular in learning with noisy labels. Because deep networks learn patterns first, prior methods built on sample selection share a similar training procedure: small-loss examples are regarded as clean and used to help generalization, while large-loss examples are treated as mislabeled and excluded from network parameter updates. However, this procedure is debatable on two counts: (a) it ignores the harmful influence of noisy labels among the selected small-loss examples; (b) it makes no use of the discarded large-loss examples, which may be clean or carry information that is meaningful for generalization. In this paper, we propose regularly truncated M-estimators (RTME) to address these two issues simultaneously. Specifically, RTME alternately switches between truncated M-estimators and original M-estimators. The former adaptively selects small-loss examples without knowledge of the noise rate and reduces the side effects of the noisy labels among them. The latter brings the possibly clean but large-loss examples back into training to help generalization. Theoretically, we demonstrate that our strategies are label-noise-tolerant. Empirically, comprehensive experiments show that our method outperforms multiple baselines and is robust to a broad range of noise types and levels.
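The alternating scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation: the truncation rule (a per-batch loss quantile), the switching period, and the names `truncated_loss`, `tau_quantile`, and `period` are hypothetical stand-ins for the paper's truncated and original M-estimators.

```python
# A minimal sketch of the alternating scheme in the abstract, NOT the
# paper's official method. The quantile-based truncation rule and the
# fixed switching period are assumptions for illustration only.
import torch
import torch.nn.functional as F


def truncated_loss(logits, labels, tau_quantile=0.7):
    """Truncated-estimator mode: keep only small-loss examples.

    Examples whose loss exceeds the per-batch quantile `tau_quantile`
    are truncated (weighted to zero), so the threshold adapts to each
    batch without requiring a known noise rate.
    """
    per_example = F.cross_entropy(logits, labels, reduction="none")
    threshold = torch.quantile(per_example.detach(), tau_quantile)
    mask = (per_example <= threshold).float()
    return (mask * per_example).sum() / mask.sum().clamp(min=1.0)


def full_loss(logits, labels):
    """Original-estimator mode: every example contributes."""
    return F.cross_entropy(logits, labels)


def epoch_loss_fn(epoch, period=5):
    """Regularly switch modes: every `period`-th epoch uses the original
    loss, so possibly clean large-loss examples still influence training.
    """
    return full_loss if epoch % period == 0 else truncated_loss
```

In a training loop, one would call `epoch_loss_fn(epoch)` once per epoch and apply the returned loss to each batch. Because the threshold is a per-batch quantile rather than a fixed cutoff, the small-loss selection adapts as training progresses, which is one plausible reading of the abstract's claim that selection does not require knowing the noise rate.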

