Self-Filtering: A Noise-Aware Sample Selection for Label Noise with Confidence Penalization

08/24/2022
by   Qi Wei, et al.

Sample selection is an effective strategy for mitigating the effect of label noise in robust learning. Typical strategies apply the small-loss criterion to identify clean samples. However, samples lying near the decision boundary usually incur large losses and become entangled with noisy examples, so this criterion discards them, severely degrading generalization performance. In this paper, we propose a novel selection strategy, Self-Filtering (SFT), that exploits the fluctuation of noisy examples in historical predictions to filter them out, thereby avoiding the selection bias the small-loss criterion imposes on boundary examples. Specifically, we introduce a memory-bank module that stores the historical predictions of each example and is dynamically updated to support selection in subsequent learning iterations. Besides, to reduce the error accumulated from SFT's sample-selection bias, we devise a regularization term that penalizes over-confident output distributions. By increasing the weight of the misclassified categories with this term, the loss function becomes robust to label noise under mild conditions. We conduct extensive experiments on three benchmarks with various noise types and achieve a new state of the art. Ablation studies and further analysis verify the merit of SFT for sample selection in robust learning.
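The two ideas in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: `fluctuation_filter` flags an example as noisy if it was once predicted as its given label and later contradicted (one plausible reading of "fluctuation" over a memory bank of historical predictions), and `confidence_penalty` uses a standard negative-entropy term as a stand-in for the paper's reweighting of misclassified categories.

```python
import numpy as np

def fluctuation_filter(pred_history, labels):
    """Select examples whose predictions do not fluctuate.

    pred_history: (T, N) array of predicted class indices, one row per epoch
                  (the "memory bank" of historical predictions).
    labels:       (N,) given (possibly noisy) labels.
    Returns a boolean mask: True = keep as clean for the next iteration.
    """
    agree = pred_history == labels  # (T, N): did epoch t predict the given label?
    n = labels.shape[0]
    fluctuated = np.zeros(n, dtype=bool)
    for i in range(n):
        hits = np.where(agree[:, i])[0]
        # Fluctuation: agreed with the label at some epoch, then
        # contradicted it at a later epoch.
        fluctuated[i] = hits.size > 0 and (~agree[hits[0]:, i]).any()
    return ~fluctuated

def confidence_penalty(probs):
    """Negative entropy of the predictive distribution (mean over a batch).

    Adding this term to the training loss discourages over-confident
    outputs; it is a common surrogate, not the exact SFT regularizer.
    """
    eps = 1e-12
    return np.mean(np.sum(probs * np.log(probs + eps), axis=1))
```

For example, with three epochs of predictions for four examples, an example that is first predicted as its label and later flipped is filtered out, while stably predicted examples are kept; a uniform output distribution incurs a lower (more negative) penalty than a one-hot one.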


Related research

01/26/2022
PARS: Pseudo-Label Aware Robust Sample Selection for Learning with Noisy Labels
Acquiring accurate labels on large-scale datasets is both time consuming...

09/02/2023
Regularly Truncated M-estimators for Learning with Noisy Labels
The sample selection approach is very popular in learning with noisy lab...

03/24/2021
Jo-SRC: A Contrastive Approach for Combating Noisy Labels
Due to the memorization effect in Deep Neural Networks (DNNs), training ...

04/26/2021
An Exploration into why Output Regularization Mitigates Label Noise
Label noise presents a real challenge for supervised learning algorithms...

08/23/2022
Learning from Noisy Labels with Coarse-to-Fine Sample Credibility Modeling
Training deep neural network (DNN) with noisy labels is practically chal...

12/06/2022
Robust Point Cloud Segmentation with Noisy Annotations
Point cloud segmentation is a fundamental task in 3D. Despite recent pro...

11/23/2017
Bias-Compensated Normalized Maximum Correntropy Criterion Algorithm for System Identification with Noisy Input
This paper proposed a bias-compensated normalized maximum correntropy cr...
