Learning with Noisy Labels over Imbalanced Subpopulations

11/16/2022
by Mingcai Chen, et al.

Learning with Noisy Labels (LNL) has attracted significant attention from the research community. Many recent LNL methods rely on the assumption that clean samples tend to have a "small loss". However, this assumption often fails to hold in real-world cases with imbalanced subpopulations, i.e., training subpopulations that vary in sample size or recognition difficulty. As a result, recent LNL methods risk misclassifying "informative" samples (e.g., hard samples or samples from tail subpopulations) as noisy, leading to poor generalization performance. To address this issue, we propose a novel LNL method that simultaneously handles noisy labels and imbalanced subpopulations. It first leverages sample correlation to estimate each sample's clean probability for label correction, and then uses the corrected labels for Distributionally Robust Optimization (DRO) to further improve robustness. Specifically, in contrast to previous works that use the classification loss as the selection criterion, we introduce a feature-based metric that takes sample correlation into account when estimating clean probabilities. We then refurbish the noisy labels using the estimated clean probabilities and the pseudo-labels from the model's predictions. With the refurbished labels, we apply DRO to train a model that is robust to subpopulation imbalance. Extensive experiments on a wide range of benchmarks demonstrate that our technique consistently improves current state-of-the-art robust learning paradigms against noisy labels, especially when encountering imbalanced subpopulations.
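The abstract describes two stages: estimating clean probabilities from sample correlation in feature space, then training with refurbished labels under DRO. Below is a minimal PyTorch sketch of that pipeline. It assumes a k-nearest-neighbor label-agreement score as the feature-based metric and a KL-regularized (exponential-tilting) form of DRO; the function names, the model returning features alongside logits, and these specific choices are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the two stages outlined in the abstract.
# knn_clean_probability, refurbish_labels, and kl_dro_loss are hypothetical
# names; the k-NN agreement metric and KL-tilted DRO are stand-in choices.
import torch
import torch.nn.functional as F


def knn_clean_probability(features, noisy_labels, k=10):
    """Estimate each sample's clean probability from sample correlation:
    the fraction of its k nearest neighbors (in cosine feature space)
    that share its noisy label."""
    feats = F.normalize(features, dim=1)
    sims = feats @ feats.t()
    sims.fill_diagonal_(-1.0)                      # exclude self-matches
    nn_idx = sims.topk(k, dim=1).indices           # k nearest neighbors
    agree = (noisy_labels[nn_idx] == noisy_labels.unsqueeze(1)).float()
    return agree.mean(dim=1)                       # clean probability in [0, 1]


def refurbish_labels(noisy_onehot, model_probs, clean_prob):
    """Refurbish labels: trust the given label in proportion to its
    estimated clean probability, and the model's pseudo-label otherwise."""
    w = clean_prob.unsqueeze(1)
    return w * noisy_onehot + (1.0 - w) * model_probs


def kl_dro_loss(per_sample_loss, temperature=1.0):
    """Approximate KL-constrained DRO via exponential tilting: up-weight
    high-loss samples (e.g., tail subpopulations) within the batch."""
    weights = torch.softmax(per_sample_loss.detach() / temperature, dim=0)
    return (weights * per_sample_loss).sum()


def train_step(model, x, noisy_labels, num_classes, optimizer):
    logits, features = model(x)                    # assumes model exposes features
    probs = logits.softmax(dim=1)
    clean_prob = knn_clean_probability(features.detach(), noisy_labels)
    targets = refurbish_labels(
        F.one_hot(noisy_labels, num_classes).float(),
        probs.detach(), clean_prob)
    loss = kl_dro_loss(F.cross_entropy(logits, targets, reduction="none"))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The exponential tilting in kl_dro_loss concentrates weight on the hardest examples in each batch, which is one way a DRO objective can protect tail or difficult subpopulations that a plain average loss would underfit; the detached softmax keeps the reweighting from distorting the gradient of the loss itself.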

