Uncertainty-Aware Learning Against Label Noise on Imbalanced Datasets

07/12/2022
by Yingsong Huang, et al.

Learning against label noise is a vital topic for guaranteeing reliable performance of deep neural networks. Recent research typically performs dynamic noise modeling using model output probabilities and loss values, and then separates clean from noisy samples. These methods have achieved notable success. However, unlike on cherry-picked data, existing approaches often perform poorly on imbalanced datasets, a common scenario in the real world. We thoroughly investigate this phenomenon and identify two major issues that hinder performance: inter-class loss distribution discrepancy and misleading predictions due to uncertainty. The first issue is that existing methods often perform class-agnostic noise modeling; under class imbalance, however, loss distributions differ significantly across classes, so class-agnostic noise modeling easily confuses noisy samples with samples from minority classes. The second issue is that models may output misleading predictions due to epistemic and aleatoric uncertainty, so existing methods that rely solely on output probabilities may fail to identify confident samples. Motivated by these observations, we propose an Uncertainty-aware Label Correction framework (ULC) to handle label noise on imbalanced datasets. First, we perform epistemic-uncertainty-aware, class-specific noise modeling to identify trustworthy clean samples and to refine or discard highly confident true or corrupted labels. Then, we introduce aleatoric uncertainty into the subsequent learning process to prevent noise accumulation during label noise modeling. We conduct experiments on several synthetic and real-world datasets. The results demonstrate the effectiveness of the proposed method, especially on imbalanced datasets.
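The class-specific noise modeling described above can be sketched in a few lines. A common approach in this line of work fits a two-component mixture to per-sample losses and treats the low-loss component as "clean"; the abstract's point is that, under class imbalance, this mixture should be fit per class rather than globally, since minority classes tend to sit at a different loss scale. The sketch below is a minimal illustration under these assumptions, not the paper's actual implementation; the helper names (`fit_gmm_1d`, `class_specific_clean_prob`) and all hyperparameters are hypothetical.

```python
import numpy as np

def _posteriors(x, mu, var, pi):
    """P(component k | x_i) for a two-component 1-D Gaussian mixture."""
    diff = x[:, None] - mu[None, :]
    log_p = -0.5 * (diff ** 2 / var + np.log(2 * np.pi * var)) + np.log(pi)
    log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
    r = np.exp(log_p)
    return r / r.sum(axis=1, keepdims=True)

def fit_gmm_1d(x, n_iter=100, eps=1e-8):
    """EM for a two-component 1-D GMM over loss values.
    Returns P(low-mean component | x), i.e. how 'clean' each loss looks."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])           # spread the initial means
    var = np.array([x.var() + eps] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        r = _posteriors(x, mu, var, pi)         # E-step
        nk = r.sum(axis=0) + eps                # M-step
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + eps
        pi = nk / nk.sum()
    r = _posteriors(x, mu, var, pi)
    return r[:, np.argmin(mu)]

def class_specific_clean_prob(losses, labels, n_classes):
    """Fit one loss mixture per class, so each class is judged against
    its own loss scale instead of a single global threshold."""
    p_clean = np.zeros_like(losses, dtype=float)
    for c in range(n_classes):
        idx = labels == c
        p_clean[idx] = fit_gmm_1d(losses[idx])
    return p_clean

# Toy illustration: the minority class (1) has a higher loss scale overall,
# so a single global mixture would flag many of its clean samples as noisy.
rng = np.random.default_rng(0)
losses = np.concatenate([
    rng.normal(0.1, 0.05, 900), rng.normal(2.0, 0.2, 100),   # class 0
    rng.normal(0.8, 0.10,  90), rng.normal(3.0, 0.3,  10),   # class 1
])
labels = np.concatenate([np.zeros(1000, int), np.ones(100, int)])
p_clean = class_specific_clean_prob(losses, labels, n_classes=2)
```

Because each class gets its own mixture, the minority class's clean samples (loss around 0.8) are still separated from its corrupted ones (loss around 3.0), even though a global low-loss threshold tuned on the majority class would reject them.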


Related research:

08/14/2020 · Which Strategies Matter for Noisy Label Classification? Insight into Loss and Uncertainty
Label noise is a critical factor that degrades the generalization perfor...

06/23/2022 · Prototype-Anchored Learning for Learning with Imperfect Annotations
The success of deep neural networks greatly relies on the availability o...

11/16/2022 · Learning with Noisy Labels over Imbalanced Subpopulations
Learning with Noisy Labels (LNL) has attracted significant attention fro...

02/07/2019 · Unsupervised Data Uncertainty Learning in Visual Retrieval Systems
We introduce an unsupervised formulation to estimate heteroscedastic unc...

11/02/2021 · Elucidating Noisy Data via Uncertainty-Aware Robust Learning
Robust learning methods aim to learn a clean target distribution from no...

05/07/2023 · Unlocking the Power of Open Set: A New Perspective for Open-set Noisy Label Learning
Learning from noisy data has attracted much attention, where most method...

08/18/2021 · Confidence Adaptive Regularization for Deep Learning with Noisy Labels
Recent studies on the memorization effects of deep neural networks on no...
