Learning to Rectify for Robust Learning with Noisy Labels

11/08/2021
by Haoliang Sun, et al.

Label noise significantly degrades the generalization ability of deep models in real-world applications. Effective strategies, e.g., re-weighting or loss correction, have been designed to alleviate the negative impact of label noise when training a neural network. However, existing works usually rely on pre-specified architectures and manual tuning of additional hyper-parameters. In this paper, we propose warped probabilistic inference (WarPI), which adaptively rectifies the training procedure of the classification network within a meta-learning scenario. In contrast to deterministic models, WarPI is formulated as a hierarchical probabilistic model with an amortized meta-network, which can resolve sample ambiguity and is therefore more robust to severe label noise. Unlike existing approximated weighting functions that directly map losses to weight values, our meta-network is learned to estimate a rectifying vector from the logits and labels, allowing it to leverage the richer information they carry. This provides an effective way to rectify the learning procedure of the classification network and significantly improves its generalization ability. Moreover, by modeling the rectifying vector as a latent variable, learning the meta-network can be seamlessly integrated into the SGD optimization of the classification network. We evaluate WarPI on four benchmarks for robust learning with noisy labels and achieve new state-of-the-art results under various noise types. Extensive studies and analyses further demonstrate the effectiveness of our model.
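To make the training scheme concrete, below is a minimal PyTorch-style sketch of the bilevel update the abstract describes: a meta-network maps (logits, label) pairs to a positive rectifying vector, the classifier takes a virtual one-step SGD lookahead under the rectified loss, and the meta-network is then updated by the lookahead classifier's loss on a small trusted meta set. All names here (RectifyNet, meta_step) and the element-wise logit rectification are illustrative assumptions, not the authors' released code; in particular, the paper treats the rectifying vector as a latent variable learned by amortized probabilistic inference, which this deterministic sketch omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0


class RectifyNet(nn.Module):
    """Amortized meta-network: (logits, one-hot label) -> positive rectifying vector."""

    def __init__(self, num_classes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
            nn.Softplus(),  # keep every entry of the rectifying vector positive
        )

    def forward(self, logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        onehot = F.one_hot(labels, logits.size(1)).float()
        # Detach the logits: the rectifier reads them as input features only.
        return self.net(torch.cat([logits.detach(), onehot], dim=1))


def meta_step(classifier, rectifier, noisy_batch, clean_batch, lr, cls_opt, meta_opt):
    x, y = noisy_batch    # possibly mislabeled training batch
    xm, ym = clean_batch  # small trusted meta batch

    # 1) Virtual inner step: rectified loss on noisy data, keeping the graph
    #    so gradients can later flow back into the meta-network.
    logits = classifier(x)
    r = rectifier(logits, y)
    inner_loss = F.cross_entropy(logits * r, y)  # assumption: vector rectifies logits
    grads = torch.autograd.grad(inner_loss, classifier.parameters(), create_graph=True)
    fast_params = {
        name: p - lr * g
        for (name, p), g in zip(classifier.named_parameters(), grads)
    }

    # 2) Outer step: evaluate the one-step-ahead classifier on clean meta data
    #    and update only the meta-network.
    meta_loss = F.cross_entropy(functional_call(classifier, fast_params, (xm,)), ym)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

    # 3) Actual classifier update under the refreshed rectifier.
    logits = classifier(x)
    with torch.no_grad():
        r = rectifier(logits, y)
    loss = F.cross_entropy(logits * r, y)
    cls_opt.zero_grad()
    loss.backward()
    cls_opt.step()
    return loss.item(), meta_loss.item()
```

The one-step lookahead in steps 1 and 2 is the standard approximation to the bilevel meta-objective used by sample re-weighting methods; the key difference highlighted in the abstract is that the meta-network consumes logits and labels rather than scalar losses, so the sketch feeds it the full logit vector.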

Related research

08/03/2020
Learning to Purify Noisy Labels via Meta Soft Label Corrector
Recent deep neural networks (DNNs) can easily overfit to biased training...

12/09/2020
MetaInfoNet: Learning Task-Guided Information for Sample Reweighting
Deep neural networks have been shown to easily overfit to biased training...

10/22/2022
MetaASSIST: Robust Dialogue State Tracking with Meta Learning
Existing dialogue datasets contain lots of noise in their state annotations...

01/18/2023
Improve Noise Tolerance of Robust Loss via Noise-Awareness
Robust loss minimization is an important strategy for handling robust learning...

05/28/2022
Deep Learning with Label Noise: A Hierarchical Approach
Deep neural networks are susceptible to label noise. Existing methods to...

01/30/2022
Do We Need to Penalize Variance of Losses for Learning with Label Noise?
Algorithms which minimize the averaged loss have been widely designed for...

12/05/2020
A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations?
Noisy labels are commonly present in data sets automatically collected from...
