
Learning to Rectify for Robust Learning with Noisy Labels

11/08/2021
by   Haoliang Sun, et al.

Label noise significantly degrades the generalization ability of deep models in practice. Effective strategies, e.g., sample re-weighting or loss correction, have been designed to alleviate the negative impact of label noise when training a neural network. These existing approaches usually rely on a pre-specified architecture and manual tuning of additional hyper-parameters. In this paper, we propose warped probabilistic inference (WarPI) to adaptively rectify the training procedure of the classification network within a meta-learning framework. In contrast to deterministic models, WarPI is formulated as a hierarchical probabilistic model with an amortized meta-network, which can resolve sample ambiguity and is therefore more robust to severe label noise. Unlike existing approaches that approximate a weighting function by directly mapping losses to weight values, our meta-network is learned to estimate a rectifying vector from the logits and labels, allowing it to leverage the richer information they contain. This provides an effective way to rectify the learning procedure of the classification network, yielding a significant improvement in generalization ability. Moreover, by modeling the rectifying vector as a latent variable, learning the meta-network can be seamlessly integrated into the SGD optimization of the classification network. We evaluate WarPI on four benchmarks of robust learning with noisy labels and achieve a new state of the art under various noise types. Extensive ablation studies and analyses further demonstrate the effectiveness of our model.
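To make the core idea concrete, here is a minimal NumPy sketch of rectifying a training example. This is an illustrative assumption, not the paper's implementation: the "meta-network" is reduced to a single linear map over the concatenated logits and one-hot label, its output is treated as an additive rectifying vector on the logits, and all names (`meta_w`, `rectify`, `rectified_loss`) are hypothetical. In WarPI itself the rectifying vector is a latent variable inferred by an amortized meta-network and trained jointly with the classifier via SGD.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 3

# Toy stand-in for the meta-network: one linear layer over [logits ; one-hot label].
meta_w = rng.normal(scale=0.1, size=(2 * num_classes, num_classes))

def rectify(logits, one_hot):
    """Estimate a rectifying vector from the logits and the (possibly noisy) label."""
    features = np.concatenate([logits, one_hot])
    return features @ meta_w

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def rectified_loss(logits, one_hot):
    """Cross-entropy computed on logits adjusted by the rectifying vector."""
    adjusted = logits + rectify(logits, one_hot)
    probs = softmax(adjusted)
    return -np.log(np.clip(probs[one_hot.argmax()], 1e-12, None))

logits = np.array([2.0, 0.5, -1.0])
label = np.array([1.0, 0.0, 0.0])  # one-hot label, possibly corrupted by noise
loss = rectified_loss(logits, label)
print(loss)
```

The point of the sketch is only the data flow: the rectification is conditioned on both the logits and the label, rather than on a scalar loss value as in loss-based re-weighting schemes.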
