
Do We Really Need Gold Samples for Sample Weighting Under Label Noise?

by Aritra Ghosh et al.

Learning with label noise has gained significant traction recently due to the sensitivity of deep neural networks to label noise under common loss functions. Losses that are theoretically robust to label noise, however, often make training difficult. Consequently, several recently proposed methods, such as Meta-Weight-Net (MW-Net), use a small number of unbiased, clean samples to learn, under the meta-learning framework, a weighting function that downweights samples likely to have corrupted labels. However, obtaining such a set of clean samples is not always feasible in practice. In this paper, we analytically show that one can train MW-Net without access to clean samples simply by using a loss function that is robust to label noise, such as mean absolute error, as the meta objective for training the weighting network. We experimentally show that our method beats all existing methods that do not use clean samples and performs on par with methods that use gold samples on benchmark datasets across various noise types and noise rates.
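The key property the abstract relies on is that mean absolute error (MAE) between the one-hot label and the predicted distribution is bounded per sample, so mislabeled examples cannot dominate the meta objective the way an unbounded cross-entropy loss can. A minimal sketch of that contrast, using NumPy (the function names and the two-class example are illustrative, not the paper's code):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mae_loss(logits, labels, num_classes):
    """Mean absolute error between one-hot labels and predicted probabilities.
    Each sample's loss is bounded in [0, 2], which limits the influence of
    samples whose labels are corrupted."""
    probs = softmax(logits)
    onehot = np.eye(num_classes)[labels]
    return np.abs(onehot - probs).sum(axis=1).mean()

def cce_loss(logits, labels):
    """Categorical cross-entropy; unbounded, so a single confidently
    mispredicted (e.g. mislabeled) sample can dominate the average."""
    probs = softmax(logits)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

# A confidently correct model scored against clean vs. flipped labels:
logits = np.array([[10.0, -10.0],
                   [10.0, -10.0]])
clean = np.array([0, 0])
noisy = np.array([1, 1])  # labels flipped, simulating label noise

print(mae_loss(logits, noisy, 2))  # capped near 2 per sample
print(cce_loss(logits, noisy))     # blows up (~20 here)
```

In MW-Net-style training, the inner loop still optimizes a weighted classification loss; the paper's observation is that plugging a bounded loss like the MAE above into the outer (meta) objective removes the need for a clean validation set.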


Related research:

- Label Noise Types and Their Effects on Deep Learning
- MetaInfoNet: Learning Task-Guided Information for Sample Reweighting
- Label Noise-Robust Learning using a Confidence-Based Sieving Strategy
- MetaASSIST: Robust Dialogue State Tracking with Meta Learning
- Learning to Select Pivotal Samples for Meta Re-weighting
- Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking
- Improving MAE against CCE under Label Noise