Curriculum Loss: Robust Learning and Generalization against Label Corruption

05/24/2019
by Yueming Lyu, et al.

Generalization is vitally important for many deep network models, and it becomes more challenging when high robustness is required for learning with noisy labels. The 0-1 loss has a monotonic relationship with the empirical adversarial (reweighted) risk and is robust to outliers; however, it is difficult to optimize. To optimize the 0-1 loss efficiently while preserving its robustness properties, we propose a simple and efficient loss, the curriculum loss (CL). CL is a tighter upper bound on the 0-1 loss than conventional summation-based surrogate losses. Moreover, CL can adaptively select samples for training, as in curriculum learning. To handle a large rate of label corruption, we extend the curriculum loss to a more general form that can automatically prune the estimated noisy samples during training. Experimental results on noisy MNIST, CIFAR-10, and CIFAR-100 datasets validate the robustness of the proposed loss.
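The abstract does not spell out the formula, but the adaptive sample-selection behaviour can be illustrated with a small sketch. The NumPy code below is a minimal, hypothetical sketch assuming a min-max selection objective of the form Q(l) = min over v in {0,1}^n of max(sum_i v_i * l_i, n - sum_i v_i), where l_i is a per-sample base loss upper-bounding the 0-1 loss; this objective, the greedy solution by sorting, and the prune_rate pruning step are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def curriculum_select(losses, prune_rate=0.0):
    """Illustrative sketch (not the paper's exact formulation).

    Approximately solves min over v in {0,1}^n of
        max(sum_i v_i * l_i, n - sum_i v_i)
    for per-sample base losses l_i. Because selecting the k smallest
    losses minimizes the first term for any fixed selection size k,
    the min-max reduces to a 1-D search over k on sorted losses.

    prune_rate mimics the noise-pruned extension described in the
    abstract: an assumed fraction of samples (those with the largest
    losses) is discarded before selection.
    """
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    order = np.argsort(losses)  # indices sorted by ascending base loss

    # Optional pruning of the estimated noisy samples (largest losses).
    keep = n - int(np.ceil(prune_rate * n))
    order = order[:keep]

    # S_k = sum of the k smallest kept losses, for k = 0..keep.
    prefix = np.concatenate(([0.0], np.cumsum(losses[order])))
    k_vals = np.arange(keep + 1)
    objective = np.maximum(prefix, n - k_vals)  # max(S_k, n - k)
    k_star = int(np.argmin(objective))

    selected = order[:k_star]  # indices of the chosen training samples
    return selected, objective[k_star]

# Toy usage: clean samples have small losses, corrupted ones large.
losses = np.array([0.1, 0.2, 3.5, 0.05, 4.0, 0.3])
selected, q = curriculum_select(losses, prune_rate=0.3)
print(selected, q)  # trains only on the low-loss (likely clean) samples
```

In training, the selected subset would define the loss for the current update; as the model improves and per-sample losses shrink, more samples pass the selection, which matches the curriculum behaviour the abstract describes.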


