SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels

05/02/2022
by   Yangdi Lu, et al.

Deep neural networks are prone to overfitting noisy labels, resulting in poor generalization performance. To overcome this problem, we present a simple and effective method, self-ensemble label correction (SELC), that progressively corrects noisy labels and refines the model. We look deeper into the memorization behavior during training with noisy labels and observe that the network outputs are reliable in the early stage. To retain this reliable knowledge, SELC forms ensemble predictions as an exponential moving average of network outputs and uses them to update the original noisy labels. We show that training with SELC refines the model by gradually reducing supervision from the noisy labels and increasing supervision from the ensemble predictions. Despite its simplicity, SELC obtains more promising and stable results than many state-of-the-art methods in the presence of class-conditional, instance-dependent, and real-world label noise. The code is available at https://github.com/MacLLL/SELC.
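The label-correction step described in the abstract lends itself to a short sketch. Below is a minimal PyTorch-style illustration of the ensemble update: soft targets start as the one-hot noisy labels and, after a warm-up phase, are blended with an exponential moving average of the network's softmax outputs. The class name SELCLoss, the momentum alpha=0.9, and the warmup_epochs schedule are illustrative assumptions, not the authors' exact implementation; see the linked repository for that.

```python
import torch
import torch.nn.functional as F

class SELCLoss(torch.nn.Module):
    """Sketch of self-ensemble label correction (assumed details): soft
    targets are initialized to the one-hot noisy labels and progressively
    blended with an exponential moving average (EMA) of the model's
    softmax outputs."""

    def __init__(self, noisy_labels, num_classes, alpha=0.9, warmup_epochs=30):
        super().__init__()
        # Per-sample soft targets, initialized to the one-hot noisy labels.
        self.targets = F.one_hot(noisy_labels, num_classes).float()
        self.alpha = alpha                  # EMA momentum (assumed value)
        self.warmup_epochs = warmup_epochs  # train on noisy labels first

    def forward(self, logits, indices, epoch):
        probs = F.softmax(logits, dim=1)
        if epoch >= self.warmup_epochs:
            # EMA update: reduce supervision from the original noisy
            # labels, increase supervision from the ensemble predictions.
            with torch.no_grad():
                self.targets[indices] = (
                    self.alpha * self.targets[indices]
                    + (1 - self.alpha) * probs.detach()
                )
        # Cross-entropy against the (soft) corrected targets.
        return -(self.targets[indices] * torch.log(probs + 1e-8)).sum(dim=1).mean()
```

In use, each mini-batch must carry the dataset indices of its samples so that the per-sample targets can be read and updated across epochs.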


Related research

10/04/2019 · SELF: Learning to Filter Noisy Labels with Self-Ensembling
Deep neural networks (DNNs) have been shown to over-fit a dataset when b...

06/29/2022 · Adversarial Ensemble Training by Jointly Learning Label Dependencies and Member Models
Training an ensemble of different sub-models has empirically proven to b...

02/18/2021 · Deep Learning for Suicide and Depression Identification with Unsupervised Label Correction
Early detection of suicidal ideation in depressed individuals can allow ...

03/13/2021 · Ensemble Learning with Manifold-Based Data Splitting for Noisy Label Correction
Label noise in training data can significantly degrade a model's general...

06/13/2020 · Generalization by Recognizing Confusion
A recently-proposed technique called self-adaptive training augments mod...

09/03/2022 · Noise-Robust Bidirectional Learning with Dynamic Sample Reweighting
Deep neural networks trained with standard cross-entropy loss are more p...

06/30/2022 · ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State
To train robust deep neural networks (DNNs), we systematically study sev...
