
Collaborative Label Correction via Entropy Thresholding

by   Hao Wu, et al.

Deep neural networks (DNNs) have the capacity to fit extremely noisy labels; nonetheless, they tend to learn samples with clean labels first and only later memorize those with noisy labels. We examine this behavior through the Shannon entropy of the predictions and demonstrate that low-entropy predictions selected by a given threshold are far more reliable as supervision than the original noisy labels. This criterion also retains more training samples than previous methods. We then combine the entropy criterion with the Collaborative Label Correction (CLC) framework to further avoid the undesirable local minima of a single network. A range of experiments have been conducted on multiple benchmarks with both synthetic and real-world noise settings. Extensive results indicate that our CLC outperforms several state-of-the-art methods.
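The entropy criterion described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact procedure: it computes the Shannon entropy of each predicted class distribution and replaces the noisy label with the model's prediction only when the entropy falls below a threshold `tau` (the function name and threshold value are hypothetical).

```python
import numpy as np

def entropy_threshold_correction(probs, noisy_labels, tau):
    """Illustrative sketch: keep a sample's given label unless the
    model's prediction is confident (low Shannon entropy), in which
    case use the prediction as the corrected label."""
    eps = 1e-12  # numerical guard for log(0)
    # Shannon entropy (natural log) of each predicted distribution.
    H = -np.sum(probs * np.log(probs + eps), axis=1)
    confident = H < tau  # low entropy -> treat prediction as reliable
    corrected = noisy_labels.copy()
    corrected[confident] = probs[confident].argmax(axis=1)
    return corrected, confident

# Example: the first prediction is near one-hot (low entropy) and
# overrides its noisy label; the second is uncertain and is kept.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.40, 0.30, 0.30]])
noisy_labels = np.array([2, 1])
corrected, mask = entropy_threshold_correction(probs, noisy_labels, tau=0.5)
```

Because low-entropy samples are corrected rather than discarded, this style of criterion keeps more of the training set than hard sample-selection schemes.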



