Probabilistic End-to-end Noise Correction for Learning with Noisy Labels

03/19/2019
by Kun Yi, et al.

Deep learning has achieved excellent performance in various computer vision tasks, but it requires large amounts of training data with clean labels. Datasets with noisy labels are easy to collect, but such noise causes networks to overfit severely and accuracy to drop dramatically. To address this problem, we propose an end-to-end framework called PENCIL, which updates both the network parameters and the label estimates, maintained as label distributions. PENCIL is independent of the backbone network architecture and needs neither an auxiliary clean dataset nor prior information about the noise, so it is more general and robust than existing methods and easy to apply. PENCIL outperforms previous state-of-the-art methods by large margins on both synthetic and real-world datasets, across different noise types and noise rates. Experiments show that PENCIL remains robust on clean datasets as well.
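To illustrate the core idea of updating label estimates alongside network weights, the sketch below treats each training example's label distribution as a learnable tensor and backpropagates a combined loss through both the network and those per-sample labels. This is a minimal PyTorch sketch of the general approach, not the authors' reference implementation: the names (label_logits, pencil_style_loss), the stand-in backbone, and the loss weights are illustrative assumptions.

```python
# Sketch: jointly learning network weights and per-sample label distributions,
# in the spirit of PENCIL. All names and hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

num_samples, num_classes = 1000, 10
model = torch.nn.Linear(32, num_classes)            # stand-in backbone
noisy_labels = torch.randint(0, num_classes, (num_samples,))

# One learnable "label distribution" (as logits) per training example,
# initialised from the observed, possibly noisy, labels.
label_logits = torch.nn.Parameter(
    10.0 * F.one_hot(noisy_labels, num_classes).float())

# Label logits typically need a much larger learning rate than the weights.
opt = torch.optim.SGD(
    [{"params": model.parameters()},
     {"params": [label_logits], "lr": 100.0}],
    lr=0.01)

def pencil_style_loss(logits, idx, lambda_compat=0.1, lambda_entropy=0.1):
    """Classification + compatibility + entropy terms (illustrative weights)."""
    label_dist = F.softmax(label_logits[idx], dim=1)
    pred_logp = F.log_softmax(logits, dim=1)
    # classification: pull the label distribution and the prediction together
    cls = F.kl_div(pred_logp, label_dist, reduction="batchmean")
    # compatibility: keep the label distribution close to the observed label
    compat = F.cross_entropy(label_logits[idx], noisy_labels[idx])
    # entropy: encourage confident network predictions
    ent = -(pred_logp.exp() * pred_logp).sum(dim=1).mean()
    return cls + lambda_compat * compat + lambda_entropy * ent

# One illustrative update step on a random mini-batch.
idx = torch.randint(0, num_samples, (64,))
x = torch.randn(64, 32)
loss = pencil_style_loss(model(x), idx)
opt.zero_grad()
loss.backward()        # gradients flow to the weights AND to label_logits
opt.step()
```

After enough such steps the learned label distributions can drift away from incorrect observed labels toward the classes the network finds consistent, which is how the framework corrects noise end to end.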


