Multi-Objective Interpolation Training for Robustness to Label Noise

12/08/2020
by Diego Ortego, et al.

Deep neural networks trained with standard cross-entropy loss memorize noisy labels, which degrades their performance. Most research on mitigating this memorization proposes new robust classification loss functions. In contrast, we explore the behavior of supervised contrastive learning under label noise to understand how it can improve image classification in these scenarios. In particular, we propose a Multi-Objective Interpolation Training (MOIT) approach that jointly exploits contrastive learning and classification. We show that standard contrastive learning degrades in the presence of label noise and propose an interpolation training strategy to mitigate this behavior. We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft labels, whose disagreement with the original labels accurately identifies noisy samples. This detection allows us to treat noisy samples as unlabeled and train a classifier in a semi-supervised manner. Finally, we propose MOIT+, a refinement of MOIT obtained by fine-tuning on the detected clean samples. Hyperparameter and ablation studies validate the key components of our method, and experiments on synthetic and real-world noise benchmarks demonstrate that MOIT and MOIT+ achieve state-of-the-art results. Code is available at https://git.io/JI40X.
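
To make the detection step concrete, below is a minimal sketch (PyTorch) of the soft-label idea: each training sample receives a soft label aggregated from the labels of its nearest neighbors in the contrastive feature space, and disagreement with the given label flags the sample as noisy. The names knn_soft_labels and detect_noisy are illustrative, not from the paper, and the paper's actual detection procedure is more elaborate; see the linked repository for the authors' implementation.

    import torch
    import torch.nn.functional as F

    def knn_soft_labels(features, labels, num_classes, k=250):
        # features: (N, D) embeddings from the contrastive encoder
        # labels:   (N,) given (possibly noisy) integer class labels
        features = F.normalize(features, dim=1)   # compare in cosine-similarity space
        sims = features @ features.t()            # (N, N) pairwise similarities
        sims.fill_diagonal_(-float("inf"))        # a sample never votes for itself
        _, nn_idx = sims.topk(k, dim=1)           # k nearest neighbors per sample
        nn_labels = labels[nn_idx]                # (N, k) neighbor labels
        votes = F.one_hot(nn_labels, num_classes).float()
        return votes.mean(dim=1)                  # (N, C) per-sample soft labels

    def detect_noisy(soft_labels, labels):
        # Flag a sample as noisy when its soft label disagrees with its given label.
        return soft_labels.argmax(dim=1) != labels

    # Toy usage: flagged samples would be treated as unlabeled for the
    # semi-supervised classification branch.
    feats = torch.randn(1000, 128)
    given = torch.randint(0, 10, (1000,))
    soft = knn_soft_labels(feats, given, num_classes=10, k=50)
    noisy_mask = detect_noisy(soft, given)

Because the neighbor vote runs in the feature space shaped by contrastive learning rather than on classifier logits, it stays comparatively robust to the label noise it is trying to detect.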

Related research

04/19/2021 · Contrastive Learning Improves Model Robustness Under Label Noise
Deep neural network-based classifiers trained with the categorical cross...

03/03/2022 · On Learning Contrastive Representations for Learning with Noisy Labels
Deep neural networks are able to memorize noisy labels easily with a sof...

09/03/2022 · Noise-Robust Bidirectional Learning with Dynamic Sample Reweighting
Deep neural networks trained with standard cross-entropy loss are more p...

06/16/2023 · Label-noise-tolerant medical image classification via self-attention and self-supervised learning
Deep neural networks (DNNs) have been widely applied in medical image cl...

07/11/2022 · Brain-Aware Replacements for Supervised Contrastive Learning
We propose a novel framework for Alzheimer's disease (AD) detection usin...

04/19/2021 · A Framework using Contrastive Learning for Classification with Noisy Labels
We propose a framework using contrastive learning as a pre-training task...

10/04/2021 · Consistency Regularization Can Improve Robustness to Label Noise
Consistency regularization is a commonly-used technique for semi-supervi...
