
Multi-Objective Interpolation Training for Robustness to Label Noise

by Diego Ortego et al.
Insight Centre for Data Analytics

Deep neural networks trained with standard cross-entropy loss memorize noisy labels, which degrades their performance. Most research on mitigating this memorization proposes new robust classification loss functions. In contrast, we study the behavior of supervised contrastive learning under label noise to understand how it can improve image classification in these scenarios. In particular, we propose a Multi-Objective Interpolation Training (MOIT) approach that jointly exploits contrastive learning and classification. We show that standard contrastive learning degrades in the presence of label noise and propose an interpolation training strategy to mitigate this behavior. We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft labels, whose disagreements with the original labels accurately identify noisy samples. This detection allows us to treat noisy samples as unlabeled and train a classifier in a semi-supervised manner. Finally, we propose MOIT+, a refinement of MOIT obtained by fine-tuning on the detected clean samples. Hyperparameter and ablation studies verify the key components of our method. Experiments on synthetic and real-world noise benchmarks demonstrate that MOIT/MOIT+ achieves state-of-the-art results. Code is available at
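The interpolation training strategy rests on mixup-style convex combinations of sample pairs and their labels. A minimal sketch of that interpolation step (the function name, the Beta-distribution parameter `alpha`, and the NumPy setup are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Convex interpolation of two inputs and their one-hot labels.

    lam ~ Beta(alpha, alpha) controls the mixing strength; the same
    coefficient is applied to inputs and labels so the interpolated
    target stays a valid probability vector.
    """
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

In MOIT this kind of interpolation is applied during contrastive training so that no single (possibly mislabeled) sample dominates the learned representation.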
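The detection step estimates a per-sample soft label from the learned feature space and flags samples whose given label disagrees with it. A hypothetical sketch using k-nearest-neighbor voting over L2-normalized features (the neighbor count `k`, the cosine-similarity choice, and the function name are assumptions for illustration; the paper's exact procedure may differ):

```python
import numpy as np

def detect_noisy(features, labels, num_classes, k=5):
    """Flag likely-noisy samples via kNN soft-label voting.

    features : (n, d) array of learned representations
    labels   : (n,) array of given (possibly noisy) integer labels
    Returns (soft_labels, noisy_mask), where a sample is flagged when
    the argmax of its neighbor-voted soft label disagrees with labels.
    """
    # Cosine similarity on L2-normalized features.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # exclude each sample from its own vote

    n = len(labels)
    soft = np.zeros((n, num_classes))
    for i in range(n):
        neighbors = np.argsort(sim[i])[-k:]  # indices of the k most similar samples
        for j in neighbors:
            soft[i, labels[j]] += 1.0
        soft[i] /= k
    noisy_mask = soft.argmax(axis=1) != labels
    return soft, noisy_mask
```

Samples flagged this way can then be treated as unlabeled for the semi-supervised classification branch, which is the role the detection plays in MOIT.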



Code Repositories


Official implementation for: "Multi-Objective Interpolation Training for Robustness to Label Noise"
