Improving Medical Image Classification in Noisy Labels Using Only Self-supervised Pretraining

08/08/2023
by Bidur Khanal, et al.

Noisy labels hurt deep learning-based supervised image classification performance, as the models may overfit the noise and learn corrupted feature extractors. For natural image classification with noisy labeled data, model initialization with contrastive self-supervised pretrained weights has been shown to reduce feature corruption and improve classification performance. However, no prior work has explored: i) how other self-supervised approaches, such as pretext task-based pretraining, affect learning with noisy labels, and ii) whether any self-supervised pretraining method alone helps medical image classification in noisy-label settings. Medical images often feature smaller datasets and subtle inter-class variations, requiring human expertise to ensure correct classification. It is therefore unclear whether methods that improve learning with noisy labels on natural image datasets such as CIFAR also help with medical images. In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels: NCT-CRC-HE-100K tissue histological images and COVID-QU-Ex chest X-ray images. Our results show that models initialized with pretrained weights obtained from self-supervised learning learn better features and are more robust to noisy labels.
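
The recipe the abstract describes is straightforward to sketch. The minimal PyTorch example below is illustrative only, not the paper's released code: the checkpoint path, noise rate, and training hyperparameters are assumptions. It simulates a noisy-label dataset by injecting symmetric label noise, initializes a ResNet-18 backbone from a self-supervised checkpoint, and then fine-tunes it with ordinary cross-entropy:

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18

# --- 1. Simulate "self-induced" noisy labels with symmetric label noise. ---
def corrupt_labels(labels: np.ndarray, num_classes: int, noise_rate: float,
                   seed: int = 0) -> np.ndarray:
    """Flip a fraction `noise_rate` of labels to a different random class."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    for i in np.flatnonzero(flip):
        choices = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(choices)
    return noisy

# --- 2. Initialize the classifier from self-supervised pretrained weights. ---
def build_model(num_classes: int,
                ssl_ckpt: str = "ssl_pretrained.pth") -> nn.Module:
    model = resnet18(weights=None)  # no supervised ImageNet initialization
    # Hypothetical checkpoint: a plain state_dict saved after SSL pretraining.
    state = torch.load(ssl_ckpt, map_location="cpu")
    # strict=False skips SSL-only keys (e.g., a projection head) gracefully.
    model.load_state_dict(state, strict=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh head
    return model

# --- 3. Standard supervised fine-tuning on the noisy-label dataset. ---
def finetune(model, loader, epochs=10, lr=1e-3, device="cuda"):
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, noisy_targets in loader:
            images = images.to(device)
            noisy_targets = noisy_targets.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), noisy_targets)
            loss.backward()
            opt.step()
    return model
```

The same recipe applies regardless of how the checkpoint was produced: swapping a contrastive checkpoint (e.g., from SimCLR) for a pretext task-based one (e.g., rotation prediction) changes only the file loaded, which makes it easy to compare pretraining families under identical fine-tuning.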

Related research

01/13/2021
Big Self-Supervised Models Advance Medical Image Classification
Self-supervised pretraining followed by supervised fine-tuning has seen ...

06/15/2023
A Comparison of Self-Supervised Pretraining Approaches for Predicting Disease Risk from Chest Radiograph Images
Deep learning is the state-of-the-art for medical imaging tasks, but req...

05/13/2023
How to Train Your CheXDragon: Training Chest X-Ray Models for Transfer to Novel Tasks and Healthcare Systems
Self-supervised learning (SSL) enables label efficient training for mach...

06/25/2021
On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy
There is an increasing number of medical use-cases where classification ...

08/24/2022
Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models
Self supervised contrastive learning based pretraining allows developmen...

02/09/2021
Flow-Mixup: Classifying Multi-labeled Medical Images with Corrupted Labels
In clinical practice, medical image interpretation often involves multi-...

08/11/2022
Differencing based Self-supervised pretraining for Scene Change Detection
Scene change detection (SCD), a crucial perception task, identifies chan...
