Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models

08/24/2022
by   Minhaj Nur Alam, et al.

Self-supervised contrastive learning (CL) based pretraining allows the development of robust and generalized deep learning (DL) models with small, labeled datasets, reducing the burden of label generation. This paper evaluates the effect of CL-based pretraining on the performance of referable vs. non-referable diabetic retinopathy (DR) classification. We developed a CL-based framework with neural style transfer (NST) augmentation to produce models with better representations and initializations for detecting DR in color fundus images. We compare the performance of our CL-pretrained model with two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10 percent) to test the robustness of the model when trained with small, labeled datasets. The model is trained and validated on the EyePACS dataset and tested independently on clinical data from the University of Illinois, Chicago (UIC). Compared to the baseline models, our CL-pretrained FundusNet model achieved higher AUC (CI) values (0.91 (0.898 to 0.930) vs. 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853) on UIC data). At 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs. 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset. CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small, annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.
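To illustrate the kind of contrastive pretraining objective described above, the sketch below shows a SimCLR-style NT-Xent loss over two augmented views of a fundus image batch. This is a minimal, hypothetical example, not the paper's implementation: the ResNet-50 backbone, projection head sizes, temperature, and the `style_transfer_augment` placeholder (standing in for the NST augmentation) are all assumptions.

```python
# Minimal sketch of SimCLR-style contrastive pretraining (NT-Xent loss).
# Assumptions (not from the paper): ResNet-50 encoder, 2-layer projection head,
# and a generic `style_transfer_augment` placeholder for the NST augmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class FundusEncoder(nn.Module):
    """ResNet-50 backbone with a projection head for contrastive pretraining."""

    def __init__(self, proj_dim: int = 128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # drop the supervised classification head
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, proj_dim)
        )

    def forward(self, x):
        return self.projector(self.backbone(x))


def nt_xent_loss(z1, z2, temperature: float = 0.5):
    """NT-Xent loss over two augmented views (z1, z2) of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d), unit norm
    sim = z @ z.t() / temperature                             # cosine similarity logits
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                     # exclude self-similarity
    # The positive for sample i is its other view at index i + n (or i - n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# One hypothetical pretraining step on a batch of color fundus images:
# encoder = FundusEncoder()
# optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
# view1, view2 = style_transfer_augment(images), style_transfer_augment(images)
# loss = nt_xent_loss(encoder(view1), encoder(view2))
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```

After pretraining, the projection head would typically be discarded and the backbone fine-tuned on the labeled referable/non-referable DR task, which is what allows the reduced-label experiments described in the abstract.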

