Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images

08/04/2021
by Boon Peng Yap, et al.

We explore the value of weak labels in learning transferable representations for medical images. Compared to hand-labeled datasets, weak or inexact labels can be acquired in large quantities at significantly lower cost and can provide useful training signals for data-hungry models such as deep neural networks. We consider weak labels in the form of pseudo-labels and propose a semi-weakly supervised contrastive learning (SWCL) framework for representation learning from semi-weakly annotated images. Specifically, we train a semi-supervised model to propagate labels from a small dataset with diverse image-level annotations to a large unlabeled dataset. Using the propagated labels, we generate a patch-level dataset for pretraining and formulate a multi-label contrastive learning objective to capture the position-specific features encoded in each patch. We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets, covering three disease classification tasks and two anatomical structure segmentation tasks. Our experimental results suggest that, under a very low data regime, large-scale ImageNet pretraining on an improved architecture remains a very strong baseline, and that recently proposed self-supervised methods falter on segmentation tasks, possibly due to the strong invariance constraints they impose. Our method surpasses all prior self-supervised methods and standard cross-entropy training, while narrowing the gap with ImageNet pretraining.
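
The abstract does not spell out the exact form of the multi-label contrastive objective, so the PyTorch sketch below is an illustration only: one plausible formulation, following the general structure of supervised contrastive losses, in which two patches are treated as positives whenever their propagated pseudo-label sets overlap. The function name multilabel_contrastive_loss, the tensor shapes, and the overlap-based positive definition are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def multilabel_contrastive_loss(embeddings, labels, temperature=0.1):
    """Minimal sketch of a multi-label supervised contrastive objective.

    embeddings: (N, D) projection-head outputs for N patches
    labels:     (N, C) multi-hot pseudo-label vectors assigned to each patch
    Two patches are treated as positives when their label sets overlap
    (an assumption; the paper's exact positive definition may differ).
    """
    z = F.normalize(embeddings, dim=1)
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    # Pairwise similarities; exclude self-pairs from the softmax denominator.
    sim = (z @ z.t()) / temperature
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Positives: patches sharing at least one pseudo-label (self excluded).
    pos_mask = (labels.float() @ labels.float().t() > 0) & ~self_mask

    # Mean log-probability over positives, skipping anchors with no positives.
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    loss = -pos_log_prob.sum(dim=1)[has_pos] / pos_counts[has_pos]
    return loss.mean()

# Toy usage: 8 patches, 16-dim embeddings, 5 hypothetical pseudo-label classes.
z = torch.randn(8, 16)
y = (torch.rand(8, 5) > 0.6).long()
print(multilabel_contrastive_loss(z, y))
```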

Related research

12/17/2021 - Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation
  Supervised deep learning-based methods yield accurate results for medica...

02/16/2023 - Learning to diagnose cirrhosis from radiological and histological labels with joint self and weakly-supervised pretraining strategies
  Identifying cirrhosis is key to correctly assess the health of the liver...

07/10/2023 - Weakly-supervised positional contrastive learning: application to cirrhosis classification
  Large medical imaging datasets can be cheaply and quickly annotated with...

09/13/2022 - HistoPerm: A Permutation-Based View Generation Approach for Learning Histopathologic Feature Representations
  Recently, deep learning methods have been successfully applied to solve ...

11/02/2022 - On the Informativeness of Supervision Signals
  Learning transferable representations by training a classifier is a well...

08/04/2022 - Metadata-enhanced contrastive learning from retinal optical coherence tomography images
  Supervised deep learning algorithms hold great potential to automate scr...

10/18/2022 - Depth Contrast: Self-Supervised Pretraining on 3DPM Images for Mining Material Classification
  This work presents a novel self-supervised representation learning metho...
