Self-Supervised Learning from Unlabeled Fundus Photographs Improves Segmentation of the Retina

by Jan Kukačka, et al.

Fundus photography is the primary method for retinal imaging and essential for diabetic retinopathy prevention. Automated segmentation of fundus photographs would improve the quality, capacity, and cost-effectiveness of eye-care screening programs. However, current segmentation methods are not robust to the diversity of imaging conditions and pathologies typical of real-world clinical applications. To overcome these limitations, we utilized contrastive self-supervised learning to exploit the large variety of unlabeled fundus images in the publicly available EyePACS dataset. We pre-trained the encoder of a U-Net, which we later fine-tuned on several retinal vessel and lesion segmentation datasets. We demonstrate for the first time that, by using contrastive self-supervised learning, the pre-trained network can recognize blood vessels, the optic disc, the fovea, and various lesions without being provided any labels. Furthermore, when fine-tuned on a downstream blood vessel segmentation task, such pre-trained networks achieve state-of-the-art performance on images from different datasets. The pre-training also leads to shorter training times and improved few-shot performance on both blood vessel and lesion segmentation tasks. Altogether, our results showcase the benefits of contrastive self-supervised pre-training, which can play a crucial role in real-world clinical applications that require robust models able to adapt to new devices with only a few annotated samples.
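The abstract does not specify the exact contrastive objective used for pre-training, but contrastive self-supervised methods of this kind typically minimize an NT-Xent (normalized temperature-scaled cross-entropy) loss over pairs of augmented views of the same image, as in SimCLR. The following is a minimal NumPy sketch of that loss, not the authors' implementation; the function name, batch layout (two views stacked along the batch axis), and temperature value are illustrative assumptions:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (N, D) encoder embeddings of two augmented views of the
    same N images. Each view's positive is its counterpart in the
    other array; all remaining 2N - 2 embeddings act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity

    n = z1.shape[0]
    # Positive for row i is row i+n (and vice versa).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])

    # Row-wise cross-entropy: -log softmax at the positive index.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    pos = sim[np.arange(2 * n), targets]
    return float(np.mean(logsumexp - pos))
```

In pre-training, `z1` and `z2` would come from passing two random augmentations of each unlabeled EyePACS image through the U-Net encoder plus a projection head; the loss pulls the two views of an image together and pushes apart views of different images.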


