Self-Supervised Learning from Unlabeled Fundus Photographs Improves Segmentation of the Retina

08/05/2021
by   Jan Kukačka, et al.

Fundus photography is the primary method for retinal imaging and essential for diabetic retinopathy prevention. Automated segmentation of fundus photographs would improve the quality, capacity, and cost-effectiveness of eye care screening programs. However, current segmentation methods are not robust to the diversity of imaging conditions and pathologies typical of real-world clinical applications. To overcome these limitations, we utilized contrastive self-supervised learning to exploit the large variety of unlabeled fundus images in the publicly available EyePACS dataset. We pre-trained the encoder of a U-Net, which we later fine-tuned on several retinal vessel and lesion segmentation datasets. We demonstrate for the first time that with contrastive self-supervised learning, the pre-trained network can recognize blood vessels, the optic disc, the fovea, and various lesions without being provided any labels. Furthermore, when fine-tuned on a downstream blood vessel segmentation task, such pre-trained networks achieve state-of-the-art performance on images from different datasets. The pre-training also leads to shorter training times and improved few-shot performance on both blood vessel and lesion segmentation tasks. Altogether, our results showcase the benefits of contrastive self-supervised pre-training, which can play a crucial role in real-world clinical applications requiring robust models able to adapt to new devices with only a few annotated samples.
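The abstract does not spell out the contrastive objective. As a rough illustration of the kind of loss such pre-training typically optimizes, here is a minimal NumPy sketch of a SimCLR-style NT-Xent loss, where two augmented views of each unlabeled fundus image are pulled together in embedding space and all other images in the batch act as negatives. The function name, batch layout, and temperature value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (N, D) encoder embeddings of two augmentations of the
    same N images. Positive pairs are (z1[i], z2[i]); every other
    sample in the concatenated batch serves as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # index of each sample's positive partner in the concatenated batch
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - log_denom)
    return loss.mean()

# Toy usage: aligned views should score a lower loss than unrelated ones.
rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))
print(nt_xent_loss(views, views))                     # low loss: perfect positives
print(nt_xent_loss(views, rng.normal(size=(8, 16))))  # higher loss: random pairs
```

In the pre-training described above, `z1` and `z2` would come from a projection head on top of the U-Net encoder; after pre-training, the projection head is discarded and the encoder is fine-tuned on the labeled segmentation tasks.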
