Self-supervised Feature Learning via Exploiting Multi-modal Data for Retinal Disease Diagnosis

07/21/2020
by   Xiaomeng Li, et al.

The automatic diagnosis of various retinal diseases from fundus images is important to support clinical decision-making. However, developing such automatic solutions is challenging because a large amount of human-annotated data is required. Recently, unsupervised/self-supervised feature learning techniques have received a lot of attention, as they do not need massive annotations. Most current self-supervised methods are analyzed with a single imaging modality, and no existing method exploits multi-modal images for better results. Considering that the diagnosis of various vitreoretinal diseases can greatly benefit from another imaging modality, e.g., FFA, this paper presents a novel self-supervised feature learning method that effectively exploits multi-modal data for retinal disease diagnosis. To achieve this, we first synthesize the corresponding FFA modality and then formulate a patient feature-based softmax embedding objective. Our objective learns both modality-invariant features and patient-similarity features. Through this mechanism, the neural network captures the semantically shared information across different modalities and the apparent visual similarity between patients. We evaluate our method on two public benchmark datasets for retinal disease diagnosis. The experimental results demonstrate that our method clearly outperforms other self-supervised feature learning methods and is comparable to the supervised baseline.
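To make the cross-modal objective concrete, below is a minimal sketch (not the authors' released code) of a patient-level softmax embedding loss of the kind the abstract describes: features from a patient's fundus image and its synthesized FFA counterpart are pulled together, while features from other patients in the batch act as negatives. The function name, `temperature`, and the exact formulation are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def cross_modal_softmax_embedding_loss(fundus_feats, ffa_feats, temperature=0.07):
    """Sketch of a cross-modal softmax embedding objective.

    fundus_feats, ffa_feats: (batch, embed_dim) embeddings where row i of both
    tensors comes from the same patient (fundus image and synthesized FFA).
    """
    fundus_feats = F.normalize(fundus_feats, dim=1)
    ffa_feats = F.normalize(ffa_feats, dim=1)
    # Similarity of every fundus embedding to every FFA embedding in the batch.
    logits = fundus_feats @ ffa_feats.t() / temperature            # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)   # positives on the diagonal
    # Softmax over patients: the matching FFA embedding should score highest,
    # encouraging modality-invariant yet patient-discriminative features.
    return F.cross_entropy(logits, targets)
```

In such a setup, the loss is typically symmetrized (adding the FFA-to-fundus direction) and combined with an image-level or patient-similarity term; the sketch shows only the core softmax embedding step.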


