On self-supervised multi-modal representation learning: An application to Alzheimer's disease

by Alex Fedorov, et al.

Introspection of deep supervised predictive models trained on functional and structural brain imaging may uncover novel markers of Alzheimer's disease (AD). However, supervised training is prone to learning from spurious features (shortcut learning), which impairs its value in the discovery process. Deep unsupervised and, more recently, contrastive self-supervised approaches, which are not biased toward classification, are better candidates for the task. Their multimodal variants specifically offer additional regularization via modality interactions. In this paper, we introduce a way to exhaustively consider multimodal architectures for contrastive self-supervised fusion of fMRI and MRI of AD patients and controls. We show that this multimodal fusion yields representations that improve downstream classification performance for both modalities. We investigate the fused self-supervised features projected into the brain space and introduce a numerically stable way to do so.
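The contrastive objective behind this kind of cross-modal fusion can be illustrated with a symmetric InfoNCE loss between paired modality embeddings. The sketch below is a minimal NumPy illustration, not the paper's implementation; the function name, batch shapes, and temperature value are assumptions for the example. Matching rows of the two embedding matrices (e.g. fMRI and sMRI encoder outputs for the same subject) act as positives, and all other pairings in the batch act as negatives.

```python
import numpy as np

def cross_modal_info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss between two modality embeddings.

    z_a, z_b: (batch, dim) arrays of paired embeddings, where row i of
    z_a and row i of z_b come from the same subject (the positive pair).
    """
    # L2-normalize so the dot products below are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature   # (batch, batch) similarity matrix
    idx = np.arange(len(z_a))            # positives sit on the diagonal

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # for numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # average the A->B and B->A directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Aligned pairs drive the loss toward zero, while unrelated embeddings hover near the chance level of log(batch size), which is the intuition behind using such an objective as a fusion regularizer.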


Improving the Modality Representation with Multi-View Contrastive Learning for Multimodal Sentiment Analysis

Modality representation learning is an important problem for multimodal ...

Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes

Recent neuroimaging studies that focus on predicting brain disorders via...

Taxonomy of multimodal self-supervised representation learning

Sensory input from multiple sources is crucial for robust and coherent h...

Multimodal Contrastive Learning and Tabular Attention for Automated Alzheimer's Disease Prediction

Alongside neuroimaging such as MRI scans and PET, Alzheimer's disease (A...

DeepBrainPrint: A Novel Contrastive Framework for Brain MRI Re-Identification

Recent advances in MRI have led to the creation of large datasets. With ...

Self-supervised Feature Learning via Exploiting Multi-modal Data for Retinal Disease Diagnosis

The automatic diagnosis of various retinal diseases from fundus images i...

Multimodal Representations Learning and Adversarial Hypergraph Fusion for Early Alzheimer's Disease Prediction

Multimodal neuroimage can provide complementary information about the de...

Code Repositories


Fusion is a self-supervised framework for data with multiple sources, designed specifically to support neuroimaging applications.