Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes

by Alex Fedorov et al.

Recent neuroimaging studies that predict brain disorders with modern machine learning approaches commonly rely on a single modality and on supervised, over-parameterized models. However, a single modality provides only a limited view of the highly complex brain. Critically, supervised models in clinical settings lack accurate diagnostic labels for training, and coarse labels do not capture the long-tailed spectrum of brain disorder phenotypes, which reduces the generalizability of such models and makes them less useful in diagnostic settings. This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data. We propose a general taxonomy of informative inductive biases that capture unique and joint information in multimodal self-supervised fusion. The taxonomy forms a family of decoder-free models with reduced computational complexity and a propensity to capture multi-scale relationships between local and global representations of the multimodal inputs. We conduct a comprehensive evaluation of the taxonomy on functional and structural magnetic resonance imaging (MRI) data across a spectrum of Alzheimer's disease phenotypes and show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to labels during pre-training. The proposed multimodal self-supervised learning yields representations with improved classification performance for both modalities. This rich and flexible unsupervised deep learning framework captures complex multimodal relationships and provides predictive performance that meets or exceeds that of a narrower supervised classification analysis. We present detailed quantitative evidence of how this framework can significantly advance the search for missing links in complex brain disorders.
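The abstract describes coordinating representations from two modalities without decoders or labels. As a minimal, hypothetical sketch of what such decoder-free multimodal coordination can look like, the snippet below implements a symmetric cross-modal InfoNCE objective in NumPy: embeddings from two modality encoders (e.g. one for fMRI, one for sMRI) are pulled together for matched subjects and pushed apart otherwise. The function name and the choice of InfoNCE here are illustrative assumptions, not the paper's exact formulation, which covers a whole taxonomy of such objectives.

```python
import numpy as np

def cross_modal_infonce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss between embeddings of two modalities.

    z_a, z_b: arrays of shape (batch, dim); row i of each array is
    assumed to come from the same subject (the positive pair).
    """
    # L2-normalize so the dot product is a cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (batch, batch) similarity matrix

    def ce(l):
        # cross-entropy with the matched pair (diagonal) as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.diag(log_probs).mean()

    # average both directions: modality A retrieving B, and B retrieving A
    return 0.5 * (ce(logits) + ce(logits.T))
```

Because the objective only compares encoder outputs, no reconstruction decoder is needed, which is one source of the reduced computational complexity the abstract mentions.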




