Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes

09/07/2022
by Alex Fedorov, et al.

Recent neuroimaging studies that focus on predicting brain disorders via modern machine learning approaches commonly include a single modality and rely on supervised over-parameterized models. However, a single modality provides only a limited view of the highly complex brain. Critically, in clinical settings, supervised models often lack the accurate diagnostic labels needed for training. Coarse labels do not capture the long-tailed spectrum of brain disorder phenotypes, which reduces model generalizability and makes such models less useful in diagnostic settings. This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data. We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion. The taxonomy forms a family of decoder-free models with reduced computational complexity and a propensity to capture multi-scale relationships between local and global representations of the multimodal inputs. We conduct a comprehensive evaluation of the taxonomy using functional and structural magnetic resonance imaging (MRI) data across a spectrum of Alzheimer's disease phenotypes and show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to labels during pre-training. The proposed multimodal self-supervised learning yields representations with improved classification performance for both modalities. The concomitant rich and flexible unsupervised deep learning framework captures complex multimodal relationships and provides predictive performance that meets or exceeds that of a narrower supervised classification analysis. We present detailed quantitative evidence of how this framework can significantly advance our search for missing links in complex brain disorders.
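As a rough illustration of what a decoder-free, multi-scale coordinated objective can look like, the sketch below pairs two small 3D CNN encoders (one per MRI modality) and coordinates their local feature maps and global embeddings with InfoNCE-style contrastive losses. The module names, network sizes, and the specific pairing of losses are assumptions made here for illustration and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Toy 3D CNN returning local feature maps and a pooled global embedding."""
    def __init__(self, in_ch: int = 1, dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        local = self.conv(x)                         # (B, dim, D, H, W) local features
        glob = self.proj(local.mean(dim=(2, 3, 4)))  # (B, dim) global embedding
        return local, glob

def global_nce(za, zb, temperature=0.1):
    """Symmetric InfoNCE between two batches of global embeddings (diagonal = positives)."""
    za, zb = F.normalize(za, dim=-1), F.normalize(zb, dim=-1)
    logits = za @ zb.t() / temperature
    targets = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def local_global_nce(local, glob, temperature=0.1):
    """Score each local location against every sample's global embedding (DIM-style)."""
    b, c = glob.shape
    loc = F.normalize(local.flatten(2), dim=1)                   # (B, C, L)
    g = F.normalize(glob, dim=-1)                                # (B, C)
    logits = torch.einsum('icl,jc->ijl', loc, g) / temperature   # (B, B, L)
    targets = torch.arange(b, device=glob.device).unsqueeze(1).expand(b, logits.size(-1))
    return F.cross_entropy(logits, targets)

# Usage sketch: coordinate sMRI and fMRI views without any decoder or labels.
enc_smri, enc_fmri = Encoder3D(), Encoder3D()
x_smri = torch.randn(4, 1, 32, 32, 32)   # placeholder structural volumes
x_fmri = torch.randn(4, 1, 32, 32, 32)   # placeholder functional-derived volumes
loc_s, glob_s = enc_smri(x_smri)
loc_f, glob_f = enc_fmri(x_fmri)
loss = (global_nce(glob_s, glob_f)        # joint (cross-modal) information
        + local_global_nce(loc_s, glob_f) # cross-modal local-to-global coordination
        + local_global_nce(loc_f, glob_s))
loss.backward()
```

In this sketch, the absence of any reconstruction decoder keeps the computational cost low, and the local-to-global terms are one simple way to express the multi-scale relationships the abstract refers to; the actual taxonomy in the paper covers a broader family of such objectives.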


Related research

12/25/2020
On self-supervised multi-modal representation learning: An application to Alzheimer's disease
Introspection of deep supervised predictive models trained on functional...

12/25/2020
Taxonomy of multimodal self-supervised representation learning
Sensory input from multiple sources is crucial for robust and coherent h...

11/15/2022
Cross-Domain Self-Supervised Deep Learning for Robust Alzheimer's Disease Progression Modeling
Developing successful artificial intelligence systems in practice depend...

04/04/2018
Improving Classification Rate of Schizophrenia Using a Multimodal Multi-Layer Perceptron Model with Structural and Functional MR
The wide variety of brain imaging technologies allows us to exploit info...

04/16/2023
Multimodal Representation Learning of Cardiovascular Magnetic Resonance Imaging
Self-supervised learning is crucial for clinical imaging applications, g...

10/16/2021
A Heterogeneous Graph Based Framework for Multimodal Neuroimaging Fusion Learning
Here, we present a Heterogeneous Graph neural network for Multimodal neu...