Multi-modal Multi-kernel Graph Learning for Autism Prediction and Biomarker Discovery

03/03/2023
by Junbin Mao, et al.

Multi-modal integration and classification based on graph learning is among the most challenging problems in disease prediction due to its complexity. Several recent works based on attention mechanisms have been proposed to address the problem of multi-modal integration. However, these techniques have certain limitations. First, they focus on explicit integration at the feature level using weight scores, which cannot effectively address the negative interactions between modalities. Second, most of them use single-sized filters to extract graph features, ignoring the heterogeneous information across graphs. To overcome these drawbacks, we propose MMKGL (Multi-modal Multi-Kernel Graph Learning). To address the negative interactions between modalities, we use a multi-modal graph embedding module to construct a multi-modal graph. Unlike the traditional manual construction of static graphs, a separate graph is generated for each modality through adaptive graph learning, and a function graph and a supervision graph are introduced for optimization during the multi-graph fusion embedding process. We then apply a multi-kernel graph learning module to extract heterogeneous information from the multi-modal graph. Information in the multi-modal graph at different levels is aggregated by convolution kernels with different receptive field sizes, and a cross-kernel discovery tensor is generated for disease prediction. Our method is evaluated on the benchmark Autism Brain Imaging Data Exchange (ABIDE) dataset and outperforms state-of-the-art methods. In addition, discriminative brain regions associated with autism are identified by our model, providing guidance for the study of autism pathology.
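The abstract does not provide implementation details of the multi-kernel graph learning module, so the following is only a minimal sketch, assuming that "different receptive field sizes" correspond to propagating node features over different numbers of hops on a learned population graph and that the per-kernel outputs are stacked into a cross-kernel tensor. All identifiers (MultiKernelGCN, kernel_orders, etc.) are hypothetical and do not come from the MMKGL paper.

import torch
import torch.nn as nn

class MultiKernelGCN(nn.Module):
    """Sketch of multi-kernel graph convolution: one kernel per receptive-field size."""

    def __init__(self, in_dim, hid_dim, kernel_orders=(1, 2, 3)):
        super().__init__()
        self.kernel_orders = kernel_orders
        # one linear projection per kernel (i.e., per hop count / receptive-field size)
        self.kernels = nn.ModuleList(
            nn.Linear(in_dim, hid_dim) for _ in kernel_orders
        )

    @staticmethod
    def normalize(adj):
        # symmetric normalization D^{-1/2} (A + I) D^{-1/2}
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg = adj.sum(dim=1)
        d_inv_sqrt = deg.pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

    def forward(self, x, adj):
        a = self.normalize(adj)
        outputs = []
        for order, lin in zip(self.kernel_orders, self.kernels):
            # propagate features `order` hops, then project and activate
            a_k = torch.linalg.matrix_power(a, order)
            outputs.append(torch.relu(lin(a_k @ x)))
        # stack per-kernel node embeddings into a cross-kernel tensor
        # shape: (num_kernels, num_nodes, hid_dim)
        return torch.stack(outputs, dim=0)

# usage sketch: fused subject features and a (learned) symmetric population graph
x = torch.randn(100, 64)           # 100 subjects, 64-dim fused multi-modal features
adj = torch.rand(100, 100)
adj = (adj + adj.T) / 2            # symmetric adjacency, stands in for the learned graph
model = MultiKernelGCN(64, 32)
cross_kernel = model(x, adj)       # tensor of shape (3, 100, 32)

In the paper, the adjacency would come from the multi-modal graph embedding module rather than being random, and the cross-kernel tensor would feed a classifier for disease prediction; this sketch only illustrates how differently sized kernels can be aggregated.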


