AMMASurv: Asymmetrical Multi-Modal Attention for Accurate Survival Analysis with Whole Slide Images and Gene Expression Data

08/28/2021
by Ruoqi Wang, et al.

The use of multi-modal data, such as the combination of whole slide images (WSIs) and gene expression data, for survival analysis can lead to more accurate survival predictions. However, previous multi-modal survival models cannot efficiently mine the intrinsic information within each modality. Moreover, although experimental results show that WSIs provide more effective information than gene expression data, previous methods treat the information from different modalities as equally important, so they cannot flexibly exploit the potential connections between the modalities. To address these problems, we propose a new asymmetrical multi-modal method, termed AMMASurv. Specifically, we design an asymmetrical multi-modal attention mechanism (AMMA) in the Transformer encoder for multi-modal data, enabling a more flexible multi-modal information fusion for survival prediction. Unlike previous works, AMMASurv can effectively utilize the intrinsic information within every modality and flexibly adapt to modalities of different importance. Extensive experiments validate the effectiveness of the proposed model, and the encouraging results demonstrate the superiority of our method over other state-of-the-art methods.
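The abstract does not spell out how AMMA biases attention toward the dominant modality, but the general idea of asymmetric multi-modal fusion can be illustrated with a masked joint-attention layer over the concatenated WSI and gene tokens. The sketch below is an illustrative assumption, not the authors' implementation: the module name (AsymmetricMultiModalAttention), the dimensions, and the exact masking direction are all hypothetical.

# Illustrative sketch only (assumed design): a Transformer-style attention layer
# that fuses WSI patch tokens and gene-expression tokens asymmetrically.
import torch
import torch.nn as nn

class AsymmetricMultiModalAttention(nn.Module):
    """Joint attention over WSI and gene tokens with an asymmetric mask:
    WSI tokens (treated here as the dominant modality) attend to all tokens,
    while gene tokens attend only to themselves."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, wsi_tokens: torch.Tensor, gene_tokens: torch.Tensor) -> torch.Tensor:
        # wsi_tokens: (B, Nw, dim), gene_tokens: (B, Ng, dim)
        x = torch.cat([wsi_tokens, gene_tokens], dim=1)   # (B, Nw+Ng, dim)
        n_wsi, n = wsi_tokens.size(1), x.size(1)

        # Boolean attention mask: True = attention to that position is blocked.
        mask = torch.zeros(n, n, dtype=torch.bool, device=x.device)
        # Gene queries may not look at WSI keys, so gene information flows into
        # the WSI tokens but not the other way around (assumed asymmetry).
        mask[n_wsi:, :n_wsi] = True

        out, _ = self.attn(x, x, x, attn_mask=mask)
        return self.norm(x + out)                         # residual + norm

if __name__ == "__main__":
    layer = AsymmetricMultiModalAttention(dim=256, num_heads=8)
    wsi = torch.randn(2, 100, 256)    # e.g. 100 WSI patch embeddings per slide
    genes = torch.randn(2, 20, 256)   # e.g. 20 gene-expression embeddings
    print(layer(wsi, genes).shape)    # torch.Size([2, 120, 256])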

