A Multi-modal Fusion Framework Based on Multi-task Correlation Learning for Cancer Prognosis Prediction

01/22/2022
by Kaiwen Tan, et al.

Morphological attributes from histopathological images and molecular profiles from genomic data are important sources of information for driving the diagnosis, prognosis, and therapy of cancers. By integrating these heterogeneous but complementary data, many multi-modal methods have been proposed to study the complex mechanisms of cancers, and most of them achieve results comparable to or better than those of previous single-modal methods. However, these multi-modal methods are restricted to a single task (e.g., survival analysis or grade classification) and thus neglect the correlation between different tasks. In this study, we present MultiCoFusion, a multi-modal fusion framework based on multi-task correlation learning for survival analysis and cancer grade classification, which combines the power of multiple modalities and multiple tasks. Specifically, a pre-trained ResNet-152 and a sparse graph convolutional network (SGCN) learn representations of histopathological images and mRNA expression data, respectively. These representations are then fused by a fully connected neural network (FCNN), which also serves as the shared multi-task network, and the results of survival analysis and cancer grade classification are output simultaneously. The framework is trained with an alternating scheme. We systematically evaluate MultiCoFusion on glioma datasets from The Cancer Genome Atlas (TCGA). The results demonstrate that MultiCoFusion learns better representations than traditional feature-extraction methods. With the help of multi-task alternating learning, even simple multi-modal concatenation achieves better performance than other deep-learning and traditional methods. Multi-task learning improves the performance of multiple tasks, not just one of them, and it is effective for both single-modal and multi-modal data.
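The fusion stage described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the feature dimensions, hidden size, dropout rate, and number of grade classes are assumed for the example, and the ResNet-152 and SGCN feature extractors are represented only by pre-computed input tensors.

```python
import torch
import torch.nn as nn

class MultiCoFusionSketch(nn.Module):
    """Hypothetical sketch of the MultiCoFusion fusion stage.

    Assumes histology features (e.g., from a pre-trained ResNet-152)
    and mRNA features (e.g., from a sparse GCN) are already extracted.
    All layer sizes here are illustrative, not taken from the paper.
    """

    def __init__(self, histo_dim=2048, gene_dim=512,
                 hidden_dim=256, n_grades=3):
        super().__init__()
        # Shared multi-task network: simple concatenation followed by an FCNN
        self.shared = nn.Sequential(
            nn.Linear(histo_dim + gene_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.25),
        )
        # Two task-specific heads producing both outputs simultaneously
        self.survival_head = nn.Linear(hidden_dim, 1)        # risk score
        self.grade_head = nn.Linear(hidden_dim, n_grades)    # grade logits

    def forward(self, histo_feat, gene_feat):
        fused = self.shared(torch.cat([histo_feat, gene_feat], dim=1))
        return self.survival_head(fused), self.grade_head(fused)

model = MultiCoFusionSketch()
h = torch.randn(4, 2048)   # batch of histology features
g = torch.randn(4, 512)    # batch of gene-expression features
risk, grade_logits = model(h, g)
```

An alternating training scheme of the kind the abstract mentions could then, for example, step the optimizer on the survival loss for one mini-batch and on the grade-classification loss for the next, so both heads share and shape the same FCNN representation.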

Related research

- 12/10/2020 — Deep-CR MTLR: a Multi-Modal Approach for Cancer Survival Prediction with Competing Risks
- 08/28/2021 — AMMASurv: Asymmetrical Multi-Modal Attention for Accurate Survival Analysis with Whole Slide Images and Gene Expression Data
- 04/08/2019 — Large Margin Multi-modal Multi-task Feature Extraction for Image Classification
- 06/14/2022 — Codec at SemEval-2022 Task 5: Multi-Modal Multi-Transformer Misogynous Meme Classification Framework
- 12/18/2017 — Multi-modal Face Pose Estimation with Multi-task Manifold Deep Learning
- 10/26/2017 — Deep Multi-Modal Classification of Intraductal Papillary Mucinous Neoplasms (IPMN) with Canonical Correlation Analysis
- 04/02/2022 — Ad Creative Discontinuation Prediction with Multi-Modal Multi-Task Neural Survival Networks
