Evidence-aware multi-modal data fusion and its application to total knee replacement prediction

03/24/2023
by Xinwen Liu, et al.

Deep neural networks have been widely studied for predicting medical conditions such as total knee replacement (TKR). It has been shown that data of different modalities, such as imaging data, clinical variables, and demographic information, provide complementary information and can therefore improve prediction accuracy when combined. However, the data sources of the various modalities are not always of high quality, and each modality may carry only partial information about the medical condition. As a result, predictions from different modalities can contradict each other, and the final prediction may fail in the presence of such a conflict. It is therefore important to account for the reliability of each data source and its prediction output when making the final decision. In this paper, we propose an evidence-aware multi-modal data fusion framework based on Dempster-Shafer theory (DST). The backbone models contain an image branch, a non-image branch, and a fusion branch. Each branch has an evidence network that takes the extracted features as input and outputs an evidence score, which is designed to represent the reliability of that branch's output. The output probabilities and evidence scores from the branches are combined with Dempster's rule of combination to make the final prediction. Experimental results on the public Osteoarthritis Initiative (OAI) dataset for the TKR prediction task show the superiority of the proposed fusion strategy across various backbone models.
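To make the combination step concrete, the sketch below shows one way the per-branch class probabilities and evidence scores could be fused with Dempster's rule for a binary prediction. This is an illustrative reading of the abstract, not the authors' implementation: the mapping from probabilities and evidence scores to mass functions (discounting the class probabilities by the evidence score and assigning the remainder to the ignorance set) is an assumption, and all numbers are hypothetical.

```python
import numpy as np


def to_mass(probs, evidence):
    """Map class probabilities and an evidence score in [0, 1] to a mass function.

    The mass function covers the singleton classes plus the ignorance set
    Theta (last entry). Discounting the probabilities by the evidence score
    is an assumption here, in the spirit of Shafer's discounting.
    """
    probs = np.asarray(probs, dtype=float)
    singleton_mass = evidence * probs          # belief committed to each class
    ignorance = 1.0 - evidence                 # belief withheld ("don't know")
    return np.append(singleton_mass, ignorance)


def dempster_combine(m1, m2):
    """Fuse two mass functions (singletons + Theta) with Dempster's rule."""
    c = len(m1) - 1
    s1, t1 = m1[:c], m1[c]
    s2, t2 = m2[:c], m2[c]
    # Conflict: mass jointly assigned to two different singleton classes.
    conflict = s1.sum() * s2.sum() - np.dot(s1, s2)
    norm = 1.0 - conflict
    fused_singletons = (s1 * s2 + s1 * t2 + t1 * s2) / norm
    fused_ignorance = (t1 * t2) / norm
    return np.append(fused_singletons, fused_ignorance)


# Hypothetical outputs for binary TKR prediction from the image,
# non-image, and fusion branches (softmax probabilities + evidence scores).
branch_probs = [(0.80, 0.20), (0.30, 0.70), (0.60, 0.40)]
evidence_scores = [0.9, 0.2, 0.7]

fused = to_mass(branch_probs[0], evidence_scores[0])
for p, e in zip(branch_probs[1:], evidence_scores[1:]):
    fused = dempster_combine(fused, to_mass(p, e))

print("fused masses (classes + Theta):", fused)
print("predicted class:", int(np.argmax(fused[:-1])))
```

In this reading, a branch with a low evidence score contributes little to the fused decision: most of its mass goes to the ignorance set, so Dempster's rule lets the more reliable branches dominate the final prediction.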


Related research

07/22/2022 · Uncertainty-aware Multi-modal Learning via Cross-modal Random Network Prediction
Multi-modal learning focuses on training models by equally combining mul...

09/04/2017 · Multi-modal Conditional Attention Fusion for Dimensional Emotion Prediction
Continuous dimensional emotion prediction is a challenging task where th...

10/04/2017 · Constructing multi-modality and multi-classifier radiomics predictive models through reliable classifier fusion
Radiomics aims to extract and analyze large numbers of quantitative feat...

06/23/2022 · Evidence fusion with contextual discounting for multi-modality medical image segmentation
As information sources are usually imperfect, it is necessary to take in...

07/29/2020 · Difficulty-aware Glaucoma Classification with Multi-Rater Consensus Modeling
Medical images are generally labeled by multiple experts before the fina...

05/02/2018 · Dynamically Improving Branch Prediction Accuracy Between Contexts
Branch prediction is a standard feature in most processors, significantl...

10/25/2022 · Fusing Modalities by Multiplexed Graph Neural Networks for Outcome Prediction in Tuberculosis
In a complex disease such as tuberculosis, the evidence for the disease ...
