MedFuse: Multi-modal fusion with clinical time-series data and chest X-ray images

07/14/2022
by   Nasir Hayat, et al.

Multi-modal fusion approaches aim to integrate information from different data sources. Unlike natural datasets, such as those used in audio-visual applications, where samples consist of "paired" modalities, data in healthcare is often collected asynchronously. Hence, requiring the presence of all modalities for a given sample is not realistic for clinical tasks and significantly limits the size of the dataset during training. In this paper, we propose MedFuse, a conceptually simple yet promising LSTM-based fusion module that can accommodate uni-modal as well as multi-modal input. We evaluate the fusion method and introduce new benchmark results for in-hospital mortality prediction and phenotype classification, using clinical time-series data in the MIMIC-IV dataset and corresponding chest X-ray images in MIMIC-CXR. Compared to more complex multi-modal fusion strategies, MedFuse improves performance by a large margin on the fully paired test set and remains robust on the partially paired test set, which contains samples with missing chest X-ray images. We release our code for reproducibility and to enable the evaluation of competing models in the future.
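
To illustrate the idea of an LSTM-based fusion module that accepts either uni-modal or multi-modal input, the following is a minimal PyTorch sketch, not the authors' released implementation. All names (MedFuseSketch, ehr_dim, cxr_dim, hidden sizes) are hypothetical; the key assumption shown is that the modality embeddings can be treated as a variable-length sequence, so a sample with a missing chest X-ray simply yields a shorter sequence rather than requiring imputation.

```python
# Illustrative sketch only; names and dimensions are hypothetical,
# not taken from the authors' released MedFuse code.
import torch
import torch.nn as nn


class MedFuseSketch(nn.Module):
    """Fuses a clinical time-series embedding with an optional chest X-ray
    embedding by feeding the modality vectors, as a short sequence, to an
    LSTM. A missing image just shortens the sequence to length one."""

    def __init__(self, ehr_dim=76, cxr_dim=512, hidden_dim=256, num_classes=1):
        super().__init__()
        # Modality-specific encoders (placeholders; in practice the time
        # series would use a recurrent encoder and the X-ray a CNN backbone).
        self.ehr_encoder = nn.LSTM(ehr_dim, hidden_dim, batch_first=True)
        self.cxr_proj = nn.Linear(cxr_dim, hidden_dim)
        # Fusion LSTM over the sequence of modality embeddings.
        self.fusion_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, ehr_seq, cxr_feat=None):
        # ehr_seq: (batch, time, ehr_dim); cxr_feat: (batch, cxr_dim) or None.
        _, (ehr_h, _) = self.ehr_encoder(ehr_seq)
        tokens = [ehr_h[-1]]                       # (batch, hidden_dim)
        if cxr_feat is not None:
            tokens.append(self.cxr_proj(cxr_feat))
        fused_in = torch.stack(tokens, dim=1)      # (batch, 1 or 2, hidden_dim)
        _, (fused_h, _) = self.fusion_lstm(fused_in)
        return self.classifier(fused_h[-1])        # task logits


# Example: the same model handles paired and image-missing samples.
model = MedFuseSketch()
ehr = torch.randn(2, 48, 76)          # 48 hourly steps of clinical variables
cxr = torch.randn(2, 512)             # precomputed X-ray features
print(model(ehr, cxr).shape)          # torch.Size([2, 1]) -- paired input
print(model(ehr).shape)               # torch.Size([2, 1]) -- uni-modal input
```

Treating the modality embeddings as a sequence is what lets a single set of weights be trained on both partially and fully paired samples, which is one way to avoid discarding the many ICU stays without a corresponding chest X-ray.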


research
08/13/2019

MEx: Multi-modal Exercises Dataset for Human Activity Recognition

MEx: Multi-modal Exercises Dataset is a multi-sensor, multi-modal datase...
research
11/04/2021

Towards dynamic multi-modal phenotyping using chest radiographs and physiological data

The healthcare domain is characterized by heterogeneous data modalities,...
research
11/13/2022

Early Diagnosis of Chronic Obstructive Pulmonary Disease from Chest X-Rays using Transfer Learning and Fusion Strategies

Chronic obstructive pulmonary disease (COPD) is one of the most common c...
research
02/26/2023

MDF-Net: Multimodal Dual-Fusion Network for Abnormality Detection using CXR Images and Clinical Data

This study aims to investigate the effects of including patients' clinic...
research
01/20/2023

DeepCOVID-Fuse: A Multi-modality Deep Learning Model Fusing Chest X-Radiographs and Clinical Variables to Predict COVID-19 Risk Levels

Propose: To present DeepCOVID-Fuse, a deep learning fusion model to pred...
research
05/02/2022

DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data

Gravitational lensing is the relativistic effect generated by massive bo...
research
03/17/2023

Hospital Length of Stay Prediction Based on Multi-modal Data towards Trustworthy Human-AI Collaboration in Radiomics

To what extent can the patient's length of stay in a hospital be predict...
