Learning Missing Modal Electronic Health Records with Unified Multi-modal Data Embedding and Modality-Aware Attention

05/04/2023
by   Kwanhyung Lee, et al.

Electronic Health Records (EHRs) provide abundant information through various modalities. However, learning from multi-modal EHR data currently faces two major challenges: 1) data embedding and 2) cases with missing modalities. The lack of a shared embedding function across modalities can discard the temporal relationships between different EHR modalities. Moreover, most EHR studies rely only on EHR time series, so missing modalities in EHR data have not been well explored. In this study, we therefore introduce a Unified Multi-modal Set Embedding (UMSE) and Modality-Aware Attention (MAA) with Skip Bottleneck (SB). UMSE handles all EHR modalities without a separate imputation module or error-prone carry-forward imputation, whereas MAA with SB learns from EHR data with missing modalities through effective modality-aware attention. Our model outperforms baseline models in predicting mortality, vasopressor need, and intubation need on the MIMIC-IV dataset.
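The two ideas in the abstract can be illustrated with a minimal sketch: every observation, whatever its modality, becomes a (time, modality, value) event embedded by one shared function into a single set (no per-modality time grid, no carry-forward), and attention then masks out slots for modalities that are absent. All names, dimensions, and the additive embedding form below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # shared embedding dimension (illustrative)

# Hypothetical event stream: each EHR observation is a (time, modality, value)
# triple, regardless of its source modality.
events = [
    (0.0, "vitals", 72.0),   # e.g. heart rate
    (1.5, "labs",   4.1),    # e.g. potassium
    (2.0, "vitals", 70.0),
    # note: other modalities (notes, images) may simply be absent for a patient
]

modalities = ["vitals", "labs", "notes", "image"]
W_mod = {m: rng.normal(size=D) for m in modalities}   # per-modality embedding
W_val = rng.normal(size=D)                            # shared value projection
W_time = rng.normal(size=D)                           # shared time projection

def embed_event(t, mod, val):
    # One shared embedding function for every modality: sum of modality,
    # value, and time components (a simplification of a set embedding).
    return W_mod[mod] + val * W_val + t * W_time

# Unified set embedding: all observed events in one set, no imputation.
X = np.stack([embed_event(t, m, v) for t, m, v in events])  # (n_events, D)

def masked_attention(query, keys, present_mask):
    # Attention that ignores missing slots via a presence mask, so absent
    # modalities contribute zero weight instead of imputed values.
    scores = keys @ query / np.sqrt(D)
    scores = np.where(present_mask, scores, -np.inf)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys

query = rng.normal(size=D)
mask = np.ones(len(events), dtype=bool)  # all listed events are present
pooled = masked_attention(query, X, mask)
print(pooled.shape)  # (8,)
```

The masking step is the key design point: setting scores of missing slots to negative infinity before the softmax gives them exactly zero attention weight, rather than forcing a placeholder value into the pooled representation.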

Related research

04/11/2023
Unified Multi-Modal Image Synthesis for Missing Modality Imputation
Multi-modal medical images provide complementary soft-tissue characteris...

11/12/2020
Learning Inter-Modal Correspondence and Phenotypes from Multi-Modal Electronic Health Records
Non-negative tensor factorization has been shown a practical solution to...

01/23/2019
Interpretable Neural Networks for Predicting Mortality Risk using Multi-modal Electronic Health Records
We present an interpretable neural network for predicting an important c...

11/04/2021
Towards dynamic multi-modal phenotyping using chest radiographs and physiological data
The healthcare domain is characterized by heterogeneous data modalities,...

09/03/2019
Multi-level Attention network using text, audio and video for Depression Prediction
Depression has been the leading cause of mental-health illness worldwide...

12/31/2014
ModDrop: adaptive multi-modal gesture recognition
We present a method for gesture detection and localisation based on mult...

10/16/2012
Factorized Multi-Modal Topic Model
Multi-modal data collections, such as corpora of paired images and text ...
