That's the Wrong Lung! Evaluating and Improving the Interpretability of Unsupervised Multimodal Encoders for Medical Data

10/12/2022
by Denis Jered McInerney, et al.

Pretraining multimodal models on Electronic Health Records (EHRs) provides a means of learning representations that can transfer to downstream tasks with minimal supervision. Recent multimodal models induce soft local alignments between image regions and sentences. This is of particular interest in the medical domain, where alignments might highlight regions in an image relevant to specific phenomena described in free-text. While past work has suggested that attention "heatmaps" can be interpreted in this manner, there has been little evaluation of such alignments. We compare alignments from a state-of-the-art multimodal (image and text) model for EHR with human annotations that link image regions to sentences. Our main finding is that the text often has a weak or unintuitive influence on attention; alignments do not consistently reflect basic anatomical information. Moreover, synthetic modifications – such as substituting "left" for "right" – do not substantially influence highlights. Simple techniques such as allowing the model to opt out of attending to the image and few-shot finetuning show promise for improving alignments with very little or no supervision.
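As a rough illustration (not the authors' implementation), the sketch below shows the kind of soft region-to-sentence alignment and perturbation test the abstract describes: image-region features attend to a sentence embedding to produce a "heatmap" over regions, and the sentence is re-encoded after swapping "left" for "right" to see whether the heatmap changes. The encoders, feature dimensions, and example sentence here are placeholders; in practice the features would come from a pretrained multimodal EHR model.

```python
# Minimal sketch of a soft image-text alignment probe, assuming placeholder
# encoders and random features; it is NOT the paper's code.
import torch
import torch.nn.functional as F


def region_sentence_alignment(region_feats: torch.Tensor,
                              sentence_emb: torch.Tensor) -> torch.Tensor:
    """Soft alignment ("heatmap") over image regions for one sentence.

    region_feats: (num_regions, dim) local image features
    sentence_emb: (dim,) pooled sentence representation
    Returns a (num_regions,) attention distribution.
    """
    scores = region_feats @ sentence_emb / region_feats.shape[-1] ** 0.5
    return F.softmax(scores, dim=0)


# Placeholder image features; a real setup would use patch features from a
# chest X-ray encoder.
num_regions, dim = 49, 256
torch.manual_seed(0)
region_feats = torch.randn(num_regions, dim)


def encode_sentence(text: str) -> torch.Tensor:
    # Stand-in for a real text encoder; hashing the text keeps the fake
    # embedding deterministic per sentence.
    g = torch.Generator().manual_seed(abs(hash(text)) % (2 ** 31))
    return torch.randn(dim, generator=g)


original = "Opacity in the left lower lobe."
perturbed = original.replace("left", "right")

heatmap_orig = region_sentence_alignment(region_feats, encode_sentence(original))
heatmap_pert = region_sentence_alignment(region_feats, encode_sentence(perturbed))

# If the model used the text meaningfully, the two heatmaps should differ;
# the paper's finding is that for real models they often barely do.
shift = (heatmap_orig - heatmap_pert).abs().sum() / 2  # total variation distance
print(f"Heatmap shift after left->right swap: {shift:.3f}")
```

A natural way to use such a probe is to report the average heatmap shift across many report sentences: values near zero indicate the attention maps are insensitive to the text and therefore hard to interpret as grounded alignments.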


