Using Multi-modal Data for Improving Generalizability and Explainability of Disease Classification in Radiology

07/29/2022
by   Pranav Agnihotri, et al.

Traditional datasets for radiological diagnosis tend to provide only the radiology image alongside the radiology report. However, radiology reading as performed by radiologists is a complex process, and information such as the radiologist's eye fixations over the course of the reading has the potential to be an invaluable data source to learn from. Nonetheless, the collection of such data is expensive and time-consuming, which raises the question of whether it is worth the investment to collect. This paper utilizes the recently published Eye-Gaze dataset to perform an exhaustive study of the impact of three input modalities, namely radiology images, radiology report text, and radiologist eye-gaze data, on the performance and explainability of deep learning (DL) classification. We find that the best classification performance on X-ray images is achieved with a combination of radiology report free-text and the radiology image, with the eye-gaze data providing no performance boost. Nonetheless, eye-gaze data serving as a secondary ground truth alongside the class label results in highly explainable models that generate better attention maps than models trained for classification and attention-map generation without eye-gaze data.
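The abstract's core idea, training a classifier whose attention map is supervised by radiologist eye-gaze heatmaps in addition to the class label, can be illustrated with a minimal sketch. The PyTorch example below is a hypothetical illustration rather than the authors' architecture: the small CNN image encoder, the embedding-based text encoder, the MSE gaze loss, and all shapes and names are assumptions chosen for brevity.

```python
# Hypothetical sketch: multi-modal (image + report text) classification with an
# auxiliary attention map supervised by an eye-gaze heatmap. Not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiModalClassifier(nn.Module):
    def __init__(self, vocab_size=10000, num_classes=3, embed_dim=128):
        super().__init__()
        # Image branch: small CNN producing a spatial feature map.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv turns the feature map into a single-channel attention map.
        self.attention_head = nn.Conv2d(embed_dim, 1, kernel_size=1)
        # Text branch: token embedding with mean pooling over the report.
        self.text_embedding = nn.Embedding(vocab_size, embed_dim)
        # Classification head over the fused image and text features.
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, image, report_tokens):
        feat = self.image_encoder(image)               # (B, C, H', W')
        attn_map = self.attention_head(feat)           # (B, 1, H', W')
        img_vec = feat.mean(dim=(2, 3))                # global average pooling
        txt_vec = self.text_embedding(report_tokens).mean(dim=1)
        logits = self.classifier(torch.cat([img_vec, txt_vec], dim=1))
        return logits, attn_map


def multitask_loss(logits, attn_map, labels, gaze_heatmap, gaze_weight=1.0):
    """Class-label loss plus eye-gaze supervision of the attention map."""
    cls_loss = F.cross_entropy(logits, labels)
    # Resize the gaze heatmap to the attention map's spatial resolution.
    gaze = F.interpolate(gaze_heatmap, size=attn_map.shape[-2:],
                         mode="bilinear", align_corners=False)
    gaze_loss = F.mse_loss(torch.sigmoid(attn_map), gaze)
    return cls_loss + gaze_weight * gaze_loss


if __name__ == "__main__":
    model = MultiModalClassifier()
    image = torch.randn(2, 1, 224, 224)           # toy grayscale CXR batch
    report = torch.randint(0, 10000, (2, 64))     # toy tokenized reports
    labels = torch.tensor([0, 2])
    gaze = torch.rand(2, 1, 224, 224)             # normalized gaze heatmaps
    logits, attn = model(image, report)
    loss = multitask_loss(logits, attn, labels, gaze)
    print(logits.shape, attn.shape, loss.item())
```

In this sketch the class label drives a cross-entropy term while the eye-gaze heatmap acts only as a secondary ground truth for the attention map, mirroring the multi-task setup described in the abstract, where gaze improves explainability rather than classification accuracy.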


