Improving Disease Classification Performance and Explainability of Deep Learning Models in Radiology with Heatmap Generators

06/28/2022
by   Akino Watanabe, et al.

As deep learning is widely used in the radiology field, the explainability of such models is increasingly becoming essential to gain clinicians' trust when using the models for diagnosis. In this research, three sets of experiments were conducted with a U-Net architecture to improve classification performance while enhancing the heatmaps corresponding to the model's focus, by incorporating heatmap generators during training. All experiments used a dataset containing chest radiographs, associated labels from one of three conditions ("normal", "congestive heart failure (CHF)", and "pneumonia"), and numerical information recording a radiologist's eye-gaze coordinates on the images. The paper that introduced this dataset (A. Karargyris and Moradi, 2021) developed a U-Net model, treated as the baseline for this research, to show how eye-gaze data can be used in multi-modal training to improve explainability. To compare classification performance, the 95% confidence intervals (CI) of the area under the receiver operating characteristic curve (AUC) were measured. The best method achieved an AUC of 0.913 (CI: 0.860-0.966). The greatest improvements were for the "pneumonia" and "CHF" classes, which the baseline model struggled most to classify, resulting in AUCs of 0.859 (CI: 0.732-0.957) and 0.962 (CI: 0.933-0.989), respectively. The proposed method's decoder was also able to produce probability masks that highlight the image regions that determined the model's classification, similar to the radiologist's eye-gaze data. Hence, this work showed that incorporating heatmap generators and eye-gaze information into training can simultaneously improve disease classification and provide explainable visuals that align well with how the radiologist viewed the chest radiographs when making a diagnosis.
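The abstract reports per-class AUCs with 95% confidence intervals but does not say how the intervals were obtained; a common way to estimate such a CI on a held-out test set is a percentile bootstrap over resampled predictions. The sketch below is an illustration of that general technique, not the paper's actual evaluation code, and the function names (`auc`, `bootstrap_ci`) are mine:

```python
import random

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for the AUC."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        ys = [labels[i] for i in idx]
        ss = [scores[i] for i in idx]
        if 0 < sum(ys) < n:  # AUC needs both classes in the resample
            stats.append(auc(ys, ss))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

On a real test set one would feed in the model's per-class probabilities (one-vs-rest) to get per-class intervals like the 0.859 (0.732-0.957) figure quoted for "pneumonia".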


research
07/29/2022

Using Multi-modal Data for Improving Generalizability and Explainability of Disease Classification in Radiology

Traditional datasets for the radiological diagnosis tend to only provide...
research
10/03/2020

Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and Report Dictation for AI Development

We developed a rich dataset of Chest X-Ray (CXR) images to assist invest...
research
02/06/2023

Integrating Eye-Gaze Data into CXR DL Approaches: A Preliminary study

This paper proposes a novel multimodal DL architecture incorporating med...
research
11/22/2019

Direct Classification of Type 2 Diabetes From Retinal Fundus Images in a Population-based Sample From The Maastricht Study

Type 2 Diabetes (T2D) is a chronic metabolic disorder that can lead to b...
research
06/10/2021

Anatomy X-Net: A Semi-Supervised Anatomy Aware Convolutional Neural Network for Thoracic Disease Classification

Thoracic disease detection from chest radiographs using deep learning me...
research
12/27/2022

NEEDED: Introducing Hierarchical Transformer to Eye Diseases Diagnosis

With the development of natural language processing techniques(NLP), aut...
