Using generative AI to investigate medical imagery models and datasets

06/01/2023
by Oran Lang, et al.

AI models have shown promise in many medical imaging tasks. However, our ability to explain what signals these models have learned is severely lacking. Explanations are needed to increase trust in AI-based models and could enable novel scientific discovery by uncovering signals in the data that are not yet known to experts. In this paper, we present a method for automatic visual explanations that leverages team-based expertise to generate hypotheses about which visual signals in the images correlate with the task. We propose the following four steps: (i) train a classifier to perform a given task; (ii) train a classifier-guided StyleGAN-based image generator (StylEx); (iii) automatically detect and visualize the top visual attributes that the classifier is sensitive to; and (iv) formulate hypotheses for the underlying mechanisms to stimulate future research. Specifically, we present the discovered attributes to an interdisciplinary panel of experts so that hypotheses can account for social and structural determinants of health. We demonstrate results on eight prediction tasks across three medical imaging modalities: retinal fundus photographs, external eye photographs, and chest radiographs. We showcase examples of attributes that capture clinically known features and confounders that arise from factors beyond physiological mechanisms, and we reveal a number of physiologically plausible novel attributes. Our approach can help researchers better understand AI-based models, improve their assessment of them, and extract new knowledge from them. Importantly, we highlight that attributes generated by our framework can capture phenomena beyond physiology or pathophysiology, reflecting the real-world nature of healthcare delivery and socio-cultural factors. Finally, we intend to release code to enable researchers to train their own StylEx models and analyze their predictive tasks.
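Step (iii) of the pipeline ranks style-space coordinates by how strongly perturbing each one shifts the classifier's output. The toy sketch below illustrates that sensitivity ranking with NumPy only; the `classifier` function, the random style codes, and all names are illustrative stand-ins, not the paper's StylEx implementation (where the classifier is a trained network applied to images generated from perturbed StyleSpace codes).

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier(styles):
    # Hypothetical stand-in for the trained task classifier: a logistic
    # model over style coordinates. Coordinate 0 matters most by design.
    weights = np.array([2.0, 0.0, 0.5, 0.0])
    return 1.0 / (1.0 + np.exp(-(styles @ weights)))

def top_attributes(styles, k=2, delta=1.0):
    """Return indices of the k style coordinates the classifier is most
    sensitive to, measured by mean |change in output| under perturbation."""
    base = classifier(styles)
    effects = []
    for j in range(styles.shape[1]):
        perturbed = styles.copy()
        perturbed[:, j] += delta            # nudge one coordinate
        effects.append(np.mean(np.abs(classifier(perturbed) - base)))
    # Sort coordinates by effect size, largest first.
    return [int(i) for i in np.argsort(effects)[::-1][:k]]

styles = rng.normal(size=(100, 4))  # stand-in for StyleSpace codes
print(top_attributes(styles))       # coordinate 0 should rank first
```

In the real method, each top-ranked attribute is then visualized by generating counterfactual image pairs that differ only in that coordinate, which is what the expert panel inspects when forming hypotheses.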


