Deep Learning Generates Synthetic Cancer Histology for Explainability and Education

11/12/2022
by James M. Dolezal, et al.

Artificial intelligence (AI) methods, including deep neural networks, can provide rapid molecular classification of tumors from routine histology with accuracy that can match or exceed that of human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools can help provide insight into what models have learned when the corresponding histologic features are poorly understood. Conditional generative adversarial networks (cGANs) are AI models that generate synthetic images and can illustrate subtle differences between image classes. Here, we describe the use of a cGAN for explaining models trained to classify molecularly subtyped tumors, exposing their associated histologic features. We leverage cGANs to create class- and layer-blending visualizations that improve understanding of subtype morphology. Finally, we demonstrate the potential use of synthetic histology for augmenting pathology trainee education and show that clear, intuitive cGAN visualizations can reinforce and improve human understanding of the histologic manifestations of tumor biology.
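The class-blending idea can be sketched in a few lines of code. The following is a minimal, hypothetical example, not the authors' released implementation: it assumes a pretrained conditional generator G(z, c), for instance a StyleGAN2-style network, that maps a latent vector z and a class-conditioning vector c to a synthetic histology image. The function and argument names are illustrative only.

    import torch

    def blend_classes(G, z, class_a, class_b, num_classes, steps=8):
        """Generate a sweep of synthetic images while interpolating the
        class condition from class_a to class_b, holding the latent z fixed."""
        images = []
        for alpha in torch.linspace(0.0, 1.0, steps):
            c = torch.zeros(1, num_classes)   # class-conditioning vector
            c[0, class_a] = 1.0 - alpha       # weight of the first subtype
            c[0, class_b] = alpha             # weight of the second subtype
            with torch.no_grad():
                img = G(z, c)                 # synthetic image for the blended label
            images.append(img)
        return images

Sweeping alpha from 0 to 1 while keeping z fixed yields a series of images that morph from one subtype's morphology toward the other, which is the kind of visualization the paper uses to expose class-associated histologic features.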

research · 09/27/2022
A Morphology Focused Diffusion Probabilistic Model for Synthesis of Histopathology Images
Visual microscopic study of diseased tissue by pathologists has been the...

research · 11/02/2021
Instructive artificial intelligence (AI) for human training, assistance, and explainability
We propose a novel approach to explainable AI (XAI) based on the concept...

research · 07/26/2022
From Interpretable Filters to Predictions of Convolutional Neural Networks with Explainable Artificial Intelligence
Convolutional neural networks (CNN) are known for their excellent featur...

research · 10/20/2021
Artificial Intelligence-Based Detection, Classification and Prediction/Prognosis in PET Imaging: Towards Radiophenomics
Artificial intelligence (AI) techniques have significant potential to en...

research · 10/11/2022
On Explainability in AI-Solutions: A Cross-Domain Survey
Artificial Intelligence (AI) increasingly shows its potential to outperf...

research · 08/07/2023
Evaluating and Explaining Large Language Models for Code Using Syntactic Structures
Large Language Models (LLMs) for code are a family of high-parameter, tr...

research · 10/26/2022
Synthetic Tumors Make AI Segment Tumors Better
We develop a novel strategy to generate synthetic tumors. Unlike existin...
