
DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation

05/31/2018
by Jimmy Wu, et al.

We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are correlated with ground truth radiology reports on the DDSM dataset. We show that DeepMiner not only enables better understanding of the nuances of CNN classification decisions, but also possibly discovers new visual knowledge relevant to medical diagnosis.
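To make the probing step concrete, below is a minimal, hypothetical sketch in PyTorch of how one might rank units in a CNN's final convolutional layer by their response strength over a batch of mammogram patches. The model, layer handle, and function name are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical unit-probing sketch: rank units in a final conv layer by
# how strongly they respond to a set of input patches. Assumes a PyTorch
# CNN whose final convolutional layer is passed in as `conv_layer`.
import torch
import torch.nn as nn


def rank_units_by_response(model: nn.Module, conv_layer: nn.Module,
                           images: torch.Tensor, top_k: int = 20):
    """Return indices of the top_k units in conv_layer whose peak spatial
    activation, averaged over the given images, is largest."""
    activations = []

    def hook(_module, _inputs, output):
        # output has shape (batch, units, H, W); keep each unit's spatial max
        activations.append(output.detach().amax(dim=(2, 3)))

    handle = conv_layer.register_forward_hook(hook)
    with torch.no_grad():
        model(images)
    handle.remove()

    per_unit = torch.cat(activations).mean(dim=0)  # average over images
    return torch.topk(per_unit, top_k).indices
```

Units ranked this way could then be surfaced to experts for annotation against BI-RADS concepts, which is the role expert annotation plays in the pipeline described above.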

