
DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation

by Jimmy Wu et al.

We propose DeepMiner, a framework for discovering interpretable representations in deep neural networks and building explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer respond strongly to diseased-tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our method generates explanations for CNN mammogram classifications that correlate with ground-truth radiology reports on the DDSM dataset. We show that DeepMiner not only enables a better understanding of the nuances of CNN classification decisions, but may also discover new visual knowledge relevant to medical diagnosis.
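The probing step described above can be illustrated with a minimal sketch. This is not the authors' released code: it assumes activations have already been extracted from the final convolutional layer and models them as nested lists, ranking images by how strongly each unit responds so that the top-responding examples can be shown to expert annotators.

```python
# Illustrative sketch (not the DeepMiner implementation): rank images by the
# strength of each final-conv-layer unit's response. The images that most
# strongly activate a unit are the candidates an expert would inspect when
# deciding whether the unit corresponds to a BI-RADS concept.

def unit_response(feature_map):
    """Max spatial response of one unit on one image (feature_map: 2-D list)."""
    return max(max(row) for row in feature_map)

def top_activating_images(activations, k=3):
    """activations[i][u] is the 2-D feature map of unit u on image i.
    Returns, for each unit, the indices of the k images with the strongest
    response to that unit."""
    num_units = len(activations[0])
    rankings = []
    for u in range(num_units):
        scores = [(unit_response(activations[i][u]), i)
                  for i in range(len(activations))]
        scores.sort(reverse=True)  # strongest responses first
        rankings.append([i for _, i in scores[:k]])
    return rankings

# Toy data: 3 images, 2 units, 2x2 feature maps.
acts = [
    [[[0.1, 0.2], [0.0, 0.3]], [[0.9, 0.1], [0.2, 0.2]]],  # image 0
    [[[0.8, 0.1], [0.1, 0.1]], [[0.4, 0.3], [0.1, 0.0]]],  # image 1
    [[[0.2, 0.6], [0.5, 0.1]], [[0.7, 0.2], [0.3, 0.5]]],  # image 2
]
print(top_activating_images(acts, k=2))  # [[1, 2], [0, 2]]
```

In the paper's pipeline, the per-unit image rankings are what experts annotate; here, max-pooling over spatial locations is one simple choice of response statistic, and alternatives such as mean activation would slot into `unit_response` unchanged.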



