IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography

03/23/2021
by Alina Jade Barnett, et al.

Interpretability in machine learning models is important in high-stakes decisions, such as whether to order a biopsy based on a mammographic exam. Mammography poses important challenges that are not present in other computer vision tasks: datasets are small, confounding information is present, and it can be difficult even for a radiologist to decide between watchful waiting and biopsy based on a mammogram alone. In this work, we present a framework for interpretable machine learning-based mammography. In addition to predicting whether a lesion is malignant or benign, our work aims to follow the reasoning processes of radiologists in detecting clinically relevant semantic features of each image, such as the characteristics of the mass margins. The framework includes a novel interpretable neural network algorithm that uses case-based reasoning for mammography. Our algorithm can incorporate a combination of data with whole-image labels and data with pixel-wise annotations, leading to better accuracy and interpretability even with a small number of images. Our interpretable models highlight the classification-relevant parts of the image, whereas other methods highlight healthy tissue and confounding information. Our models are decision aids, rather than decision makers, aimed at better overall human-machine collaboration. We do not observe a loss in mass-margin classification accuracy over a black-box neural network trained on the same data.
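The case-based reasoning described here follows the prototype-network family (e.g., ProtoPNet, whose deformable variant appears in the related research below): the model learns prototypical image parts from training cases and classifies a new mammogram by how strongly its regions resemble those prototypes. What follows is a minimal, hedged PyTorch sketch of that idea, not the authors' released implementation; the names (PrototypeLayer, ProtoClassifier, fine_annotation_penalty) and the exact form of the pixel-annotation penalty are illustrative assumptions meant only to show how image-level labels and pixel-wise annotations could be combined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeLayer(nn.Module):
    """Compares CNN feature patches to learned prototypes.

    Each prototype is a small latent patch; classification is driven by
    how similar parts of a new image are to prototypical parts of
    training cases (case-based reasoning).
    """

    def __init__(self, n_prototypes, channels, proto_h=1, proto_w=1):
        super().__init__()
        self.prototypes = nn.Parameter(
            torch.randn(n_prototypes, channels, proto_h, proto_w))

    def forward(self, features):
        # features: (B, C, H, W) feature map from a convolutional backbone.
        # Squared L2 distance of every patch to every prototype, via the
        # expansion ||x - p||^2 = ||x||^2 - 2 x.p + ||p||^2, using conv2d.
        x_sq = F.conv2d(features ** 2, torch.ones_like(self.prototypes))
        xp = F.conv2d(features, self.prototypes)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        dist = F.relu(x_sq - 2 * xp + p_sq)           # (B, P, H', W')
        sim = torch.log((dist + 1) / (dist + 1e-4))   # large where patch matches prototype
        scores = sim.flatten(2).max(dim=2).values     # best match per prototype, (B, P)
        return scores, sim


class ProtoClassifier(nn.Module):
    """Backbone -> prototype similarities -> linear layer over similarities."""

    def __init__(self, backbone, n_prototypes, channels, n_classes):
        super().__init__()
        self.backbone = backbone
        self.protos = PrototypeLayer(n_prototypes, channels)
        self.head = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, x):
        scores, sim_maps = self.protos(self.backbone(x))
        return self.head(scores), sim_maps


def fine_annotation_penalty(sim_maps, lesion_masks):
    """Illustrative mixed-supervision term (an assumption, not the paper's
    exact loss): for the subset of images that carry pixel-wise radiologist
    annotations, penalize prototype activation falling outside the annotated
    region, steering prototypes toward medically relevant tissue rather than
    confounders. Images without masks skip this term and contribute only
    the usual classification loss.

    sim_maps:     (B, P, H', W') similarity maps from PrototypeLayer
    lesion_masks: (B, 1, H', W') binary masks, 1 inside the annotation
    """
    outside = sim_maps * (1.0 - lesion_masks)
    return outside.flatten(1).norm(p=2, dim=1).mean()


if __name__ == "__main__":
    # Toy demo with a stand-in backbone; a real model would truncate a
    # pretrained CNN before its classification head.
    backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(14))
    model = ProtoClassifier(backbone, n_prototypes=10, channels=64, n_classes=2)
    logits, sim_maps = model(torch.randn(2, 3, 224, 224))
    masks = torch.randint(0, 2, (2, 1, 14, 14)).float()
    print(logits.shape, fine_annotation_penalty(sim_maps, masks).item())
```

The design point this sketch tries to capture is that the final linear layer acts only on prototype similarity scores, so each logit decomposes into "this region looks like that prototypical case" contributions, which is what makes the model's evidence inspectable; the mask penalty is one simple way such evidence could be pushed toward annotated tissue and away from confounding information.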


Related research

07/12/2021 · Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning
When we deploy machine learning models in high-stakes medical settings, ...

06/25/2018 · Why Interpretability in Machine Learning? An Answer Using Distributed Detection and Data Fusion Theory
As artificial intelligence is increasingly affecting all parts of societ...

12/01/2022 · Implicit Mixture of Interpretable Experts for Global and Local Interpretability
We investigate the feasibility of using mixtures of interpretable expert...

12/10/2019 · Deep Relevance Regularization: Interpretable and Robust Tumor Typing of Imaging Mass Spectrometry Data
Neural networks have recently been established as a viable classificatio...

04/19/2020 · A Biologically Interpretable Two-stage Deep Neural Network (BIT-DNN) for Hyperspectral Imagery Classification
Spectral-spatial based deep learning models have recently proven to be e...

11/29/2021 · Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes
Machine learning has been widely adopted in many domains, including high...

08/13/2021 · An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images
Algorithmic decision support is rapidly becoming a staple of personalize...
