M3d-CAM: A PyTorch library to generate 3D data attention maps for medical deep learning

07/01/2020
by Karol Gotkowski, et al.

M3d-CAM is an easy-to-use library for generating attention maps of CNN-based PyTorch models, improving the interpretability of model predictions for humans. The attention maps can be generated with multiple methods, such as Guided Backpropagation, Grad-CAM, Guided Grad-CAM, and Grad-CAM++. These attention maps visualize the regions of the input data that most influenced the model prediction at a chosen layer. Furthermore, M3d-CAM supports both 2D and 3D data for classification as well as segmentation tasks. A key feature is that in most cases only a single line of code is needed to generate attention maps for a model, which makes M3d-CAM essentially plug and play.
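To illustrate the core mechanism behind such attention maps, here is a minimal, self-contained Grad-CAM sketch in PyTorch (this is an illustrative implementation, not the M3d-CAM API itself; the `GradCAM` class and the toy model are assumptions for demonstration): forward and backward hooks capture a layer's activations and gradients, each channel is weighted by its spatially averaged gradient, and the weighted sum is ReLU'd into a heatmap.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative Grad-CAM sketch (not the M3d-CAM library API):
# hooks record the activations and gradients of a chosen layer.
class GradCAM:
    def __init__(self, model, target_layer):
        self.model = model
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inp, out):
        self.activations = out.detach()

    def _save_gradient(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def __call__(self, x, class_idx):
        self.model.zero_grad()
        logits = self.model(x)
        logits[:, class_idx].sum().backward()
        # Weight each channel by its spatially averaged gradient,
        # sum over channels, then keep only positive influence.
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * self.activations).sum(dim=1))
        # Normalize to [0, 1] for visualization.
        cam = (cam - cam.amin()) / (cam.amax() - cam.amin() + 1e-8)
        return cam

# Toy 2D CNN classifier, used only to exercise the sketch.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
cam = GradCAM(model, target_layer=model[2])
heatmap = cam(torch.randn(1, 1, 16, 16), class_idx=0)
print(heatmap.shape)
```

The same idea extends to 3D by pooling gradients over `dim=(2, 3, 4)` for volumetric feature maps, which is how attention maps for 3D medical data become possible.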


