Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis

04/10/2023
by   Cristiano Patrício, et al.

Early detection of melanoma is crucial for preventing severe complications and increasing the chances of successful treatment. Existing deep learning approaches for melanoma skin lesion diagnosis are black-box models: they omit the rationale behind a prediction, compromising the trustworthiness and acceptability of these diagnostic methods. Prior attempts to provide concept-based explanations rely on post-hoc approaches, which depend on an additional model to derive interpretations. In this paper, we propose an inherently interpretable framework that improves the interpretability of concept-based models by incorporating a hard attention mechanism and a coherence loss term, ensuring the visual coherence of the concept activations produced by the concept encoder without requiring additional annotations. The proposed framework explains its decisions in terms of human-interpretable concepts, their respective contributions to the final prediction, and a visual indication of where each concept is present in the image. Experiments on skin image datasets demonstrate that our method outperforms existing black-box and concept-based models for skin lesion classification.
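To make the two ingredients of the abstract concrete, here is a minimal NumPy sketch under stated assumptions: the coherence term is modeled as a squared-difference penalty between concept activations of an image and an augmented view of it, and the final prediction as a linear head whose per-concept contributions are directly readable. The function names, the exact loss form, and the linear head are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def coherence_loss(concept_act, concept_act_aug):
    """Illustrative coherence term: mean squared distance between the
    concept activations of an image and of an augmented view.
    A small value means the concept encoder activates coherently."""
    return float(np.mean((concept_act - concept_act_aug) ** 2))

def prediction_from_concepts(concept_scores, weights, bias=0.0):
    """Linear head over concept scores. Each concept's contribution to
    the final logit is weight * activation, so the explanation is just
    the vector of contributions."""
    contributions = weights * concept_scores
    logit = float(np.sum(contributions) + bias)
    return contributions, logit

# Example: two concepts (e.g. "atypical pigment network", "blue-whitish veil")
scores = np.array([0.5, 0.5])
w = np.array([1.0, 2.0])
contrib, logit = prediction_from_concepts(scores, w)  # contrib = [0.5, 1.0], logit = 1.5
```

In this reading, each concept's contribution to the diagnosis is the product of its activation and its learned weight, which is what makes the model's decision inspectable concept by concept.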


