Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision

by   Siyuan Yan, et al.

Deep neural networks have demonstrated promising performance on image recognition tasks. However, they may rely heavily on confounding factors, using irrelevant artifacts or biases within the dataset as cues to improve performance. When a model makes decisions based on these spurious correlations, it can become untrustworthy and lead to catastrophic outcomes when deployed in real-world scenarios. In this paper, we explore and attempt to solve this problem in the context of skin cancer diagnosis. We introduce a human-in-the-loop framework into the model training process so that users can observe and correct the model's decision logic when confounding behavior occurs. Specifically, our method can automatically discover confounding factors by analyzing the co-occurrence behavior of the samples. It is capable of learning confounding concepts using easily obtained concept exemplars. By mapping the black-box model's feature representation onto an explainable concept space, human users can interpret the concepts and intervene via first-order logic instructions. We systematically evaluate our method on our newly crafted, well-controlled skin lesion dataset and on several public skin lesion datasets. Experiments show that our method can effectively detect and remove confounding factors from datasets without any prior knowledge about the category distribution and without requiring fully annotated concept labels. We also show that our method enables the model to focus on clinically relevant concepts, improving the model's performance and trustworthiness during inference.
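The abstract describes mapping a black-box model's feature representation onto an explainable concept space learned from concept exemplars. The sketch below illustrates one common way such a mapping can work, using CAV-style linear concept directions estimated from exemplar features; the concept names, dimensions, and mean-difference probe are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 512-dim features from a frozen backbone, plus small
# exemplar sets for human-interpretable (possibly confounding) concepts.
# Concept names here are hypothetical examples of dermoscopy artifacts.
feat_dim, n_exemplars = 512, 40
concepts = ["ruler_marking", "dark_corner", "gel_bubble"]

def concept_direction(pos_feats, neg_feats):
    """One direction per concept: normalized mean difference between
    exemplar features and counterexample features (a simple linear probe)."""
    d = pos_feats.mean(axis=0) - neg_feats.mean(axis=0)
    return d / np.linalg.norm(d)

directions = []
for _ in concepts:
    # Synthetic stand-ins for real exemplar / counterexample features.
    pos = rng.normal(loc=1.0, size=(n_exemplars, feat_dim))
    neg = rng.normal(loc=0.0, size=(n_exemplars, feat_dim))
    directions.append(concept_direction(pos, neg))
W = np.stack(directions)  # (n_concepts, feat_dim) concept basis

def to_concept_space(feature):
    """Project a black-box feature vector onto the concept directions,
    yielding one interpretable activation score per concept."""
    return W @ feature

scores = to_concept_space(rng.normal(size=feat_dim))
print(dict(zip(concepts, np.round(scores, 2))))
```

A human reviewer could then inspect these per-concept scores and, when a confounder such as a ruler marking drives the prediction, instruct the model (e.g. via a logic rule penalizing that concept's influence) to ignore it.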




