Clusters in Explanation Space: Inferring disease subtypes from model explanations

12/18/2019
by Marc-Andre Schulz, et al.

Identification of disease subtypes and corresponding biomarkers can substantially improve clinical diagnosis and treatment selection. Discovering these subtypes in noisy, high-dimensional biomedical data is often impossible for humans and challenging for machines. We introduce a new approach to facilitate the discovery of disease subtypes: instead of analyzing the original data, we train a diagnostic classifier (healthy vs. diseased) and extract instance-wise explanations for the classifier's decisions. The distribution of instances in the explanation space of our diagnostic classifier amplifies the different reasons for belonging to the same class, resulting in a representation that is uniquely useful for discovering latent subtypes. We compare our ability to recover subtypes via cluster analysis on model explanations with classical cluster analysis on the original data. In multiple datasets with known ground-truth subclasses, most compellingly on UK Biobank brain imaging data and transcriptome data from The Cancer Genome Atlas, we show that cluster analysis on model explanations substantially outperforms the classical approach. While we believe clustering in explanation space to be particularly valuable for inferring disease subtypes, the method is more general and applicable to any kind of subtype identification.
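The pipeline described above (train a diagnostic classifier, extract instance-wise explanations, then cluster in explanation space) can be sketched on synthetic data. This is a minimal illustration, not the paper's implementation: it assumes a linear classifier, so the per-feature contribution coef_j * x_ij serves as a simple stand-in for the saliency-style attributions used for deep models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.RandomState(0)

# Synthetic data: two diseased subtypes, each driven by a different feature,
# embedded in noisy 20-dimensional data.
n, d = 300, 20
X_healthy = rng.randn(n, d)
X_sub1 = rng.randn(n // 2, d); X_sub1[:, 0] += 3.0  # subtype 1: feature 0 elevated
X_sub2 = rng.randn(n // 2, d); X_sub2[:, 1] += 3.0  # subtype 2: feature 1 elevated
X = np.vstack([X_healthy, X_sub1, X_sub2])
y = np.array([0] * n + [1] * n)                      # healthy vs. diseased
subtype = np.array([0] * (n // 2) + [1] * (n // 2))  # ground truth, diseased only

# Step 1: train the diagnostic classifier (healthy vs. diseased).
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: instance-wise explanations for the diseased instances.
# For a linear model, each feature's contribution is coef_j * x_ij.
X_diseased = X[y == 1]
explanations = X_diseased * clf.coef_[0]

# Step 3: cluster in explanation space vs. in the original data.
labels_expl = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(explanations)
labels_raw = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_diseased)

ari_expl = adjusted_rand_score(subtype, labels_expl)
ari_raw = adjusted_rand_score(subtype, labels_raw)
print(f"ARI (explanation space): {ari_expl:.2f}")
print(f"ARI (original data):     {ari_raw:.2f}")
```

Because the classifier's weights concentrate on the diagnostically relevant features, the explanation space down-weights noise dimensions and separates the two subtypes by the *reason* each instance was classified as diseased.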


Related research

- 09/08/2021 · Diagnostics-Guided Explanation Generation
  Explanations shed light on a machine learning model's rationales and can...
- 04/10/2023 · Explanation Strategies for Image Classification in Humans vs. Current Explainable AI
  Explainable AI (XAI) methods provide explanations of AI models, but our...
- 01/04/2022 · ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions
  One principal impediment in the successful deployment of AI-based Comput...
- 05/24/2021 · Deep Descriptive Clustering
  Recent work on explainable clustering allows describing clusters when th...
- 08/21/2021 · Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to Reinforce an Alzheimer's Disease Diagnosis Model
  Existing studies on disease diagnostic models focus either on diagnostic...
- 11/20/2018 · An interpretable multiple kernel learning approach for the discovery of integrative cancer subtypes
  Due to the complexity of cancer, clustering algorithms have been used to...
- 07/04/2021 · Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods
  Detecting latent structure within a dataset is a crucial step in perform...
