Computational analysis of pathological image enables interpretable prediction for microsatellite instability

by Jin Zhu et al.

Microsatellite instability (MSI) is associated with several tumor types, and its status has become increasingly vital in guiding patient treatment decisions. However, in clinical practice, distinguishing MSI from microsatellite stability is challenging, since the diagnosis of MSI requires additional genetic or immunohistochemical tests. In this study, interpretable pathological image analysis strategies are established to help medical experts automatically identify MSI. The strategies only require ubiquitous haematoxylin and eosin-stained whole-slide images and achieve decent performance on three cohorts collected from The Cancer Genome Atlas. The strategies provide interpretability in two aspects. On the one hand, image-level interpretability is achieved by generating localization heat maps of important regions based on the deep learning network; on the other hand, feature-level interpretability is attained through feature importance and pathological feature interaction analysis. Notably, both the image-level and feature-level interpretability show that color features and texture characteristics contribute the most to the MSI predictions. Therefore, the classification models under the proposed strategies can not only serve as an efficient tool for predicting the MSI status of patients, but also provide pathologists with further insights for clinical understanding.


Stratification of carotid atheromatous plaque using interpretable deep learning methods on B-mode ultrasound images

Carotid atherosclerosis is the major cause of ischemic stroke resulting ...

ISeeU2: Visually Interpretable ICU mortality prediction using deep learning and free-text medical notes

Accurate mortality prediction allows Intensive Care Units (ICUs) to adeq...

AdaCare: Explainable Clinical Health Status Representation Learning via Scale-Adaptive Feature Extraction and Recalibration

Deep learning-based health status representation learning and clinical p...

ICADx: Interpretable computer aided diagnosis of breast masses

In this study, a novel computer aided diagnosis (CADx) framework is devi...

A Personalized Diagnostic Generation Framework Based on Multi-source Heterogeneous Data

Personalized diagnoses have not been possible due to the sheer amount of data...

Additive MIL: Intrinsic Interpretability for Pathology

Multiple Instance Learning (MIL) has been widely applied in pathology to...

Rationalizing Medical Relation Prediction from Corpus-level Statistics

Nowadays, the interpretability of machine learning models is becoming in...