An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images

08/13/2021
by Haomin Chen, et al.

Algorithmic decision support is rapidly becoming a staple of personalized medicine, especially for high-stakes recommendations in which access to certain information can drastically alter the course of treatment and, thus, patient outcome; a prominent example is radiomics for cancer subtyping. Because the stakes in these scenarios are high, it is desirable for decision systems not only to provide recommendations but also to supply transparent reasoning in support of them. For learning-based systems, this can be achieved through an interpretable design of the inference pipeline. Herein we describe an automated yet interpretable system for uveal melanoma subtyping with digital cytology images from fine needle aspiration biopsies. Our method embeds every automatically segmented cell of a candidate cytology image as a point in a 2D manifold defined by many representative slides, which enables reasoning about the cell-level composition of the tissue sample and paves the way for interpretable subtyping of the biopsy. Finally, a rule-based slide-level classification algorithm is trained on the partitions of the circularly distorted 2D manifold. This process results in a simple rule set that is evaluated automatically yet remains highly transparent for human verification. On our in-house cytology dataset of 88 uveal melanoma patients, the proposed method achieves an accuracy of 87.5%, comparing favorably with competing approaches, including deep "black box" models. The method comes with a user interface to facilitate interaction with cell-level content, which may offer additional insights for pathological assessment.
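The abstract outlines a three-stage pipeline: per-cell 2D embedding on a manifold defined by representative slides, partitioning of that manifold, and rule-based slide-level classification over the resulting cell composition. The sketch below is a minimal, hypothetical illustration of that flow, not the authors' implementation: the feature extractor, the UMAP embedding, the k-means partitioning (a stand-in for the circular partitioning of the manifold), the region indices, the class names, and the thresholds are all assumptions.

# Minimal sketch, under the assumptions stated above, of the described pipeline:
# per-cell features -> 2D manifold fit on reference slides -> manifold partitions
# -> per-slide cell-composition vector -> transparent rule-based classification.
import numpy as np
import umap                         # pip install umap-learn
from sklearn.cluster import KMeans

def fit_manifold(reference_cell_features, n_partitions=8):
    """Fit a 2D embedding on cells pooled from representative slides and
    partition the resulting manifold into regions."""
    reducer = umap.UMAP(n_components=2, random_state=0)
    reference_embedding = reducer.fit_transform(reference_cell_features)
    partitioner = KMeans(n_clusters=n_partitions, n_init=10,
                         random_state=0).fit(reference_embedding)
    return reducer, partitioner

def slide_composition(reducer, partitioner, candidate_cell_features):
    """Map a candidate slide's segmented cells onto the manifold and return the
    fraction of its cells falling into each partition."""
    embedding = reducer.transform(candidate_cell_features)
    labels = partitioner.predict(embedding)
    counts = np.bincount(labels, minlength=partitioner.n_clusters)
    return counts / max(counts.sum(), 1)

def classify_slide(composition, epithelioid_regions=(2, 5), threshold=0.30):
    """Toy rule: label the slide 'epithelioid' when the combined fraction of
    cells in the (hypothetical) epithelioid-dominant regions exceeds a
    threshold; otherwise 'spindle'."""
    if composition[list(epithelioid_regions)].sum() > threshold:
        return "epithelioid"
    return "spindle"

In the paper the slide-level rule set is learned from the manifold partitions rather than hand-written; the point of the sketch is only that the final decision reduces to a few human-checkable conditions on region occupancies, which is what makes the classification transparent.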
