Additive MIL: Intrinsic Interpretability for Pathology

06/03/2022
by Syed Ashar Javed, et al.

Multiple Instance Learning (MIL) has been widely applied in pathology to critical problems such as automating cancer diagnosis and grading, predicting patient prognosis, and predicting therapy response. Deploying these models in a clinical setting requires careful inspection of these black boxes during development and deployment to identify failures and maintain physician trust. In this work, we propose a simple formulation of MIL models which enables interpretability while maintaining similar predictive performance. Our Additive MIL models enable spatial credit assignment such that the contribution of each region in the image can be exactly computed and visualized. We show that our spatial credit assignment coincides with regions used by pathologists during diagnosis and improves upon classical attention heatmaps from attention MIL models. We show that any existing MIL model can be made additive with a simple change in function composition. We also show how these models can be used to debug model failures, identify spurious features, and highlight class-wise regions of interest, enabling their use in high-stakes environments such as clinical decision-making.
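The "simple change in function composition" can be illustrated with a toy NumPy sketch. In classical attention MIL the classifier g is applied *after* attention pooling, g(Σᵢ aᵢhᵢ); in the additive formulation g is applied to each attention-weighted instance *before* summation, Σᵢ g(aᵢhᵢ), so each patch's contribution to the bag logits is exact by construction. All dimensions, weights, and the one-hidden-layer classifier below are hypothetical placeholders, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical toy sizes: a bag of N patch embeddings of dimension D, C classes.
N, D, C = 6, 8, 3
H = rng.normal(size=(N, D))       # instance features h_i
w_attn = rng.normal(size=D)       # linear attention scorer (for brevity)
a = softmax(H @ w_attn)           # attention weights a_i

# A small nonlinear classifier g; note that with a purely *linear* g the two
# formulations would coincide, so the nonlinearity is what makes them differ.
W1 = rng.normal(size=(D, 16))
W2 = rng.normal(size=(16, C))
def g(x):
    return np.maximum(x @ W1, 0.0) @ W2

# Classical attention MIL: pool first, then classify -> g(sum_i a_i h_i).
logits_attention = g((a[:, None] * H).sum(axis=0))

# Additive MIL: classify each weighted instance, then sum -> sum_i g(a_i h_i).
contributions = g(a[:, None] * H)          # shape (N, C): exact per-patch credit
logits_additive = contributions.sum(axis=0)
```

Because the bag logits are a plain sum of per-instance terms, `contributions[i]` is the exact (not approximate) credit assigned to patch i for each class, which is what enables the class-wise heatmaps and failure debugging described above.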


