Faithful and Plausible Explanations of Medical Code Predictions

04/16/2021
by Zach Wood-Doughty, et al.

Machine learning models that offer excellent predictive performance often lack the interpretability necessary to support integrated human-machine decision-making. In clinical medicine and other high-risk settings, domain experts may be unwilling to trust model predictions without explanations. Work in explainable AI must balance competing objectives along two different axes: 1) Explanations must balance faithfulness to the model's decision-making with their plausibility to a domain expert. 2) Domain experts desire local explanations of individual predictions and global explanations of behavior in aggregate. We propose to train a proxy model that mimics the behavior of the trained model and provides fine-grained control over these trade-offs. We evaluate our approach on the task of assigning ICD codes to clinical notes to demonstrate that explanations from the proxy model are faithful and replicate the trained model behavior.
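The proxy-model idea in the abstract can be illustrated with a minimal distillation sketch: train an interpretable proxy to mimic a black-box model's predictions, then measure fidelity (how often the proxy agrees with the black box). The model choices below (a random forest as the black box, a shallow decision tree as the proxy, and scikit-learn synthetic data) are illustrative assumptions, not the architecture or data used in the paper.

```python
# Hypothetical sketch: distill a black-box classifier into a simpler,
# more interpretable proxy and measure fidelity to the black box.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 1) Train the "black-box" model on the true labels.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2) Train the proxy on the black box's *predictions*, not the true
#    labels, so explanations of the proxy reflect the model's behavior.
teacher_labels = black_box.predict(X)
proxy = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, teacher_labels)

# 3) Fidelity: agreement between proxy and black box. Limiting proxy
#    complexity (here, max_depth) trades faithfulness for plausibility.
fidelity = float(np.mean(proxy.predict(X) == teacher_labels))
print(f"proxy fidelity to black box: {fidelity:.3f}")
```

Shrinking `max_depth` yields shorter, more plausible decision paths at the cost of fidelity, which is the faithfulness/plausibility trade-off the abstract describes.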


