What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems

11/27/2018
by Owen Lahav, et al.

Recent efforts in Machine Learning (ML) interpretability have focused on creating methods for explaining black-box ML models. However, these methods rely on the assumption that simple approximations, such as linear models or decision trees, are inherently human-interpretable, an assumption that has not been empirically tested. Additionally, past efforts have focused exclusively on comprehension, neglecting the trust component necessary to convince non-technical experts, such as clinicians, to use ML models in practice. In this paper, we posit that reinforcement learning (RL) can be used to learn what is interpretable to different users and, consequently, to build their trust in ML models. To validate this idea, we first train a neural network to provide risk assessments for heart failure patients. We then design an RL-based clinical decision-support system (DSS) around the neural network model, which can learn from its interactions with users. We conduct an experiment involving a diverse set of clinicians from multiple institutions in three different countries. Our results demonstrate that ML experts cannot accurately predict which system outputs will maximize clinicians' confidence in the underlying neural network model, and they yield additional findings with broad implications for future research into ML interpretability and the use of ML in medicine.
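The central mechanism described in the abstract, using RL to learn which system outputs build a particular user's trust, can be illustrated with a simple bandit formulation: each candidate explanation style is an arm, and the clinician's reported confidence after seeing it is the reward. The sketch below is a hypothetical illustration under that framing, not the paper's actual system: the explanation styles, the 0-10 confidence reward, the epsilon-greedy strategy, and the simulated clinician are all assumptions standing in for a trained heart-failure risk model and real user interactions.

```python
# Hypothetical sketch: an epsilon-greedy bandit that learns which explanation
# style, shown alongside a model's risk score, maximizes a clinician's
# reported confidence. The styles, reward scale, and simulated clinician are
# illustrative assumptions, not the design from the paper.
import random

EXPLANATION_STYLES = [
    "feature_importances",   # e.g., top risk factors for this patient
    "similar_patients",      # e.g., outcomes of comparable past cases
    "confidence_interval",   # e.g., uncertainty around the risk score
    "counterfactual",        # e.g., what would lower this patient's risk
]

class ExplanationBandit:
    """Epsilon-greedy bandit over explanation styles, one arm per style."""

    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}    # times each arm was shown
        self.values = {a: 0.0 for a in self.arms}  # running mean confidence

    def select(self):
        # Explore a random style with probability epsilon, otherwise exploit
        # the style with the highest mean confidence so far.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental update of the running mean reward for this arm.
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n

def clinician_confidence(style):
    """Stand-in for a real clinician's 0-10 confidence rating (simulated)."""
    base = {"feature_importances": 7, "similar_patients": 8,
            "confidence_interval": 5, "counterfactual": 6}[style]
    return base + random.uniform(-1, 1)

if __name__ == "__main__":
    bandit = ExplanationBandit(EXPLANATION_STYLES)
    for _ in range(500):  # one interaction = one risk assessment shown
        style = bandit.select()
        bandit.update(style, clinician_confidence(style))
    # After enough interactions, the learned means reveal which style this
    # (simulated) clinician finds most convincing.
    for style in EXPLANATION_STYLES:
        print(f"{style}: mean confidence {bandit.values[style]:.2f}")
```

A full system would condition the choice on the patient case and the individual clinician rather than learning one global ranking of explanation styles, but the learning loop is the same: act, observe the user's confidence, update.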


Related research

ML Interpretability: Simple Isn't Easy (11/24/2022)
The interpretability of ML models is important, but it is not clear what...

TimberTrek: Exploring and Curating Sparse Decision Trees with Interactive Visualization (09/19/2022)
Given thousands of equally accurate machine learning (ML) models, how ca...

Machine Learning for Offensive Security: Sandbox Classification Using Decision Trees and Artificial Neural Networks (07/14/2020)
The merits of machine learning in information security have primarily fo...

Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability (09/24/2021)
During a research project in which we developed a machine learning (ML) ...

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges (03/20/2021)
Interpretability in machine learning (ML) is crucial for high stakes dec...

On Interpretability and Similarity in Concept-Based Machine Learning (02/25/2021)
Machine Learning (ML) provides important techniques for classification a...

Assisting clinical practice with fuzzy probabilistic decision trees (04/16/2023)
The need for fully human-understandable models is increasingly being rec...
