"The Human Body is a Black Box": Supporting Clinical Decision-Making with Deep Learning

11/19/2019
by Mark Sendak, et al.

Machine learning technologies are increasingly developed for use in healthcare. While research communities have focused on creating state-of-the-art models, there has been less focus on real-world implementation and the associated challenges to accuracy, fairness, accountability, and transparency that come from actual, situated use. Serious questions remain underexamined regarding how to ethically build models, interpret and explain model output, recognize and account for biases, and minimize disruptions to professional expertise and work cultures. We address this gap in the literature and provide a detailed case study covering the development, implementation, and evaluation of Sepsis Watch, a machine learning-driven tool that assists hospital clinicians in the early diagnosis and treatment of sepsis. We, the team that developed and evaluated the tool, discuss our conceptualization of the tool not as a model deployed in the world but instead as a socio-technical system requiring integration into existing social and professional contexts. Rather than focusing on model interpretability to ensure fair and accountable machine learning, we point toward four key values and practices that should be considered when developing machine learning to support clinical decision-making: rigorously define the problem in context, build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. Our work has significant implications for future research regarding mechanisms of institutional accountability and considerations for designing machine learning systems. It underscores the limits of model interpretability as a solution for ensuring transparency, accuracy, and accountability in practice, and demonstrates other means and goals for achieving FATML values in design and in practice.
