The Role of Explainability in Assuring Safety of Machine Learning in Healthcare

09/01/2021
by Yan Jia, et al.

Established approaches to assuring safety-critical systems and software are difficult to apply to systems employing machine learning (ML). In many cases, ML is used on ill-defined problems, e.g. optimising sepsis treatment, where there is no clear, pre-defined specification against which to assess validity. This problem is exacerbated by the "opaque" nature of ML, where the learnt model is not amenable to human scrutiny. Explainable AI methods have been proposed to tackle this issue by producing human-interpretable representations of ML models, which can help users to gain confidence and build trust in the ML system. However, little work has explicitly investigated the role of explainability for safety assurance in the context of ML development. This paper identifies ways in which explainable AI methods can contribute to the safety assurance of ML-based systems. It then uses a concrete ML-based clinical decision support system, concerning the weaning of patients from mechanical ventilation, to demonstrate how explainable AI methods can be employed to produce evidence supporting safety assurance. The results are also represented in a safety argument to show where, and in what way, explainable AI methods can contribute to a safety case. Overall, we conclude that explainable AI methods have a valuable role in the safety assurance of ML-based systems in healthcare, but that they are not sufficient in themselves to assure safety.
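
To make the kind of evidence the abstract describes more concrete, the sketch below applies SHAP, a widely used feature-attribution method, to a hypothetical classifier predicting readiness to wean from mechanical ventilation. The model, feature names, and data are illustrative assumptions for this sketch, not the paper's actual system; the point is only to show how per-feature attributions can be extracted and inspected as one candidate piece of safety-assurance evidence.

```python
# Illustrative sketch only: the paper does not publish its model or data.
# We train a hypothetical "ready to wean" classifier on synthetic patients
# and use SHAP feature attributions as candidate assurance evidence.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical patient features; the names are illustrative, not taken
# from the paper's ventilator-weaning system.
features = ["resp_rate", "tidal_volume", "spo2", "heart_rate"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
# Synthetic label driven mainly by two of the features.
y = (X["resp_rate"] - 0.5 * X["tidal_volume"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: per-patient, per-feature
# contributions to the model's prediction.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):      # older shap versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:            # newer shap versions: (samples, features, classes)
    sv = sv[..., 1]

# Global summary: mean absolute SHAP value per feature. For assurance,
# the question is whether the model relies on clinically plausible inputs.
importance = np.abs(sv).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In the setting the abstract describes, attributions like these would be reviewed against clinical knowledge and cited within a safety argument, rather than treated as sufficient evidence of safety on their own.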

Related research

07/21/2021
Audit, Don't Explain – Recommendations Based on a Socio-Technical Understanding of ML-Based Systems
In this position paper, I provide a socio-technical perspective on machi...

05/13/2019
What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
Translating machine learning (ML) models effectively to clinical practic...

12/21/2021
Toward Explainable AI for Regression Models
In addition to the impressive predictive power of machine learning (ML) ...

07/11/2022
From Correlation to Causation: Formalizing Interpretable Machine Learning as a Statistical Process
Explainable AI (XAI) is a necessity in safety-critical systems such as i...

01/14/2022
A causal model of safety assurance for machine learning
This paper proposes a framework based on a causal model of safety upon w...

09/14/2020
The Role of Individual User Differences in Interpretable and Explainable Machine Learning Systems
There is increased interest in assisting non-expert audiences to effecti...
