Counterfactual Explanations for Predictive Business Process Monitoring

02/24/2022
by   Tsung-Hao Huang, et al.

Predictive business process monitoring increasingly leverages sophisticated prediction models. Although sophisticated models achieve consistently higher prediction accuracy than simple models, one major drawback is their lack of interpretability, which limits their adoption in practice. We thus see growing interest in explainable predictive business process monitoring, which aims to increase the interpretability of prediction models. Existing solutions focus on giving factual explanations. While factual explanations can be helpful, humans typically do not ask why a particular prediction was made, but rather why it was made instead of another prediction, i.e., humans are interested in counterfactual explanations. While research in explainable AI has produced several promising techniques for generating counterfactual explanations, directly applying them to predictive process monitoring may deliver unrealistic explanations, because they ignore the underlying process constraints. We propose LORELEY, a counterfactual explanation technique for predictive process monitoring, which extends LORE, a recent explainable AI technique. We impose control-flow constraints on the explanation generation process to ensure realistic counterfactual explanations. Moreover, we extend LORE to enable explaining multi-class classification models. Experimental results using a real, public dataset indicate that LORELEY can approximate the prediction models with an average fidelity of 97.69% and generate realistic counterfactual explanations.
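The core idea of constraint-aware counterfactual generation can be illustrated with a small sketch: candidate traces are produced by perturbing the original trace, then filtered so that only candidates which both respect the process's control-flow constraints and flip the black-box prediction survive. Everything below (the activity names, the toy constraints, and the stand-in classifier) is a hypothetical illustration of the general principle, not LORELEY's actual algorithm.

```python
import random

ACTIVITIES = ["register", "check", "approve", "reject", "notify"]


def satisfies_control_flow(trace):
    """Toy control-flow constraints: every trace starts with 'register',
    and 'approve'/'reject' may only occur after 'check'."""
    if not trace or trace[0] != "register":
        return False
    seen_check = False
    for act in trace:
        if act == "check":
            seen_check = True
        elif act in ("approve", "reject") and not seen_check:
            return False
    return True


def toy_classifier(trace):
    """Stand-in for a black-box outcome prediction model."""
    return "positive" if "approve" in trace else "negative"


def counterfactuals(trace, n_candidates=200, seed=0):
    """Perturb the trace at random positions and keep only candidates
    that (a) respect the control-flow constraints and (b) flip the
    model's prediction -- i.e., realistic counterfactuals."""
    rng = random.Random(seed)
    original_label = toy_classifier(trace)
    results = set()
    for _ in range(n_candidates):
        candidate = list(trace)
        pos = rng.randrange(len(candidate))
        candidate[pos] = rng.choice(ACTIVITIES)
        if (candidate != list(trace)
                and satisfies_control_flow(candidate)
                and toy_classifier(candidate) != original_label):
            results.add(tuple(candidate))
    return sorted(results)


trace = ["register", "check", "reject", "notify"]
for cf in counterfactuals(trace)[:3]:
    print(cf)
```

Without the `satisfies_control_flow` filter, a generic counterfactual method could happily suggest a trace that approves a case before it was ever checked; the constraint check is what keeps the explanations realistic.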


