
XAI in the context of Predictive Process Monitoring: Too much to Reveal

by   Ghada Elkhawaga, et al.
German University in Cairo
Universität Ulm

Predictive Process Monitoring (PPM) has been integrated into process mining tools as a value-adding task: it provides useful predictions about the further execution of running business processes. To this end, machine learning (ML)-based techniques are widely employed in PPM. To gain stakeholders' trust in and advocacy of PPM predictions, eXplainable Artificial Intelligence (XAI) methods are employed to compensate for the lack of transparency of the most effective predictive models. Yet even when employed under the same settings with respect to data, preprocessing techniques, and ML models, explanations generated by different XAI methods differ profoundly. A comparison is missing that would distinguish the XAI characteristics or underlying conditions that are determinative of an explanation. To address this gap, we provide a framework for studying the effect of different PPM-related settings and ML-model-related choices on the characteristics and expressiveness of the resulting explanations. In addition, we compare how the characteristics of different explainability methods shape the resulting explanations and how well those explanations reflect the underlying model's reasoning process.
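The abstract's core observation, that different XAI methods can produce profoundly different explanations for the same model and input, can be illustrated with a minimal sketch. The snippet below is not the authors' framework; it uses a hypothetical toy model and two simplified attribution schemes (occlusion-style and gradient-times-input-style, stand-ins for the families that methods like SHAP and saliency maps belong to) to show that the two explanations disagree on the same prediction whenever the model is non-linear.

```python
# Hedged sketch (not the paper's framework): two attribution methods
# applied to the same model and instance can yield different feature
# importance scores, which is the comparison problem the paper studies.

def model(x):
    # hypothetical toy "predictive model" with a non-linear term
    return x[0] ** 2 + x[1]

def occlusion_attribution(f, x, baseline=0.0):
    # Attribute each feature by the prediction drop when that feature
    # is replaced with a baseline value (occlusion-style explanation).
    full = f(x)
    attrs = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline
        attrs.append(full - f(occluded))
    return attrs

def gradient_input_attribution(f, x, eps=1e-6):
    # Attribute each feature by its local sensitivity (finite-difference
    # gradient) multiplied by the feature value (gradient*input-style).
    attrs = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grad = (f(bumped) - f(x)) / eps
        attrs.append(grad * x[i])
    return attrs

x = [2.0, 3.0]
print(occlusion_attribution(model, x))       # [4.0, 3.0]
print(gradient_input_attribution(model, x))  # approx. [8.0, 3.0]
```

Both methods agree on the linear feature `x[1]` but assign very different importance to the quadratic feature `x[0]` (4.0 vs. roughly 8.0), even though the model, data, and instance are identical, which is exactly why a systematic comparison across XAI methods and PPM settings is needed.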

