Interpretable Summaries of Black Box Incident Triaging with Subgroup Discovery

08/06/2021
by Youcef Remil, et al.

The need for predictive maintenance comes with an increasing number of incidents reported by monitoring systems and equipment/software users. On the front line, on-call engineers (OCEs) must quickly assess the severity of an incident and decide which service to contact for corrective actions. Several predictive models have been proposed to automate these decisions, but the most efficient ones are opaque (i.e., black box), which strongly limits their adoption. In this paper, we propose an efficient black-box model based on 170K incidents reported to our company over the last 7 years and emphasize the need to automate triage when incidents are massively reported on thousands of servers running our product, an ERP. Recent developments in eXplainable Artificial Intelligence (XAI) help in providing global explanations of the model, but also, and most importantly, local explanations for each model prediction/outcome. Unfortunately, providing a human with an explanation for every outcome is not conceivable given the large number of daily predictions. To address this problem, we propose an original data-mining method rooted in Subgroup Discovery, a pattern mining technique with the natural ability to group objects that share similar explanations of their black-box predictions and to provide a description of each group. We evaluate this approach and present preliminary results that give us good hope for effective adoption by OCEs. We believe this approach provides a new way to address the problem of model-agnostic outcome explanation.
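The core idea — compute a local explanation for every prediction, then group instances whose explanations agree and describe each group with an interpretable condition — can be illustrated with a toy sketch. This is not the paper's implementation: the data is synthetic, the "black box" is a stand-in logistic model, the local explanation is a simple per-feature contribution (weight times feature value), and the grouping rule is a crude proxy for real subgroup discovery.

```python
# Toy sketch (assumed, not the paper's method): group instances by similar
# local explanations of a black-box stand-in, then describe each group.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "incidents": two numeric features; the label depends on feature 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.2 * rng.normal(size=200) > 0).astype(int)

# Stand-in black box: a logistic model fit by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Local explanation per instance: per-feature contribution w_j * x_ij
# (in practice one would use a model-agnostic explainer such as SHAP).
contrib = X * w                                   # shape (200, 2)

# Crude grouping key: which feature dominates the explanation, and the
# sign of its contribution. Real subgroup discovery searches a pattern
# language for such descriptions instead of hard-coding one.
dominant = np.argmax(np.abs(contrib), axis=1)
sign = np.sign(contrib[np.arange(len(X)), dominant])

groups = {}
for i, key in enumerate(zip(dominant.tolist(), sign.tolist())):
    groups.setdefault(key, []).append(i)

# Describe each group by an interval condition on its dominant feature.
for (j, s), idx in sorted(groups.items()):
    vals = X[idx, j]
    print(f"feature_{j} in [{vals.min():.2f}, {vals.max():.2f}] "
          f"(contribution {'positive' if s > 0 else 'negative'}, n={len(idx)})")
```

Each printed line plays the role of a subgroup description: instead of handing the OCE one explanation per incident, similar incidents are summarized by a single readable condition together with the shared direction of the dominant feature's influence.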


