GLIME: A new graphical methodology for interpretable model-agnostic explanations

07/21/2021
by   Zoumpolia Dikopoulou, et al.

Explainable artificial intelligence (XAI) is an emerging domain in which a set of processes and tools allow humans to better comprehend the decisions generated by black-box models. However, most available XAI tools are limited to simple explanations that mainly quantify the impact of individual features on the model's output. As a result, human users cannot see how features relate to one another in making a prediction, and the inner workings of the trained model remain hidden. This paper contributes a novel graphical explainability tool that not only indicates the significant features of the model but also reveals the conditional relationships between features and the inference, capturing both the direct and indirect impact of features on the model's decision. The proposed XAI methodology, termed gLIME, provides graphical model-agnostic explanations at either the global scale (for the entire dataset) or the local scale (for specific data points). It combines local interpretable model-agnostic explanations (LIME) with the graphical least absolute shrinkage and selection operator (GLASSO) to produce undirected Gaussian graphical models. Regularization shrinks small partial correlation coefficients to zero, yielding sparser and more interpretable graphical explanations. Two well-known classification datasets (BIOPSY and OAI) were selected to confirm the superiority of gLIME over LIME in terms of both robustness and consistency over multiple permutations. Specifically, gLIME achieved greater stability of feature importance across the two datasets (76% compared to 52%). gLIME thus extends the functionality of the current state of the art in XAI by providing informative, graphically presented explanations that could help unlock black boxes.
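To make the pipeline concrete, the following is a minimal sketch of the idea described above, not the authors' implementation: perturb the neighborhood of an instance in LIME fashion, append the black-box prediction as an extra variable, and fit a graphical lasso to obtain a sparse partial-correlation graph. Here scikit-learn's `GraphicalLasso` stands in for GLASSO, a random forest stands in for the black box, and the regularization strength `alpha` and perturbation scale are illustrative choices.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy black-box classifier (stand-in for any opaque model).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME-style local neighborhood: Gaussian perturbations around one instance.
rng = np.random.default_rng(0)
x0 = X[0]
Z = x0 + rng.normal(scale=X.std(axis=0), size=(1000, X.shape[1]))
p = model.predict_proba(Z)[:, 1]  # black-box output on the perturbed points

# Joint matrix: features plus the model output as an extra graph node.
J = np.column_stack([Z, p])
J = (J - J.mean(axis=0)) / J.std(axis=0)  # standardize before GLASSO

# Sparse precision matrix via graphical lasso (alpha is illustrative).
gl = GraphicalLasso(alpha=0.1).fit(J)
prec = gl.precision_

# Convert precision to partial correlations; nonzero off-diagonal
# entries are the edges of the undirected Gaussian graphical model.
d = np.sqrt(np.diag(prec))
pcorr = -prec / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)
edges = np.abs(pcorr) > 1e-4
```

Edges incident to the final node (the model output) indicate features with a direct conditional association with the prediction, while paths through other feature nodes capture the indirect effects the abstract refers to; raising `alpha` shrinks more partial correlations to zero and yields a sparser, more interpretable graph.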

