Black Box Model Explanations and the Human Interpretability Expectations – An Analysis in the Context of Homicide Prediction

10/19/2022
by José Ribeiro, et al.

Strategies based on Explainable Artificial Intelligence (XAI) have promoted better human interpretability of the results of black box machine learning models. The XAI measures currently in use (Ciu, Dalex, Eli5, Lofo, Shap, and Skater) provide various forms of explanations, including global rankings of attribute relevance. Current research points to the need for further studies on how these explanations meet the Interpretability Expectations of human experts, and on how they can be used to make a model even more transparent while taking into account the specific complexities of the model and dataset being analyzed, as well as the important human factors of sensitive real-world contexts/problems. Intending to shed light on the explanations generated by XAI measures and their interpretability, this research addresses a real-world classification problem related to homicide prediction, duly endorsed by the scientific community: it replicated the problem's proposed black box model, used 6 different XAI measures to generate explanations, and asked 6 different human experts to generate what this research refers to as Interpretability Expectations (IE). The results were computed by means of comparative analysis and identification of relationships among all the attribute ranks produced by the XAI measures and by the human experts. The results make it possible to answer: "Do the different XAI measures generate similar explanations for the proposed problem?", "Are the interpretability expectations generated by different human experts similar?", "Do the explanations generated by XAI measures meet the interpretability expectations of human experts?" and "Can Interpretability Explanations and Expectations work together?", all in the context of homicide prediction.
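For readers curious how such a comparison can be set up in practice, the sketch below is a minimal, hypothetical illustration rather than the authors' code: it trains a black box classifier on placeholder data, derives a global attribute-relevance ranking with SHAP (standing in for the six XAI measures used in the study), and measures agreement with a stand-in expert ranking using Kendall's tau. The feature names, data, and expert ranking are invented for illustration only.

import numpy as np
import pandas as pd
import shap
from scipy.stats import kendalltau
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the homicide-prediction dataset used in the study.
rng = np.random.default_rng(0)
features = ["victim_age", "weapon_type", "prior_offenses", "time_of_day", "district"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
y = rng.integers(0, 2, size=500)

# A black box classifier playing the role of the replicated model.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global attribute-relevance ranking from SHAP: mean absolute SHAP value per attribute.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):      # older SHAP versions: one array per class
    sv = sv[1]
elif np.ndim(sv) == 3:        # newer SHAP versions: (samples, features, classes)
    sv = sv[:, :, 1]
importance = np.abs(sv).mean(axis=0)
xai_rank = pd.Series(importance, index=features).rank(ascending=False)

# A stand-in expert ranking (1 = most relevant) playing the role of an
# Interpretability Expectation; in the study these come from human experts.
expert_rank = pd.Series({"victim_age": 2, "weapon_type": 1, "prior_offenses": 3,
                         "time_of_day": 5, "district": 4})

# Agreement between the XAI ranking and the expert ranking.
tau, p_value = kendalltau(xai_rank[features], expert_rank[features])
print(f"Kendall tau between XAI and expert ranks: {tau:.2f} (p = {p_value:.3f})")

In the same spirit, rankings produced by different XAI measures, or by different experts, can be compared pairwise with the same rank-correlation statistic.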

Related research

12/23/2021
AcME – Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box
In the context of human-in-the-loop Machine Learning applications, like ...

06/24/2022
Analyzing the Effects of Classifier Lipschitzness on Explainers
Machine learning methods are getting increasingly better at making predi...

06/01/2022
Assessing the trade-off between prediction accuracy and interpretability for topic modeling on energetic materials corpora
As the amount and variety of energetics research increases, machine awar...

01/18/2021
Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation
With the wide use of deep neural networks (DNN), model interpretability ...

05/30/2022
Fooling SHAP with Stealthily Biased Sampling
SHAP explanations aim at identifying which features contribute the most ...

07/07/2019
Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models
In many contexts, it can be useful for domain experts to understand to w...

07/16/2019
Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications
The presumed data owners' right to explanations brought about by the Gen...
