Black Box Model Explanations and the Human Interpretability Expectations – An Analysis in the Context of Homicide Prediction

10/19/2022
by José Ribeiro et al.

Strategies based on Explainable Artificial Intelligence (XAI) have promoted better human interpretability of the results of black box machine learning models. The XAI measures currently in use (CIU, Dalex, ELI5, LOFO, SHAP, and Skater) provide various forms of explanations, including global rankings of attribute relevance. Current research points to the need for further studies on how these explanations meet the Interpretability Expectations of human experts, and on how they can be used to make a model more transparent while accounting for the specific complexities of the model and dataset under analysis, as well as important human factors of sensitive real-world contexts/problems. Intending to shed light on the explanations generated by XAI measures and their interpretability, this research addresses a real-world classification problem related to homicide prediction, duly endorsed by the scientific community: it replicates the problem's proposed black box model, uses 6 different XAI measures to generate explanations, and engages 6 different human experts to generate what this research refers to as Interpretability Expectations (IE). The results were computed by means of comparative analyses and the identification of relationships among all the attribute rankings produced by the XAI measures and by the human experts. The results allow for answering the following questions: "Do the different XAI measures generate similar explanations for the proposed problem?", "Are the interpretability expectations generated by different human experts similar?", "Do the explanations generated by XAI measures meet the interpretability expectations of human experts?" and "Can Interpretability Explanations and Expectations work together?", all concerning the context of homicide prediction.
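
As a concrete illustration of the kind of comparison the abstract describes, the sketch below shows how a global attribute-relevance ranking might be derived from one of the named XAI measures (SHAP) and compared against a human expert's expected ranking using Spearman's rank correlation. This is a minimal sketch, not the authors' code: the model, data, feature names, and expert ranking are all illustrative placeholders, and the paper's actual comparison methodology may differ.

# Minimal sketch (not the authors' code): derive a global attribute-relevance
# ranking from SHAP values and compare it against a hypothetical human expert's
# expected ranking via Spearman's rank correlation.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["attr_a", "attr_b", "attr_c", "attr_d"]  # hypothetical attributes
X = rng.normal(size=(200, len(feature_names)))            # placeholder data
y = rng.integers(0, 2, size=200)                          # placeholder binary labels

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global relevance of each attribute: mean absolute SHAP value over the sample.
# (A binary GBM yields one SHAP array of shape (n_samples, n_features); other
# model types or shap versions may return per-class arrays instead.)
explainer = shap.TreeExplainer(model)
sv = np.asarray(explainer.shap_values(X))
global_importance = np.abs(sv).mean(axis=0)

# Convert importances to a ranking (rank 1 = most relevant attribute).
xai_rank = (-global_importance).argsort().argsort() + 1

# Hypothetical expert ranking of the same attributes (1 = most relevant).
expert_rank = np.array([2, 1, 4, 3])

rho, p_value = spearmanr(xai_rank, expert_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

A rank correlation near 1 would indicate that the XAI measure's global explanation aligns with the expert's interpretability expectation; a value near 0 or negative would indicate disagreement of the kind the research questions above probe.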

