Global Explanation of Tree-Ensembles Models Based on Item Response Theory

10/18/2022
by José Ribeiro et al.

Explainable Artificial Intelligence (XAI) studies and develops techniques to explain black-box models, that is, models that provide limited self-explanation of their predictions. In recent years, XAI researchers have formalized proposals and developed new measures to explain how these models arrive at specific predictions. Previous studies have found evidence of how the complexity of a model (dataset and algorithm) affects the global explanations generated by the XAI measures Ciu, Dalex, Eli5, Lofo, Shap, and Skater, suggesting that there is room for a new XAI measure that builds on model complexity. This research therefore proposes eXirt, a measure based on Item Response Theory (IRT) that is capable of explaining tree-ensemble models by exploiting the properties of IRT. To this end, a benchmark was created using 40 different datasets and 2 different algorithms (Random Forest and Gradient Boosting), generating 6 explainability ranks from known XAI measures along with 1 data-purity rank and 1 eXirt rank, that is, 8 global ranks per model and 640 ranks in total. The results show that eXirt produced ranks different from those of the other measures, demonstrating that the proposed methodology yields global explanations of tree-ensemble models that had not been explored before, both for models that are harder to explain and for those that are easier.
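To illustrate the general idea behind an IRT-style global explanation of a tree ensemble (this is a minimal sketch, not the authors' eXirt implementation), the code below builds a binary response matrix in which each "respondent" is a variant of a trained model with one feature permuted and each "item" is a test instance, then ranks features by how much permuting them lowers the respondent's estimated ability. Classical item statistics (proportion correct and point-biserial discrimination) stand in for full IRT parameter estimation, and all function and variable names are illustrative assumptions.

```python
# Sketch only: classical-test-theory proxies replace full IRT estimation,
# and the ranking heuristic is an assumption, not the published eXirt method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def responses(X_eval):
    # One row of the response matrix: 1 where the test instance is classified correctly.
    return (model.predict(X_eval) == y_te).astype(int)

# Row 0: unperturbed model; rows 1..d: variants with feature j permuted.
rows = [responses(X_te)]
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    rows.append(responses(X_perm))
R = np.vstack(rows)                      # shape: (d + 1, n_test)

# Classical proxies for IRT item parameters:
# difficulty ~ 1 - proportion correct, discrimination ~ point-biserial correlation.
difficulty = 1.0 - R.mean(axis=0)
total = R.sum(axis=1)
disc = np.array([np.corrcoef(R[:, k], total)[0, 1] for k in range(R.shape[1])])
disc = np.nan_to_num(disc)

# Ability proxy: discrimination-weighted score on the harder items.
weights = np.clip(disc, 0, None) * difficulty
ability = R @ weights

# Global rank: features whose permutation lowers ability the most are most relevant.
drop = ability[0] - ability[1:]
rank = np.argsort(-drop)
print("feature rank (most to least relevant):", rank[:10])
```

In a full IRT treatment, the difficulty, discrimination, and guessing parameters of each item would be estimated with a 2PL/3PL model rather than these classical proxies.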

Related research

07/06/2021 - Does Dataset Complexity Matters for Model Explainers?
Strategies based on Explainable Artificial Intelligence - XAI have emerg...

10/04/2022 - Explanation-by-Example Based on Item Response Theory
Intelligent systems that use Machine Learning classification algorithms ...

01/30/2023 - Explaining Dataset Changes for Semantic Data Versioning with Explain-Da-V (Technical Report)
In multi-user environments in which data science and analysis is collabo...

01/12/2022 - SLISEMAP: Explainable Dimensionality Reduction
Existing explanation methods for black-box supervised learning models ge...

09/07/2020 - Representativity and Consistency Measures for Deep Neural Network Explanations
The adoption of machine learning in critical contexts requires a reliabl...

01/05/2023 - Instance-based Explanations for Gradient Boosting Machine Predictions with AXIL Weights
We show that regression predictions from linear and tree-based models ca...

09/11/2020 - TREX: Tree-Ensemble Representer-Point Explanations
How can we identify the training examples that contribute most to the pr...
