Finding Minimum-Cost Explanations for Predictions made by Tree Ensembles

03/16/2023
by John Törnblom, et al.

The ability to explain why a machine learning model arrives at a particular prediction is crucial when the model is used as decision support by human operators of critical systems. The provided explanations must be provably correct and are preferably free of redundant information; such explanations are called minimal explanations. In this paper, we aim to find explanations for predictions made by tree ensembles that are not only minimal, but also minimum with respect to a cost function. To this end, we first present a highly efficient oracle that can determine the correctness of explanations, surpassing the runtime performance of current state-of-the-art alternatives by several orders of magnitude when computing minimal explanations. Secondly, we adapt an algorithm called MARCO from related work (calling our adaptation m-MARCO) to compute a single minimum explanation per prediction, and demonstrate an overall speedup factor of two compared to the MARCO algorithm, which enumerates all minimal explanations. Finally, we study the explanations obtained from a range of use cases, leading to further insights into their characteristics. In particular, we observe that in several cases, there are more than 100,000 minimal explanations to choose from for a single prediction. In these cases, only a small portion of the minimal explanations are also minimum, and the minimum explanations are significantly less verbose, which motivates the aim of this work.
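To illustrate the role such a correctness oracle plays, the following sketch shows the standard deletion-based procedure for shrinking a set of features to a minimal explanation. The oracle `is_valid` here is a hypothetical stand-in (not the paper's implementation): it is assumed to return `True` iff fixing the given feature subset guarantees the prediction.

```python
def minimal_explanation(features, is_valid):
    """Shrink `features` to a minimal explanation: a subset that is
    still valid, but becomes invalid if any single feature is dropped.
    `is_valid` is the correctness oracle (assumed, not the paper's)."""
    explanation = list(features)
    for f in list(features):
        candidate = [g for g in explanation if g != f]
        if is_valid(candidate):  # f is redundant, so drop it
            explanation = candidate
    return explanation


# Toy oracle for illustration: a subset explains the prediction iff it
# contains feature "a", or both "b" and "c".
def toy_oracle(subset):
    return "a" in subset or ("b" in subset and "c" in subset)


print(minimal_explanation(["a", "b", "c"], toy_oracle))  # → ['b', 'c']
```

Note that the result depends on the order in which features are tried, which is precisely why many distinct minimal explanations can exist for one prediction; finding a *minimum*-cost one, as this paper does, requires more than a single greedy pass.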


