Explaining Predictions from Tree-based Boosting Ensembles

07/04/2019
by Ana Lucic, et al.

Understanding how "black-box" models arrive at their predictions has sparked significant interest both within and outside the AI community. We address this by generating local explanations for individual predictions of tree-based ensembles, specifically Gradient Boosting Decision Trees (GBDTs). Given a correctly predicted instance in the training set, we wish to generate a counterfactual explanation for it: the minimal perturbation of the instance such that the prediction flips to the opposite class. Most existing methods for counterfactual explanations are (1) model-agnostic, and therefore ignore the structure of the original model, and/or (2) build a surrogate model on top of the original model, which is not guaranteed to represent the original model accurately. A method exists specifically for random forests; we wish to extend it to GBDTs, which requires accounting for (1) the sequential dependency between trees and (2) the fact that each tree is trained on negative gradients rather than the original labels.
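The counterfactual objective described above (the minimal perturbation that flips the predicted class) can be illustrated with a deliberately naive, model-agnostic baseline of the kind the abstract contrasts against: slide the instance toward the nearest training point the model predicts as the opposite class, then binary-search for the decision boundary. This sketch is an assumption for illustration only — the `nearest_cf` helper and the toy dataset are invented here, and this is not the paper's proposed method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

def nearest_cf(model, X_train, x, n_steps=30):
    """Naive counterfactual search (illustrative only, not the paper's method):
    move x toward the nearest training instance that the model predicts as the
    opposite class, then binary-search along the segment for the class flip."""
    pred = model.predict(x.reshape(1, -1))[0]
    # Training points the model assigns to the other class
    opposite = X_train[model.predict(X_train) != pred]
    target = opposite[np.argmin(np.linalg.norm(opposite - x, axis=1))]
    lo, hi = 0.0, 1.0  # x + hi*(target - x) is always predicted as the other class
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        if model.predict((x + mid * (target - x)).reshape(1, -1))[0] == pred:
            lo = mid  # still on the original side of the boundary
        else:
            hi = mid  # already flipped; tighten toward a smaller perturbation
    return x + hi * (target - x)

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
gbdt = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

x = X[0]
x_cf = nearest_cf(gbdt, X, x)
print("original class:", gbdt.predict(x.reshape(1, -1))[0],
      "counterfactual class:", gbdt.predict(x_cf.reshape(1, -1))[0])
```

A search like this treats the GBDT as a black box, probing it only through `predict`. The point of the abstract is that one can do better by exploiting the trees themselves — which for GBDTs means handling the sequential dependency between trees and the fact that each tree fits negative gradients rather than the labels.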


Related research

- 11/27/2019 · Actionable Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles
  "Counterfactual explanations help users understand why machine learned mo..."
- 01/05/2023 · Instance-based Explanations for Gradient Boosting Machine Predictions with AXIL Weights
  "We show that regression predictions from linear and tree-based models ca..."
- 09/11/2020 · TREX: Tree-Ensemble Representer-Point Explanations
  "How can we identify the training examples that contribute most to the pr..."
- 09/14/2021 · Behavior of k-NN as an Instance-Based Explanation Method
  "Adoption of DL models in critical areas has led to an escalating demand ..."
- 07/03/2019 · Interpretable Counterfactual Explanations Guided by Prototypes
  "We propose a fast, model agnostic method for finding interpretable count..."
- 09/30/2021 · XPROAX: Local explanations for text classification with progressive neighborhood approximation
  "The importance of the neighborhood for training a local surrogate model ..."
- 11/20/2019 · LionForests: Local Interpretation of Random Forests through Path Selection
  "Towards a future where machine learning systems will integrate into ever..."
