From unbiased MDI Feature Importance to Explainable AI for Trees

03/26/2020
by Markus Loecher, et al.

We attempt to give a unifying view of the various recent attempts to (i) improve the interpretability of tree-based models and (ii) debias the default variable-importance measure in random forests, Gini importance. In particular, we demonstrate a common thread among the out-of-bag based bias-correction methods and their connection to local explanations for trees. In addition, we point out a bias caused by the inclusion of in-bag data in the newly developed explainable-AI-for-trees algorithms.
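The in-bag bias of Gini (MDI) importance discussed above can be illustrated even for a single split search: a feature with many distinct values can find a large impurity decrease on the training data purely by chance. Below is a minimal pure-Python sketch (all function and variable names are hypothetical, not from the paper) comparing the best in-bag Gini decrease for two equally uninformative features, one binary and one with a unique value per sample.

```python
import random

def gini(labels):
    """Gini impurity of a binary label list: 2*p*(1-p)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split_decrease(values, labels):
    """Largest in-bag Gini impurity decrease over all splits 'value <= t'."""
    n = len(labels)
    parent = gini(labels)
    best = 0.0
    for t in sorted(set(values))[:-1]:
        left = [y for v, y in zip(values, labels) if v <= t]
        right = [y for v, y in zip(values, labels) if v > t]
        dec = parent - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)
        best = max(best, dec)
    return best

random.seed(0)
n = 200
y = [random.randint(0, 1) for _ in range(n)]           # pure-noise labels
x_binary = [random.randint(0, 1) for _ in range(n)]    # noise, 2 distinct values
x_id = list(range(n))                                  # noise, n distinct values
random.shuffle(x_id)

# Both features are unrelated to y, yet the high-cardinality feature
# typically finds a much larger impurity decrease on the in-bag data,
# which is exactly the bias that inflates its MDI importance.
print("binary feature:", best_split_decrease(x_binary, y))
print("id feature:    ", best_split_decrease(x_id, y))
```

Out-of-bag based corrections address this by evaluating the impurity decrease on data not used to choose the split, where such chance splits no longer pay off.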

Related research

- Unbiased variable importance for random forests (03/04/2020)
- A Debiased MDI Feature Importance Measure for Random Forests (06/26/2019)
- Variable importance in binary regression trees and forests (11/15/2007)
- Unbiased Measurement of Feature Importance in Tree-Based Methods (03/12/2019)
- Data-driven advice for interpreting local and global model predictions in bioinformatics problems (08/13/2021)
- Unbiased Gradient Boosting Decision Tree with Unbiased Feature Importance (05/18/2023)
