From unbiased MDI Feature Importance to Explainable AI for Trees

03/26/2020
by Markus Loecher, et al.

We attempt to give a unifying view of the various recent attempts to (i) improve the interpretability of tree-based models and (ii) debias the default variable-importance measure in random forests, the Gini importance. In particular, we demonstrate a common thread among the out-of-bag based bias-correction methods and their connection to local explanations for trees. In addition, we point out a bias caused by the inclusion of in-bag data in the newly developed explainable-AI-for-trees algorithms.
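The in-bag bias mentioned above can be illustrated with a minimal sketch (not the paper's own method): the default Gini/MDI importance, computed from in-bag data, assigns nonzero importance even to a pure-noise feature, whereas an importance evaluated on held-out data (here, scikit-learn's permutation importance as a stand-in for an out-of-bag correction) does not. The data-generating setup is entirely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
x0 = rng.integers(0, 2, n)        # informative binary feature
x1 = rng.normal(size=n)           # pure noise, but continuous (many split points)
y = x0                            # the label depends only on x0
X = np.column_stack([x0, x1])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Default MDI (Gini) importance: computed from in-bag impurity decreases,
# so the noise feature x1 typically still receives positive importance.
print("MDI (Gini) importance:      ", rf.feature_importances_)

# Importance evaluated on held-out data: the noise feature drops to ~0.
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
print("Held-out permutation import.:", perm.importances_mean)
```

This contrast is only a proxy for the out-of-bag corrections surveyed in the paper, but it shows why importances derived purely from in-bag data can mislead.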
