Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning

08/21/2020
by Yiming Li, et al.

Interpretability and effectiveness are two essential requirements for adopting machine learning methods in practice. In this paper, we propose a knowledge-distillation-based extension of decision trees, dubbed rectified decision trees (ReDT), to explore the possibility of fulfilling both requirements simultaneously. Specifically, we extend the splitting criteria and the stopping condition of standard decision trees so that they can be trained with soft labels while preserving deterministic splitting paths. We then train the ReDT on soft labels distilled from a well-trained teacher model through a novel jackknife-based method. Accordingly, ReDT preserves the interpretable nature of decision trees while achieving relatively good performance. The effectiveness of adopting soft labels instead of hard ones is analyzed both empirically and theoretically. Surprisingly, experiments indicate that introducing soft labels also reduces model size compared with standard decision trees, measured in total nodes and rules, which is an unexpected gift from the "dark knowledge" distilled from the teacher model.
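The abstract does not spell out the exact form of the soft-label splitting criterion, so the following is only a minimal sketch of the general idea: replace hard class counts with the accumulated soft-label mass in each node, compute a Gini-style impurity from that distribution, and keep the split test itself on raw feature values so routing stays deterministic. The function names (soft_label_impurity, split_gain) and the choice of Gini impurity are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def soft_label_impurity(soft_labels):
    """Gini-style impurity of a node, computed from soft labels.

    soft_labels: (n_samples, n_classes) array of teacher probabilities.
    Each sample contributes fractional mass to every class instead of a
    single hard count; the node's class distribution is the mean of its
    soft labels. (Illustrative assumption, not the paper's criterion.)
    """
    p = soft_labels.mean(axis=0)          # node-level class distribution
    return 1.0 - np.sum(p ** 2)           # Gini impurity of that distribution

def split_gain(soft_labels, feature_values, threshold):
    """Impurity decrease of the deterministic split `x <= threshold`.

    The routing of samples depends only on the raw feature values, so
    splitting paths remain deterministic; only the impurity evaluation
    uses the soft labels.
    """
    left = feature_values <= threshold
    right = ~left
    if left.sum() == 0 or right.sum() == 0:
        return 0.0
    n = len(feature_values)
    parent = soft_label_impurity(soft_labels)
    children = (left.sum() / n) * soft_label_impurity(soft_labels[left]) \
             + (right.sum() / n) * soft_label_impurity(soft_labels[right])
    return parent - children
```

In practice, the soft_labels array would come from a teacher model's predicted class probabilities (e.g. a predict_proba call on a held-out or resampled portion of the training data); the paper's jackknife-based distillation procedure is not reproduced in this sketch.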
