Rectified Decision Trees: Towards Interpretability, Compression and Empirical Soundness

03/14/2019
by Jiawang Bai, et al.

Obtaining a model that combines good interpretability with strong performance has long been an important research topic. In this paper, we propose rectified decision trees (ReDT), a knowledge-distillation-based rectification of decision trees that offers high interpretability, small model size, and empirical soundness. Specifically, we extend the impurity calculation and the pure-node stopping condition of the classical decision tree so that soft labels generated by a well-trained teacher model can be used during both training and prediction. Notably, to acquire these soft labels we propose a new method based on multiple cross-validation, which reduces the effects of randomness and overfitting. These techniques ensure that ReDT retains excellent interpretability, achieves even fewer nodes than a standard decision tree (i.e., better compression), and delivers relatively good performance. Moreover, in contrast to traditional knowledge distillation, ReDT does not require back-propagation through the student model, making it an attempt at a new form of knowledge distillation. Extensive experiments demonstrate the superiority of ReDT in interpretability, compression, and empirical soundness.
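The abstract does not give the paper's exact formulas, but the core idea of replacing hard labels with teacher soft labels in the impurity and stopping criteria can be illustrated. Below is a minimal Python sketch under that assumption; the names soft_gini and is_pure, and the 0.95 threshold, are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

def soft_gini(soft_labels):
    """Gini impurity computed from soft labels (one row of teacher class
    probabilities per sample) instead of hard one-hot labels.

    The node's class distribution is the mean of the soft labels, mirroring
    how a classical tree uses empirical class frequencies at a node.
    """
    p = soft_labels.mean(axis=0)      # node-level class distribution
    return 1.0 - np.sum(p ** 2)

def is_pure(soft_labels, threshold=0.95):
    """Relaxed 'pure ending' condition (hypothetical threshold): stop
    splitting once one class dominates the averaged soft distribution."""
    return soft_labels.mean(axis=0).max() >= threshold

# Example: teacher soft labels for four samples over two classes.
soft = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.7, 0.3],
                 [0.2, 0.8]])
print(soft_gini(soft))   # 0.455: node impurity under soft labels
print(is_pure(soft))     # False: no class reaches the 0.95 threshold
```

With hard one-hot labels these reduce to the classical Gini impurity and pure-node check, which is why the soft-label version can be read as an extension of the standard tree rather than a different algorithm.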

Related research

08/21/2020 - Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning
Interpretability and effectiveness are two essential and indispensable r...

12/28/2018 - Improving the Interpretability of Deep Neural Networks with Knowledge Distillation
Deep Neural Networks have achieved huge success at a wide spectrum of ap...

06/09/2022 - Distillation Decision Tree
Black-box machine learning models are criticized as lacking interpretabi...

06/16/2022 - Explainable Models via Compression of Tree Ensembles
Ensemble models (bagging and gradient-boosting) of relational decision t...

02/12/2023 - Efficient Fraud Detection using Deep Boosting Decision Trees
Fraud detection is to identify, monitor, and prevent potentially fraudul...

05/15/2018 - Improving Knowledge Distillation with Supporting Adversarial Samples
Many recent works on knowledge distillation have provided ways to transf...

09/12/2023 - Level Up: Private Non-Interactive Decision Tree Evaluation using Levelled Homomorphic Encryption
As machine learning as a service continues gaining popularity, concerns ...
