Improving the Interpretability of Deep Neural Networks with Knowledge Distillation

12/28/2018
by Xuan Liu, et al.

Deep Neural Networks have achieved huge success across a wide spectrum of applications, from language modeling and computer vision to speech recognition. However, good performance alone is no longer sufficient for practical deployment: interpretability is demanded in cases involving ethics and in mission-critical applications. The complexity of Deep Neural Networks makes it hard to understand and reason about their predictions, which hinders their further adoption. To tackle this problem, we apply the Knowledge Distillation technique to distill Deep Neural Networks into decision trees, attaining good performance and interpretability simultaneously. We formulate the task as a multi-output regression problem, and our experiments demonstrate that the student model achieves significantly better accuracy (by about 1% to 5%) than vanilla decision trees at the same tree depth. The experiments are implemented on the TensorFlow platform to make the approach scalable to big datasets. To the best of our knowledge, we are the first to distill Deep Neural Networks into vanilla decision trees on multi-class datasets.
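The core recipe the abstract describes — train a neural network teacher, then fit a multi-output regression tree to the teacher's soft class probabilities — can be sketched as follows. This is a minimal illustration using scikit-learn rather than the paper's TensorFlow implementation; the dataset, teacher architecture, and tree depth are illustrative choices, not the authors' exact setup.

```python
# Sketch: distilling a neural network into a decision tree by treating the
# teacher's class probabilities as multi-output regression targets.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Teacher: a small neural network standing in for a deep model.
teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
teacher.fit(X_tr, y_tr)

# Soft targets: the teacher's predicted probabilities, one column per class.
soft = teacher.predict_proba(X_tr)  # shape (n_samples, n_classes)

depth = 5  # compare student and baseline at the same tree depth

# Student: a multi-output regression tree fit to the soft targets;
# its predicted class is the argmax over the regressed probabilities.
student = DecisionTreeRegressor(max_depth=depth, random_state=0)
student.fit(X_tr, soft)
student_pred = student.predict(X_te).argmax(axis=1)

# Baseline: a vanilla classification tree of the same depth, fit on hard labels.
vanilla = DecisionTreeClassifier(max_depth=depth, random_state=0)
vanilla.fit(X_tr, y_tr)

print("distilled student accuracy:", accuracy_score(y_te, student_pred))
print("vanilla tree accuracy:     ", accuracy_score(y_te, vanilla.predict(X_te)))
```

The resulting student is a single depth-bounded tree, so its decision path for any input can be read off directly, while the soft targets let it inherit some of the teacher's learned class similarities.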

