
Regional Tree Regularization for Interpretability in Black Box Models

by Mike Wu, et al.
Universität Basel
Tufts University
Università di Siena
Harvard University
Stanford University

The lack of interpretability remains a barrier to the adoption of deep neural networks. Recently, tree regularization has been proposed to encourage deep neural networks to resemble compact, axis-aligned decision trees without significant compromises in accuracy. However, it may be unreasonable to expect that a single tree can predict well across all possible inputs. In this work, we propose regional tree regularization, which encourages a deep model to be well-approximated by several separate decision trees specific to predefined regions of the input space. Practitioners can define regions based on domain knowledge of contexts where different decision-making logic is needed. Across many datasets, our approach delivers more accurate predictions than simply training separate decision trees for each region, while producing simpler explanations than other neural net regularization schemes without sacrificing predictive power. Two healthcare case studies in critical care and HIV demonstrate how experts can improve understanding of deep models via our approach.
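The core idea above can be illustrated with a small sketch: approximate a black-box model's predictions with one compact decision tree per predefined input region, and treat the trees' size (here, average decision-path length) as an interpretability penalty. This is an illustrative reconstruction, not the paper's implementation: the black-box function, region boundaries, tree depth, and penalty form are all assumptions chosen for the example.

```python
# Hedged sketch of regional tree regularization's measurement step:
# fit one small, axis-aligned decision tree per predefined region to
# the black-box model's predictions, then score each tree's fidelity
# and complexity. All specifics here are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))

def black_box(x):
    # Stand-in for a trained neural net whose decision logic differs
    # between the two halves of the input space.
    return np.where(x[:, 0] < 0,
                    (x[:, 1] > 0).astype(int),
                    (x[:, 0] + x[:, 1] > 0.5).astype(int))

y_hat = black_box(X)

# Predefined regions (domain knowledge): left vs. right half-space.
regions = [X[:, 0] < 0, X[:, 0] >= 0]

penalty = 0.0
for mask in regions:
    tree = DecisionTreeClassifier(max_depth=3).fit(X[mask], y_hat[mask])
    # Average decision-path length: a proxy for explanation size.
    path_len = tree.decision_path(X[mask]).sum(axis=1).mean()
    # Fidelity: how well the small tree mimics the black box here.
    fidelity = tree.score(X[mask], y_hat[mask])
    penalty += path_len
    print(f"region fidelity={fidelity:.2f}, avg path length={path_len:.2f}")

print(f"regional tree penalty: {penalty:.2f}")
```

In the full method this penalty would be made differentiable and added to the network's training loss, so the net is pushed toward behavior that each region's small tree can faithfully reproduce; here we only compute it post hoc.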



Related research:

Optimizing for Interpretability in Deep Neural Networks with Tree Regularization
Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Contextual Care Protocol using Neural Networks and Decision Trees
Sparse Oblique Decision Trees: A Tool to Understand and Manipulate Neural Net Features
Improving the Interpretability of Deep Neural Networks with Knowledge Distillation
An Ontology-based Approach to Explaining Artificial Neural Networks
Quality Diversity Evolutionary Learning of Decision Trees