Neural Networks are Decision Trees

10/11/2022
by Caglar Aytekin

In this manuscript, we show that any neural network with piece-wise linear activation functions can be represented as a decision tree. The representation is an exact equivalence, not an approximation, so the accuracy of the neural network is preserved exactly. This equivalence shows that neural networks are interpretable by design and renders the black-box view of them obsolete. We share equivalent trees of some neural networks and show that, besides providing interpretability, the tree representation can also achieve some computational advantages. The analysis holds for both fully connected and convolutional networks, which may or may not include skip connections and/or normalizations.
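The core observation can be sketched in a few lines. In a ReLU network, the on/off pattern of the hidden units acts like a path of branching decisions, and each pattern (each "leaf") selects one affine function of the input. The toy network below is illustrative (random weights, not from the paper); it checks that the forward pass agrees exactly with the affine model of the selected region.

```python
import numpy as np

# Hypothetical tiny ReLU network: 2 inputs, 3 hidden units, 1 output.
# Weights are random placeholders, chosen only to illustrate the equivalence.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))
b1 = rng.normal(size=3)
W2 = rng.normal(size=(1, 3))
b2 = rng.normal(size=1)

def forward(x):
    """Standard ReLU forward pass."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def tree_path(x):
    """ReLU on/off pattern: each hidden unit is one branching decision."""
    return (W1 @ x + b1 > 0).astype(float)

def leaf_affine_model(pattern):
    """Each activation pattern (leaf) corresponds to one affine function."""
    D = np.diag(pattern)          # zeroes out the inactive units
    W_eff = W2 @ D @ W1           # effective weights on this input region
    b_eff = W2 @ D @ b1 + b2
    return W_eff, b_eff

x = np.array([0.5, -1.2])
W_eff, b_eff = leaf_affine_model(tree_path(x))
# Exact equality (up to float rounding), not an approximation:
assert np.allclose(forward(x), W_eff @ x + b_eff)
```

With n hidden units there are at most 2^n such leaves, which is why the equivalent tree can be exact yet much larger than the network that generates it.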


Related research

Gradient Boosted Decision Tree Neural Network (10/17/2019)
In this paper we propose a method to build a neural network that is simi...

Locally Constant Networks (09/30/2019)
We show how neural models can be used to realize piece-wise constant fun...

The Representation Theory of Neural Networks (07/23/2020)
In this work, we show that neural networks can be represented via the ma...

Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations (01/19/2023)
In this paper we present an algebraic approach to the precise and global...

How to Explain Neural Networks: A perspective of data space division (05/17/2021)
Interpretability of intelligent algorithms represented by deep learning ...

Neural Rule Ensembles: Encoding Sparse Feature Interactions into Neural Networks (02/11/2020)
Artificial Neural Networks form the basis of very powerful learning meth...

Sparse Oblique Decision Trees: A Tool to Understand and Manipulate Neural Net Features (04/07/2021)
The widespread deployment of deep nets in practical applications has lea...
