Exploring layerwise decision making in DNNs

02/01/2022
by Coenraad Mouton, et al.

While deep neural networks (DNNs) have become a standard architecture for many machine learning tasks, their internal decision-making process and general interpretability are still poorly understood. In contrast, decision trees are easily interpretable and theoretically well understood. We show that by encoding the discrete sample activation values of nodes as a binary representation, we can extract a decision tree that explains the classification procedure of each layer in a ReLU-activated multilayer perceptron (MLP). We then combine these decision trees with existing feature attribution techniques to produce an interpretation of each layer of a model. Finally, we analyse the generated interpretations, the behaviour of the binary encodings, and how these relate to sample groupings created during the training process of the neural network.
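To make the layerwise idea concrete, the sketch below (not the paper's implementation) derives a binary on/off activation code for each hidden layer of a small ReLU MLP and fits a per-layer decision tree that reproduces the network's own predictions. The dataset, layer sizes, tree depth, and all names are illustrative assumptions using scikit-learn.

```python
# Minimal sketch, assuming a scikit-learn MLP; not the authors' code.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)

# Train a small ReLU-activated MLP (sizes chosen for illustration).
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=500, random_state=0).fit(X, y)

def layer_codes(mlp, X):
    """Binary (fired / not fired) activation pattern of each hidden layer."""
    codes, h = [], X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)          # ReLU activation
        codes.append((h > 0).astype(np.uint8))  # 1 where the node is active
    return codes

# Explain each layer: a shallow decision tree over that layer's binary code,
# trained to reproduce the MLP's predictions, then scored for fidelity.
preds = mlp.predict(X)
for i, code in enumerate(layer_codes(mlp, X)):
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(code, preds)
    print(f"layer {i}: tree fidelity to MLP predictions = {tree.score(code, preds):.3f}")
```

The per-layer trees obtained this way could then be paired with a feature attribution method, as the abstract describes, to relate each binary split back to the input features.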

Related research

- 03/10/2020 · Towards Interpretable Deep Neural Networks: An Exact Transformation to Multi-Class Multivariate Decision Trees
- 03/10/2020 · An Exact Transformation from Deep Neural Networks to Multi-Class Multivariate Decision Trees
- 11/07/2018 · YASENN: Explaining Neural Networks via Partitioning Activation Sequences
- 02/23/2017 · Neural Decision Trees
- 09/13/2019 · Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data
- 10/28/2020 · Designing Interpretable Approximations to Deep Reinforcement Learning with Soft Decision Trees
- 03/29/2021 · One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks
