YASENN: Explaining Neural Networks via Partitioning Activation Sequences

11/07/2018
by Yaroslav Zharov, et al.

We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method called YASENN. Our method inherits many advantages of model-agnostic distillation, such as the ability to focus on a particular input region and to express an explanation in terms of features different from those observed by the neural network. Moreover, examination of the distillation error makes the method applicable to problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate artifacts revealed in neural network decision making.
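The leaf-index encoding idea can be sketched with scikit-learn. This is a toy illustration under loud assumptions, not the authors' implementation: the data, the small network, and the single output-level gradient boosting model (standing in for the paper's layer-wise ensemble) are all stand-ins introduced here.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier

# Toy data and a small network to be explained (stand-ins, not the paper's setup).
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# Distillation step: fit a gradient boosting model to mimic the network's predictions.
distill_labels = net.predict(X)
gbdt = GradientBoostingClassifier(n_estimators=20, max_depth=2, random_state=0)
gbdt.fit(X, distill_labels)

# Encode each input by the sequence of leaf indices it reaches across the trees;
# apply() returns the leaf index of each sample in each estimator.
leaves = gbdt.apply(X).reshape(len(X), -1).astype(int)
codes = [tuple(row) for row in leaves]

# The finite set of unique codes induces a partitioning of the input space.
partitions = {}
for i, code in enumerate(codes):
    partitions.setdefault(code, []).append(i)
print(f"{len(partitions)} partitions over {len(X)} inputs")
```

Each resulting partition groups inputs that traverse the distilled trees identically, so a simple interpretable model can then be trained to discriminate one partition from the others, as the abstract describes.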
