How to Explain Neural Networks: A perspective of data space division

by Hangcheng Dong et al.

Interpretability of intelligent algorithms, represented by deep learning, remains an open problem. We discuss the shortcomings of existing explainability methods in terms of two attributes of an explanation: completeness and explicitness. We further point out that a model relying entirely on feed-forward mappings is prone to inexplicability, because it is hard to quantify the relationship between those mappings and the final model. From the perspective of data space division, this paper proposes the principle of complete local interpretable model-agnostic explanations (CLIMEP). For classification problems, we further discuss the equivalence between CLIMEP and the decision boundary. In practice, however, CLIMEP is difficult to implement. To tackle this challenge, motivated by the fact that a fully-connected neural network (FCNN) with piece-wise linear activation functions (PWLs) partitions the input space into several linear regions, we extend this result to arbitrary FCNNs by linearizing the activation functions. Applying this technique to classification problems, we obtain, for the first time, the complete decision boundary of an FCNN. Finally, we propose DecisionNet (DNet), which divides the input space by the hyper-planes of the decision boundary, so that each linear region of DNet contains only samples of the same label. Experiments show the surprising model compression efficiency of DNet at arbitrary controlled precision.
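The linear-region property that the abstract relies on can be illustrated with a minimal sketch. The example below uses a toy two-layer ReLU network with placeholder weights (not from the paper): inputs that share the same ReLU activation pattern lie in one linear region, and within that region the network collapses to a single affine map. The paper's CLIMEP/DNet construction goes further (tracking regions across depth and extracting the decision boundary), but this shows the basic mechanism.

```python
import numpy as np

# Toy two-layer ReLU network; weights and biases are illustrative placeholders.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

def activation_pattern(x):
    """Binary pattern of which hidden ReLU units are active at input x.
    All inputs sharing the same pattern belong to the same linear region."""
    return (W1 @ x + b1 > 0).astype(float)

def local_affine_map(x):
    """Inside the linear region containing x, the ReLU network reduces
    to an affine map y = A x + c."""
    D = np.diag(activation_pattern(x))  # diagonal mask of active units
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

x = np.array([0.3, -0.7])
A, c = local_affine_map(x)
# The local affine map agrees exactly with the forward pass in this region.
forward = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
assert np.allclose(A @ x + c, forward)
```

Extending this to arbitrary (non-piece-wise-linear) activations, as the paper proposes, amounts to replacing each activation locally by a linearization before reading off the region's affine map.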



