How to Explain Neural Networks: A perspective of data space division

05/17/2021
by Hangcheng Dong, et al.

Interpretability of intelligent algorithms represented by deep learning remains an open problem. We discuss the shortcomings of existing explainability methods in terms of two attributes of an explanation, completeness and explicitness. Furthermore, we point out that a model relying entirely on a feed-forward mapping is highly prone to inexplicability, because it is hard to quantify the relationship between this mapping and the final model. Based on the perspective of data space division, we propose the principle of complete local interpretable model-agnostic explanations (CLIMEP). For classification problems, we further discuss the equivalence between CLIMEP and the decision boundary. In practice, however, CLIMEP is difficult to implement. To tackle this challenge, motivated by the fact that a fully-connected neural network (FCNN) with piece-wise linear activation functions (PWLs) partitions the input space into several linear regions, we extend this result to arbitrary FCNNs by linearizing their activation functions. Applying this technique to classification, we obtain, for the first time, the complete decision boundary of an FCNN. Finally, we propose the DecisionNet (DNet), which divides the input space by the hyper-planes of the decision boundary, so that each linear region of the DNet contains only samples of the same label. Experiments show the surprising model-compression efficiency of the DNet at arbitrarily controlled precision.
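To make the data-space-division idea concrete, here is a minimal sketch (not code from the paper) of why a ReLU FCNN is affine on each activation region: fixing which hidden units are active collapses the network into a single affine map, and the region boundaries are exactly the hyper-planes where a unit switches on or off. The toy network and the helper names `activation_pattern` and `affine_map_for_region` are illustrative assumptions, covering only the ReLU case rather than the paper's general linearization strategy.

```python
import numpy as np

# Toy fully-connected ReLU network: input R^2 -> hidden R^3 -> output R.
# Within one activation pattern (which hidden units are active), the
# network is an affine map, so the input space splits into linear regions.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU is piece-wise linear
    return W2 @ h + b2

def activation_pattern(x):
    # Binary code of which hidden ReLUs are active; constant within a region.
    return tuple((W1 @ x + b1 > 0).astype(int))

def affine_map_for_region(pattern):
    # With the pattern fixed, ReLU acts as multiplication by a 0/1 mask,
    # so the end-to-end map collapses to a single affine function A x + c.
    D = np.diag(pattern)
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

x = np.array([0.3, -1.2])
A, c = affine_map_for_region(activation_pattern(x))
assert np.allclose(forward(x), A @ x + c)  # locally, network == affine map
```

For a classifier, the decision boundary inside each such region is where the affine outputs of two classes tie, which is itself a hyper-plane; enumerating these pieces across regions is what yields a complete decision boundary.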

Related research

Neural Networks are Decision Trees (10/11/2022)
In this manuscript, we show that any neural network having piece-wise li...

Evolution of Novel Activation Functions in Neural Network Training with Applications to Classification of Exoplanets (06/01/2019)
We present analytical exploration of novel activation functions as conse...

When Can Neural Networks Learn Connected Decision Regions? (01/25/2019)
Previous work has questioned the conditions under which the decision reg...

Squashing activation functions in benchmark tests: towards eXplainable Artificial Intelligence using continuous-valued logic (10/17/2020)
Over the past few years, deep neural networks have shown excellent resul...

Fast Geometric Projections for Local Robustness Certification (02/12/2020)
Local robustness ensures that a model classifies all inputs within an ϵ-...

Uninorm-like parametric activation functions for human-understandable neural models (05/13/2022)
We present a deep learning model for finding human-understandable connec...

YASENN: Explaining Neural Networks via Partitioning Activation Sequences (11/07/2018)
We introduce a novel approach to feed-forward neural network interpretat...
