Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation

Yingjing Lu et al. ∙ Carnegie Mellon University, Cornell University ∙ 05/24/2019

Self-explaining models are models that reveal their decision making parameters in an interpretable manner, so that the model's reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining, due to the non-linearity of their activation functions, their complex architectures, and their obscure feature extraction and transformation processes. In this work, we illustrate that existing deep architectures are hard to interpret because each hidden layer carries a mix of high level and low level features. As a solution, we propose a novel feature leveling architecture that isolates low level features from high level features on a per-layer basis, so that the GLM layer in the proposed architecture can be better utilized for interpretation. Experimental results show that our modified models achieve competitive results compared to mainstream architectures on standard datasets while being more self-explaining. Our implementations and configurations are publicly available for reproduction.


1 Introduction

Deep Neural Networks (DNNs) are viewed as black-box models because of their obscure decision making process. As a result, such models are hard to verify and are susceptible to adversarial attacks. It is therefore important for researchers to find ways to interpret DNNs in order to improve their applicability.

One reason deep neural networks are hard to interpret is that they extract abstract concepts through multi-layer non-linear activations and end-to-end training. From a human perspective, it is hard to understand how features are extracted by the different hidden layers and which features are used for the final decision.

In response to the challenge of interpretability, two paths have been taken to unbox neural networks' decision making process. One is to design verification algorithms that can be applied to existing models to back-trace their decision making process. The other is to design models that "explain" their decision making process automatically. The second direction is promising in that interpretability is built in architecturally, so verification feedback can be used directly to improve the model. (Public repository URL anonymized for review purposes; see the supplemental material for detailed implementation.)

One class of self-explaining models borrows the interpretability of General Linear Models (GLMs) such as linear regression. GLMs are naturally interpretable because complicated interactions of non-linear activations are not involved: the contribution of each feature to the final decision output can be analyzed simply by examining the corresponding weight parameters. We therefore investigate ways to make DNNs as similar to GLMs as possible for interpretability purposes while maintaining competitive performance.

Fortunately, a GLM naturally exists as the last layer of most common DNN architectures (see the supplemental for why the last layer is a GLM layer). However, this GLM can only account for the output generated by the last hidden layer, and that output is not easy to interpret because it potentially contains mixed levels of features. In the following section, we use empirical results to demonstrate this mixture effect. Based on this observation, one natural way to improve interpretation is to prevent features extracted by different layers from mixing together: we directly pass the features extracted by each layer to the final GLM layer, so that the weights of the GLM layer can be used to explain the decision making process. Motivated by this observation, we design a feature leveling network structure that automatically separates low level features from high level features to avoid the mixture effect. In other words, if low level features extracted by a hidden layer can be readily used by the GLM layer, we pass them directly to the GLM rather than feeding them to the next hidden layer. We also propose a feature leveling scale that measures the complexity of different sets of features in an unambiguous manner, rather than describing them with vague terms such as "low" and "high".

In the following sections, we first lay out the proposed definition of feature leveling. We then illustrate how different levels of features reside in the same feature space. Based on these observations, we propose the feature leveling network, an architectural modification of existing models that isolates low level features from high level features within the different layers of a neural network in an unsupervised manner. In the experiment section, we use empirical results to show that this modification can also be applied to reduce the number of layers in an architecture and thus the complexity of the network. In this paper, we focus primarily on fully connected neural networks (FCNNs) with the ReLU activation function in the hidden layers. Our main contributions are as follows:

  • We take a step forward to quantify feature complexity for DNNs.

  • We investigate the mixture effect between features of different complexities in the hidden layers of DNNs.

  • We propose a feature leveling architecture that is able to isolate low level features from high level features in each layer to improve interpretation.

  • We further show that the proposed architecture is able to prune redundant hidden layers to reduce DNNs’ complexity with little compromise on performance.

The remaining content is organized as follows. In Section 2, we introduce our definition of feature leveling and use a toy example to show the mixture effect of features in hidden layers. In Section 3, we give a detailed account of our proposed feature leveling network, which effectively isolates different levels of features. In Section 4, we provide a high level introduction to related works that motivated our architectural design. In Section 5, we test and analyze our proposed architecture on various real world datasets and show that it achieves competitive performance while improving interpretability. In Section 6, we show that our model is also able to automatically prune redundant hidden layers, thus reducing the complexity of DNNs.

2 Feature leveling for neural networks

The concepts of low level and high level features are often brought up within the machine learning literature. However, their definitions are vague and not precise enough for applications. Intuitively, low level features are usually "simple" concepts or patterns whereas high level features are "abstract" or "implicit" features.

Within the scope of this paper, we take a step forward and give a formal definition of feature leveling that quantifies feature complexity on an absolute scale. This feature-level scale is better than simply describing features as "low" or "high" because it reveals an unambiguous ordering between different sets of features. We use a toy example to demonstrate how features can have different levels and to explain why separating different levels of features can improve interpretability.

2.1 A toy example

We create a toy dataset called Independent XOR (IXOR). IXOR consists of a set of uniformly distributed features $(x_1, x_2, x_3)$ and a set of labels $y$. Each label is assigned by combining an XOR pattern over $(x_1, x_2)$ with an independent, linearly separable rule on $x_3$.

Figure 1: Visualization of the toy IXOR dataset

In this dataset, the features clearly have different levels. $x_3$ can be directly used by the GLM layer as it has a linear decision boundary. $(x_1, x_2)$ are more complex: they form an XOR pattern and cannot be linearly separated, thus requiring further decomposition before they can be used by the GLM layer. To make correct decisions, the DNN should use one layer to decompose the XOR into lower level features, and directly transport $x_3$'s value to the GLM layer.
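As a concrete illustration, the sketch below generates an IXOR-style dataset in Python. The 0.5 thresholds, the sample count, and the way the XOR and linear rules are combined are assumptions made for this example; the actual generation script is provided in the supplemental material under src/independentxor.

```python
import numpy as np

def make_ixor(n_samples=1000, seed=0):
    """Generate an illustrative Independent XOR (IXOR) style dataset.

    x1, x2 form an XOR pattern that is not linearly separable;
    x3 contributes through a simple linear decision boundary.
    The 0.5 thresholds and the combination rule are assumptions
    made for this sketch, not the paper's exact construction.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(n_samples, 3))
    xor_part = (x[:, 0] > 0.5) ^ (x[:, 1] > 0.5)   # needs one hidden layer
    linear_part = x[:, 2] > 0.5                     # usable by the GLM directly
    y = (xor_part ^ linear_part).astype(int)
    return x, y

if __name__ == "__main__":
    x, y = make_ixor()
    print(x.shape, y.mean())
```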

2.2 Characterizing low and high level features with feature leveling

From IXOR we can see that not all features have the same level of "complexity". Some can be fed directly into the GLM layer, while others need to pass through one or more hidden layers before they are transformed into features that can directly contribute to decision making.

Thus, instead of using "low" and "high" level to characterize features, we propose to frame the complexity of different features with the definition of feature leveling.

Consider a dataset consisting of i.i.d. samples with features $x^{(i)}$ and their corresponding labels $y^{(i)}$. We assume that the samples contain features that require at most $K$ hidden layers of transformation to perform optimal inference.

For a DNN trained with $K$ hidden layers and a GLM layer, we define the set of level-$k$ features as the set of features that require $k$ hidden layers of extraction, under the current network setup, before they can be sufficiently utilized by the GLM layer. In the following paragraphs, $f_k^{(i)}$ denotes the level-$k$ features extracted from one sample and $F_k$ denotes the set of all level-$k$ features to be learned from the target distribution. The remaining higher level features, denoted $z_k^{(i)}$, should be passed to the $(k+1)$-th layer to extract further levels of features. $f_k^{(i)}$ and $z_k^{(i)}$ should be disjoint, that is, $f_k^{(i)} \cap z_k^{(i)} = \emptyset$. In the case of the toy example, $x_3$ is the level-one feature, as the first hidden layer learns to transport its value directly to the GLM layer. $(x_1, x_2)$ is level two: the XOR can be decomposed by one hidden layer with a sufficient number of parameters into features the GLM layer can use directly to make accurate decisions. Assuming the first hidden layer has sufficient parameters, it should take in $(x_1, x_2)$ and output the decomposed XOR features.

2.3 How the proposed model solves the mixture effect and boosts interpretation

However, a common FCNN does not separate each level of features explicitly. Figure 2 shows the heatmaps of the weight vectors for both the FCNN baseline and the proposed feature leveling network trained on the IXOR dataset. In the FCNN, $x_3$'s value is preserved by the last column of the first layer's weight vector, but is mixed with all other features in the second layer before being passed into the GLM layer. Our proposed model, on the other hand, cleanly separates $x_3$ and preserves its identity as an input to the GLM layer. In addition, our model identifies that the interaction between $x_1$ and $x_2$ can be captured by a single layer. Thus, the model eliminates the second layer and passes the features extracted by the first hidden layer directly to the GLM layer.

Looking at the results obtained from the toy example, we can clearly see that the proposed model resolves the mixture effect of features and assigns correct levels to features of different complexities in the context of the original problem. The model is therefore more interpretable in that it creates a clear path of reasoning, and the contribution of each level of features can be read from the weight parameters of the GLM.

Figure 2: Weight heatmaps of the baseline and proposed model with the initial architecture 3-16-8-2. Arrows denote information flow. In the proposed model, $x_3$ is gated so that it does not mix with the other features input to the hidden layer.

3 Our proposed architecture

Inspired by our definition of feature leveling, and to resolve the feature mixture problem, we design an architecture that recursively filters the level-$k$ features out of the $k$-th layer's inputs and passes them directly to the final GLM layer.

We start with a definition of an FCNN and extend it to our model: we aim to learn a function $f$, parametrized by a neural network with $K$ hidden layers, which can be written as:

$$f(x) = \mathrm{GLM}\big(h_K(\cdots h_2(h_1(x;\theta_1);\theta_2)\cdots;\theta_K)\big) \tag{1}$$

$h_k(\,\cdot\,;\theta_k)$ is the $k$-th hidden layer function with parameters $\theta_k$, and $\mathrm{GLM}(\cdot)$ is the GLM model used for either classification or regression. The goal is to learn the function $f$ such that:

$$f^{*} = \arg\min_{f}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\mathcal{L}(f(x), y)\big] \tag{2}$$

In our formulation, each hidden layer $h_k$ can be viewed as a separator for the level-$k$ features and an extractor of higher level features. The output at step $k$ thus has two parts: $f_k$, the set of level-$k$ features extracted from the inputs, which can be readily transported to the GLM layer for decision making; and $z_k$, the abstract features that require further transformation by $h_{k+1}$. Formally, we can describe our network with the following equation ("$\setminus$" denotes set subtraction):

$$z_k = h_k\big(z_{k-1} \setminus f_k\big),\qquad z_0 = x,\qquad \hat{y} = \mathrm{GLM}\big(f_1 \cup f_2 \cup \cdots \cup f_K \cup z_K\big) \tag{3}$$

In order for the network to learn a mutually exclusive separation, we propose a gating system for layer $k$, parametrized by $\phi_k$, that is responsible for determining whether a given dimension of the layer input should be placed in $f_k$ or passed on into the hidden layer. For a layer with input dimension $d_k$, the gate parameters $\phi_k$ form the corresponding gate vector $g_k = (g_{k,1}, \ldots, g_{k,d_k})$, where each $g_{k,j} \in \{0, 1\}$. The parameter $\phi_{k,j}$ learns the probability for gate $g_{k,j}$ to take value 1, in which case the input feature at dimension $j$ flows into hidden layer $k$; otherwise it is allocated to $f_k$. To maintain mutual exclusiveness, we aim to learn $\phi_k$ such that a feature passes to $f_k$ if and only if its gate is exactly zero; otherwise the gate is 1 and the feature flows into the hidden layer. We can then rewrite the neural network with the gating mechanism for the $i$-th sample of the dataset:

$$z_k^{(i)} = h_k\big(z_{k-1}^{(i)} \odot g_k\big),\qquad f_k^{(i)} = z_{k-1}^{(i)} \odot \mathbb{1}\big[g_k = 0\big],\qquad \hat{y}^{(i)} = \mathrm{GLM}\big(f_1^{(i)} \cup \cdots \cup f_K^{(i)} \cup z_K^{(i)}\big) \tag{4}$$

Here, $\odot$ denotes element-wise multiplication. The function $\mathbb{1}[\cdot]$ acts as a binary activation that returns 1 if and only if the gate value is 0, and returns 0 otherwise. It allows a level-$k$ feature to be filtered out to the GLM layer if and only if that feature does not flow into the next layer at all.
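To make the routing concrete, the following NumPy sketch implements the inference-time forward pass implied by Equation (4), assuming the gates have already been binarized after training. The layer widths, the function names, and the use of zero-masking instead of physically removing gated dimensions are simplifications made for this sketch.

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def feature_leveling_forward(x, layer_params, gates):
    """Inference-only sketch of the feature leveling forward pass.

    layer_params: list of (W_k, b_k) tuples for the K hidden layers.
    gates:        list of binary NumPy vectors g_k, one per hidden layer input;
                  g_k[j] == 1 routes input dimension j into hidden layer k,
                  g_k[j] == 0 routes it directly to the GLM layer as a
                  level-k feature.
    Returns the concatenated GLM input: every level's gated-out features
    plus the output of the last hidden layer. Gated-off dimensions are
    kept as zeros rather than removed, purely for simplicity.
    """
    glm_inputs = []
    z = x
    for (W, b), g in zip(layer_params, gates):
        glm_inputs.append(z * (g == 0))   # level-k features routed to the GLM
        z = relu((z * g) @ W + b)         # remaining features go one level deeper
    glm_inputs.append(z)                  # output of the last hidden layer
    return np.concatenate(glm_inputs, axis=-1)
```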

Then the optimization objective becomes:

$$\min_{\theta,\,\phi}\; \frac{1}{N}\sum_{i=1}^{N} \mathcal{L}\big(\hat{y}^{(i)}, y^{(i)}\big) \;+\; \lambda \sum_{k=1}^{K} \lVert g_k \rVert_0 \tag{5}$$

The additional $L_0$ regularization term encourages fewer features to pass into the next layer and more to flow directly to the GLM layer. A transformation function, described below, maps the parameters $\phi_k$ to the corresponding gate values.

Figure 3: Illustration of the model with three hidden layers. Yellow denotes hidden layers, which typically have ReLU activations, and green denotes the level-$k$ features separated out by the gates. Thick arrows denote the vector form of inputs and outputs. The input and output dimensions of the hidden layers can differ.

To achieve this discrete gate construction, we propose to learn the gating parameters in the context of $L_0$ regularization. To be able to update the parameter values through backpropagation, we use the approximation technique for differentiable $L_0$ regularization developed by Louizos et al. (2017). We direct interested readers to the original work for the full derivation of the approximation, and summarize the key concept in terms of our gating mechanism below.

Although the gate value $g_{k,j}$ is discrete, and the probability for a certain gate to be 0 or 1 is typically treated as a Bernoulli distribution, the probability space can be relaxed as follows. Consider $s_{k,j}$ to be a continuous random variable with distribution $q(s_{k,j} \mid \phi_{k,j})$, parametrized by $\phi_{k,j}$. The gate can then be obtained through the transformation:

$$g_{k,j} = \min\big(1, \max(0, s_{k,j})\big) \tag{6}$$

The underlying probability space is then continuous, because $s_{k,j}$ is continuous, yet the gate can attain a value of exactly 0. The probability for the gate to be non-zero is given by the cumulative distribution function $Q$:

$$P\big(g_{k,j} \neq 0\big) = 1 - Q\big(s_{k,j} \le 0 \mid \phi_{k,j}\big) \tag{7}$$

The authors further use the reparameterization trick, drawing a parameter-free noise variable $\epsilon \sim p(\epsilon)$ and obtaining $s_{k,j}$ through a differentiable transformation $s_{k,j} = m(\phi_{k,j}, \epsilon)$; the gate $g_{k,j}$ is thus equivalent to $\big(\min(1, \max(0, \cdot)) \circ m\big)(\phi_{k,j}, \epsilon)$, where $\circ$ denotes function composition.

Then the objective function under our feature leveling network is:

$$\min_{\theta,\,\phi}\; \frac{1}{N}\sum_{i=1}^{N} \mathcal{L}\big(\hat{y}^{(i)}, y^{(i)}\big) \;+\; \lambda \sum_{k=1}^{K}\sum_{j=1}^{d_k} \big(1 - Q(s_{k,j} \le 0 \mid \phi_{k,j})\big) \tag{8}$$
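For readers unfamiliar with the relaxation, the NumPy sketch below reproduces the hard-concrete gate of Louizos et al. (2017) and the differentiable surrogate for the expected $L_0$ penalty used in Equation (8). The stretch and temperature constants (gamma, zeta, beta) are the defaults suggested in that paper, and this is a forward-only illustration rather than the paper's TensorFlow implementation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def hard_concrete_gate(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1,
                       training=True, rng=None):
    """Sample (training) or deterministically compute (inference) a gate in [0, 1].

    log_alpha parametrizes the underlying continuous distribution; the
    stretch interval [gamma, zeta] with gamma < 0 < 1 < zeta lets the
    clipped gate reach exactly 0 or exactly 1.
    """
    if training:
        rng = rng or np.random.default_rng()
        u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(log_alpha))
        s = sigmoid((np.log(u) - np.log(1.0 - u) + log_alpha) / beta)
    else:
        s = sigmoid(log_alpha)
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def expected_l0(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
    """Differentiable surrogate for the number of non-zero gates:
    the probability that each gate is non-zero, summed over dimensions."""
    return np.sum(sigmoid(log_alpha - beta * np.log(-gamma / zeta)))
```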

4 Related work

Interpreting existing models: The ability to explain the reasoning process inside a neural network is essential to validating the robustness of the model and ensuring that the network is secure against adversarial attacks Moosavi-Dezfooli et al. (2016); Brown et al. (2017); Gehr et al. (2018). In recent years, much work has been done to explain the reasoning process of an existing neural network, either by extracting the decision boundary Bastani et al. (2018); Verma et al. (2018); Wang et al. (2018); Zakrzewski (2001) or through a variety of visualization methods Mahendran and Vedaldi (2015); Zeiler and Fergus (2014); Li et al. (2015). Most of these methods are designed for validation purposes; however, their results cannot easily be used to improve the original models.

Self-explaining models, proposed by Alvarez Melis and Jaakkola (2018), are models whose reasoning process is easy to interpret; this class of models does not require a separate validation process. Many works have focused on designing self-explaining architectures that can be trained end-to-end Zhang et al. (2018); Worrall et al. (2017); Li et al. (2018); Kim and Mnih (2018); Higgins et al. (2017). However, most self-explaining models sacrifice a certain amount of performance for interpretability. Two notable models are able to achieve competitive performance on standard tasks while maintaining interpretability. The NIT framework Tsang et al. (2018) interprets the neural decision process by detecting feature interactions in a Generalized Additive Model style; it achieves competitive performance but can only disentangle up to K groups of interactions, and the value K needs to be searched manually during training. The SENN framework proposed by Alvarez Melis and Jaakkola (2018) focuses on abstract concept prototyping: it aggregates abstract concepts with a linear, interpretable model. Compared to our model, SENN requires an additional step of training an autoencoding network to prototype concepts, and it cannot disentangle simple concepts from more abstract ones on a per-layer basis.

Sparse neural network training refers to various methods developed to reduce the number of parameters of a neural model. Much investigation has gone into using $L_1$ or $L_2$ regularization Han et al. (2015); Ng (2004); Wen et al. (2016); Girosi et al. (1995) to prune neural networks while maintaining differentiability for backpropagation. Another choice for regularizing and creating sparsity is $L_0$ regularization; however, due to its discrete nature, it does not support parameter learning through backpropagation. A continuous approximation of $L_0$ was proposed to resolve this problem and has shown effectiveness in pruning both FCNNs and Convolutional Neural Networks (CNNs) in an end-to-end manner Louizos et al. (2017). This regularization technique has further been applied not only to neural architecture pruning but also to feature selection Yamada et al. (2018). Our work applies the $L_0$ regularization's feature selection ability in a novel context: selecting the maximum number of features as direct inputs to the GLM layer.

5 Experiments

We validate our proposed architecture on three commonly used datasets: MNIST, California Housing, and CIFAR-10. For each task, we use the same initial architecture to compare our proposed model with the FCNN baseline. Due to the gating effect of our model, some neurons in the middle layers are effectively pruned; the architectures we report in this section for our proposed model are the pruned versions obtained after training with the gates. The second-to-last layer of our proposed models is labeled with a star to denote concatenation of all previously gated level-$k$ features with the output of the last hidden layer. For example, in the California Housing experiment, both the proposed model and the FCNN baseline start from the same 13-64-32-1 initial architecture, but because of the gating effect on deeper layers, the starred layer in effect has additional input neurons accounting for the previously gated features.

The two objectives of our experiments are: 1) to test whether our model achieves competitive results, under the same initial architecture, compared to the FCNN baseline and other recently proposed self-explaining models. This test is conducted by comparing model metrics such as root mean square error (RMSE) for regression tasks, classification accuracy for multi-class datasets, and area under the ROC curve (AUC) for binary classification. 2) To test whether the level-$k$ features gated out before the GLM layer make contributions to the result as important as those of features extracted entirely through the hidden layers. To account for how much each level's features contribute to the final decision, we propose to use the average of absolute values (AAV) of the final GLM layer's weights on the features selected by the gates. If the AAV of each level's features is similar, these features have a similar influence on the final decision.
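A minimal sketch of the AAV computation is given below. It assumes the trained GLM weight matrix is available together with a record of which GLM input columns came from each level's gated features; the grouping convention is an assumption of this sketch, since it depends on how the concatenation order was logged during training.

```python
import numpy as np

def average_absolute_value(glm_weights, level_indices):
    """Average absolute GLM weight per feature level (the AAV metric).

    glm_weights:   array of shape (n_glm_inputs, n_outputs).
    level_indices: dict mapping a level identifier (e.g. 1, 2, 'last_hidden')
                   to the list of GLM input column indices contributed by
                   that level's gated features.
    """
    return {level: float(np.mean(np.abs(glm_weights[idx, :])))
            for level, idx in level_indices.items()}
```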

Experiment implementation details are deferred to supplemental.

5.1 Datasets & performances

MNIST
Model | Architecture | Accuracy
FCNN | 784-300-100-10 | 0.984
L0-FCNN Louizos et al. (2017) | 219-214-100-10 | 0.986
SENN (FCNN) | 784-300-100 | 0.963
Proposed | 291-300*-10 | 0.985

California Housing
Model | Architecture | RMSE
FCNN | 13-64-32-1 | 0.529
GAM Tsang et al. (2018) | - | 0.506
NIT Tsang et al. (2018) | 8-400-300-200-100-1 | 0.430
Proposed | 10-28-32*-1 | 0.477

Table 1: MNIST classification and California Housing price prediction

The MNIST handwriting dataset LeCun et al. (2010) consists of grayscale pictures of handwritten digits from 0 to 9. We use a 784-300-100-10 architecture for both the FCNN baseline and the proposed model; this is the same architecture used in the original implementation of Louizos et al. (2017). Our model achieves results similar to those of state-of-the-art ReLU-activated FCNN architectures while using fewer layers. The feature gates completely eliminate message passing to the 100-neuron layer, which implies that our model needs only level-one and level-two feature extraction to learn the MNIST dataset effectively.

The California Housing dataset Pace and Barry (1997) is a regression task with various metrics, such as longitude and owners' age, used to predict the price of a house. It contains 8 features, one of which is nominal; we convert the nominal feature into a one-hot encoding, giving 13 features in total. Since California Housing does not come with a standard test set, we split the dataset randomly with a 4:1 train-test ratio. Our proposed model beats the FCNN baseline with the same initial architecture. Only 3 of the 13 original features are passed directly to the GLM layer, implying that California Housing's input features are mostly second and third level.

Model | Architecture | AUC
FCNN | 3072-2048-1024-2 | 0.855
GAM Tsang et al. (2018) | - | 0.829
NIT Tsang et al. (2018) | 3072-400-400-1 | 0.860
SENN (FCNN) | 3072-2048-1024-2 | 0.856
Proposed | 3072-130-1024*-2 | 0.866
Table 2: CIFAR-10 binary classification

The CIFAR-10 dataset Krizhevsky et al. (2014) consists of RGB images from 10 different classes. We use it to test our model's ability to extract abstract concepts. For comparison, we follow the experiments in the NIT paper and choose the classes cat and deer for binary classification. The resulting architecture shows that, for FCNN networks, the two chosen classes are mainly differentiated through their second level features: none of the raw inputs are used directly for classification. This corresponds to the assumption that raw RGB pixels of animal images yield relatively high level features.

5.2 Level-k feature passage and AAV of GLM weights

We also examine, for each hidden layer, the percentage of its input that is selected by the gates as level-$k$ features, and we measure the extent to which these features contribute to the final decision (Figure 4). Inspecting the fraction of input dimensions routed to the GLM layer by the gates together with the AAV metric introduced in the previous section, we observe that level-$k$ features generally have AAVs similar to, if not higher than, those of the features extracted through all hidden layers. This implies that the level-$k$ features make contributions to the decision comparable to those of features extracted by FCNNs alone.

6 Strength in pruning redundant hidden layers

Because our proposed model encourages linearity, it is also able to reduce network complexity automatically by decreasing the number of hidden layers. Empirically, as training goes on, each layer routes an increasing number of features to the GLM; more features are thus transported directly to the GLM, reducing the complexity of our model. This implies that the network is learning to use more features directly in the GLM rather than transforming them in further hidden layers.

We also observe, for some tasks such as MNIST classification, that when the dataset's feature level is lower than the number of hidden layers, our proposed model learns to prune the excess hidden layers automatically, as the network learns not to pass information to the deeper hidden layers. As a result, the number of hidden layers is effectively reduced. We therefore believe our framework is helpful for architectural design, allowing researchers to probe the ideal number of hidden layers and to understand the complexity of a given task.


Figure 4: The percentage of gated features and the average absolute weight (AAV) in the GLM at different levels for all test models. Cal-Housing's AAVs are scaled down for graphing clarity.
Figure 5: MNIST training performance curve and the number of inputs passed to the following hidden layer (blue denotes the number of features passed to the first hidden layer; orange denotes the second).

7 Discussion

In this work we propose a novel architecture that performs feature leveling automatically to boost interpretability. We use a toy example to demonstrate that not all features are equal in complexity and that most DNNs take mixed levels of features as input, decreasing interpretability. We then characterize absolute feature complexity by the number of hidden layers a feature requires to be extracted into a form usable for the GLM decision. To boost interpretability by isolating the level-$k$ features, we propose the feature leveling network, with a gating mechanism and an end-to-end training process that allow level-$k$ features to be passed directly to the GLM layer. We perform various experiments to show that our feature leveling network successfully separates out the level-$k$ features without compromising performance.

There are two major directions for extending our proposed architecture. The first is to extend the current construction to convolutional neural networks. The second is to relate our network's identity mapping of low level features to residual operations such as those of ResNet He et al. (2016), Highway Networks Srivastava et al. (2015), and DenseNet Huang et al. (2017), and to try to gain insight into their success Hardt and Ma (2016).

References

  • Abadi et al. (2016) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
  • Alvarez Melis and Jaakkola (2018) David Alvarez Melis and Tommi Jaakkola. Towards robust interpretability with self-explaining neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7775–7784. Curran Associates, Inc., 2018.
  • Bastani et al. (2018) Osbert Bastani, Yewen Pu, and Armando Solar-Lezama. Verifiable reinforcement learning via policy extraction. In Advances in Neural Information Processing Systems, pages 2494–2504, 2018.
  • Brown et al. (2017) Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017.
  • Gehr et al. (2018) Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. Ai2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2018.
  • Girosi et al. (1995) Federico Girosi, Michael Jones, and Tomaso Poggio. Regularization theory and neural networks architectures. Neural computation, 7(2):219–269, 1995.
  • Han et al. (2015) Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135–1143, 2015.
  • Hardt and Ma (2016) Moritz Hardt and Tengyu Ma. Identity matters in deep learning. arXiv preprint arXiv:1611.04231, 2016.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • Higgins et al. (2017) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, volume 3, 2017.
  • Huang et al. (2017) Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
  • Kim and Mnih (2018) Hyunjik Kim and Andriy Mnih. Disentangling by factorising. arXiv preprint arXiv:1802.05983, 2018.
  • Krizhevsky et al. (2014) Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset. Online: http://www.cs.toronto.edu/kriz/cifar.html, 55, 2014.
  • LeCun et al. (2010) Yann LeCun, Corinna Cortes, and CJ Burges. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2:18, 2010.
  • Li et al. (2015) Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066, 2015.
  • Li et al. (2018) Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • Louizos et al. (2017) Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through L0 regularization. arXiv preprint arXiv:1712.01312, 2017.
  • Mahendran and Vedaldi (2015) Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5188–5196, 2015.
  • Moosavi-Dezfooli et al. (2016) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2574–2582, 2016.
  • Ng (2004) Andrew Y Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the twenty-first international conference on Machine learning, page 78. ACM, 2004.
  • Pace and Barry (1997) R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.
  • Srivastava et al. (2015) Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In Advances in neural information processing systems, pages 2377–2385, 2015.
  • Tsang et al. (2018) Michael Tsang, Hanpeng Liu, Sanjay Purushotham, Pavankumar Murali, and Yan Liu. Neural interaction transparency (nit): Disentangling learned interactions for improved interpretability. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 5804–5813. Curran Associates, Inc., 2018.
  • Verma et al. (2018) Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, and Swarat Chaudhuri. Programmatically interpretable reinforcement learning. arXiv preprint arXiv:1804.02477, 2018.
  • Wang et al. (2018) Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. In Advances in Neural Information Processing Systems, pages 6367–6377, 2018.
  • Wen et al. (2016) Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pages 2074–2082, 2016.
  • Worrall et al. (2017) Daniel E. Worrall, Stephan J. Garbin, Daniyar Turmukhambetov, and Gabriel J. Brostow. Interpretable transformations with encoder-decoder networks. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • Yamada et al. (2018) Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, and Yuval Kluger. Deep supervised feature selection using stochastic gates. arXiv preprint arXiv:1810.04247, 2018.
  • Zakrzewski (2001) Radosiaw R Zakrzewski. Verification of a trained neural network accuracy. In IJCNN’01. International Joint Conference on Neural Networks. Proceedings (Cat. No. 01CH37222), volume 3, pages 1657–1662. IEEE, 2001.
  • Zeiler and Fergus (2014) Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014.
  • Zhang et al. (2018) Quanshi Zhang, Ying Nian Wu, and Song-Chun Zhu. Interpretable convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8827–8836, 2018.

8 Revisit GLM for interpretations of deep neural networks

Consider training a linear model with a dataset $D = \{(x^{(i)}, y^{(i)})\}$, where $x^{(i)}$ is the set of features and $y^{(i)}$ is the corresponding label. The goal is to learn a function from $x$ to $y$ subject to a criterion function with parameter set $W$.

In the classical setting of linear models, $W$ usually refers to a matrix such that:

$$\hat{y} = T\big(W x + b\big) \tag{9}$$

Here, $\hat{y}$ refers to the predicted label given a sample instance with feature vector $x$, and $T$ refers to a link function such as the logistic, softmax, or identity function. A GLM is easy to interpret because the contribution of each individual dimension of $x$ to the decision output $y$ is given by its corresponding weight. We therefore hope to emulate the GLM's interpretability in a DNN setting by creating a method that efficiently back-traces the contributions of different features.

We argue that our proposed architecture is similar to a GLM in that the final layer makes decisions based on the weights assigned to each level of input features. Our model is linear with respect to the various levels of features: given $K$ levels of features, the model's decision is a weighted linear combination of all levels' features (passed through the link function), and each weight parameter indicates the influence of its feature. With this construction, we can easily interpret how each level of features contributes to decision making. This insight can help us understand whether a given task is more "low level" or "high level", and thus characterize the complexity of the task precisely.

8.1 The last layer of common neural networks is a GLM layer

The "classical" DNN architecture consists of a set of hidden layers with non-linear activations and a final layer that aggregates the result through sigmoid, softmax, or a linear function. The final layer is in fact similar to the GLM layer since it itself has the same form and optimization objective.

9 Reproducing empirical results

9.1 General configuration

All models are implemented in TensorFlow Abadi et al. (2016), and hyperparameter configurations can be found in our public repository or in the supplemental code. A model name with a citation denotes that the result is taken from the original paper. The SENN architecture listed is that of the prototyping network; we use a similar architecture for the autoencoder parts. All SENN models are re-implemented with fully connected networks for comparison purposes.

9.2 Dataset and preprocessing

MNIST is a dataset containing 60,000 training and 10,000 testing images of handwritten digits from 0 to 9. Experimental results were evaluated on the designated test set.

CIFAR-10 is a dataset consisting of 10 classes of images; each binary task uses 10,000 training and 2,000 testing images. We used the designated test set for reporting results.

For MNIST and CIFAR-10, we rescale the color channels by dividing by 255 so that pixel values lie between 0 and 1.

For California Housing, we drop all samples with any empty entry and normalize all numerical values by their mean and standard deviation.
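A minimal sketch of this preprocessing is shown below. The column handling for California Housing (which columns are numeric and how the nominal one is one-hot encoded) is an assumption for illustration; the exact steps are in the attached scripts.

```python
import numpy as np
import pandas as pd

def preprocess_images(pixels):
    """Rescale raw pixel values from [0, 255] to [0, 1] (MNIST, CIFAR-10)."""
    return pixels.astype(np.float32) / 255.0

def preprocess_cal_housing(df, numeric_cols, nominal_col):
    """Drop rows with missing entries, standardize numeric columns,
    and one-hot encode the nominal column (yielding 13 features in total)."""
    df = df.dropna().copy()
    df[numeric_cols] = (df[numeric_cols] - df[numeric_cols].mean()) / df[numeric_cols].std()
    return pd.get_dummies(df, columns=[nominal_col])
```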

The IXOR dataset is generated with the script attached in the supplemental material under src/independentxor.

9.3 Hyperparameter

The only tunable hyperparameter in our model is the regularization coefficient $\lambda$, for which we usually consider values from 0.5 down to 0.01. All values used to produce the displayed results are in the model scripts of the attached folder. Generally, lower values of $\lambda$ work better for training more complicated datasets such as CIFAR-10, to prevent too much gating at an early stage.

9.4 Exact number of iteration runs

MNIST: 280,000
CIFAR-10: 680,000
California Housing: 988,000