YASENN: Explaining Neural Networks via Partitioning Activation Sequences

11/07/2018 ∙ by Yaroslav Zharov, et al.

We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method, called YASENN. Our method inherits many advantages of model-agnostic distillation, such as an ability to focus on the particular input region and to express an explanation in terms of features different from those observed by a neural network. Moreover, examination of distillation error makes the method applicable to the problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate revealed artifacts in neural network decision making.


1 Introduction

Over the last decade, the complexity of ML models has increased significantly at the expense of their interpretability [Lipton2018]. Model interpretability is crucial for a majority of practical applications, including validation of self-driving car steering logic, gaining insights from generative modelling of physical system dynamics, and many others. In some domains, such as medicine and criminal justice, transparent decision making may be vitally important. In regulated areas, such as banking, interpretability is a high priority because of legislative compliance issues [Guidotti et al.2018b]. A reliable interpretation method is therefore very important in many business applications.

Interpretation of deep neural networks (NNs) is an active area of research consisting of several main directions. We examine each of them below, together with their limitations.

  • Model-agnostic methods [Guidotti et al.2018b]: they include (but are not limited to) methods that distill a NN with a simple and interpretable model [Ba and Caruana2014, Hinton, Vinyals, and Dean2015]. The main limitation is that they cannot make use of the knowledge of the internal mechanics of the NN.

  • Methods explaining NNs on the basis of gradient information [Ancona et al.2018]. The main limitation is that they were reported to be unstable and sensitive to irrelevant features [Adebayo et al.2018, Kindermans et al.2017].

  • Case-based interpretation [Nugent and Cunningham2005] is “based on actual prior cases that can be presented to the user to provide compelling support” for the NN decision. The main limitation is that such explanations may be of limited utility if the object description is high-dimensional (especially for tabular data) or if a regulatory authority does not allow this kind of interpretation.

  • Interpretability can also be achieved by restricting the NN to a special interpretable design [Zhang, Nian Wu, and Zhu2018]. Though appealing, this may limit the modelling power of the NN and, as a result, is of limited use for highly competitive business applications.

  • Learning an auxiliary NN to explain an original NN is another powerful approach [Barratt2017]. In our experience, however, these methods may not be transparent enough and therefore are not trustworthy.

To address some of these limitations, we present a novel approach to the interpretation of neural networks based on partitioning the set of sequences of neuron activations. We also develop a particular method, YASENN (Yet Another System Explaining Neural Networks), that implements this approach.

The first step of YASENN consists of distilling a NN with an ensemble of gradient boosting decision trees of a special kind. The number of trees equals the depth of the NN, and each tree takes the neuron activations of the corresponding layer as input. Then for each object from the dataset, we collect the sequence of leaf indices from the ensemble trees and consider the obtained sequence as a code of that object. The key insight of our YASENN method comes from the sequential nature of both boosting and feed-forward NNs.

In the second step of the method, we describe the partitioning of the input space induced by those codes. One of the possible descriptions can be obtained using a human-interpretable model that discriminates between objects with different codes.

One distinguishing feature of YASENN is that, being based on distillation, it does not restrict the architecture of the NN, while it still makes use of the internal mechanics of the NN to achieve model-specificity. Another distinguishing feature of YASENN is its deterministic and fast training procedure.

In addition to the basic algorithm, we also propose several powerful extensions: the method can be adapted to tasks with low tolerance to interpretation mistakes, and it can provide explanations for a modified feature space or data manifold.

The contributions of this paper include:

  1. We propose a novel approach to the interpretation of neural networks based on partitioning the set of sequences of neuron activations.

  2. We develop the YASENN method that implements this approach using a distilling ensemble of a special design.

  3. We demonstrate a walkthrough of applying YASENN to several datasets from different domains.

2 Preliminaries

In this paper we focus on the classification task; the generalization to regression is straightforward.

2.1 Neural Network

We consider the task of interpreting a feed-forward neural classifier $f$ trained on a dataset $\{(x_i, y_i)\}_{i=1}^{N}$, where each $x_i \in \mathbb{R}^d$ and $y_i \in \{1, \dots, K\}$ are a feature vector and a class label respectively. We also write $\bar{y}_i$ for the one-hot encoding of $y_i$. Hereinafter we drop the subscript $i$ when it does not introduce ambiguity.

The prediction of a NN with $L$ layers is defined as

$$f(x) = \sigma\big(f_L \circ f_{L-1} \circ \dots \circ f_1 \circ f_0(x)\big), \qquad (1)$$

where

  • $\sigma$ is a map from the real-valued space to the unit simplex (usually the Softmax function);

  • $f_l$, for $l = 1, \dots, L$, is a layer of the NN, consisting of an affine map or convolution, a nonlinearity (omitted for $f_L$) and, optionally, regularizers such as Dropout, BatchNorm, Pooling, etc.;

  • $f_0$ is the identity function (introduced for convenience of exposition).

Throughout this paper we treat $f$ as a deterministic function (e.g. a NN in inference mode).
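As an illustration of this decomposition, the sketch below builds such a classifier in PyTorch. The layer sizes, the two-dimensional input and the variable names are our own illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

# A feed-forward classifier in the form of Eq. (1):
# f(x) = sigma(f_L ∘ ... ∘ f_1 ∘ f_0(x)), with f_0 the identity,
# f_1, f_2 affine maps with nonlinearities and f_L a final affine map (raw logits).
layers = nn.ModuleList([
    nn.Identity(),                                 # f_0
    nn.Sequential(nn.Linear(2, 16), nn.PReLU()),   # f_1
    nn.Sequential(nn.Linear(16, 16), nn.PReLU()),  # f_2
    nn.Linear(16, 2),                              # f_L (logits)
])

def predict_proba(x):
    """Apply the layers sequentially and map the logits to the unit simplex."""
    for layer in layers:
        x = layer(x)
    return torch.softmax(x, dim=-1)                # sigma

print(predict_proba(torch.randn(5, 2)))
```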

2.2 Streams

For $l = 0, \dots, L$ define $a_l(x) = f_l \circ f_{l-1} \circ \dots \circ f_0(x)$ to be the activation of the $l$-th layer on an input $x$.

The tuple of activations of a NN on an object $x$,

$$s(x) = \big(a_0(x), a_1(x), \dots, a_L(x)\big), \qquad (2)$$

is referred to as a stream (alluding to the sequence of transformations of $x$).
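With a layer list such as the one sketched above, the stream of Eq. (2) can be collected simply by recording every intermediate output during the forward pass; the helper below is our own illustrative code.

```python
def stream(x, layers):
    """Return the tuple (a_0(x), a_1(x), ..., a_L(x)) of Eq. (2)."""
    activations = []
    for layer in layers:            # f_0 is the identity, so a_0(x) = x
        x = layer(x)
        activations.append(x.detach())
    return tuple(activations)
```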

2.3 Decision Tree

A decision tree with $T$ leaves and a $K$-dimensional output is a function $h(x) = W_{q(x)}$, where

  • $W$ is a $T \times K$ matrix of scores, whose entry $W_{tk}$ is the score for the $k$-th output of the $t$-th leaf;

  • $q$ is an index function assigning a leaf to a data point.

The leaf-index function $q$ associated with a decision tree may be used to compress $x$ into a categorical variable $q(x)$. Moreover, as each leaf is a connected rectangular region and the granularity of splitting depends on the variability of the target variable, the decision tree respects proximities both in the input space and in the target space.
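In scikit-learn terms, the leaf-index function $q$ is exposed by the `apply` method of a fitted tree. The toy example below (synthetic data and a two-dimensional target playing the role of logits, both our own choices) shows how it compresses inputs into categorical codes.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                                  # inputs
Z = np.stack([X[:, 0] + X[:, 1], X[:, 0] - X[:, 1]], axis=1)    # K-dimensional target

tree = DecisionTreeRegressor(max_depth=2).fit(X, Z)
leaf_idx = tree.apply(X)          # q(x): leaf id for every object
print(np.unique(leaf_idx))        # ids of at most 2**2 = 4 leaves for depth 2
# The score matrix W consists of the per-leaf mean targets,
# which is what tree.predict(X) returns row-wise.
```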

2.4 Distillation

Distillation assumes that one uses the output of the original complicated black-box model (the “teacher”) as a so-called soft target for another model (the “student”), which is typically simpler, faster, more interpretable and less cumbersome. In other words, one treats the given teacher model as a deterministic function and optimizes the parameters of the student model to approximate that function as precisely as possible on a specified transfer dataset.

One notable property of distillation is its resistance to overfitting to noise [Carlini and Wagner2017]: while distilling, one reproduces the deterministic contour lines of the teacher with a model of another kind.
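A minimal sketch of distillation in this sense (our own toy code, with a single regression tree standing in for the student): the student is fitted to the teacher's outputs on a transfer set, while the original labels are ignored.

```python
from sklearn.tree import DecisionTreeRegressor

def distill(teacher_fn, X_transfer, max_depth=3):
    """Fit a simple student to the outputs of a deterministic teacher."""
    soft_targets = teacher_fn(X_transfer)      # the teacher is treated as a black-box function
    student = DecisionTreeRegressor(max_depth=max_depth)
    student.fit(X_transfer, soft_targets)      # reproduce the teacher's contour lines
    return student
```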

3 Method

Case-based reasoning and prototype learning partition the input space into segments and assign a representative to each segment [Bien and Tibshirani2011, Kim, Rudin, and Shah2014]. In this paper, we follow a similar idea of segmentation. However, we do not select a single representative for each segment. Instead, we propose to divide the input space into regions of low variation in the NN prediction and to find interpretable descriptions for these regions. To increase the transparency of the interpretation, we consider the final partitioning of the input space as a clustering that preserves proximities (closeness) between objects both in the prediction space and in the input space.

To explain the decisions of a NN, we next examine the process of decision making, including the transformation of activations, before describing our YASENN method itself. Note that studying activations is a long-standing research topic. In particular, [Zeiler and Fergus2014] showed that neurons in convolutional NNs activate on colours and geometric shapes that are interpretable for a human. Activations have also been examined in the context of generalization [Morcos et al.2018, Morcos, Raghu, and Bengio2018], perceptual metrics [Zhang et al.2018], style transfer [Gatys, Ecker, and Bethge2016] and representation similarity [Morcos, Raghu, and Bengio2018].

We base our interpretation method on the notion of a stream, which captures the essential information about NN decision making by manifesting the dependencies hidden in the NN parameters. The concept of a stream was defined above; closely related concepts have been successfully used in [Papernot and McDaniel2018, Barratt2017, Lu, Setiono, and Liu1995].

Note that streams are difficult to work with directly because of their high dimensionality and continuity and the absence of a natural distance function capturing the geometry of this space. However, since the space of streams is a deterministic transformation of the input space and is therefore structured, its intrinsic dimension cannot exceed that of the input space. We therefore maintain that streams can be compressed to a lower-dimensional space, which can subsequently be partitioned and interpreted. To achieve this, our method proceeds in two steps, compression and inspection, described below.

3.1 Step One: Compression

To utilize all the activations we propose to fit a distilling ensemble of trees: a separate tree for each layer of the NN but the final one. In particular, the $l$-th tree takes $a_l(x)$ as input and distills the part of the network from the $l$-th layer onward. The trees are linked in a way similar to boosting, owing to the heavily sequential nature of both models: each subsequent module (tree or layer) learns dependencies that were not yet captured by the previous ones.

We train the ensemble in a gradient boosting manner with the MSE loss function and the raw logits $f_L \circ \dots \circ f_0(x)$ as the target variable, with the only distinction from [Friedman2002] being that each tree operates on the space of activations of the corresponding layer rather than on a random subspace of features (see Algorithm 1).

We define the discretized stream of $x$, denoted $ds(x)$, as the tuple of leaf indices produced by each tree of the distilling ensemble:

$$ds(x) = \big(q_0(a_0(x)), q_1(a_1(x)), \dots, q_{L-1}(a_{L-1}(x))\big), \qquad (3)$$

where $q_l$ is the leaf-index function of the $l$-th tree.

A discretized stream is thus a compressed stream with both dimensionality and cardinality reduced.

The set of unique discretized streams can be enumerated in an arbitrary order (e.g. lexicographic), and each discretized stream is assigned a stream label, i.e. its index in that order.

input:     D (dataset); f (neural network)
output:    h_0, ..., h_{L-1} (layer-wise boosting trees)
variables: P, R (matrices of size N × K)

begin
    for i = 1, ..., N do                              // Constant prediction
        P_i ← constant initial prediction of the logits (e.g. their mean over D)
    for l = 0, ..., L-1 do
        for i = 1, ..., N do
            R_i ← f_L ∘ ... ∘ f_0(x_i) − P_i          // Gradient
            v_i ← a_l(x_i)                            // Activation
        h_l ← regression tree fitted on {(v_i, R_i)}  // Fit the tree
        for i = 1, ..., N do
            P_i ← P_i + h_l(v_i)                      // Update prediction
end

Algorithm 1: Fitting the distilling ensemble
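For concreteness, the sketch below implements Algorithm 1 and the discretized streams of Eq. (3) with NumPy and scikit-learn. The function names, the layout of the `acts` and `logits` arguments and the hyperparameter values are our own illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_distilling_ensemble(acts, logits, max_depth=2, min_samples_leaf=1):
    """Layer-wise gradient boosting with the MSE loss on the NN logits.

    acts   -- list of length L; acts[l] is an (N, d_l) array of activations a_l(x_i)
    logits -- (N, K) array of raw NN outputs
    """
    n, _ = logits.shape
    pred = np.tile(logits.mean(axis=0), (n, 1))           # constant initial prediction
    trees = []
    for layer_acts in acts:                               # one tree per layer
        residual = logits - pred                          # negative gradient of the MSE loss
        tree = DecisionTreeRegressor(max_depth=max_depth,
                                     min_samples_leaf=min_samples_leaf)
        tree.fit(layer_acts, residual)                    # fit the tree on this layer's activations
        pred = pred + tree.predict(layer_acts)            # update the ensemble prediction
        trees.append(tree)
    return trees

def discretized_streams(trees, acts):
    """Eq. (3): the tuple of leaf indices of each tree, plus enumerated stream labels."""
    codes = np.stack([t.apply(a) for t, a in zip(trees, acts)], axis=1)  # (N, L)
    _, labels = np.unique(codes, axis=0, return_inverse=True)            # stream labels
    return codes, labels
```

The `labels` array is exactly the stream labelling used by the inspectors in the next step.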

3.2 Step Two: Inspection

To understand the logic behind a particular partitioning we need the second step, inspection, which aims at identifying the reasons behind the differences among discretized streams. Since the NN is restricted to be deterministic, such differences can be attributed only to the input space, that is, to differences in $x$.

We refer to the particular method used at this step as the inspector.

For example, we can treat the stream label of an object as its class label and solve a multiclass classification problem with a decision tree as the inspector. Paths from the root of the fitted tree to its leaves then give rules that discriminate stream labels in terms of object features. However, this approach may be impractical for a large number of stream labels. The inspector may vary depending on the task at hand or on legislative requirements. In general, any method that provides an interpretable description of the objects sharing a stream label suffices. In this sense, possible inspectors include, but are not limited to:

  1. Interpretable discriminative models (e.g. logistic regression, decision tree, rule mining)

  2. Exploratory techniques (e.g. averaging object features for each stream label, describing each region with a prototype, etc.).

Depending on the particular choice of the model, we may use it in either of two ways:

  1. Self-descriptive (e.g. feature averaging) – describes the stream label without contrasting it with any other objects.

  2. Contrastive (e.g. logistic regression) – discriminates between objects of the stream label and some other group of objects.

The choice of the contrastive group (i.e. the negative class for discriminative modelling) is reminiscent of the baseline choice from gradient interpretation methods [Ancona et al.2018].
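For instance, a contrastive inspector can be a one-vs-all logistic regression per stream label. The sketch below uses the naming of the compression sketch above (`labels` is the array of stream labels) and is only one possible choice of inspector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_one_vs_all_inspectors(X, labels):
    """One interpretable discriminator per stream label: its objects vs. all the others."""
    inspectors = {}
    for lab in np.unique(labels):
        y = (labels == lab).astype(int)
        inspectors[lab] = LogisticRegression(max_iter=1000).fit(X, y)
    return inspectors
```

The signs and magnitudes of the fitted coefficients then describe, in terms of the (possibly modified) input features, what distinguishes a stream from the rest of the data, as in the MNIST and IMDB experiments below.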

3.3 Discussion

Here we justify the design of our compression step.

3.3.1 Why an Ensemble?

One could suggest distilling the neural network with just a single decision tree, taking the whole stream as the tree input and compressing it by applying the leaf-index function.

This naive approach may backfire. The first few activations are the most interpretable, since they are primitive transformations of the original feature vector; the last activations carry much more information about the prediction; and the final activation is the distillation target itself. A single distilling tree may therefore exploit only the last activations (even if we remove the final one from its input), since they are the most associated with the NN prediction.

3.3.2 Why Linked Trees?

While using independent layer-wise trees is the straightforward solution, it may cause some of the trees to learn similar dependencies. To utilize the limited number of trees more efficiently, we need to reduce this redundancy, which is what the boosting-style linkage achieves.

3.3.3 Why Logits?

While it is possible to train the ensemble with the probability prediction as the target, we find it preferable to use the raw logits for the following reasons:

  1. The aggregation operation (e.g. Softmax) is not bijective, making it possible to lose information about outputs that are equivalent up to an additive constant.

  2. Exponentiation, which is an integral part of the usual Softmax, requires many deep trees for satisfactory distillation.

In addition, distilling raw logits instead of probability predictions removes the need to adapt the procedure separately for classification and regression.

4 Extensions

In this section, we explore some properties of YASENN and describe several ways to enhance the proposed method.

4.1 Input Space Modification

The inspector treats the input space flexibly: to obtain a description, the input space can be modified to increase clarity. Instead of the input space seen by the NN, one can feed the inspector a space of aggregated features, for example a segmentation of images instead of raw pixels, or aggregated metrics for tabular data (i.e. condensing a group of features into a lower-dimensional decorrelated set or into quantiles).

Model-specific methods (e.g. those relying on gradient information) often lose this property. YASENN, while being model-specific, preserves the ability to work with modified input spaces.
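A sketch of this idea (our own code): the inspector is fitted in a condensed, decorrelated feature space (here PCA scores), while the stream labels still come from the NN activations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def inspect_in_modified_space(X_raw, labels, target_label, n_components=5):
    """Describe a stream label in terms of aggregated features rather than raw ones."""
    X_mod = PCA(n_components=n_components).fit_transform(X_raw)   # modified input space
    y = (np.asarray(labels) == target_label).astype(int)
    return LogisticRegression(max_iter=1000).fit(X_mod, y)
```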

4.2 Adaptive Explanation

Alongside modification of the input space, restriction of the input manifold is also possible. Examples include narrowing the customer set (e.g. to young and poor customers) as well as explaining adversarial examples for a particular NN [Goodfellow, Shlens, and Szegedy2014].

When data is scarce near the object of interest, we can use the procedure of adaptive explanation: successively shrink the input manifold towards the region of interest, sample new objects from this region and refit the distilling ensemble in this area.

4.3 Deliberate Interpretation

In practice, it is sometimes more important to explain most decisions extremely well than to provide a uniformly modest explanation.

There are two potential problems that may make an interpretation unreliable. We propose to use adaptive explanation to eliminate them; if it is unavailable, one can use the criteria described below.

4.3.1 High distillation error

Select a threshold on the distillation error, guided either by the absolute value of the error or by a fraction of the training set size. Remove the objects whose error violates that threshold from the training set of any inspector, and avoid providing interpretations for new objects of that kind.

Even though the compression is trained to minimize the MSE, one may be more concerned with the distillation performance in the probability space. For this reason, we recommend applying $\sigma$ (e.g. Softmax) to the ensemble predictions and estimating the discrepancy in the probability domain (e.g. with a cross-entropy loss).
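A sketch of this filter (our own code, with the fraction of discarded objects as a free parameter): apply Softmax to both the NN logits and the ensemble predictions, compute the cross-entropy between them and keep only the well-distilled objects.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def reliable_mask(nn_logits, ensemble_logits, quantile=0.95):
    """Keep objects whose distillation discrepancy (cross-entropy in the
    probability domain) is below the chosen training-set quantile."""
    p, q = softmax(nn_logits), softmax(ensemble_logits)
    ce = -(p * np.log(q + 1e-12)).sum(axis=1)
    return ce <= np.quantile(ce, quantile)
```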

4.3.2 Underpopulated discretized streams

Select the low-populated discretized streams, guided either by a threshold on the minimal population per label or by a fraction of the training set size. Avoid training an inspector for such streams and explaining NN decisions for their objects.
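A corresponding sketch, with the minimum population chosen by the user (the value below is arbitrary):

```python
import numpy as np

def populated_stream_labels(labels, min_population=100):
    """Stream labels with enough objects to train a trustworthy inspector."""
    values, counts = np.unique(labels, return_counts=True)
    return set(values[counts >= min_population])
```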

4.4 Anomaly Detection

The majority of possible discretized streams are not populated because of the nonrandom nature of the tree splits: conditionally on the previous split history, the entropy of the partitioning of objects into leaves is typically far from its maximum.

This property can be used for rough anomaly detection. In particular, if the discretized stream of an object of interest matches no discretized stream from the training set, chances are the object is anomalous and requires additional attention and/or special treatment. The converse does not hold in general: an object corresponding to a densely populated stream label can also be an anomaly.
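In code, the check is a simple set-membership test on discretized streams; the sketch follows the naming of the compression sketch above.

```python
def is_possibly_anomalous(code, train_codes):
    """Flag an object whose discretized stream never occurred in the training set.

    code        -- tuple of leaf indices for the new object
    train_codes -- set of such tuples observed on the training set
    """
    return tuple(code) not in train_codes
```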

4.5 Improved Ensemble

For the sake of better generalization we recommend extending Algorithm 1 by fitting a multiplier for each tree, as proposed in [Friedman2001].

If the combinatorial growth of the number of possible discretized streams is unwanted, it can be reduced by applying ensemble trees of increased complexity, e.g. oblique trees [Brodley and Utgoff1995, Murthy, Kasif, and Salzberg1994] or SVM trees [de Boves Harrington2015]. Oblique trees are especially relevant since a linear combination of activations in the nodes of the decision tree can additionally learn the decorrelation structure (similar to PCA). This linear combination may also depend on the previous split history, which is far more flexible than a single PCA fitted once.

5 Experiments

We apply YASENN to various feed-forward NN architectures trained on datasets from different domains. In our experiments we would like to:

  1. showcase the application of deliberate interpretation and adaptive explanation

  2. show that the objects with different stream labels are less similar than those with the same label

  3. demonstrate that discretized streams could be used to provide some useful insights about the NN decision-making process.

Several possible metrics of interpretation quality have been proposed recently, but we cannot measure the quality of our interpretation with them for the following reasons.

  • The performance of the distilling ensemble is part of an interpretability-accuracy trade-off, driven on the one hand by the boosting tree hyperparameters and on the other hand by the limited human attention budget [Ribeiro, Singh, and Guestrin2016] for the number of streams. We did not search for the optimal point of this trade-off and leave it for further research. However, due to the greedy procedure of fitting decision trees, the ensembles presented in this section provide a relevant partitioning of the input space, even though they may not be the best ensembles of a given complexity in terms of accuracy and fidelity.

  • The performance of the discriminative inspectors indicates the trustworthiness of the explanations they provide. However, the main goal of this paper is to develop a method for partitioning the stream space, so the selection of the best kind of inspector is out of scope. A researcher can select an inspector according to the unambiguity, complexity and input-shift-invariance criteria proposed in [Lakkaraju et al.2017, Kindermans et al.2017]. We report the quality of the inspectors in terms of ROC AUC to demonstrate the reliability of the explanations.

None of the existing interpretation approaches can be directly compared with YASENN: it provides explanations somewhere between the local and the global scope, describing a stream as a whole, which is not a common setting. Also, the two very different (but connected) steps of our procedure cannot be compared individually against independent baselines.

All NNs in our experiments were implemented with PyTorch [Paszke et al.2017] and trained with the Adam optimizer [Kingma and Ba2014]. To fit the distilling ensemble we used the Scikit-learn [Pedregosa et al.2011] implementation of the CART tree [Breiman et al.1984].

5.1 Gaussian Mixture

To provide intuition behind the compression step we consider a simple problem for which discretized streams can be visualized.

Figure 1: Data was sampled from a Gaussian mixture model and divided into two classes; ‘2-sigma’ ellipses of the mixture components are shown. The horizontal dashed line represents the cut to be analyzed. (a) The probability of class 1 estimated by the NN; the lighter the colour, the higher the estimate. (b) Points of the same colour have the same stream label. (c) The solid line shows the probability of class 1 along the profile cut as estimated by the NN, the dashed line as estimated by the distilling ensemble. The filled area marks the region where the cross-entropy between the predicted class distributions is above the threshold selected on the training sample; this area mostly corresponds to the low-density region of the space.

5.1.1 Setting

We sampled 7,500 objects from four Gaussians and divided them into two classes: the negative class was assigned to objects coming from the left components and the positive class to those from the right components (Figure 1(a)). We also flipped the class of 2% of randomly chosen objects.

A fully connected neural network [Linear - PReLU] (5 times) - Linear was trained for 1,000 epochs to solve this classification problem.

The depth of the ensemble trees was set to one for layers 1-3 and to two for layers 4-5; we increased the depth of the last trees to simplify the figures while preserving all the details.

5.1.2 Findings

The network learned a complicated decision boundary with artifacts at the bottom-left corner (Figure 1(a)).

The compression partitioned the space into regions of the same colour (Figure 1(b)). Each colour corresponds to a discretized stream, and its intensity is proportional to the probability predicted by the compression.

Some notable findings from Figure 1 (a,b) include the following.

  • An area with the same stream label may consist of disconnected regions, due to splits on high-level interactions of input features.

  • The ensemble partitioned the space more finely along the decision boundary, because the gradient of the NN outputs with respect to its inputs is high in these areas.

The error between the predictions of the NN and the ensemble was highest in the regions of low data density, namely in the center and in the bottom-left corner (dark triangular regions in Figure 1(b)). To illustrate this, we plot the probability predictions (Figure 1(c)) along the dashed horizontal cut from Figure 1(a,b).

Guided by the procedure of deliberate interpretation, we identified the threshold isolating a fixed fraction of the largest distillation errors on the training set. According to this threshold, the filled region in Figure 1(c) corresponds to uncertain distillation and thus to unreliable explanation. We recommend using the adaptive explanation procedure to refine the distillation in this region.

5.2 MNIST

We illustrate the applicability of our method to convolutional NNs on the MNIST [LeCun and Cortes2010] classification problem. Experiments with MNIST include two settings described below.

5.2.1 Setting 1

A convolutional network Conv2d - ReLU - Conv2d - ReLU - Conv2d - ReLU - Flatten - Linear was trained for 5 epochs.

To illustrate adaptive explanation we fitted the distilling ensemble only on objects classified by the NN as 7. The depth of the ensemble trees was set to two, and the minimum number of samples per leaf of each tree was set to 2% of the ensemble training set.

We used a logistic regression as a one-vs-all inspector. For each discretized stream we fitted the inspector to distinguish between its objects and all other MNIST objects. Objects exceeding the threshold of 5% cross-entropy distillation discrepancy were excluded from the “one” component.

Figure 2: Three discretized streams that lead to a high probability of class 7 (top row) and the signs of the coefficients of the respective inspectors (bottom row): coefficients corresponding to light-coloured pixels are positive, and those corresponding to dark-coloured pixels are negative.

5.2.2 Findings 1

The averaged images of the three most indicative discretized streams are shown in the top row of Figure 2. Each of them has a relatively high population (at least one hundred samples). One may observe that different discretized streams correspond to different digit appearances. The pixels that are important for the fitted inspectors (bottom row of Figure 2) are human-interpretable. Since the inspectors have ROC AUC of at least 0.996, the explanation is satisfactory.

5.2.3 Setting 2

The same NN as in the previous setting was used. To illustrate the exploration of frequent misleading patterns, we fitted a distilling ensemble on the objects misclassified by the NN. The depths of the ensemble trees were set to 1, 2, 2 and 2 respectively, with the minimum impurity decrease set to 1, 0.9, 0.9 and 0.9.

5.2.4 Findings 2

Figure 3: Showcase of a misleading discretized stream containing objects that look like digits 4 and 9 at the same time: (a) averaged picture, (b) misleading 4, (c) misleading 9.

Although the NN performs well in terms of test-time accuracy and little data is misclassified, we found some persistent patterns. One of the discretized streams, shown in Figure 3, contains objects that look like digits 4 and 9 at the same time. Figure 3(b) depicts an image for which the NN splits its predicted probability between classes 9 and 4; Figure 3(c) depicts another such image, whose ground-truth label is 9.

5.3 IMDB

Following [Chen et al.2018], we verify our method on the sentiment classification problem on the IMDB dataset [Maas et al.2011]. Here we demonstrate that YASENN produces meaningful insights about the nature of network streams.

5.3.1 Setting

The dataset contains bag-of-words representations of movie reviews. Rare and overly frequent terms were removed during preprocessing using term-frequency thresholds. The resulting number of features is about 6,000, and the training data consists of 12,500 positive (class 1) and 12,500 negative (class 0) examples.

A feed-forward network Linear - Dropout - LeakyReLU - Linear - Dropout - LeakyReLU - Linear was trained for 10 epochs with batch size 64.

The layer-wise trees were fitted with depths of 1, 1 and 2 respectively, chosen for good stream quality.

We used a regularized logistic regression as the inspector in all the cases described below.

Figure 4: Features with the top-10 highest absolute coefficient values from one-vs-all logistic regressions for two discretized streams leading to a high probability of the positive class: (a) presence of positive words, (b) absence of negative words.

5.3.2 Findings

We obtained 16 discretized streams. To illustrate how different they can be in spite of close predictions, we selected two streams for which the ensemble prediction of the positive class was close to 1. The NN and the ensemble predict the same major class for 100% and 94% of the training objects of the first and second discretized streams respectively, and the population of each stream exceeds 1,000.

To build intuition about which words characterize them, two one-vs-all inspectors were fitted for these streams (with ROC AUC 0.95 and 0.89 respectively), and we checked which features (i.e. words) were the most important for each inspector.

Figure 4 presents 10 of the most important words for the selected streams. Note that the absence of the word ‘bad’ is the most decisive factor for both streams, while the other important words differ not only in coefficient value but also in intent: the presence of positive words leads to a high probability of the first discretized stream (a), and the absence of negative words leads to a high probability of the second stream (b). This is a human-interpretable difference between them: the first stream contains explicitly positive reviews, while the second includes reviews that simply do not contain negative words.

This provides an insight that the network decision in the second stream may not be completely trustworthy since the NN relies mostly on the absence of negative terms.

6 Related Works

This section provides the connection between YASENN and existing interpretation methods.

Our method is closely connected to TREPAN [Craven and Shavlik1996], which extracts rules from NNs with a special kind of decision tree. Like TREPAN, we use trees to interpret the decision-making procedure; unlike it, we use the internal structure of the NN to gain more knowledge.

[Guidotti et al.2018a] proposed LORE method for local rule extraction. LORE uses a special procedure to sample more data near the object of interest, which looks similar to our adaptive explanation extension.

Like [Tan et al.2018, Tan et al.2017, Che et al.2016], we distill a NN with a complex unexplainable model. But unlike those works, we interpret the extracted partitioning instead of examining the student model.

Like a prototype classifier network [Li et al.2018], we partition the input space with respect to the intrinsic decision-making process. However, in general YASENN does not return a prototype for each discretized stream; one can be produced in a way appropriate for the application area. Also, we do not restrict the NN to a special architecture.

The most relevant papers for our method are those that deal with NN activations. Here we describe only the crucial discrepancies between them and YASENN.

Unlike NeuroRule [Lu, Setiono, and Liu1995], we work with activations in a more general way: we take into account the whole layer rather than processing neurons one by one. In addition, YASENN was designed for deep NNs and benefits from their depth.

Unlike InterpNET [Barratt2017], we explore activations in a layer-wise manner instead of considering them all at once.

Unlike DeepkNN [Papernot and McDaniel2018], which is the closest in spirit to our approach, we preserve the sequential essence of neuron activations and do not make any assumptions about the distance function in the layer activations space.

7 Conclusion and Future Work

We have presented the YASENN method, which implements the approach to interpreting NNs proposed in this paper. The method has the following benefits:

  1. it inherits from distillation the applicability to a modified input manifold and the resistance to overfitting to noise

  2. our method is not constrained to a single type of inspector

  3. YASENN gains information from intrinsic network transformations

  4. deliberate interpretation is available for the areas which require strong control

  5. the proposed method uses decision trees in the compression step and therefore is deterministic and has low computational complexity.

We have empirically tested YASENN on several diverse data types and showcased its ability to find useful information about the NN.

Despite these nice features, YASENN also has limitations. First, it is sensitive to the combinatorial growth of the number of possible discretized streams. Second, the boosting tree hyperparameters have to be determined.

As future research, we plan to study how to apply YASENN to recurrent neural networks. We also plan to enhance YASENN with gradient information and more powerful tree-based algorithms. Finally, we are working on equipping our method with suitable distance functions in the space of discretized streams.

References

  • [Adebayo et al.2018] Adebayo, J.; Gilmer, J.; Goodfellow, I.; and Kim, B. 2018. Local explanation methods for deep neural networks lack sensitivity to parameter values.
  • [Ancona et al.2018] Ancona, M.; Ceolini, E.; Öztireli, C.; and Gross, M. 2018. Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations.
  • [Ba and Caruana2014] Ba, J., and Caruana, R. 2014. Do deep nets really need to be deep? In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27. Curran Associates, Inc. 2654–2662.
  • [Barratt2017] Barratt, S. 2017. InterpNET: Neural Introspection for Interpretable Deep Learning. ArXiv e-prints.
  • [Bien and Tibshirani2011] Bien, J., and Tibshirani, R. 2011. Prototype selection for interpretable classification. Ann. Appl. Stat. 5(4):2403–2424.
  • [Breiman et al.1984] Breiman, L.; Friedman, J. H.; Olshen, R. A.; and Stone, C. J. 1984. Classification and Regression Trees. Statistics/Probability Series. Belmont, California, U.S.A.: Wadsworth Publishing Company.
  • [Brodley and Utgoff1995] Brodley, C. E., and Utgoff, P. E. 1995. Multivariate decision trees. Machine Learning 19(1):45–77.
  • [Carlini and Wagner2017] Carlini, N., and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), 39–57.
  • [Che et al.2016] Che, Z.; Purushotham, S.; Khemani, R.; and Liu, Y. 2016. Interpretable Deep Models for ICU Outcome Prediction. AMIA Annu Symp Proc 2016:371–380.
  • [Chen et al.2018] Chen, J.; Song, L.; Wainwright, M.; and Jordan, M. 2018. Learning to explain: An information-theoretic perspective on model interpretation. In Dy, J., and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 883–892. Stockholmsmässan, Stockholm Sweden: PMLR.
  • [Craven and Shavlik1996] Craven, M., and Shavlik, J. W. 1996. Extracting tree-structured representations of trained networks. In Touretzky, D. S.; Mozer, M. C.; and Hasselmo, M. E., eds., Advances in Neural Information Processing Systems 8. MIT Press. 24–30.
  • [de Boves Harrington2015] de Boves Harrington, P. 2015. Support vector machine classification trees. Analytical chemistry 87 21:11065–71.
  • [Friedman2001] Friedman, J. H. 2001. Greedy function approximation: A gradient boosting machine. Ann. Statist. 29(5):1189–1232.
  • [Friedman2002] Friedman, J. H. 2002. Stochastic gradient boosting. Comput. Stat. Data Anal. 38(4):367–378.
  • [Gatys, Ecker, and Bethge2016] Gatys, L.; Ecker, A.; and Bethge, M. 2016. A neural algorithm of artistic style. Journal of Vision 16(12).
  • [Goodfellow, Shlens, and Szegedy2014] Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and Harnessing Adversarial Examples. ArXiv e-prints.
  • [Guidotti et al.2018a] Guidotti, R.; Monreale, A.; Ruggieri, S.; Pedreschi, D.; Turini, F.; and Giannotti, F. 2018a. Local rule-based explanations of black box decision systems. CoRR abs/1805.10820.
  • [Guidotti et al.2018b] Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; and Pedreschi, D. 2018b. A survey of methods for explaining black box models. ACM Comput. Surv. 51(5):93:1–93:42.
  • [Hinton, Vinyals, and Dean2015] Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. ArXiv e-prints.
  • [Kim, Rudin, and Shah2014] Kim, B.; Rudin, C.; and Shah, J. A. 2014. The bayesian case model: A generative approach for case-based reasoning and prototype classification. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27. Curran Associates, Inc. 1952–1960.
  • [Kindermans et al.2017] Kindermans, P.-J.; Hooker, S.; Adebayo, J.; Alber, M.; Schütt, K. T.; Dähne, S.; Erhan, D.; and Kim, B. 2017. The (un)reliability of saliency methods. CoRR abs/1711.00867.
  • [Kingma and Ba2014] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980.
  • [Lakkaraju et al.2017] Lakkaraju, H.; Kamar, E.; Caruana, R.; and Leskovec, J. 2017. Interpretable & explorable approximations of black box models. CoRR abs/1707.01154.
  • [LeCun and Cortes2010] LeCun, Y., and Cortes, C. 2010. MNIST handwritten digit database.
  • [Li et al.2018] Li, O.; Liu, H.; Chen, C.; and Rudin, C. 2018. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In AAAI Conference on Artificial Intelligence.
  • [Lipton2018] Lipton, Z. C. 2018. The mythos of model interpretability. Queue 16(3):30:31–30:57.
  • [Lu, Setiono, and Liu1995] Lu, H.; Setiono, R.; and Liu, H. 1995. Neurorule: A connectionist approach to data mining. In Proceedings of the 21th International Conference on Very Large Data Bases, VLDB ’95, 478–489. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
  • [Maas et al.2011] Maas, A. L.; Daly, R. E.; Pham, P. T.; Huang, D.; Ng, A. Y.; and Potts, C. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 142–150. Portland, Oregon, USA: Association for Computational Linguistics.
  • [Morcos et al.2018] Morcos, A. S.; Barrett, D. G.; Rabinowitz, N. C.; and Botvinick, M. 2018. On the importance of single directions for generalization. In International Conference on Learning Representations.
  • [Morcos, Raghu, and Bengio2018] Morcos, A. S.; Raghu, M.; and Bengio, S. 2018. Insights on representational similarity in neural networks with canonical correlation. CoRR abs/1806.05759.
  • [Murthy, Kasif, and Salzberg1994] Murthy, S. K.; Kasif, S.; and Salzberg, S. 1994. A system for induction of oblique decision trees. J. Artif. Intell. Res. 2:1–32.
  • [Nugent and Cunningham2005] Nugent, C., and Cunningham, P. 2005. A case-based explanation system for black-box systems. Artificial Intelligence Review 24(2):163–178.
  • [Papernot and McDaniel2018] Papernot, N., and McDaniel, P. 2018. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning.
  • [Paszke et al.2017] Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in pytorch.
  • [Pedregosa et al.2011] Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825–2830.
  • [Ribeiro, Singh, and Guestrin2016] Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. CoRR abs/1602.04938.
  • [Tan et al.2017] Tan, S.; Caruana, R.; Hooker, G.; and Lou, Y. 2017. Distill-and-compare: Auditing black-box models using transparent model distillation.
  • [Tan et al.2018] Tan, S.; Caruana, R.; Hooker, G.; and Gordo, A. 2018. Transparent model distillation.
  • [Zeiler and Fergus2014] Zeiler, M. D., and Fergus, R. 2014. Visualizing and understanding convolutional networks. In Fleet, D.; Pajdla, T.; Schiele, B.; and Tuytelaars, T., eds., Computer Vision – ECCV 2014, 818–833. Cham: Springer International Publishing.
  • [Zhang et al.2018] Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [Zhang, Nian Wu, and Zhu2018] Zhang, Q.; Nian Wu, Y.; and Zhu, S.-C. 2018. Interpretable convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).