Understanding and interpreting the behavior of a trained neural network has gained increasing attention in the machine learning (ML) community. A popular approach is to interpret and visualize the behavior of specific neurons (often neurons randomly selected from different layers) in a trained network (olah2017feature), and there are several ways of doing this. For example, one could show which training data leads to the most positive or negative output from a neuron, or one could perform a Deep Dream style visualization to see how to modify the input to greedily activate the neuron (olah2017feature). Such approaches are widely used and can give interesting insights into the function of the network. However, analysis of individual neurons is often ad hoc: there is a large number of neurons in the network, with many distinct behaviors, and it is typically not clear which ones to investigate. Moreover, it is not clear how relevant the behavior of specific neurons is to the function of the entire network.
In this paper, we propose a new framework to address these limitations by systematically and efficiently identifying the neurons that are the most important contributors to the network's function, combined with methods to interpret these neurons. The basis of this framework is a new algorithm we propose, Neuron Shapley, for quantifying the importance of each neuron in a trained network while accounting for complex interactions between neurons. Interestingly, for several standard image recognition tasks and architectures, a small number of neurons (fewer than 30 filters) are crucially necessary for the model to achieve good prediction accuracy. Interpretation of these critical neurons provides more systematic insights into how the network functions.
Neuron Shapley is a very flexible framework that can be applied to identify responsible neurons in different tasks. When applied to a facial recognition network that makes biased predictions for black women, Neuron Shapley identifies the (few) neurons that are responsible for this disparity. It also identifies neurons that are the most responsible for vulnerabilities to adversarial attacks. In addition to facilitating interpretation, this opens up an interesting opportunity to use Neuron Shapley for fast model repair without needing to retrain. For example, our experiments show that simply zeroing out the few “culprit” neurons reduces disparity without much degradation to the overall accuracy.
We summarize the contributions of this work here. Conceptual: we develop the Neuron Shapley framework to quantify the contribution of each neuron to the network's performance. Algorithmic: we introduce a new multi-armed-bandit-based algorithm that efficiently estimates Neuron Shapley values in large networks. Empirical: our systematic experiments uncover several interesting findings, including the phenomenon that a small number of neurons are critical to the performance of convnets on common image classification tasks. This facilitates both model interpretation and repair.
Tracing a trained deep neural network’s behavior to its individual neurons (filters when the network is convolutional) is an important area of research in the interpretability literature. One approach is visualization by identifying human-understandable model inputs, which could be synthetic, that activate a specific set of neurons (simonyan2013deep; szegedy2013intriguing; szegedy2015going; erhan2009visualizing; mordvintsev2015inceptionism; mahendran2015understanding; nguyen2016multifaceted; olah2017feature). A related approach is to search through a database (typically of images) and find the representative examples that fire a chosen neuron (bau2017network; kim2017interpretability; ghorbani2019towards). It’s also possible to track the propagation of the signal in the network for a specific input example and quantify a given neuron’s role, though this is typically used to identify important input features (montavon2017explaining; binder2016layer; bach2015pixel; shrikumar2017learning).
The Shapley value (shapley1953value) has been studied extensively in cooperative game theory and economics (shapley1988shapley). It has also been explored for pruning relatively small models (stier2018analysing). Recent works have studied applications of the Shapley value to ML interpretation. In those works, the Shapley value is used to quantify the contribution of individual data points to the model's training (ghorbani2019data; jia2019towards), or to quantify which features in the input data are salient to the network's prediction. These data/feature-centric applications of the Shapley value are very different from our approach of measuring the importance of neurons (instead of data), and they provide interpretations orthogonal to a neuron-based one. Moreover, we introduce an adaptive multi-armed bandit algorithm for efficient estimation of the Shapley value, which is novel to the best of our knowledge. This algorithm enables us, for the first time, to compute Neuron Shapley values in large-scale state-of-the-art convolutional networks. Model repair has recently been studied in the specific case of fairness applications, but these approaches typically involve some retraining of the network (kim2019multiaccuracy). Repair through Neuron Shapley has the benefit of not requiring retraining, which is faster and especially useful when the end-user of the network has limited access to training data.
2 Shapley Value for Neurons
We work with a trained ML model $M$ that is composed of a set of $n$ individual elements, indexed by $N = \{1, \dots, n\}$. For example, $M$ could be a fully convolutional neural network with $L$ layers, each with $c_\ell$ filters; the elements are then its $n = \sum_{\ell=1}^{L} c_\ell$ filters. We focus on convolutional networks in this paper because most interpretation work has been done on images; the Neuron Shapley approach also applies to other network architectures as well as to other ML models such as random forests. The trained model is evaluated on a specific performance metric $V$, which can be accuracy, loss, disparity on different racial groups, etc. The performance of the full model is denoted by $V(N)$. Our goal is to assign responsibility to each neuron $i \in N$. Mathematically, we formulate this task as partitioning the overall performance metric among the model's elements: $\phi_i$ is element $i$'s contribution towards $V(N)$, such that $\sum_{i=1}^{n} \phi_i = V(N)$. For simplicity, we will write $V$ to denote $V(N)$.
In order to evaluate the contribution of specific elements or neurons in the model, we would like to "zero out" these elements. In a convnet, this is done by fixing the output of the filter to its mean output over a set of validation images. This kills the flow of information through that filter while keeping the mean statistics of the propagated signal from that layer intact (which would not be the case if the output were replaced by all zeros). We are interested in subsets $S \subseteq N$ (e.g. a subnetwork), and we write $V(S)$ to denote the performance of the model when the elements not in $S$ are zeroed out. Note that we do not retrain the model after elements are zeroed out; all of the weights of the network stay fixed. We simply take the modified network and evaluate its test performance $V(S)$. The reason for this is that even fine-tuning the network for each $S$ would be prohibitively expensive computationally.
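To make the ablation concrete, here is a minimal pure-Python sketch of mean ablation. The function name and the flat per-filter representation are illustrative assumptions; in a real convnet each filter produces a spatial activation map and the mean is taken over validation images.

```python
def mean_ablate(activations, filter_idx, validation_activations):
    """Replace one filter's output with its mean over validation images.

    activations: per-filter activation values for the current input.
    validation_activations: list over validation images of per-filter activations.
    """
    n_val = len(validation_activations)
    mean_out = sum(img[filter_idx] for img in validation_activations) / n_val
    ablated = list(activations)
    # Information stops flowing through this filter, but the mean statistics
    # of the signal passed to the next layer are preserved.
    ablated[filter_idx] = mean_out
    return ablated
```

Replacing the output with the validation mean (rather than zero) keeps the downstream layers operating in the input regime they were trained on.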
Desirable properties for neuron valuation
For a given model and performance metric, there are many ways to partition $V(N)$ among the elements in $N$. The task is further complicated by the fact that different neurons in a network can have complex interactions. We take an axiomatic approach and use the Shapley value to compute $\phi_i$, because the Shapley value is the unique partition of $V(N)$ that satisfies several desirable properties. We list these properties below:
Zero contribution One decision to make is how to handle neurons that have no contribution. We say that a neuron $i$ has no contribution if $V(S \cup \{i\}) = V(S)$ for every $S \subseteq N \setminus \{i\}$. In words, this means that it does not change the performance when added to any subset of the other neurons in the network. For such null neurons, the valuation should be $\phi_i = 0$. One simple example is a neuron with all-zero parameters.
Symmetric elements Two neurons should be assigned equal contributions if they are exchangeable under every possible setting: $\phi_i = \phi_j$ if $V(S \cup \{i\}) = V(S \cup \{j\})$ for every $S \subseteq N \setminus \{i, j\}$. Intuitively, if adding $i$ or $j$ to any subnetwork produces the same performance, then they should have the same value.
Additivity in Performance Metric In many practical settings, there are two or more performance metrics for a given model. For example, $V_1$ measures its accuracy on one test point and $V_2$ its accuracy on a second test point. A natural way to measure the overall performance of the model is a linear combination of such metrics, e.g. $V = V_1 + V_2$. We would like each neuron's overall contribution to follow the same linear relationship, i.e. $\phi_i(V) = \phi_i(V_1) + \phi_i(V_2)$. A real-world example of additivity's importance is a setting where computing the overall performance directly is not an option due to privacy concerns. Suppose we provide a healthcare ML model to several hospitals. The hospitals are not allowed to share their test data with us or with each other; they can only compute each neuron's contribution to their own local task and report the results back to us. Additivity allows us to gather the local contributions and aggregate them.
The following contribution formula uniquely satisfies all these properties (while satisfying $\sum_{i=1}^{n} \phi_i = V(N)$):

$$\phi_i = \frac{1}{n} \sum_{S \subseteq N \setminus \{i\}} \frac{V(S \cup \{i\}) - V(S)}{\binom{n-1}{|S|}} \qquad (1)$$
The formula says that a neuron's contribution is its marginal contribution to the performance of every subnetwork $S$ of the original model, normalized by the number $\binom{n-1}{|S|}$ of subnetworks with the same cardinality $|S|$. This formula takes into account the interactions between neurons. As a simple example, suppose there are two neurons that improve performance only if they are both present or both absent and harm performance if only one is present. Eqn. 1 considers all these possible settings to compute the contribution of each neuron. This, to our knowledge, is one of the few methods that take such interactions into account.
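For intuition, Eqn. 1 can be evaluated exactly by brute force on toy games. The sketch below uses a simplified variant of the two-neuron interaction above (here `v`, a hypothetical performance metric, rewards only the full pair):

```python
from itertools import combinations
from math import comb

def shapley(n, v):
    """Exact Neuron Shapley via Eqn. 1: phi_i averages element i's marginal
    contribution over all subsets S of the remaining elements, with each
    cardinality |S| weighted equally."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                marginal = v(set(S) | {i}) - v(set(S))
                phi[i] += marginal / (n * comb(n - 1, size))
    return phi

# Two neurons that only help when both are present.
v = lambda S: 1.0 if S == {0, 1} else 0.0
print(shapley(2, v))  # → [0.5, 0.5]: the credit is shared equally
```

Note that the values sum to $V(N) - V(\emptyset)$, the efficiency property, by construction.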
Eqn. 1 is equivalent to the Shapley value (shapley1953value; shapley1988shapley) originally defined for cooperative games. In a cooperative game, $n$ players are related to each other through a score function $V(\cdot)$, where $V(S)$ is the reward if only the players in $S$ participate. The Shapley value was introduced as an equitable way of dividing the group reward among the players, where equitable means satisfying the aforementioned properties. In our context, we refer to $\phi_i$ in Eqn. 1 as the Neuron Shapley value. It is possible to make a direct mapping between our setting and a cooperative game, thereby proving the uniqueness of Neuron Shapley. The proof is discussed in Appendix A.
3 Estimating Neuron Shapley
While Neuron Shapley has good properties, it is expensive to compute. Evaluating the Shapley value in Eqn. 1 exactly requires an exponential number of operations, since there are exponentially many subsets $S$. In what follows, we discuss several techniques for efficiently approximating the Neuron Shapley values. We first rephrase the computation of Shapley values as a statistical estimation problem. We then introduce approximation methods that result in orders-of-magnitude speed-ups.
For a model with $n$ elements, the Shapley value of the $i$'th component can be written as (ghorbani2019data):

$$\phi_i = \mathbb{E}_{\pi \sim \Pi}\left[ V(S^i_\pi \cup \{i\}) - V(S^i_\pi) \right] \qquad (2)$$

where $\Pi$ is the uniform distribution over the $n!$ permutations of the model elements and $S^i_\pi$ is the set of elements that appear before the $i$'th one in a given permutation $\pi$ (the empty set if $i$ is the first element in $\pi$).
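The permutation form of Eqn. 2 can be sketched as a simple sampler (an illustrative sketch; `v` maps a set of elements to the model's performance, and each `v` call stands in for a forward pass over the evaluation set):

```python
import random
from statistics import mean

def mc_shapley(n, v, n_perms=2000, seed=0):
    """Monte Carlo estimate of Eqn. 2: sample uniform permutations and
    average each element's marginal contribution over the elements that
    precede it in the permutation."""
    rng = random.Random(seed)
    marginals = [[] for _ in range(n)]
    players = list(range(n))
    for _ in range(n_perms):
        rng.shuffle(players)
        S = set()
        prev = v(S)              # performance of the empty subnetwork
        for i in players:
            S = S | {i}
            cur = v(S)           # one evaluation per added element
            marginals[i].append(cur - prev)
            prev = cur
    return [mean(m) for m in marginals]
```

A useful sanity check: within each permutation the marginals telescope, so the estimates always sum exactly to $V(N) - V(\emptyset)$.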
Following Eqn. 2, approximating the Shapley value $\phi_i$ is equivalent to estimating the mean of a random variable. Therefore, for an arbitrary error bound, Monte Carlo sampling gives an unbiased approximation of $\phi_i$. Error analysis of this method of approximation has been studied in the literature (mann196large; castro2009polynomial; maleki2013bounding).
For a neural network with $n$ elements (i.e. filters), $V$ is the model's performance on a finite test set of data points. As the number of retained elements gets smaller ($|S|$ small), the network's performance is expected to drop. For a small enough $|S|$, the model's performance degrades to zero or negligible performance, since many connections in the network have been removed (see Appendix B for examples). Utilizing this fact, in a sampled permutation $\pi$ we can abstain from computing the marginal contribution of elements appearing early on: $V(S^i_\pi \cup \{i\}) - V(S^i_\pi) \approx 0$ when $|S^i_\pi|$ is small. In this work, we define a "performance threshold" below which the model is considered dead. This truncation leads to substantial computational savings (close to one order of magnitude).
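A single permutation scan with truncation might look like the following sketch. For simplicity it truncates by position rather than by the performance threshold described above; `alive_fraction` is an illustrative knob, not a parameter from the paper.

```python
def truncated_marginals(perm, v, alive_fraction=0.6):
    """One permutation scan with early truncation (simplified sketch).

    While fewer than `alive_fraction` of the filters are present, the model
    is assumed dead: the marginal contribution is recorded as zero and no
    forward pass (call to v) is made.
    """
    n = len(perm)
    marginals = {}
    S, prev = set(), 0.0   # a dead model's performance is approximated by 0
    for pos, i in enumerate(perm, start=1):
        S.add(i)
        if pos < alive_fraction * n:
            marginals[i] = 0.0        # truncated: forward pass skipped
        else:
            cur = v(S)
            marginals[i] = cur - prev
            prev = cur
    return marginals
```

The saving comes from skipping the `v` calls for the early positions, which dominate the cost when most of a permutation's prefixes correspond to dead subnetworks.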
In most of our applications, we are more interested in accurately identifying the important contributing neurons in the model than in measuring the exact Shapley value of every neuron. This is particularly relevant since, as we will see, there is typically only a sparse set of influential neurons and most of the values are close to zero. Algorithmically, our problem reduces to finding the subset of $k$ bounded random variables with the largest expected values among a set of $n$ bounded random variables (the cardinality $k$ of this subset can be specified by the user or chosen adaptively). This can be formulated as a multi-armed-bandit (MAB) problem, which has been successfully used in other settings to speed up computation (bagaria2018adaptive; jamieson2016non; li2017hyperband; zhang2019adaptive).
The MAB component of the algorithm is described in Alg. 1; we explain the intuition here. For each neuron in the model, we keep track of a lower and an upper confidence bound (CB) on its value $\phi_i$, which come from standard estimation bounds. The goal is to confidently detect the top-$k$ neurons. Therefore, at each iteration, instead of sampling the marginal contribution of all neurons (as in standard Monte Carlo approximation), we only sample for a subset of neurons, i.e. the neurons for which the $k$'th largest estimated value at that iteration lies between their lower and upper bounds. If no neurons satisfy the sampling condition, the top-$k$ neurons are confidently separated (up to an error tolerance) from the rest. We show in Appendix E that adaptive sampling results in a nearly one order of magnitude speedup. Although the algorithm adaptively samples to find the top-$k$ neurons, the estimated values of the other neurons are in practice very close to the non-adaptive case (as measured by Spearman's rank correlation; see Appendix E).
Combining our three approximation methods (Monte Carlo permutation sampling, truncation, and adaptive MAB sampling), we introduce a novel algorithm that we refer to as "Truncated Multi-Armed-Bandit Shapley" (TMAB-Shapley). Details are in Alg. 1.
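The adaptive-sampling core can be sketched as follows. This is a simplified illustration, not Alg. 1 itself: it uses Hoeffding confidence bounds instead of the empirical Bernstein bounds used in the paper, omits truncation, and `tol`, `delta`, and `R` are illustrative parameters.

```python
import math
import random

def tmab_shapley_topk(n, v, k, tol=0.05, delta=0.05, R=1.0, max_iters=5000, seed=0):
    """Adaptive top-k Shapley estimation (simplified sketch of Alg. 1's
    bandit component). Only neurons whose confidence interval still
    straddles the k'th largest estimate keep being sampled."""
    rng = random.Random(seed)
    sums, counts = [0.0] * n, [0] * n
    est = [0.0] * n
    active = set(range(n))            # neurons not yet confidently placed
    base = list(range(n))
    for _ in range(max_iters):
        if not active:
            break                     # top-k confidently separated
        perm = base[:]
        rng.shuffle(perm)
        pos = {p: j for j, p in enumerate(perm)}
        for i in list(active):        # sample marginals only for undecided neurons
            S = set(perm[:pos[i]])    # elements preceding i in the permutation
            sums[i] += v(S | {i}) - v(S)
            counts[i] += 1
        est = [sums[i] / counts[i] if counts[i] else 0.0 for i in range(n)]
        kth = sorted(est, reverse=True)[k - 1]
        cb = [R * math.sqrt(math.log(2 * n * max_iters / delta) / (2 * counts[i]))
              if counts[i] else float("inf") for i in range(n)]
        # keep sampling a neuron while the k'th largest value still lies
        # strictly inside its (tolerance-shrunk) confidence interval
        active = {i for i in range(n)
                  if est[i] - cb[i] + tol < kth < est[i] + cb[i] - tol}
    return est
```

Neurons whose values are far from the top-$k$ boundary drop out of `active` quickly, so most of the sampling budget concentrates on the hard-to-rank neurons near the boundary.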
4 Experiments & Applications
We apply Neuron Shapley to two widely used deep convolutional neural network architectures. The first is the Inception-v3 (szegedy2016inception) architecture trained on the ILSVRC2012 (a.k.a. ImageNet) dataset (russakovsky2015imagenet). We use Alg. 1 to compute the Neuron Shapley value for each of the 17216 filters preceding the logit layer in this network. We divide the released ImageNet validation set into two parts (25000 images each) to serve as validation and test sets. The second model is the SqueezeNet (iandola2016squeezenet) architecture that we trained on the CelebA (liu2018large) dataset to detect gender from face images. This model has a total of 2976 filters. In all experiments, we run Alg. 1 to detect the top-$k$ important filters; the results are robust to the choice of $k$. We use empirical Bernstein bounds (bernstein1; bernstein2) to compute the confidence bounds.
Neuron Shapley identifies a small number of critical neurons
We apply Alg. 1 to compute the Neuron Shapley value for every Inception-v3 filter, using the overall multi-class prediction accuracy of the network (on a randomly sampled batch of images) as the performance metric $V$. Interestingly, the Neuron Shapley values are very sparse. We can evaluate the impact of the neurons with the largest Shapley values by zeroing them out in Inception-v3. Removing just the top 10 filters drops the overall test accuracy of Inception-v3 from 74% to 38%; removing the top 20 drops it to 8%. It is striking that a handful of neurons have such a strong effect on the network's performance. In contrast, removing 20 random neurons, or the 20 neurons with the largest average activation, does not significantly reduce the accuracy of Inception-v3. A related phenomenon is that many connections/weights in a network can be removed without damaging performance (frankle2018lottery). The difference is that work on pruning and sparsity has primarily focused on removing connections, whereas our experiment removes entire neurons.
The sparse set of critical filters identified by high Shapley values is a natural set of neurons to visualize and interpret. In Fig. 1, we visualize the filter with the highest Shapley value in 7 of the layers of Inception-v3. We provide two types of visualizations: 1) Deep Dream images (first column of each block), where Deep Dream uses gradient ascent to directly optimize a filter's response while adding small transformations (jittering, blurring, etc.) at each step (olah2017feature); and 2) the five validation images that result in the most positive or most negative activation of the filter. The critical neurons in the earlier layers capture color (white vs. black) and texture (vertical stripes vs. smooth). The critical neurons in later layers capture more complex concepts, like the colorfulness or crowdedness of the image, consistent with previous findings that used different approaches (kim2017interpretability; bau2017network; ghorbani2019towards). The final component of Fig. 1 shows how the top 100 filters with the highest Shapley values are distributed across the layers of Inception-v3. Overall, more of these filters tend to be in the early layers, consistent with the notion that initial layers learn general concepts while deeper layers are more class-specific. More results are discussed in Appendix C. We report similar experimental results for the SqueezeNet model in Appendix D.
Class specific critical neurons
We can dive more deeply to investigate which neurons are the most responsible for class-specific predictions. For a given class (e.g. zebra, dumbbell), we use the class recall as the performance metric and apply Alg. 1 to detect the most important neurons for detecting that class. We then inspect filters with the largest contribution.
As before, Neuron Shapley discovers that a small number of filters are critical for class-specific predictions. We provide results for four representative classes (carousel, zebra, police van, and dumbbell) in Fig. 2. For each of these classes, removing the top filters with the highest class-specific Shapley values leads to a dramatic decline in the network's test accuracy for that class (Fig. 2a). For comparison, we also applied popular alternative approaches for identifying important neurons: filter size (norm of the weights), norm of the filter's response, and leave-one-out impact (LOO; the change in the performance metric if just that filter is zeroed out), removing the top neurons identified by each. Overall, Neuron Shapley is more effective at finding critical neurons. We further report the network's overall accuracy across all classes; removing class-specific critical neurons does not affect the overall performance.
Fig. 2 also visualizes two of the most critical neurons for each of the four classes; the Deep Dream image and the top five highest-activating training images are shown for each filter. For the zebra class, a filter responding to diagonal stripes is critical. For dumbbell, one critical neuron clearly captures dumbbell-like shapes directly; a second captures people with arms visible, likely because this is highly correlated with dumbbells in natural images (as observed in previous literature (kim2017interpretability; ghorbani2019towards)). It is also interesting that most of the class-specific critical filters are located in the deeper layers (Fig. 2(c)), the opposite of the distribution of the generally important neurons.
Neuron Shapley is a flexible framework that can be used to identify neurons that are responsible for many types of network behavior beyond the standard prediction accuracy. We illustrate its usage on two important applications in fairness and adversarial attacks.
Discovering unfair filters
It has been shown that gender detection models exhibit biases against minorities (buolamwini2018ppb): for example, they are less accurate on female faces, and especially on black female faces. We took the SqueezeNet model trained on CelebA faces and evaluated its performance on the PPB dataset, which has an equal representation of four gender-race subgroups (buolamwini2018ppb). Following previous work (kim2019multiaccuracy), we use the average recall across the subgroups as a measure of fairness, and we use this metric as the $V$ for evaluating Neuron Shapley. Alg. 1 is used to compute the Shapley value of each filter in SqueezeNet. In this case, we are most interested in the filters with the most negative values, as they decrease fairness and contribute to the disparity. Zeroing out these "culprit" filters greatly increases the gender classification accuracy on black female (BF) faces (Fig. 3), and also leads to a substantial improvement for white females (WF), increasing the average accuracy on PPB. The performance on the original CelebA data drops only slightly from this modification. This suggests the interesting potential of using Neuron Shapley for rapid model repair: zeroing out filters is much faster and easier than retraining the network, and it does not require an extensive training set.
Identifying filters vulnerable to adversaries.
Deep neural networks are vulnerable to attacks. For example, an adversary can arbitrarily change the output of the model by adding imperceptible perturbations to the input image (goodfellow2014explaining; ghorbani2019interpretation). We apply Neuron Shapley to identify filters that are most vulnerable to attacks.
We take the Inception-v3 model trained on ImageNet and consider an adversary whose goal is to perturb each validation image so that it is misclassified as a randomly chosen class by Inception-v3. We use the iterative PGD attack (adversarial; madry2017towards), one of the most common adversarial attack methods, and cap the perturbation norm (for pixel values between 0 and 255) at the maximum perturbation size standard in the literature (tramer2017ensemble).
The performance metric $V$ is the success rate of the adversary in fooling the network into predicting the randomly chosen labels on the validation data, and the Neuron Shapley values are computed for each filter with respect to this $V$. A high value thus indicates that the filter is more targeted and leveraged by the adversary to produce misclassifications. The rank correlation between these adversarial Shapley values and the original prediction-accuracy Shapley values is just 0.3, suggesting that the network's filters interact differently on adversarially perturbed images than on clean images.
We zero out the filters with the highest adversarial Shapley values (the most vulnerable filters). Removing just 16 filters drops the adversary's attack success rate from nearly 100% to nearly zero (0.1%), while the model's performance on clean images drops much more moderately. We note that while the modified network is robust to the original adversary, it is still vulnerable to a new adversary specifically designed to attack the modified network; this requires a white-box adversary who knows exactly which few neurons were zeroed out. We also investigated several black-box adversaries, i.e. attacks that are developed on other datasets and that are not used to compute the Neuron Shapley values. The modified network is substantially more robust against these black-box adversaries: their attack success rates drop substantially on average. This suggests that Neuron Shapley can offer a fast mechanism to repair models against black-box attacks without needing to retrain.
Standard convnets, like the ones we use here, are typically trained without dropout regularization on the conv layers (szegedy2016inception; szegedy2017inception_resnet; simonyan2014very). We hypothesize that adding dropout could substantially change the Shapley values because it encourages filters to be more independent. To test this, we train a second SqueezeNet on CelebA using (filter) dropout throughout training. This model's test accuracy is slightly lower than the non-dropout model's. We compute the Neuron Shapley values for the new model; the values with dropout are more concentrated around zero (Fig. 4(a)). As expected, the dropout SqueezeNet is also more robust to the removal of high-value filters (Fig. 4), presumably because dropout encourages more redundancy. Interestingly, even with dropout, removing just the top 30 filters completely diminishes the network's performance, suggesting that there is still a small number of critical neurons.
We introduced Neuron Shapley, a post-training method to quantify each individual neuron's contribution to the network's performance. Neuron Shapley is theoretically principled due to its connection to game theory and is well-suited to disentangling the interactions of different neurons. Using Neuron Shapley, we discover a sparse structure of critical neurons both at the class level and at the global level; the model's behavior is largely dependent on the presence of these critical neurons. We can utilize this sparsity to apply post-training fixes to the model without any access to the training data, e.g. making the model fairer towards specific subgroups or less fragile against adversarial attacks just by removing a few responsible neurons. This opens interesting new approaches to model repair that deserve further investigation. A drawback of the Neuron Shapley formulation is its large computational cost; we introduced a novel multi-armed bandit algorithm that reduces this cost by orders of magnitude, enabling us to efficiently compute Shapley values for widely used deep networks. Throughout this work, we have focused on post-training edits to the model. One interesting future direction is to change the model given the neuron contributions and retrain it in an iterative fashion.
Appendix A Proof
The following proof is a direct map of the original Shapley value proof in the cooperative game theory setting (shapley1953value).
A.1 Neuron Shapley satisfies the three desired properties
If we have a neuron that contributes nothing to any subset of the rest of neurons, by definition of Eqn. 1, its value would be zero.
If two neurons contribute exactly the same to any subset of the rest of neurons, again using Eqn. 1, they will have the same values by definition.
Assume we have $V = \alpha_1 V_1 + \alpha_2 V_2$. It follows from Eqn. 1 that:

$$\phi_i(V) = \frac{1}{n} \sum_{S \subseteq N \setminus \{i\}} \frac{\alpha_1 \left( V_1(S \cup \{i\}) - V_1(S) \right) + \alpha_2 \left( V_2(S \cup \{i\}) - V_2(S) \right)}{\binom{n-1}{|S|}} = \alpha_1 \phi_i(V_1) + \alpha_2 \phi_i(V_2).$$
A.2 Proof of uniqueness
We show that any contribution scheme that satisfies the three desired properties is identical to Neuron-Shapley.
Consider a simple binary performance metric $V_T$ where, for a fixed nonempty subset of the neurons $T \subseteq N$, we have $V_T(S) = 1$ if $T \subseteq S$ and $V_T(S) = 0$ otherwise. The contribution scheme has to divide $V_T(N) = 1$ among the players while satisfying the three properties. By the zero-contribution property, a player $i \notin T$ must have zero contribution. It is also clear that any two players $i, j \in T$ are exchangeable and therefore should have equal contributions. Therefore, the only $\phi$ that satisfies the conditions is $\phi_i(V_T) = 1/|T|$ if $i \in T$ and $\phi_i(V_T) = 0$ otherwise.
Given all the subsets $T \subseteq N$ and the simple performance metrics $V_T$ defined above, we can write any performance metric $V$ (with $V(\emptyset) = 0$) as a linear combination of these simple metrics, i.e. for any subset $S \subseteq N$:

$$V(S) = \sum_{\emptyset \neq T \subseteq N} c_T V_T(S), \qquad c_T = \sum_{T' \subseteq T} (-1)^{|T| - |T'|} V(T').$$

To verify, for an arbitrary $S$ we have:

$$\sum_{\emptyset \neq T \subseteq N} c_T V_T(S) = \sum_{\emptyset \neq T \subseteq S} \sum_{T' \subseteq T} (-1)^{|T| - |T'|} V(T') = \sum_{T' \subseteq S} V(T') \left( \sum_{t = |T'|}^{|S|} \binom{|S| - |T'|}{t - |T'|} (-1)^{t - |T'|} \right);$$

the term inside the parentheses is the binomial expansion of $(1 - 1)^{|S| - |T'|}$, meaning that it is equal to one if $|T'| = |S|$ and zero otherwise. The only case where $|T'| = |S|$ while $T' \subseteq S$ is $T' = S$. Therefore:

$$\sum_{\emptyset \neq T \subseteq N} c_T V_T(S) = V(S).$$

Now, considering that our $\phi$ should satisfy additivity in the performance metric, for any player $i$ we must have:

$$\phi_i(V) = \sum_{\emptyset \neq T \subseteq N} c_T \, \phi_i(V_T).$$

Given the result of the previous lemma, we must have:

$$\phi_i(V) = \sum_{T \ni i} \frac{c_T}{|T|} = \sum_{T \ni i} \frac{1}{|T|} \sum_{T' \subseteq T} (-1)^{|T| - |T'|} V(T'),$$

and by changing the order of summation we have:

$$\phi_i(V) = \sum_{T' \subseteq N} V(T') \sum_{T \supseteq T' \cup \{i\}} \frac{(-1)^{|T| - |T'|}}{|T|}.$$

For two subsets that only differ in the $i$'th filter (i.e. $T'' = T' \cup \{i\}$ with $i \notin T'$), the inner sums range over the same supersets $T$, so the coefficients of $V(T')$ and $V(T'')$ have equal magnitude and opposite sign (as all the right-hand-side terms are the same except for the sign). It therefore suffices to compute the coefficient of $V(T'')$ for sets $T''$ that contain $i$. For each $t \geq s := |T''|$, there are $\binom{n - s}{t - s}$ sets of filters $T$ such that $T'' \subseteq T$ and $|T| = t$. We have:

$$\sum_{T \supseteq T''} \frac{(-1)^{|T| - |T''|}}{|T|} = \sum_{t = s}^{n} \binom{n - s}{t - s} \frac{(-1)^{t - s}}{t} = \int_0^1 x^{s - 1} (1 - x)^{n - s} \, dx = \frac{1}{n \binom{n - 1}{s - 1}},$$

and finally, writing $T'' = S \cup \{i\}$ with $S \subseteq N \setminus \{i\}$ (so that $s = |S| + 1$), we have:

$$\phi_i(V) = \sum_{S \subseteq N \setminus \{i\}} \frac{V(S \cup \{i\}) - V(S)}{n \binom{n - 1}{|S|}},$$

which is exactly Eqn. 1.
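The decomposition of $V$ into the simple metrics $V_T$ can be checked numerically on a small random game (a quick illustrative script; the coefficients $c_T$ follow the standard Möbius-inversion formula):

```python
from itertools import combinations
import random

# Numerical check of V(S) = sum over nonempty T of c_T * V_T(S), where
# V_T(S) = 1 iff T is a subset of S, on a random game with n = 4 players.
n = 4
players = list(range(n))
rng = random.Random(0)
V = {frozenset(S): rng.random()
     for size in range(n + 1) for S in combinations(players, size)}
V[frozenset()] = 0.0   # normalize: the empty subnetwork has zero performance

def subsets(T):
    T = list(T)
    for size in range(len(T) + 1):
        yield from combinations(T, size)

# Moebius inversion: c_T = sum over T' subset of T of (-1)^(|T|-|T'|) V(T')
c = {frozenset(T): sum((-1) ** (len(T) - len(Tp)) * V[frozenset(Tp)]
                       for Tp in subsets(T))
     for size in range(1, n + 1) for T in combinations(players, size)}

for S in V:
    recon = sum(cT for T, cT in c.items() if T <= S)   # V_T(S) = [T subset of S]
    assert abs(recon - V[S]) < 1e-9
```

This also confirms that the coefficients sum to $V(N)$, since $V(\emptyset) = 0$.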
Appendix B Early truncation
As mentioned in the main text, one method of approximation is to assign a zero marginal contribution to filters that appear early in a sampled permutation. In Fig. 5, we sample random orderings of the players and remove filters one by one in the sampled order (100 times). As the figures suggest, whenever the retained coalition of filters is not large enough, the performance is completely degraded. For the Inception-v3 model, removing a fraction of its 17216 filters is enough to fully degrade the model. In our experiments, we approximate the marginal effect of a filter by zero whenever the network's performance falls below the performance threshold. This gives us roughly one order of magnitude speed-up, as we avoid the actual forward passes for a large share of the filters in each sampled permutation in Alg. 1. The same happens for the SqueezeNet model by removing around one-fifth of its nearly 3,000 filters.
Appendix C Model interpretation through Neuron-Shapley
A more complete set of examples of important filters of the Inception-v3 model is visualized in Fig. 6.
Appendix D Squeezenet Interpretation
Appendix E Alg. 1 Sample Efficiency
To investigate the speed-up from the multi-armed-bandit trick for computing Shapley values, we run the original Monte Carlo Shapley algorithm to compute the importance of the filters in the SqueezeNet model. For both the Monte Carlo Shapley and TMAB-Shapley algorithms, we use empirical Bernstein (bernstein1; bernstein2) error bounds, which have the benefit of using the empirical variance of the sampled variable. At each iteration of Alg. 1, for the $i$'th filter that has appeared in $t$ iterations, we have:

$$\left| \hat{\phi}_i - \phi_i \right| \leq \hat{\sigma}_{i} \sqrt{\frac{2 \log(2/\delta)}{t}} + \frac{7 R \log(2/\delta)}{3 (t - 1)}$$

with probability at least $1 - \delta$, where $\hat{\sigma}_i$ is the empirical standard deviation of the $i$'th filter's sampled marginal contributions and $R$ is the size of the range of the $i$'th filter's marginal contributions. Throughout this work, we assume minimal priors over the filters and therefore fix the same $R$ for all filters, i.e. we assume removing a filter never results in more than a fixed drop (or increase) in accuracy. We run both algorithms with the same error tolerance. Fig. 8 depicts the results: (a) First, we show the number of samples each algorithm requires for each filter (for better visualization, filters are ranked by their value). TMAB-Shapley is considerably more sample-efficient: on average, it requires close to an order of magnitude fewer forward passes through the model than MC-Shapley. (b) The histogram of the number of filters versus the number of samples shows that TMAB-Shapley requires considerably fewer samples for most filters, while requiring a large number of samples for the small group of filters whose values are close to that of the $k$'th filter. (c) Empirically, although TMAB-Shapley does not target accurate value computation for all filters, its computed values are very close to the accurate values computed by MC-Shapley (high rank correlation). This shows that empirical Bernstein returns pessimistic error bounds, which could be an interesting direction for future work.
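The confidence-bound computation above is cheap to implement. Here is a sketch in the Maurer-Pontil form of the empirical Bernstein bound (assuming, as in the text, a fixed range $R$ shared by all filters):

```python
import math

def empirical_bernstein_cb(samples, delta, R):
    """Empirical Bernstein confidence bound: with probability at least
    1 - delta, the empirical mean of bounded samples (range R) deviates
    from the true mean by at most this quantity."""
    t = len(samples)
    mu = sum(samples) / t
    var = sum((x - mu) ** 2 for x in samples) / t     # empirical variance
    return (math.sqrt(2 * var * math.log(2 / delta) / t)
            + 7 * R * math.log(2 / delta) / (3 * (t - 1)))
```

When the sampled marginal contributions have low empirical variance, the variance term vanishes and only the slower $O(1/t)$ range term remains, which is what makes the bound data-dependent and usually much tighter than Hoeffding.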