1 Introduction
Well-founded decisions by machine learning (ML) systems are critical for high-stakes applications such as autonomous vehicles and medical diagnosis. Pathologies in models and their training datasets can result in unintended behavior during deployment when the systems are confronted with novel situations. For example, a recent medical image classifier for cancer detection attained high accuracy on benchmark test data, but was found to base its decisions on the presence of dermatologists' rulers in an image (present when dermatologists already suspected cancer)
[24]. We define model overinterpretation to occur when a classifier finds strong class evidence in regions of an image that contain no semantically salient features. Overinterpretation is related to overfitting, but overfitting can be diagnosed via reduced test accuracy. Overinterpretation, in contrast, can stem from true statistical signals in the underlying dataset distribution that happen to arise from particular properties of the data source (such as the dermatologists' rulers). Thus, overinterpretation can be harder to diagnose, as it admits decisions made by statistically valid criteria, and models that use such criteria can excel at benchmarks.
It is important to understand how hidden statistical signals in benchmark datasets can result in models that overinterpret or fail to generalize to examples drawn from a different distribution. Computer vision (CV) research relies upon datasets like CIFAR-10
[18] and ImageNet [28] to provide standardized performance benchmarks. Here, we analyze the overinterpretation of popular CNN architectures trained on these benchmarks to characterize their pathologies.
Revealing overinterpretation requires a systematic way to identify which features a model uses to reach its decision. Feature attribution is addressed by a large number of interpretability methods, although they propose differing explanations for a model's decisions. One natural explanation for image classification lies in the set of pixels that is sufficient for the model to make a confident prediction, even absent information about the rest of the image. In our example of the medical image classifier for cancer detection, one might identify the pathological behavior by realizing that the pixels depicting the ruler alone suffice for the model to confidently output the same classification. This idea of Sufficient Input Subsets (SIS) has been proposed to help humans interpret the decisions of black-box models [4]
. An SIS consists of a minimal subset of features (e.g., pixels) that suffices to yield a class probability above a certain threshold after all other features have been masked.
Here we demonstrate that models trained on CIFAR-10 and ImageNet can base their classification decisions on sufficient input subsets that contain only a few pixels and lack human-understandable semantic content. Nevertheless, these sufficient input subsets contain statistical signals that generalize across the benchmark data distribution, and we are able to train equally accurate classifiers on CIFAR-10 images that have lost 95% of their pixels. Thus, there exist inherent statistical shortcuts in this benchmark that a classifier optimized solely for accuracy can learn to exploit, instead of having to learn all of the complex semantic relationships between the image pixels and the assigned class label. While recent work suggests adversarially robust classifiers rely on more semantically meaningful features [13], we find these models suffer from severe overinterpretation as well. As we subsequently show, overinterpretation is not only a conceptual issue but can actually harm overall classifier performance in practice. We find that simple ensembling of multiple networks can mitigate overinterpretation, increasing the semantic content of the resulting SIS. Intriguingly, the number of pixels in the SIS rationale behind a particular classification is often indicative of whether the image will be classified correctly.
It may seem unnatural to use an interpretability method that produces feature attributions which themselves look uninterpretable. However, when analyzing a model for pathologies, we do not want to bias extracted rationales toward human visual priors; rather, we want to faithfully report exactly those features used by the model. To our knowledge, this is the first analysis showing that one can extract nonsensical features from CIFAR-10 that intuitively should be insufficient or irrelevant for a confident prediction, yet these features alone suffice to train a classifier with minimal loss of performance.
2 Related Work
There has been substantial research on understanding dataset bias in CV [36, 35] and the fragility of image classifiers applied outside of the benchmark setting [26]. CNNs for image classification in particular have been conjectured to pick up on localized features like texture instead of more global features like object shape [6, 3]. Other research on deep image classifiers has also argued that they rely heavily on nonsensical patterns [20, 14], investigating this issue with artificially generated patterns that are not in the original benchmark dataset. In contrast, we demonstrate the pathology of overinterpretation with unmodified subsets of actual training images, indicating the patterns are already present in the original dataset. Like us, [12]
also recently found that sparse pixel subsets suffice to attain high classification accuracy on popular image classification datasets. In natural language processing (NLP) applications, there has been a recent effort to explore model pathologies using a similar technique
[5], but this work does not analyze whether the semantically spurious patterns the models rely on are a statistical property of the dataset. Other research has demonstrated the presence of spurious statistical shortcuts in major NLP benchmarks, showing this problem is not unique to CV [21].
3 Methods
3.1 Data
CIFAR-10 [18] and ImageNet [29] have become two of the most popular image classification benchmarks. Nowadays, most classifiers are evaluated by the CV community based on their accuracy on one of these benchmarks.
We employ two additional datasets to evaluate the extent to which our CIFAR-10 models can generalize to out-of-distribution (OOD) images that stem from a different source than the training data. First, we use the CIFAR-10.1 v6 dataset [25], which contains 2000 class-balanced images drawn from the Tiny Images repository [37] in a similar fashion to CIFAR-10, though the authors of [25] found a large drop in classification accuracy on these images. Additionally, we use the CIFAR-10-C dataset [11], which contains variants of CIFAR-10 test images altered by various corruptions (such as Gaussian noise, motion blur, and snow). When computing sufficient input subsets on CIFAR-10-C images, we use a uniform random sample of 2000 images from the CIFAR-10-C set.
3.2 Models
For CIFAR-10, we explore three common CNN architectures: a deep residual network of depth 20 (ResNet20) [9], a v2 deep residual network of depth 18 (ResNet18) [10], and VGG16 [31]
. We train these classifiers using cross-entropy loss optimized via SGD with Nesterov momentum [33] and employ standard data augmentation consisting of random crops and horizontal flips (additional details in Section S1). After training many CIFAR-10 networks individually, we construct four different ensemble classifiers by grouping various networks together. Each ensemble outputs the average prediction over its member networks (specifically, the arithmetic mean of their logits). For each of the three architectures, we create a corresponding homogeneous ensemble by individually training five copies of networks that share the same architecture. Each network has a different random initialization, which suffices to produce substantially different models despite the fact that these replicate architectures are all trained on the same data
[22]. Our fourth ensemble is heterogeneous, containing all 15 networks (5 replicates of each of the 3 distinct CNN architectures).
3.3 Interpreting Learned Features
We interpret the feature patterns learned by our models using the sufficient input subsets (SIS) procedure [4], which produces rationales of a pre-trained model's decision-making by applying backward selection locally to individual examples. These rationales consist of sparse subsets of input features (pixels) on which the model makes the same decision as on the original input (with the rest of the pixels masked), up to a specified confidence threshold.
More formally, let $0 \le \tau \le 1$ be a threshold for prediction confidence, and let the model $f$ predict that an image $x$ belongs to class $c$ with probability $f_c(x) \ge \tau$. Let $X$ denote the total set of pixels. Then an SIS is a minimal subset of pixels $S \subseteq X$ such that $f_c(x_S) \ge \tau$, where $x_S$ denotes the image in which the information about the pixels in $X \setminus S$ is considered to be missing. We mask pixels in $X \setminus S$ by replacement with the mean pixel value over the entire image dataset (equal to zero when the image data has been normalized), which is presumably least informative to a trained classifier [4]. We apply SIS to the function $f_c$ giving the confidence toward the predicted (most likely) class. We also develop an approximation of the backward selection procedure to efficiently scale SIS-finding to the higher-resolution images of ImageNet (details in Section S5).
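The backward-selection core of the SIS procedure can be sketched as follows. This is a simplified, model-agnostic illustration (the toy `model`, input, and threshold are hypothetical stand-ins; the actual procedure in [4] operates on image pixels masked with the dataset mean):

```python
import numpy as np

def backselect(model, x, mask_value=0.0):
    """Rank features by iteratively masking the one whose removal decreases
    the model's confidence least. Returns indices in removal order
    (earliest-removed = least important)."""
    x = np.asarray(x, dtype=float).copy()
    remaining = list(range(x.size))
    order = []
    while remaining:
        scores = {}
        for i in remaining:
            trial = x.copy()
            trial[i] = mask_value
            scores[i] = model(trial)             # confidence with feature i also masked
        i_best = max(remaining, key=scores.get)  # masking i_best hurts confidence least
        x[i_best] = mask_value                   # mask it permanently
        order.append(i_best)
        remaining.remove(i_best)
    return order

def sis_subset(model, x, order, threshold, mask_value=0.0):
    """Greedily drop the least-important features while confidence stays at or
    above the threshold; the survivors form an (approximately minimal) SIS."""
    x = np.asarray(x, dtype=float)
    subset = list(order)
    for i in order:                              # least important features first
        trial_subset = [j for j in subset if j != i]
        trial = np.full_like(x, mask_value)
        trial[trial_subset] = x[trial_subset]
        if model(trial) >= threshold:
            subset = trial_subset
        else:
            break
    return sorted(subset)

def model(v):
    # Toy confidence function: only features 2 and 3 carry class evidence.
    return (v[2] + v[3]) / 2.0

x = np.array([0.1, 0.2, 1.0, 1.0, 0.05])
order = backselect(model, x)
sis = sis_subset(model, x, order, threshold=0.9)  # recovers the informative features
```

In this toy setting the SIS correctly isolates the two informative features; on real images each "feature" is a pixel, and the quadratic cost of this naive loop is what motivates the approximation for ImageNet described above.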
We produce sparse variants of CIFAR-10 images in which we retain the values of 5% of the pixels while masking the remainder.
Our goal is to identify sparse pixel subsets that contain feature patterns the model treats as strong class evidence when classifying an image. We identify such pixel subsets by local backward selection on each image, as in the BackSelect procedure of SIS [4]. We apply backward selection to the confidence function $f_c$, iteratively removing the pixels whose masking leads to the smallest decrease in $f_c$. Our 5% pixel-subset images contain the final 5% of pixels as ordered by backward selection (with the same RGB values as in the original image), while all other pixels' values are replaced with zero.
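Given a per-image pixel ranking from backward selection, constructing the 5% subset images is mechanical. A minimal sketch (the `ranking` array here is a hypothetical precomputed ordering from least to most important, and the 4x4 image is synthetic):

```python
import numpy as np

def keep_top_fraction(image, ranking, fraction=0.05, mask_value=0.0):
    """Retain the final `fraction` of pixels in the backward-selection order
    (the most important ones) and mask everything else with `mask_value`."""
    flat = image.reshape(-1, image.shape[-1]).astype(float)   # (H*W, C) channel-last
    n_keep = max(1, int(round(fraction * flat.shape[0])))
    keep = ranking[-n_keep:]          # last pixels removed = most important
    out = np.full_like(flat, mask_value)
    out[keep] = flat[keep]
    return out.reshape(image.shape)

# Toy 4x4 RGB image and a hypothetical ranking (least -> most important):
img = np.arange(48, dtype=float).reshape(4, 4, 3)
ranking = np.arange(16)               # pretend pixel 15 is most important
subset = keep_top_fraction(img, ranking, fraction=0.05)
```

Because the ranking is computed locally per image, each image's retained 5% occupies different locations, which is why the frequency maps in Figure 2 vary across models.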
3.4 Human Classification Benchmark
To evaluate whether sparse pixel subsets of images can be accurately classified by humans, we asked four participants to classify images with various degrees of masking. We randomly sampled 100 images from the CIFAR-10 test set (10 images per class) that were correctly and confidently ( confidence) classified by our models, and for each image kept only 5%, 30%, or 50% of pixels as ranked by backward selection (all other pixels masked). Backward-selection image subsets are sampled across our three models. Since larger subsets of pixels are by construction supersets of the smaller subsets identified by the same model, we presented each batch of 100 images in order of increasing subset size and shuffled the order of images within each batch. Participants were asked to classify each of the 300 images as one of the 10 CIFAR-10 classes and were not provided training images. The same task was given to each participant (and is provided in Section S4).
4 Results
4.1 CNNs Classify Images Using Spurious Features
We train five replicate models of each of our three architectures (ResNet20, ResNet18, VGG16) on the CIFAR-10 training set (see Section 3.2). Table 1 shows the final model accuracies on the CIFAR-10 test set and on the CIFAR-10.1 and CIFAR-10-C (out-of-distribution) test sets.
To interpret the behavior of these models, we apply the sufficient input subsets (SIS) interpretability procedure [4] to identify minimal subsets of features in each image that suffice for the model to make the same prediction as on the full image (see Section 3.3). For SIS, we use a confidence threshold of 0.99 and mask pixels by replacement with zeros. Figure 1 shows examples of sufficient input subsets from a randomly chosen set of CIFAR-10 test images that are confidently and correctly classified by each model (additional examples in Section S2). Each SIS shown is classified by its corresponding model with confidence toward the predicted class. This result suggests that our CNNs confidently predict on images that appear nonsensical to humans (see Section 4.3), raising concerns about their robustness and generalizability.
We observe that these sufficient input subsets are highly sparse and that the average SIS size at this threshold is % of each image, so we create a sparsified variant of all CIFAR-10 images (both train and test). As in SIS, we apply backward selection locally to each image to rank pixels by their contribution to the predicted class (as described in Section 3.3). We retain 5% of pixels as ordered by backward selection on each image and mask the remainder with zeros. Note that because backward selection is applied locally to each image, the specific pixels retained differ across images.
We first verify that the original models classify these sparsified images just as accurately as their full-image counterparts (Table 1). Moreover, the predictions on the pixel subsets are just as confident: the mean drop in confidence for the predicted class between the original images and these 5% subsets is (std dev. ), (), and (), computed over all CIFAR-10 test images for our ResNet20, ResNet18, and VGG16 models, respectively, suggesting severe overinterpretation by each model (negative values imply greater confidence on the 5% subsets). We also find that the pixel subsets chosen through backward selection are more predictive than equally large pixel subsets chosen uniformly at random from each image (Table 1), on which the models cannot predict as accurately as on the original images or on the backward-selection subsets. Figure 2 shows the frequency of each pixel location in the 5% backward-selection pixel subsets derived from each model across all CIFAR-10 test images.
We additionally find that the SIS for one model do not transfer to other models: a sparse pixel subset that one model confidently classifies is typically not confidently classified by the others. For instance, 5% pixel subsets derived from CIFAR-10 test images using one ResNet18 model (which classifies them with accuracy) are only classified with , , and accuracy by another ResNet18 replicate, a ResNet20, and a VGG16 model, respectively. This result suggests there exist many different statistical patterns that a flexible model might learn to rely on, and thus CIFAR-10 image classification remains a highly underdetermined problem. Producing high-capacity classifiers that make the right predictions for the right reasons may require clever regularization strategies and architecture design to ensure the model favors salient features over such sparse pixel subsets.
Model  Train On  Evaluate On  CIFAR-10 Test Acc.  CIFAR-10.1 Acc.  CIFAR-10-C Acc. 
ResNet20  Full Images  Full Images  
5% BS Subsets  
5% Random  
5% BS Subsets  5% BS Subsets  
5% Random  5% Random  
ResNet18 
Full Images  Full Images  
5% BS Subsets  
5% Random  
5% BS Subsets  5% BS Subsets  
5% Random  5% Random  
VGG16 
Full Images  Full Images  
5% BS Subsets  
5% Random  
5% BS Subsets  5% BS Subsets  
5% Random  5% Random  
Ensemble (5x ResNet18) 
Full Images  Full Images  
5% Random 
4.1.1 Analysis on ImageNet
We also find that models trained on the higher-resolution images of ImageNet suffer from severe overinterpretation. As it is computationally infeasible to scale the original backward-selection procedure of SIS [4] to ImageNet, we introduce a more efficient gradient-based approximation that enables us to find sufficient input subsets on ImageNet images (details in Section S5). Figure 3 shows examples of images confidently classified by Inception-v3, along with the corresponding SIS that identify which pixels alone suffice for the network to reach a similarly confident prediction (additional examples are provided in Figure S6). These sufficient input subsets appear visually nonsensical, yet the network nevertheless classifies them with confidence. Of great concern is the fact that almost none of the SIS pixels are located within the actual object that determines the class label. For example, in the “pizza” image, the SIS is concentrated on the shape of the plate and the background table rather than the pizza itself, which indicates that the model could generalize poorly when the image contains a different circular item on the table. In the “giant panda” image, the SIS contains bamboo, which likely appeared in the collection of ImageNet photos for this class. In the “traffic light” and “street sign” images, the SIS is focused on the sky, suggesting that autonomous vehicle systems that may depend on such models should be carefully evaluated for overinterpretation pathologies.
We randomly sample 1000 images from the ImageNet validation set that are classified with confidence and generate a heatmap of sufficient-input-subset pixel locations (Figure 4). Here, we use the SIS themselves to generate the heatmap rather than 5% pixel subsets. The SIS tend to be strongly concentrated along the image borders rather than near the center, suggesting the model relies too heavily on image backgrounds in its decision-making. This is a serious problem because objects corresponding to ImageNet classes are often located near the center of images, and thus the network fails to focus on salient features. The fact that the model confidently classifies the majority of images from their border pixels alone suggests it suffers from severe overinterpretation.
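The per-location frequency maps of Figures 2 and 4 amount to averaging binary SIS membership masks across images. A minimal sketch (the masks here are random placeholders for the real per-image SIS masks):

```python
import numpy as np

def sis_heatmap(masks):
    """masks: array of shape (N, H, W), with 1 where a pixel belongs to that
    image's SIS. Returns, per pixel location, the fraction of images whose
    SIS includes that location."""
    masks = np.asarray(masks, dtype=float)
    return masks.mean(axis=0)

rng = np.random.default_rng(0)
# Placeholder: 1000 random 8x8 masks with ~5% of pixels "in the SIS".
masks = (rng.random((1000, 8, 8)) < 0.05).astype(int)
heat = sis_heatmap(masks)   # values in [0, 1]; high values = frequent SIS locations
```

For the real analysis, a heatmap concentrated near the border (as in Figure 4) rather than the center is what indicates reliance on background pixels.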
4.2 Sparse Subsets are Real Statistical Patterns
CNNs are known to be overconfident for image classification [8]
. Thus, one might reasonably wonder whether the overconfidence on the semantically meaningless SIS is an artifact of CNN overconfidence rather than a true statistical signal in the dataset. To probe this question, we evaluate whether the sparse 5% image subsets of CIFAR-10 contain sufficient information to train a new classifier to solve the same task. We run our backward-selection procedure on all train and test images in CIFAR-10 using one of our three model architectures (chosen at random). We then train a new model of the same type on these 5% pixel-subset variants of the CIFAR-10 training images. We use the same training setup and hyperparameters as for the original models (see Section 3.2) without data augmentation of training images (results with data augmentation are in Section S3). Note that we apply backward selection to the function giving the confidence of the predicted class from the original model, which prevents leaking information about the true class for misclassified images, and we use the true labels when training new models on pixel subsets. As a baseline to the 5% pixel subsets identified by backward selection, we create variants of all CIFAR-10 images in which the 5% pixel subsets are selected at random from each image (rather than by backward selection). We use the same random pixel subsets for training each new model.
As shown in Table 1, models trained solely on these 5% backward-selection image subsets can classify the corresponding 5% test image subsets nearly as accurately as models trained and evaluated on full images. Models trained on random 5% pixel subsets have significantly lower test accuracy (Table 1) than models trained on 5% pixel subsets found through backward selection of existing models. This result suggests that the highly sparse subsets found through backward selection offer a valid predictive signal in the CIFAR-10 benchmark that can be exploited by models to attain high test accuracy.
4.3 Humans Struggle to Classify Sparse Subsets
Table 2 shows the accuracy achieved by humans asked to classify our sparse pixel subsets (Section 3.4). Unsurprisingly, there is a strong correlation between the fraction of unmasked pixels in each image and human classification accuracy. Human classification accuracy on pixel subsets of CIFAR-10 is significantly lower than accuracy on the original, unmasked images (estimated at around in previous work [16]). Moreover, human accuracy on 5% pixel subsets is very poor, though greater than purely random guessing. Presumably this is due to correlations between features such as color in images (for example, blue pixels near the top of an image may indicate a sky, and hence increase the likelihood of certain CIFAR-10 classes such as airplane, ship, and bird).
However, CNNs (even when trained on full images to achieve accuracy on par with human accuracy on full images) can classify these sparse image subsets with very high accuracy (Table 1, Section 4.2). This indicates the benchmark images contain statistical signals that are unknown to humans. Models trained solely to minimize prediction error may thus latch onto these signals while still accurately generalizing to the test set, but such models may behave counterintuitively when fed images from a different source that does not share these exact statistics. The strong correlation (, Figure S5) between the size of pixel subsets found through backward selection and the corresponding human classification accuracy suggests that larger subsets contain greater semantic content and more salient features. Thus, a model whose confident classifications have larger sufficient input subsets is presumably preferable to a model with smaller SIS, as the former exhibits less overinterpretation. We investigate this further in Section 4.4.
Fraction of Pixels  Human Classification Acc. (%) 
5%  
30%  
50% 
4.4 SIS Size is Predictive of Model Accuracy
Given that smaller SIS contain fewer salient features according to human classifiers, models that justify their classifications based on these sparse SIS may be limited in attainable accuracy, particularly in out-of-distribution settings. Here, we investigate the relationship between a model's predictive accuracy and the size of the SIS in which it identifies class evidence. For each of our three classifiers, we compute the average SIS size increase for correctly classified images compared to incorrectly classified images (expressed as a percentage) on both the CIFAR-10 test set and the out-of-distribution CIFAR-10-C test set. Figure 5 (A for the CIFAR-10 test set, B for the CIFAR-10-C test set) shows that across varying SIS confidence thresholds, SIS of correctly classified images are consistently and significantly larger than those of misclassified images. This is especially striking in light of the fact that model confidence is uniformly lower on the misclassified inputs, as one would hope (Figure S3). Lower confidence would normally imply a larger SIS at a given confidence level, as one expects that fewer pixels can be masked before the model's confidence drops below the SIS confidence threshold. Thus, we can rule out overall model confidence as an explanation of the smaller SIS in misclassified images. This result suggests that the sparse SIS highlighted in this paper are not just a curiosity, but may be driving poor generalization on real images.
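A confidence interval for the difference in mean SIS size between correctly and incorrectly classified images can be obtained, for example, with a percentile bootstrap. This is a sketch under that assumption (the two samples of SIS sizes below are synthetic, not the paper's data):

```python
import numpy as np

def bootstrap_diff_ci(a, b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for mean(a) - mean(b)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = [rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
             for _ in range(n_boot)]          # resample each group with replacement
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Synthetic SIS sizes (in pixels) for correctly vs. incorrectly classified images:
correct = np.random.default_rng(1).normal(60, 10, 500)
wrong = np.random.default_rng(2).normal(40, 10, 200)
lo, hi = bootstrap_diff_ci(correct, wrong)    # interval excluding 0 => significant gap
```

An interval whose lower endpoint is above zero supports the claim that correctly classified images carry larger SIS.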
We observe similar behavior when comparing SIS size and model accuracy at varying confidence thresholds (Figure 6). Models with superior accuracy have larger SIS and thus tend to suffer less from overinterpretation.
Figure 5: Percentage increase in mean SIS size of correctly classified images compared to misclassified images across (a) the CIFAR-10 test set and (b) a random sample of the CIFAR-10-C test set. Positive values indicate larger mean SIS size for correctly classified images. Error bars indicate the 95% confidence interval for the difference in means.
4.5 Pathologies in Adversarially Robust Models
Recent work has suggested that semantics can be better captured by models that are robust to adversarial inputs, which fool standard neural networks via human-imperceptible modifications to images [19, 30]. Here, we find that models trained to be robust to adversarial attacks classify the highly sparse sufficient input subsets as confidently as the models in Section 4.1. We use a pre-trained wide residual network provided by [19] that is adversarially robust for CIFAR-10 classification (trained against an iterative adversary that can perturb each pixel by at most ). Figure 1 (“Adv. Robust”) shows examples of sufficient input subsets identified for a sample of CIFAR-10 test images. The adversarially robust model classifies each SIS image shown with confidence. We find that adversarial robustness alone is insufficient to prevent models from overinterpreting sparse feature patterns in CIFAR-10, and these models confidently classify images that are indiscernible to humans.
4.6 Ensembling Mitigates Overinterpretation
Model ensembling is a well-known technique for improving classification performance [7, 15]. Here we test whether ensembling alleviates the overinterpretation problem as well. We explore both homogeneous and heterogeneous ensembles of our individual models (see Section 3.2). Since SIS size is strongly correlated with human accuracy on image classification (Section 4.3), our metric for how much ensembling alleviates the problem is the increase in SIS size. Figure 6 shows that ensembling uniformly increases model accuracy, as expected, but also increases SIS size, and thus (given the human-study results of Section 3.4) mitigates the overinterpretation problem.
We conjecture that the increases in accuracy and in SIS size for ensembles share a common cause. In our experiments, we observe that SIS pixel subsets are generally not transferable from one model to another, i.e., an SIS for one model is rarely an SIS for another (see Section 4.1). Thus, different models often consider independent pieces of evidence to arrive at the same prediction. Ensembling combines these independent sources of evidence into one prediction, increasing accuracy and forcing the SIS to be larger by requiring simultaneous activation of multiple independently trained feature detectors. Indeed, we find that an ensemble's SIS are larger than the SIS of its individual members (examples in Figure S2).
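The ensembling scheme used throughout (Section 3.2): average the member networks' logits, then take a softmax. A minimal NumPy illustration, where the three member "networks" are toy stand-ins for trained models:

```python
import numpy as np

def ensemble_predict(logit_fns, x):
    """Average the logits of member networks, then apply a softmax.

    logit_fns: list of callables mapping an input to a logit vector.
    x: a single input example.
    """
    logits = np.mean([f(x) for f in logit_fns], axis=0)  # arithmetic mean of logits
    e = np.exp(logits - logits.max())                    # numerically stable softmax
    return e / e.sum()

# Toy stand-ins for three independently trained member networks (hypothetical):
members = [
    lambda x: np.array([2.0, 0.5, 0.1]),
    lambda x: np.array([1.5, 1.0, 0.2]),
    lambda x: np.array([2.5, 0.0, 0.3]),
]
probs = ensemble_predict(members, x=None)
```

Because every member contributes to the averaged logits, a confident ensemble prediction requires class evidence recognized by several feature detectors at once, consistent with the larger ensemble SIS reported above.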
5 Discussion
We find that state-of-the-art image classifiers overinterpret small nonsensical patterns present in popular benchmark datasets, identifying strong class evidence in the pixel subsets that constitute these patterns. Despite their lack of salient features, these sparse pixel subsets constitute genuine statistical signals that suffice to generalize accurately from the benchmark training data to the benchmark test data. We found that different models rationalize their predictions based on different sufficient input subsets, suggesting that optimal image classification rules remain highly underdetermined by the training data. Models with superior accuracy tend to suffer less from overinterpretation, which suggests that reducing overinterpretation can lead to more accurate models. In high-stakes image classification applications, we recommend using ensembles of diverse networks rather than relying on a single model.
Our results call into question model interpretability methods whose outputs are encouraged to align with prior human beliefs regarding proper classifier operating behavior [1]. Given the existence of nonsalient pixel subsets which alone suffice for correct classification, a model might solely rely on those patterns in its predictions. In this case, an interpretability method that faithfully describes the model should output these nonsensical rationales, whereas interpretability methods that bias rationales toward human priors may produce results that mislead users to think their models are behaving as intended.
Mitigating model overinterpretation and the broader task of ensuring classifiers are accurate for the right reasons remain significant challenges for ML. While we discovered that ensembling tends to help, pathologies remain even for heterogeneous ensembles of classifiers. One alternative is to regularize CNNs by constraining the pixel attributions generated via a saliency map [27, 32, 38]. Unfortunately, such methods require a human annotator who highlights the correct pixels as an auxiliary supervision signal. Furthermore, saliency maps have been shown to provide unreliable insights into the operating behavior of a classifier and must be interpreted as approximations [17]. In contrast, our SIS constitute actual pathological examples that have been misconstrued by the model.
Future work should investigate regularization strategies and architectures to identify how to better learn semantically aligned features without explicit supervision. Imposing the right inductive bias is critical given the issue of underdetermination from multiple sets of non-salient patterns that serve as valid statistical signals in benchmarks. Before deploying current image classifiers in critical situations, it is imperative to assemble benchmarks composed of a greater diversity of image sources in order to reduce the likelihood of spurious statistical patterns [2].
Acknowledgements
This work was supported by the National Institutes of Health [R01CA218094] and Schmidt Futures.
References
 [1] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems, pages 9505–9515, 2018.
 [2] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
 [3] Wieland Brendel and Matthias Bethge. Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. arXiv preprint arXiv:1904.00760, 2019.

 [4] Brandon Carter, Jonas Mueller, Siddhartha Jain, and David Gifford. What made you do this? Understanding black-box decisions with sufficient input subsets. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 567–576, 2019.
 [5] Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. Pathologies of neural models make interpretations difficult. arXiv preprint arXiv:1804.07781, 2018.
 [6] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Texture and art with deep neural networks. Current Opinion in Neurobiology, 46:178–186, 2017.
 [7] King-Shy Goh, Edward Chang, and Kwang-Ting Cheng. SVM binary classifier ensembles for image classification. In Proceedings of the Tenth International Conference on Information and Knowledge Management, pages 395–402. ACM, 2001.
 [8] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1321–1330. JMLR.org, 2017.

 [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
 [11] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations, 2019.
 [12] Sara Hooker, Dumitru Erhan, PieterJan Kindermans, and Been Kim. A benchmark for interpretability methods in deep neural networks. In Advances in Neural Information Processing Systems, pages 9734–9745, 2019.
 [13] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019.
 [15] Cheng Ju, Aurélien Bibaut, and Mark van der Laan. The relative performance of ensemble methods with deep convolutional neural networks for image classification. Journal of Applied Statistics, 45(15):2800–2818, 2018.
 [16] Andrej Karpathy. Lessons learned from manually classifying CIFAR-10. Published online at http://karpathy.github.io/2011/04/27/manually-classifying-cifar10, 2011.

 [17] Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. The (un)reliability of saliency methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pages 267–280. Springer, 2019.
 [18] Alex Krizhevsky et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
 [19] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
 [20] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015.
 [21] Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. ACL, 2019.
 [22] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, pages 4026–4034, 2016.
 [23] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019.
 [24] Neel V. Patel. Why Doctors Aren’t Afraid of Better, More Efficient AI Diagnosing Cancer, Dec 22, 2017 (accessed Nov 11, 2019).
 [25] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do CIFAR-10 classifiers generalize to CIFAR-10? arXiv preprint arXiv:1806.00451, 2018.
 [26] Amir Rosenfeld, Richard Zemel, and John K Tsotsos. The elephant in the room. arXiv preprint arXiv:1808.03305, 2018.
 [27] Andrew Slavin Ross, Michael C Hughes, and Finale Doshi-Velez. Right for the right reasons: Training differentiable models by constraining their explanations. arXiv preprint arXiv:1703.03717, 2017.
 [28] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
 [30] Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Computer vision with a single (robust) classifier. arXiv preprint arXiv:1906.09453, 2019.
 [31] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
 [32] Becks Simpson, Francis Dutil, Yoshua Bengio, and Joseph Paul Cohen. GradMask: Reduce overfitting by regularizing saliency. arXiv preprint arXiv:1904.07478, 2019.
 [33] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139–1147, 2013.
 [34] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
 [35] Tatiana Tommasi, Novi Patricia, Barbara Caputo, and Tinne Tuytelaars. A deeper look at dataset bias. In Domain adaptation in computer vision applications, pages 37–55. Springer, 2017.
 [36] Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR 2011, pages 1521–1528. IEEE, 2011.

 [37] Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
 [38] Joseph D Viviano, Becks Simpson, Francis Dutil, Yoshua Bengio, and Joseph Paul Cohen. Underwhelming generalization improvements from controlling feature attribution. arXiv preprint arXiv:1910.00199, 2019.
S1 Details of Models and Training
Here we provide implementation and training details for the models used in this paper (Section 3.2). The ResNet20 architecture [9] has 16 initial filters and a total of 0.27M parameters. ResNet18 [10] has 64 initial filters and contains 11.2M parameters. Our VGG16 architecture [31]
uses batch normalization and contains 14.7M parameters.
All models are trained for 200 epochs with a batch size of 128. We minimize cross-entropy via SGD with Nesterov momentum [33], using a momentum of 0.9 and weight decay of 5e-4. The learning rate is initialized to 0.1 and reduced by a factor of 5 after epochs 60, 120, and 160. Datasets are normalized using per-channel mean and standard deviation, and we use standard data augmentation strategies during training [10].
The adversarially robust model we evaluated is the adv_trained model of [19], available on GitHub: https://github.com/MadryLab/cifar10_challenge.
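The step decay described above can be expressed as a small schedule helper. The function below is hypothetical (not from the paper's code); it is a minimal sketch of a schedule that starts at 0.1 and divides the rate by 5 at epochs 60, 120, and 160, equivalent to a MultiStepLR-style scheduler with gamma = 0.2.

```python
def learning_rate(epoch, base_lr=0.1, decay=0.2, milestones=(60, 120, 160)):
    """Step-decay schedule: divide the learning rate by 5 at each milestone.

    Hypothetical helper illustrating the schedule used above.
    """
    # Count how many milestones have passed and apply the decay that many times.
    return base_lr * decay ** sum(epoch >= m for m in milestones)
```

For example, epochs 0–59 train at 0.1, epochs 60–119 at 0.02, and epochs past 160 at 0.1 / 5^3 = 0.0008.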
To apply the SIS procedure to CIFAR-10 images, we use the implementation available on GitHub: https://github.com/google-research/google-research/blob/master/sufficient_input_subsets/sis.py.
For confidently classified images on which we run SIS, we find one sufficient input subset per image using the FindSIS procedure. When masking pixels, we mask all channels of each pixel as a single feature.
S2 Additional Examples of CIFAR10 Sufficient Input Subsets
SIS of Individual Networks
Figure S1 shows a sample of SIS for each of our three architectures. These images were randomly sampled from all CIFAR-10 test images confidently predicted to belong to the class written on the left. SIS are computed under a fixed confidence threshold, so all images shown in this figure are classified as belonging to the listed class with at least that probability.
SIS of Ensemble
Figure S2 shows examples of SIS from one of our model ensembles (a homogeneous ensemble of ResNet18 networks; see Section 3.2), along with the corresponding SIS for the same image from each of the five member networks in the ensemble. All images are classified with confidence at least the SIS threshold. These examples highlight how the ensemble SIS are larger and draw class-evidence from the individual members' SIS.
S3 Additional Model Performance Results
Training on Pixel-Subsets Without Data Augmentation
In Table S1, we present results akin to those in Section 4.2 and Table 1, but where the models are trained on 5% pixel-subsets without data augmentation. We find that training without data augmentation slightly improves accuracy when training models on 5% pixel-subsets.
Model | Train On | Evaluate On | CIFAR-10 Test Acc. | CIFAR-10.1 Acc. | CIFAR-10-C Acc.
ResNet20 | 5% BS Subsets (+) | 5% BS Subsets | | |
ResNet20 | 5% Random (+) | 5% Random | | |
ResNet18 | 5% BS Subsets (+) | 5% BS Subsets | | |
ResNet18 | 5% Random (+) | 5% Random | | |
VGG16 | 5% BS Subsets (+) | 5% BS Subsets | | |
VGG16 | 5% Random (+) | 5% Random | | |
Additional Analysis of SIS Size and Model Accuracy
Figure S3 shows the mean confidence of each group of correctly and incorrectly classified images that we consider at each confidence threshold (at each threshold along the x-axis, we evaluate SIS size in Figure 5 on the set of images originally classified with at least that level of confidence). As one would hope, model confidence is uniformly lower on the misclassified inputs.
S4 Details of Human Classification Benchmark
Here we include additional details on our benchmark of human classification accuracy on sparse pixel subsets (Section 3.4). Figure S4 shows all images shown to users (100 images each for 5%, 30%, and 50% pixel-subsets of CIFAR-10 test images). Each set of 100 images contains pixel-subsets from each of the three architectures in roughly equal proportion (35 ResNet20, 35 ResNet18, 30 VGG16). Figure S5 depicts the correlation between human classification accuracy and pixel-subset size.
S5 Scaling SIS to ImageNet
It is computationally infeasible to scale the original backward selection procedure of SIS [4] to ImageNet. Each ImageNet image contains 224 × 224 = 50,176 pixels, so running backward selection to find one SIS for a single image would require on the order of a billion forward passes through the network (backward selection over d features requires O(d^2) evaluations). Here we introduce a more efficient gradient-based approximation to the original SIS procedure (via Batched Gradient SIScollection, Batched Gradient BackSelect, and Batched Gradient FindSIS) that allows us to find SIS on the larger ImageNet images in reasonable time. The Batched Gradient SIScollection procedure described below identifies a complete collection of disjoint masks S_1, ..., S_K for an input x, where each mask S_k specifies a pixel-subset x_{S_k} of the input such that f(x_{S_k}) >= tau. Here f outputs the probability assigned by the network to its predicted class (i.e., its confidence).
The idea behind our approximation algorithm is twofold: (1) instead of separately masking every remaining pixel to find the least critical pixel (the one whose masking least reduces the confidence of the network's prediction), we use the gradient of the confidence with respect to the mask as a means of ordering; (2) instead of masking just 1 pixel at every iteration, we mask a larger block of B pixels in each iteration. More formally, let x be an image of dimensions H × W × C, where H is the height, W the width, and C the number of channels. Let f(x) be the network's confidence on image x and tau the target SIS confidence threshold. Recall that we only compute SIS for images where f(x) >= tau. Let m be the mask of dimensions H × W, with 0 indicating an unmasked feature (pixel) and 1 indicating a masked feature. We initialize m as all 0s (all features unmasked). At iteration t, we compute the gradient of f with respect to the current mask m_t, which is updated after each iteration. In each iteration, we find the block of B features to mask, chosen in descending order by the value of the entries in this gradient, and update the mask by masking this block, until all features have been masked. Given d input features, our Batched Gradient SIScollection procedure returns sufficient input subsets in O(d/B) evaluations of f (as opposed to O(d^2) evaluations in the original SIS procedure [4]).
The block size B we use in this paper allows us to find one SIS for each of 32 ImageNet images (i.e., a minibatch) in 12 minutes using Batched Gradient FindSIS. Note that while our algorithm is an approximate procedure, the pixel-subsets produced are real sufficient input subsets; that is, they always satisfy f(x_S) >= tau. For CIFAR-10 images (which are smaller), we use the original SIS procedure from [4]. For both datasets, we treat all channels of each pixel as a single feature.
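As an illustration of the two steps described above, the sketch below implements gradient-ordered batched backward selection and FindSIS for a toy differentiable "network" (a logistic model whose gradient with respect to the mask is available in closed form, standing in for a CNN with autograd). All names are hypothetical; this is a minimal sketch of the idea, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def confidence(x, m, w):
    # f(x_S): confidence of a toy logistic "network" on the masked input,
    # where masked features (m == 1) are replaced by zero.
    return sigmoid(np.dot(w, x * (1 - m)))

def grad_wrt_mask(x, m, w):
    # Closed-form gradient of the confidence w.r.t. the mask entries.
    s = confidence(x, m, w)
    return s * (1 - s) * (-w * x)

def batched_gradient_backselect(x, w, B):
    """Mask B features per iteration, least critical first.

    Features are chosen in descending order of the gradient entries;
    returns the order in which features were masked.
    """
    d = x.size
    m = np.zeros(d)
    order = []
    while len(order) < d:
        g = grad_wrt_mask(x, m, w)
        # Descending gradient order over the still-unmasked features.
        block = [j for j in np.argsort(-g) if m[j] == 0][:B]
        for j in block:
            m[j] = 1
            order.append(int(j))
    return order

def batched_gradient_find_sis(x, w, tau, B):
    """Unmask features in reverse removal order until confidence >= tau."""
    order = batched_gradient_backselect(x, w, B)
    m = np.ones(x.size)  # start fully masked
    sis = []
    for j in reversed(order):
        if confidence(x, m, w) >= tau:
            break
        m[j] = 0
        sis.append(j)
    return sorted(sis)
```

On a 4-feature example with x = [1, 1, 1, 1] and w = [5, 0.1, -2, 3], backward selection masks the harmful feature (weight -2) first, and FindSIS then recovers the single strongest feature as a sufficient subset for tau = 0.95, since its presence alone keeps the confidence above the threshold.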