Interpreting Black Box Predictions using Fisher Kernels

Research in both machine learning and psychology suggests that salient examples can help humans interpret learning models. To this end, we take a novel look at black box interpretation of test predictions in terms of training examples. Our goal is to ask: which training examples are most responsible for a given set of predictions? To answer this question, we use Fisher kernels as the defining feature embedding of each data point, combined with Sequential Bayesian Quadrature (SBQ) for efficient selection of examples. In contrast to prior work, our method seamlessly handles test-prediction subsets of any size in a principled way. We theoretically analyze our approach, providing novel convergence bounds for SBQ over discrete candidate atoms. Our approach recovers the application of influence functions for interpretability as a special case, yielding novel insights from this connection. We also present applications of the proposed approach to three use cases: cleaning training data, fixing mislabeled examples, and data summarization.


1 Introduction

It has long been established that using examples to enable interpretability is one of the most effective approaches for human learning and understanding [21, 4, 15]. The ability to interpret a model using examples from the data can lead to more informed decision-based systems and a better understanding of the inner workings of the model [17, 16]. In this work, we are interested in finding data points, or prototypes, that are “most responsible” for the underlying model making specific predictions of interest. To this end, we develop a novel method that is model agnostic and only requires access to function and gradient oracles.

More formally, we aim to approximate the empirical test data distribution using samples from the training data. Our approach is to first embed all the points in the space induced by the Fisher kernel [13]. This provides a principled way to quantify the closeness of two points with respect to the similarity induced by the trained model. If two points in this space are close, then intuitively the model treats them similarly. We formally show that the influence-function-based approach to interpretability [17] is essentially doing the same thing.

Thus, our goal is to find a subset of the training data that, when embedded in the same model-induced space, is close to the test set in the distributional sense. We build this subset from the training data sequentially using a greedy method called Sequential Bayesian Quadrature (SBQ) [22]. SBQ is an importance-sampling-based algorithm to estimate the expected value of a function under a distribution using discrete sample points drawn from it. To the best of our knowledge, SBQ has not previously been used in conjunction with Fisher kernels for interpretability. Moreover, we leverage recent research in discrete optimization to provide novel convergence rates for the algorithm over discrete atomic sets. Our analysis also yields novel, more scalable algorithm variants of SBQ with corresponding constant-factor guarantees.

Our key contributions are as follows:

  • We propose a novel method to select salient training data points that explain test set predictions for black box models.

  • To solve the resulting combinatorial problem, we develop new faster convergence guarantees for greedy Sequential Bayesian Quadrature on discrete candidate sets. One novel insight that results is the applicability of more scalable algorithm variants for SBQ with provable bounds. These theoretical insights may be of independent interest.

  • We recover the influence function based approach of Koh and Liang [17] as a special case. This connection again yields several novel insights about using influence functions for model interpretation and training side adversarial attacks. Most importantly, we establish the importance of the Fisher space for robust learning that can hopefully lead to promising future research directions.

  • To highlight the practical impact of our interpretability framework, we present its application to three different real-world use cases.

Related work: There has been a lot of recent interest in model interpretation and its applications. We therefore focus our discussion on the most closely related research. Our approach has a similar motivation to that of Koh and Liang [17], who proposed the use of influence functions for finding the training data point most influential for a given test prediction. The intuition revolves around infinitesimally perturbing a training data point and evaluating the corresponding impact on the test point. Their method is designed only for single data points, so its extension to selecting multiple data points required an unmotivated heuristic. A complementary line of research revolves around feature-based interpretation of models: instead of choosing representative data points, the goal is to reveal which features are important for a prediction (Ribeiro et al. [24]). Recently, Kim et al. [16] made use of the unweighted MMD function to propose the selection of prototypes and criticisms. While their approach can be used for exploratory analysis of the data, it has not been extended to explaining a model. Their focus, moreover, is on the use of criticisms in addition to examples as a vital component of exploring datasets.

Fisher kernels were proposed to exploit the implicit embedding of a structured object in a generative model for discriminative purposes [13], and have since been applied successfully in a variety of applications [23]. The goal is to design a kernel for generative models of structured objects that captures the “similarity” of the said objects in the corresponding embedding space. The kernel itself can then be used out of the box in discriminative models such as support vector machines.

2 Background

In this section, we provide an overview of the technical background required for our setup. We begin by fixing some notation. We represent sets using sans-serif fonts, e.g. $\mathsf{S}$. Vectors are represented using lower-case bold letters, e.g. $\mathbf{x}$, and matrices are represented using upper-case bold letters, e.g. $\mathbf{K}$. Non-bold letters are used for scalars and function names.

2.1 Fisher Kernels

The notion of similarity that Fisher kernels employ is that if two objects are structurally similar, then slight perturbations in the neighborhood of the fitted parameters $\hat{\theta}$ would impact the fit of the two objects similarly. In other words, the feature embedding $\phi_{\hat\theta}(x) = \nabla_\theta \log p(x \mid \theta)\,\big|_{\theta=\hat\theta}$ for an object $x$ can be interpreted as a feature mapping, which can then be used to define a similarity kernel by a weighted dot product:

$$k(x_i, x_j) = \phi_{\hat\theta}(x_i)^\top\, \mathcal{I}^{-1}\, \phi_{\hat\theta}(x_j),$$

where the matrix $\mathcal{I} = \mathbb{E}_x\big[\phi_{\hat\theta}(x)\, \phi_{\hat\theta}(x)^\top\big]$ is the Fisher information matrix. The information matrix serves to re-scale the dot product, and is often taken as the identity since it loses significance in the limit [13]. The corresponding kernel $k(x_i, x_j) = \phi_{\hat\theta}(x_i)^\top \phi_{\hat\theta}(x_j)$ is then called the practical Fisher kernel and is often used in practice. We note, however, that dropping $\mathcal{I}$ had a significant impact on performance in our method, so we employ the full kernel. The practical Fisher kernel is nevertheless important to mention here: as we show in Section 5, using the practical Fisher kernel recovers the influence function based approach to interpretability [17] as a special case. Another interpretation of the Fisher kernel is that it defines the inner product of the directions of gradient ascent over the Riemannian manifold on which the generative model lies [25].
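As a small illustration of these two kernels, the sketch below builds both from a matrix of per-example score vectors (one row per data point). The gradient matrix `G` and the stabilizer `eps` are assumptions made for this sketch, not quantities defined in the paper.

```python
import numpy as np

def fisher_kernels(G, eps=1e-6):
    """Build Fisher kernels from per-example gradients of the log-likelihood.

    G : (n, d) array whose i-th row is grad_theta log p(x_i | theta_hat).
    Returns the practical kernel G G^T and the full kernel G I^{-1} G^T,
    where I is the empirical Fisher information (plus eps*I for stability).
    """
    n, d = G.shape
    practical = G @ G.T                          # practical Fisher kernel
    I_hat = (G.T @ G) / n + eps * np.eye(d)      # empirical Fisher information
    full = G @ np.linalg.solve(I_hat, G.T)       # full Fisher kernel
    return practical, full
```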

While an appropriate feature mapping is crucial for predictive tasks, we observe that it is also vital for interpretability. Fisher kernels are ideal for our task because they seamlessly extract model-induced data similarity from the trained model that we wish to interpret. To further motivate that such a task cannot be trivially performed by something like an unsupervised parameter sweep over RBF kernels, we perform a simple toy experiment illustrated in Figure 1.

Figure 1: A toy experiment to illustrate the usefulness of the Fisher space mapping. [Left] 1200 samples on U[1,2] × U[1,2] with two labels, Green and Red, as illustrated. A specific green point X is selected for further experiment. [Mid] Closest 40 (Set A) and farthest 40 (Set B) points in terms of RBF kernel similarity. A distance-based kernel such as RBF would yield these points as most and least similar to X, respectively. [Right] Closest 40 (Set A') and farthest 40 (Set B') points to X in terms of the Fisher kernel similarity computed from a fitted logistic regression model. The decision boundary of the logistic regression is also shown; it predicts everything below it as red and everything above it as green. The Fisher “closeness” here takes into account the label of each point as well as the log-likelihood gradient on the contour of the loss function and its direction for each point. Points lying exactly on the boundary are a degenerate case for the gradient and hence for their Fisher similarity with all other points.

2.2 Bayesian Quadrature

Bayesian quadrature [22] is a method used to approximate the expectation of a function by a weighted sum of a few evaluations of the said function. Say a function $f$ is defined on a measurable space $\mathcal{X}$ equipped with a distribution $p$. Consider the integral

$$Z = \int_{\mathcal{X}} f(x)\, dp(x) \;\approx\; \sum_{i=1}^{n} w_i\, f(x_i), \qquad (1)$$

where the $w_i$ are the weights associated with function evaluations at the sample points $x_i$. Using $w_i = 1/n$ and randomly sampling $x_i \sim p$ recovers standard Monte Carlo integration. Other methods include kernel herding [3] and quasi-Monte Carlo [7], both of which use $w_i = 1/n$ but use specific schemes to draw the $x_i$. Bayesian quadrature allows one to consider non-uniform weights $w_i$ given a functional prior for $f$. The samples $x_i$ can then be chosen as the ones that minimize the posterior variance [12], as we shall see in the sequel. The corresponding weights can be calculated directly from the posterior mean. We impose a Gaussian Process prior on the function, $f \sim \mathcal{GP}(0, k(\cdot, \cdot))$, with a kernel function $k$. The algorithm SBQ proceeds as follows. Say we have already chosen $n$ points $\mathsf{S} = \{x_1, \dots, x_n\}$. The posterior of $f$ given the evaluations has the mean function

$$\bar{f}(x) = \mathbf{k}_{\mathsf{S}}(x)^\top\, \mathbf{K}_{\mathsf{S}}^{-1}\, \mathbf{f}_{\mathsf{S}},$$

where $\mathbf{f}_{\mathsf{S}}$ is the vector of function evaluations $[f(x_1), \dots, f(x_n)]^\top$, $\mathbf{k}_{\mathsf{S}}(x)$ is the vector of kernel evaluations $[k(x, x_1), \dots, k(x, x_n)]^\top$, and $\mathbf{K}_{\mathsf{S}}$ is the kernel matrix with $[\mathbf{K}_{\mathsf{S}}]_{ij} = k(x_i, x_j)$.

We now focus on sampling the points $x_i$. The quadrature estimate provides not only the mean but the full distribution as its posterior. The posterior covariance can be written as

$$\mathrm{Cov}\big(f(x), f(x')\big) = k(x, x') - k(\mathbf{X}_{\mathsf{S}}, x)^\top\, \mathbf{K}_{\mathsf{S}}^{-1}\, k(\mathbf{X}_{\mathsf{S}}, x'),$$

where $\mathbf{X}_{\mathsf{S}}$ is the matrix formed by stacking the selected points, and the kernel notation is overloaded so that $k(\mathbf{X}_{\mathsf{S}}, x)$ represents the column vector obtained by stacking $k(x_i, x)$ for $i \in \mathsf{S}$. The posterior over the function also yields a posterior over the expectation $Z$ defined in (1). For convenience, define $z_i = \int k(x, x_i)\, dp(x)$ and the vector $\mathbf{z}_{\mathsf{S}} = [z_1, \dots, z_n]^\top$. Then it is straightforward to see that $\mathbb{E}[Z \mid \mathbf{f}_{\mathsf{S}}] = \mathbf{z}_{\mathsf{S}}^\top \mathbf{K}_{\mathsf{S}}^{-1} \mathbf{f}_{\mathsf{S}}$, so the weights in (1) can be written as $\mathbf{w} = \mathbf{K}_{\mathsf{S}}^{-1} \mathbf{z}_{\mathsf{S}}$.

We can write the variance of $Z$ as

$$\mathrm{Var}(Z \mid \mathbf{f}_{\mathsf{S}}) = \int\!\!\int k(x, x')\, dp(x)\, dp(x') \;-\; \mathbf{z}_{\mathsf{S}}^\top\, \mathbf{K}_{\mathsf{S}}^{-1}\, \mathbf{z}_{\mathsf{S}}. \qquad (2)$$

The Sequential Bayesian Quadrature (SBQ) algorithm samples the points in a greedy fashion with the goal of minimizing the posterior variance (2) of the computed approximate integral: at each step, it adds the candidate point whose inclusion reduces the variance the most.
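As a small numerical companion to (1) and (2), the sketch below computes the quadrature weights and the posterior variance, assuming the kernel matrix over the selected points and the kernel mean embedding vector (with entries $z_i = \int k(x, x_i)\, dp(x)$) are already available; the jitter term is an illustration-only stabilizer.

```python
import numpy as np

def bq_weights_and_variance(K_S, z_S, kernel_mean_norm, jitter=1e-8):
    """Bayesian-quadrature weights w = K_S^{-1} z_S and the posterior variance of Z.

    K_S              : (n, n) kernel matrix over the selected points.
    z_S              : (n,) vector of kernel mean embeddings z_i = E_p[k(x, x_i)].
    kernel_mean_norm : scalar double integral of k under p (first term of (2)).
    """
    K = K_S + jitter * np.eye(len(z_S))    # numerical stabilizer (assumption)
    w = np.linalg.solve(K, z_S)            # quadrature weights of eq. (1)
    var_Z = kernel_mean_norm - z_S @ w     # posterior variance, eq. (2)
    return w, var_Z
```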

3 Prototype Selection using Fisher Kernels

In this section, we present our method to select sample representatives using Fisher kernels. For a loss function $L(z, \theta)$, where $\theta$ are the parameters of the model and $z$ is the data, to train a parametric model one would minimize the expected loss

$$R(\theta) = \int L(z, \theta)\, dp(z), \qquad (3)$$

where $p$ is the data distribution. Since we usually do not have access to the true data distribution, $p$ is typically replaced by the empirical data distribution $\hat{p}$, which places mass $1/N$ on $z$ if $z$ exists in the dataset and $0$ otherwise, where $N$ is the size of the dataset. Our goal in this work is to approximate the integral (3) over the test or validation set (which specifies the distribution for us) using a weighted sum of a few points from the training dataset, as in (1). Note that while the training samples in general have measure zero under the test or validation distribution in Euclidean space, the smoothing GP prior over the embedding space still allows samples from the former to approximate the latter.

For the kernel function in the GP prior in Bayesian quadrature, we use the Fisher kernel of the trained parametric model. The SBQ selection strategy inherently establishes a trade-off between selecting data points that are representative of the parametric fit and the diversity of the selected points. To see this, consider the SBQ cost function (2). At every new selection, on one hand, the cost function rewards the selection of data points that are clustered closer together in the feature-mapping space, which increases the value of the kernel affinity term and in turn decreases the variance. However, on the other hand, selecting points close to each other decreases the relevant eigenvalues of the selected kernel matrix, thereby increasing the variance [12]. Thus, SBQ seeks a trade-off between these two effects.

3.1 An Efficient Greedy Algorithm

In this section, we provide a practical greedy algorithm that selects representative prototypes by using SBQ to optimize (2). Note that the first term of (2) is constant w.r.t. the selected set. Moreover, since $p$ here is the empirical test distribution, $z_i = \int k(x, x_i)\, dp(x) = \frac{1}{m}\sum_{j=1}^{m} k(x_j^{\text{test}}, x_i)$ for each $x_i$ in the training set and each $x_j^{\text{test}}$ in the test set. This can be pre-computed by a row or column sum over the kernel between the entire training and test sets in $O(nm)$ time and stored as a vector of size $n$ to speed up later computation, where $n$ is the size of the training set and $m$ is the size of the test set. Our greedy cost function at step $t$ is thus

$$x_{t+1} = \arg\max_{x \in \text{train} \setminus \mathsf{S}_t} \; \mathbf{z}_{\mathsf{S}_t \cup \{x\}}^\top\, \mathbf{K}_{\mathsf{S}_t \cup \{x\}}^{-1}\, \mathbf{z}_{\mathsf{S}_t \cup \{x\}}. \qquad (4)$$

The solution set is then updated as $\mathsf{S}_{t+1} = \mathsf{S}_t \cup \{x_{t+1}\}$. The optimization (4) requires an inverse of the kernel matrix of the already selected data points, which can be computationally expensive. However, we can use the following result from linear algebra about block matrix inverses to speed up operations.

Proposition 1.

For an invertible matrix $\mathbf{A}$, a column vector $\mathbf{b}$, and a scalar $c$, let
$$\mathbf{M} = \begin{pmatrix} \mathbf{A} & \mathbf{b} \\ \mathbf{b}^\top & c \end{pmatrix} \quad \text{and} \quad s = c - \mathbf{b}^\top \mathbf{A}^{-1} \mathbf{b};$$
then
$$\mathbf{M}^{-1} = \begin{pmatrix} \mathbf{A}^{-1} + \tfrac{1}{s}\, \mathbf{A}^{-1}\mathbf{b}\,\mathbf{b}^\top \mathbf{A}^{-1} & -\tfrac{1}{s}\, \mathbf{A}^{-1}\mathbf{b} \\ -\tfrac{1}{s}\, \mathbf{b}^\top \mathbf{A}^{-1} & \tfrac{1}{s} \end{pmatrix}.$$

Proposition 1 allows us to build the inverse of the kernel matrix in (4) greedily. The full algorithm is presented in Algorithm 1.
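Before the full algorithm, a quick numerical sketch of the Proposition 1 update; the symbol names `A`, `b`, `c` mirror the proposition, and the comparison against a direct inverse is for illustration only.

```python
import numpy as np

def block_inverse_update(A_inv, b, c):
    """Given A^{-1}, return the inverse of [[A, b], [b^T, c]] without re-inverting."""
    u = A_inv @ b                              # A^{-1} b
    s = c - b @ u                              # Schur complement (a scalar)
    top_left = A_inv + np.outer(u, u) / s
    top_right = -u[:, None] / s
    return np.block([[top_left, top_right],
                     [top_right.T, np.array([[1.0 / s]])]])

# Sanity check against a direct inverse on a random kernel-like PSD matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
K = X @ X.T + 1e-3 * np.eye(6)
A, b, c = K[:5, :5], K[:5, 5], K[5, 5]
assert np.allclose(block_inverse_update(np.linalg.inv(A), b, c), np.linalg.inv(K))
```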

1:  INPUT: training data, test data, kernel function $k$, number of selections $T$
2:  // Pre-compute the affinity vector $\mathbf{z}$
3:  $z_i \leftarrow \frac{1}{m}\sum_{j} k(x_i, x_j^{\text{test}})$ for all $x_i$ in training and $x_j^{\text{test}}$ in test
4:  // Build the solution set greedily. Maintain the current inverse $\mathbf{K}_{\mathsf{S}}^{-1}$ at each iteration as $\mathbf{B}$
5:  $\mathsf{S} \leftarrow \emptyset$, $\mathbf{B} \leftarrow [\,]$
6:  for $t = 1, \dots, T$ do
7:     best $\leftarrow -\infty$, $x^\star \leftarrow$ null
8:     for each candidate $x$ in training data $\setminus \mathsf{S}$ do
9:        Form $\mathbf{z}_{\mathsf{S} \cup \{x\}}$ and the new kernel row $[k(x, x_i)]_{i \in \mathsf{S}}$, $k(x, x)$
10:       Get $\mathbf{B}_x$ as the updated inverse $\mathbf{K}_{\mathsf{S} \cup \{x\}}^{-1}$ using Prop. 1
11:       If $\mathbf{z}_{\mathsf{S} \cup \{x\}}^\top \mathbf{B}_x \mathbf{z}_{\mathsf{S} \cup \{x\}} >$ best, set best $\leftarrow \mathbf{z}_{\mathsf{S} \cup \{x\}}^\top \mathbf{B}_x \mathbf{z}_{\mathsf{S} \cup \{x\}}$ and $x^\star \leftarrow x$
12:    end for
13:    $\mathsf{S} \leftarrow \mathsf{S} \cup \{x^\star\}$
14:    Update: $\mathbf{B} \leftarrow \mathbf{K}_{\mathsf{S}}^{-1}$ using Prop. 1
15: end for
16: return $\mathsf{S}$
Algorithm 1 Greedy Prototype Selection

Algorithm 1 obviates the need for taking explicit inverses and only requires oracle access to the kernel function. The algorithm itself is embarrassingly parallel over multiple cores. We study guarantees for the algorithm in Section 4, which also motivates its more scalable variants.
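A compact Python sketch of the selection loop follows. It assumes precomputed Fisher kernel blocks `K_train` (training-by-training) and `K_cross` (training-by-test); for clarity it re-solves the small linear system at each step rather than carrying the Proposition 1 update, which does not change the selections.

```python
import numpy as np

def greedy_prototype_selection(K_train, K_cross, T):
    """Greedily select T training prototypes by maximizing z_S^T K_S^{-1} z_S (eq. 4).

    K_train : (n, n) Fisher kernel among training points.
    K_cross : (n, m) Fisher kernel between training and test points.
    """
    n = K_train.shape[0]
    z = K_cross.mean(axis=1)                # precomputed affinity vector, O(nm)
    selected = []
    for _ in range(T):
        best_gain, best_i = -np.inf, None
        for i in range(n):
            if i in selected:
                continue
            cand = selected + [i]
            K_S = K_train[np.ix_(cand, cand)] + 1e-8 * np.eye(len(cand))
            z_S = z[cand]
            gain = z_S @ np.linalg.solve(K_S, z_S)   # z_S^T K_S^{-1} z_S
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
    return selected
```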

4 Analysis

The greedy algorithm described in Algorithm 1, while simple, also has interesting optimization guarantees that make it attractive in practice. In this section, we provide convergence guarantees for the cost function (2) as the number of selections increases. Typically, for functions like these in the general case, the candidate set of atoms used to build the approximation is uncountably infinite: any possible sample from the underlying density is a candidate. As such, the existing convergence results are based on a Frank-Wolfe analysis on the marginal polytope [2]. For us, however, the underlying set of candidate atoms consists of discrete points, which are at worst countably infinite. It is therefore worth asking whether we can provide better rates for this special case than the generally available guarantees. It turns out that this is indeed possible: we leverage recent research in discrete optimization to provide a linear convergence rate for the forward greedy algorithm.

Recall that our set optimization problem (from (4)) is to maximize

$$f(\mathsf{S}) = \mathbf{z}_{\mathsf{S}}^\top\, \mathbf{K}_{\mathsf{S}}^{-1}\, \mathbf{z}_{\mathsf{S}} \quad \text{over } \mathsf{S} \subseteq \mathsf{V},\ |\mathsf{S}| \le k, \qquad (5)$$

where $\mathsf{V}$ is the set of candidate training data points. For the RKHS $\mathcal{H}$ induced by the kernel $k$, we can equivalently re-write the cost function as [12, 2]

$$\min_{\mathbf{w}}\; \Big\| \mu_p - \sum_{i \in \mathsf{S}} w_i\, \phi(x_i) \Big\|_{\mathcal{H}}^2, \qquad (6)$$

where $\mu_p$ is the mean embedding of the target (test) distribution. For a matrix $\mathbf{M}$, the smallest (largest) $r$-sparse eigenvalue is the minimum (maximum) of the quadratic form $\mathbf{v}^\top \mathbf{M} \mathbf{v}$ under the constraints $\|\mathbf{v}\|_2 = 1$ and $\|\mathbf{v}\|_0 \le r$. We present our convergence guarantee next.

Theorem 2.

Say the RKHS $\mathcal{H}$ is finite dimensional and the feature maps have bounded norm, i.e. $\|\phi(x)\|_{\mathcal{H}} \le R$ for all candidates $x$. Let $c$ be the smallest sparse eigenvalue and $C$ be the largest sparse eigenvalue (at the appropriate sparsity levels) of the kernel matrix of the training set. If $\mathsf{S}^g$ of size $k$ is the set returned by Algorithm 1 and $\mathsf{S}^\star$ of size $m$ is the optimal solution of (6), then for any $\epsilon > 0$, if $k \ge \frac{C}{c}\, m \log\frac{1}{\epsilon}$, we have $f(\mathsf{S}^g) \ge (1 - \epsilon)\, f(\mathsf{S}^\star)$.

Discussion:

Theorem 2 provides exponential convergence for the cost function $f$. For the same objective, using Frank-Wolfe on the marginal polytope, the best known guarantees in the most general case are sublinear, of order $O(1/k)$, for finite dimensional bounded Hilbert spaces [2]. In the special case when the optimum lies in the relative interior of the polytope, one does get faster exponential convergence. Theorem 2 provides an alternative condition that is sufficient for exponential convergence when the optimum instead lies on the boundary of the marginal polytope, i.e. when it is a linear combination of atoms. The lower sparse eigenvalue condition is applied via a union bound and only needs to hold over sets of the size of the greedy selection plus an optimal-sized subset.

4.1 Scalability

For massively large real-world datasets, the standard greedy algorithm (SBQ) may be prohibitively slow. In addition to run time, there are also memory considerations: SBQ requires building and storing an $n \times n$ kernel matrix over the training set of size $n$. We can use alternative variants of the greedy algorithm that are either faster, with some compromise on the convergence rate, or can distribute the kernel over multiple machines. These variants are presented in Table 1 with their corresponding references. To the best of our knowledge, these variants have not been suggested for solving problem (1) before and may be of independent interest. The convergence rates are obtained similarly to the proof of Theorem 2 by plugging in the respective approximation guarantees in lieu of Lemma 7 in the appendix.

Algorithm Runtime Memory required Convergence rate
SBQ (Algorithm 1)
Matching Pursuit [8]
-Stochastic Selection [14]
Distributed ( machines) [14]
Table 1: Greedy variants for prototype selection. Here $n$ is the size of the training set and $m$ is the test set size. The convergence rate refers to the number of iterations needed to reach $\epsilon$ accuracy. For the Stochastic and Distributed variants, the guarantee holds in expectation.
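To make the scalability point concrete, below is a minimal sketch of the stochastic-greedy idea referenced in Table 1: at each step, marginal gains are evaluated only on a random candidate subset rather than over the full training set. The `gain` callback and the subsample size `s` are assumptions for illustration; the provable variants cited in the table differ in their exact sampling schemes.

```python
import numpy as np

def stochastic_greedy(n, T, s, gain, rng=None):
    """Greedy selection that scans only a random subset of s candidates per step.

    gain(selected, i) should return the objective value of selected + [i],
    e.g. z_S^T K_S^{-1} z_S from eq. (4).
    """
    rng = rng or np.random.default_rng()
    selected = []
    for _ in range(T):
        pool = [i for i in range(n) if i not in selected]
        candidates = rng.choice(pool, size=min(s, len(pool)), replace=False)
        best = max(candidates, key=lambda i: gain(selected, i))
        selected.append(int(best))
    return selected
```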

5 Relationship with Influence functions

Influence functions [5] have recently been proposed as a tool for interpreting model predictions [17]. Since our goal is the same, it is natural to ask whether there is a relationship between the two approaches. To select the most influential training point for a given test point, influence functions approximate which training point, when infinitesimally upweighted, has the largest effect on the prediction for the test point in question. In this section, we show that our method recovers the influence function approach used by Koh and Liang [17] for selecting influential training data points. In addition, we show how the adversarial training-side attacks proposed by Koh and Liang [17], which perturb features of training data points, can be re-interpreted as standard adversarial attacks in the RKHS induced by the Fisher kernel. Our analysis yields new insights about the influence function based approach and also establishes the importance of the Fisher space for robust learning.

5.1 Choosing training data points

We briefly introduce the influence function approach to model interpretation. For simplicity, we re-use the notation of Koh and Liang [17]. Let $z_{\text{test}}$ be the test data point in question, $\mathsf{Z}$ be the training set, $L$ be the loss function fitted on the training set, $\hat\theta$ be the optimizer of the empirical risk, and $H_{\hat\theta}$ be the Hessian of the loss function evaluated at $\hat\theta$. Then the most influential training data point is the solution of the optimization problem

$$\arg\max_{z \in \mathsf{Z}} \; -\nabla_\theta L(z_{\text{test}}, \hat\theta)^\top\, H_{\hat\theta}^{-1}\, \nabla_\theta L(z, \hat\theta). \qquad (7)$$

We compare the two discrete optimization problems (5) and  (7). Even though (5) uses first order information only while (7) uses both first order and second order information about the loss function, the following proposition illustrates a connection.

Proposition 3.

If the loss function takes the form of a negative log-likelihood, $L(z, \theta) = -\log p(z \mid \theta)$, then $H_{\hat\theta} = \mathbb{E}_z\big[\nabla_\theta \log p(z \mid \hat\theta)\, \nabla_\theta \log p(z \mid \hat\theta)^\top\big]$, i.e. the Hessian coincides with the Fisher information matrix, where we have overloaded $\mathbb{E}_z$ to denote the empirical average over the training set.

Proof.

Let $L(z, \theta) = -\log p(z \mid \theta)$, since the loss takes the form of a negative log-likelihood. Then
$$\nabla^2_\theta L(z, \theta) = -\frac{\nabla^2_\theta\, p(z \mid \theta)}{p(z \mid \theta)} + \nabla_\theta \log p(z \mid \theta)\, \nabla_\theta \log p(z \mid \theta)^\top.$$
Since $\hat\theta$ is the optimizer of the empirical risk, the expectation of the first term vanishes at $\hat\theta$, from which the result directly follows. ∎

From Proposition 3, it is easy to see that the optimization problems (5) and (7) are the same under some conditions. To be more precise, we can make the following statement. If the cost function is in the form of a negative log-likelihood, (7) is a special case of (5) with the practical Fisher kernel (see Section 2.1) when the test set is of size 1 and a single training point is selected.

This equivalence gives several insights about influence functions that were not known before: (1) it generalizes influence functions to multiple data points for both test and training sets in a principled way and provides a probabilistic foundation for the method; (2) it establishes the importance of the RKHS induced by the Fisher kernel by re-interpreting the influence function optimization problem in that space (see Lemma 4 in the appendix); (3) for negative log-likelihood losses, it renders the expensive calculation of the Hessian in the work of Koh and Liang [17] redundant, since by Proposition 3 first-order information suffices; (4) it provides theoretical approximation guarantees (see Lemma 7 in the appendix) for the selection of multiple training data points, in contrast to Koh and Liang [17], who made multiple selections greedily only as a heuristic.
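To make the connection tangible, the sketch below scores training points for a single test point in both ways: the influence-function score of [17] using an explicit Hessian, and the Fisher-kernel similarities that, by Proposition 3, replace the Hessian with the empirical gradient outer product for negative log-likelihood losses. The gradient matrices and the Hessian are assumed to be precomputed; this is an illustration, not the authors' code.

```python
import numpy as np

def influence_scores(g_test, G_train, H):
    """Influence of each training point on one test prediction, as in eq. (7):
    -g_test^T H^{-1} g_i, computed for all rows g_i of G_train at once."""
    return -(G_train @ np.linalg.solve(H, g_test))

def practical_fisher_scores(g_test, G_train):
    """Practical-Fisher similarity: plain gradient inner products g_i^T g_test."""
    return G_train @ g_test

def full_fisher_scores(g_test, G_train, eps=1e-6):
    """Full-Fisher similarity g_i^T I^{-1} g_test, with I the empirical Fisher
    information; for negative log-likelihood losses, Proposition 3 identifies I
    with the Hessian used by the influence scores above."""
    n, d = G_train.shape
    I_hat = (G_train.T @ G_train) / n + eps * np.eye(d)
    return G_train @ np.linalg.solve(I_hat, g_test)
```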

5.2 Unified view of adversarial attacks

Given a test data point $x$, an adversarial example is generated by adding a small perturbation, $\tilde{x} = x + \delta$, where $\delta$ is chosen so that $\tilde{x}$ is indistinguishable from $x$ by a human but causes the model to make an incorrect prediction on $\tilde{x}$ [9]. For training data attacks, it is instead a training data point that is perturbed so as to induce an incorrect prediction on a test data point. For a loss function $L$, a test-side attack perturbing a test data point would solve the optimization problem

$$\max_{\|\delta\| \le \epsilon}\; L(x + \delta,\, y,\, \hat\theta). \qquad (8)$$

While the optimization (8) is hard in general, typically a few iterations of projected gradient ascent or FGSM are applied. We refer to the recent work by Madry et al. [19] for details.
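For reference, here is a generic projected gradient sketch of (8) for the test-side setting; the step size, iteration count, and `grad_loss` oracle are illustrative assumptions, not the configuration used in the papers cited above.

```python
import numpy as np

def pgd_attack(x, grad_loss, eps, alpha=None, steps=10):
    """Projected gradient ascent on the loss within an L_inf ball of radius eps.

    grad_loss(x_adv) should return the gradient of the loss w.r.t. the input.
    """
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv))   # FGSM-style ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)            # project back into the ball
    return x_adv
```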

For training side attacks, Koh and Liang [17] perform the following iterative update:

(9)

where the quantities involved are a candidate training example to perturb, the target test example, the projection operator onto the set of valid images, and a fixed step size.

Using the results in Section 5.1, it is straightforward to see that if we measure the loss in the RKHS induced by the practical Fisher kernel, and treat the perturbation as acting on a training example instead of the test example, we recover the iterative step (9) as a special case of projected gradient ascent on (8).

This equivalence provides a unified view of both training-side and test-side attacks. As such, the large literature on robust learning against test-side attacks can be applied to robustness against training-side attacks as well. Moreover, our framework also provides a principled way to carry out training-side attacks that target multiple test examples at once, instead of attacking individual test points separately.

6 Experiments

We present empirical use cases of our framework. We chose the experiments to illustrate the flexibility of our framework, as well as to emphasize its generalization capacity over and above influence functions. As such, we present experiments that make use of set influence (as opposed to single-data-point influence) for data cleaning and summarization (Sections 6.1 and 6.3). To illustrate the potential benefit of using the full Fisher kernel, as opposed to the simplified practical Fisher kernel used by influence functions, we present an evaluation on the task of fixing mislabeled examples as presented by Koh and Liang [17] (Section 6.2).

6.1 Data Cleaning: removing malicious training data points

In this section, we present experiments on the MNIST dataset to illustrate the effectiveness of our method in interpreting model behavior for the test population. Some of the handwritten digits in MNIST are hard even for a human to classify correctly. Such points can adversely affect the training of the classifier, leading to lower predictive accuracy. Our goal in this experiment is to identify some of these misleading training data points and remove them to see if doing so improves predictive accuracy. To illustrate the flexibility of our approach, we focus only on the digits 4 and 9 in the test data that were misclassified by our model, and then select the training data points responsible for those misclassifications.

The MNIST dataset [18] consists of images of handwritten digits and their respective labels. Each image is a 28×28 pixel array. There are 70,000 images in total, split into 60,000 training examples and 10,000 test examples. The ten digits are about evenly represented in both the training and the test data.

For the classification task, we use tensorflow [1] to build a two-layer convolutional network with max pooling, followed by a fully connected layer and a softmax layer. The convolutions use a stride of one pixel, with zero padding to match the input size. We use dropout to avoid overfitting. The network was trained using the built-in Adam optimizer for a fixed number of steps with a fixed batch size. The trained model attains high accuracy both on the entire test set and on the subset of the test set consisting only of the chosen digits 4 and 9.

(a) A subset of the selected prototypes held responsible for misclassifying 4s and 9s in the test set.
(b) Accuracy fractions on the test-set 4s and 9s (Test49) and on the full test set after removing random (Rand), algorithm-selected (Sel), or curated (Cur) prototypes.
Figure 2: MNIST experiment for selecting malicious training data points.

After training is completed, we obtain the gradients of the training and test data points w.r.t. the parameters of the network by passing each point through the trained (and subsequently frozen) network. The obtained gradient vectors are used to calculate the Fisher kernel as detailed in Section 2.1. We then employ Algorithm 1 on the resulting Fisher kernel between the training and test datasets to obtain the top prototypes, i.e. the data points from the training set that our algorithm deems most responsible for misclassifying 4s and 9s.
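The per-example gradients can be extracted with any framework; the paper uses tensorflow [1]. Below is a minimal sketch using the modern tf.GradientTape API, which is an assumption of this sketch (the original work predates it); `model` and `loss_fn` stand in for whatever trained network and loss are being interpreted.

```python
import tensorflow as tf

def per_example_gradients(model, loss_fn, xs, ys):
    """Flattened per-example loss gradients w.r.t. the trainable parameters of a
    trained (frozen) Keras model; the rows of the result feed the Fisher kernel."""
    rows = []
    for x, y in zip(xs, ys):
        with tf.GradientTape() as tape:
            pred = model(tf.expand_dims(x, 0), training=False)
            loss = loss_fn(tf.expand_dims(y, 0), pred)
        grads = tape.gradient(loss, model.trainable_variables)
        rows.append(tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0))
    return tf.stack(rows).numpy()   # shape: (num_examples, num_parameters)
```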

To check whether these points are indeed misguiding the model, we remove the top 50, 100, 200, and 300 of the selected points from the training data, retrain the model, and re-evaluate on the test set. These numbers are reported as Sel50, Sel100, Sel200, and Sel300 in Figure 2(b). Indeed, we see an improvement in test accuracy up to Sel200, indicating the importance of removing the selected potentially malicious points from the training set, and a subsequent decay in performance for Sel300, most likely due to the removal of too many useful points in addition to malicious ones. For comparison, we also remove the respective number of points at random and repeat the experiment. Removal of random points from the training data led to a general decay in predictive accuracy.

Finally, we manually selected 50 points from the chosen points as a curated set, based on how ill-formed the digits were (see Figure 2(a)). Removing these points from the training set before retraining and testing gives predictive accuracy (reported as Cur50) comparable to Sel100 but still worse than Sel200, indicating that the algorithm identified more malicious points among its top 200 selections than our manually curated points.

6.2 Fixing Mislabeled Examples

Figure 3: Comparison of SBQ with influence functions on the task of fixing flipped labels.

In this experiment, we use our framework to detect and fix mislabeled examples. Labor-intensive labeling tasks naturally result in mislabeled data, especially in real-world datasets. Such data points can cause poor performance and degradation of the model. We show that our method can be used successfully for this purpose, improving on the recent results of Koh and Liang [17].

We use a small correctly labeled validation set to identify examples from the large training set that are likely mislabeled. We first train a classifier on the noisy training set, and predict on the validation set. We then employ Algorithm 1 to identify training examples that were responsible for making incorrect predictions on the validation set. The potentially mislabeled data points are then chosen by the output of our method. Curation is then simulated on the selected examples in order of selections made (similar to the approach by Koh and Liang [17]), and if the label was indeed wrong, it is fixed. We report on the number of training data points selected vs fixed (the precision metric for incorrectly labeled points) and the respective improvement in unseen test data accuracy.

For evaluation, we use the enron1 email spam dataset used by Koh and Liang [17] and compare against their reported results. The dataset comes with a training and a test split. We randomly select a small set of data points from the training set as the clean curated data. From the remaining training data points, we randomly flip the labels of a fixed fraction of the data. We then use our method and the baselines to select candidates for curation, and report the number of fixes made after these selections along with the corresponding test predictive accuracy. The baselines are selection by (i) top self-influence scores [17], and (ii) random selection of data points. The curated data is used as part of the training set by all methods, and no method had access to the test data. As shown in Figure 3, our algorithm consistently performs better in test accuracy and in the fraction of flips fixed as more and more data is curated.
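A schematic of the simulated curation loop described above; the selection order, the flip mask, and the curation budgets are placeholders for whichever selector and noise process are in play.

```python
import numpy as np

def simulate_curation(selection_order, is_flipped, y_noisy, y_true, budgets):
    """Walk down the selected training points in order, fix any flipped labels,
    and report the fraction of flips repaired at each curation budget."""
    y_fixed = y_noisy.copy()
    fixed_fraction = {}
    for budget in sorted(budgets):
        for i in selection_order[:budget]:
            if is_flipped[i]:
                y_fixed[i] = y_true[i]          # the curator corrects the label
        fixed_fraction[budget] = float(
            np.mean(y_fixed[is_flipped] == y_true[is_flipped]))
    return y_fixed, fixed_fraction
```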

6.3 Data Summarization

In this section, we perform the task of training data summarization. Our goal is to select a few data samples that represent the data distribution sufficiently well, so that a model built on the selected subsample of the training data does not degrade too much in performance on unseen test data. This task is complementary to the task of interpretation, wherein one is interested in selecting training samples that explain particular predictions on the test set. Since we are interested in approximating the test distribution using a few samples from the training set with the goal of predictive accuracy under a given model, our framework of Sequential Bayesian Quadrature with Fisher kernels is directly applicable.

Another method that also aims to do training data summarization is that of coreset selection [11], albeit with a different goal of reducing the training data size for optimization speedup while still maintaining guaranteed approximation to the training likelihood. Since the goal itself is optimization speedup, coreset selection algorithms typically employ fast methods while still trying to capture the data distribution by proxy of the training likelihood. Moreover, the coreset selection algorithm is usually closely tied with the respective model as opposed to being a model-agnostic method like ours.

To illustrate that coreset selection falls short of the goal of competitively estimating the data distribution, we apply our framework to the problem of training data summarization under logistic regression, as considered by Huggins et al. [11] using coreset construction. We experiment with two datasets, ChemReact and CovType. ChemReact consists of chemical feature vectors with a held-out test split; the binary prediction target signifies whether a chemical is reactive. CovType consists of cartographic feature vectors with a held-out test split; the task is to predict whether a particular type of tree is present at each location.

For each dataset, we further randomly split the training data into validation and training portions. For the larger CovType data, we note that selecting about 20,000 training points out of the full training set achieves about the same performance as the full set; hence, we work with 20,000 randomly selected points for speedup. We train the logistic regression model on the new training data and use the validation set as a proxy for the unseen test set. We build the kernel matrix and the affinity vector $\mathbf{z}$, and run Algorithm 1 for various numbers of selections. For the baselines, we use the coreset selection algorithm and random data selection as implemented by Huggins et al. [11]. The results are presented in Figure 4. Our algorithm yields significantly better predictive performance than random subsets and coresets [11] at the same training-subset size, across different subset sizes.
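A sketch of the evaluation loop used for this comparison; `select_subset`, `train_logreg`, and `test_log_likelihood` are placeholder callbacks standing in for the selection method under test (Algorithm 1, coresets, or random), the model fit, and the held-out log-likelihood.

```python
def summarization_curve(X_train, y_train, X_test, y_test, subset_sizes,
                        select_subset, train_logreg, test_log_likelihood):
    """Train on selected subsets of increasing size and record test log-likelihood."""
    curve = []
    for size in subset_sizes:
        idx = select_subset(X_train, y_train, size)   # indices of the chosen subset
        model = train_logreg(X_train[idx], y_train[idx])
        curve.append((size, test_log_likelihood(model, X_test, y_test)))
    return curve
```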

Figure 4: Performance for logistic regression over two datasets (left is ChemReact while right is CovType) of our method (Fisher) vs coreset selection [11] and random data selection. ‘Full’ reports the numbers for training with the entire training set. Fisher achieves much better test LL performance than the baselines over several different subset sizes.

Conclusion:

This manuscript proposed a novel, principled approach for selecting the set of training examples that influences an entire test set given a trained black-box model, extending the notable recently proposed per-example influence to set-wise influence. We also presented novel convergence guarantees for SBQ along with more scalable algorithm variants. Empirical results were presented to highlight the utility of the proposed approach for black-box model interpretability and related tasks. For future work, we plan to investigate the use of model criticisms to provide additional insights into trained models.

References

  • Abadi et al. [2016] Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.
  • Bach et al. [2012] Francis Bach, Simon Lacoste-Julien, and Guillaume Obozinski. On the equivalence between herding and conditional gradient algorithms. In Proceedings of the 29th International Coference on International Conference on Machine Learning, ICML’12, pages 1355–1362, 2012.
  • Chen et al. [2010] Yutian Chen, Max Welling, and Alexander J. Smola. Super-samples from kernel herding. In UAI, 2010.
  • Cohen et al. [1996] M.S. Cohen, J.T. Freeman, and S. Wolf. Metarecognition in time-stressed decision making: Recognizing, critiquing, and correcting. Human Factors, 1996.
  • Cook and Weisberg [1980] R. Dennis Cook and Sanford Weisberg. Characterizations of an empirical influence function for detecting influential cases in regression. Technometrics, 22(4):495–508, 1980.
  • Das and Kempe [2011] Abhimanyu Das and David Kempe. Submodular meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation and Dictionary Selection. In ICML, February 2011.
  • Dick and Pillichshammer [2010] Josef Dick and Friedrich Pillichshammer. Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, New York, NY, USA, 2010.
  • Elenberg et al. [2018] Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, and Sahand Negahban. Restricted Strong Convexity Implies Weak Submodularity. Annals of Statistics, 2018.
  • Goodfellow et al. [2015] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.
  • Gretton et al. [2008] A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. JMLR, 2008.
  • Huggins et al. [2016] Jonathan H. Huggins, Trevor Campbell, and Tamara Broderick. Coresets for scalable bayesian logistic regression. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4080–4088, 2016.
  • Huszar and Duvenaud [2012] Ferenc Huszar and David K. Duvenaud. Optimally-weighted herding is bayesian quadrature. In UAI, 2012.
  • Jaakkola and Haussler [1999] Tommi S. Jaakkola and David Haussler. Exploiting generative models in discriminative classifiers. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pages 487–493, Cambridge, MA, USA, 1999. MIT Press.
  • Khanna et al. [2017] Rajiv Khanna, Ethan R. Elenberg, Alexandros G. Dimakis, Sahand Neghaban, and Joydeep Ghosh. Scalable Greedy Support Selection via Weak Submodularity. AISTATS, 2017.
  • Kim et al. [2014] B. Kim, C. Rudin, and J.A. Shah. The Bayesian Case Model: A generative approach for case-based reasoning and prototype classification. In NIPS, 2014.
  • Kim et al. [2016] Been Kim, Rajiv Khanna, and Oluwasanmi O Koyejo. Examples are not enough, learn to criticize! criticism for interpretability. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2280–2288. 2016.
  • Koh and Liang [2017] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 1885–1894, 2017.
  • LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, volume 86, pages 2278–2324, 1998.
  • Madry et al. [2018] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ICLR, 2018.
  • Nemhauser et al. [1978] George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approximations for maximizing submodular set functions—i. Mathematical Programming, 14(1):265–294, 1978.
  • Newell and Simon [1972] A. Newell and H.A. Simon. Human problem solving. Prentice-Hall Englewood Cliffs, 1972.
  • O’Hagan [1991] A. O’Hagan. Bayes-Hermite quadrature. Journal of Statistical Planning and Inference, 29, 1991.
  • Perronnin et al. [2010] Florent Perronnin, Jorge Sánchez, and Thomas Mensink. Improving the Fisher kernel for large-scale image classification. In Proceedings of the 11th European Conference on Computer Vision: Part IV, pages 143–156, Berlin, Heidelberg, 2010.
  • Ribeiro et al. [2016] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?": Explaining the predictions of any classifier. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 1135–1144, 2016. ISBN 978-1-4503-4232-2.
  • Shawe-Taylor and Cristianini [2004] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004. ISBN 0521813972.

Appendix A Appendix

A.1 Proof of Theorem 2

Our proof follows this sketch. We show that the given problem can be written as a linear regression problem in the induced RKHS. The greedy SBQ algorithm for choosing data points is then equivalent to forward greedy feature selection in the transformed space (Lemma 4). After the selection is made, the weight optimization obtained through the posterior calculation ensures an orthogonal projection (Lemma 5), which means the posterior calculation is nothing but the fitting of least squares regression on the chosen set of features. Finally, we draw upon research in discrete optimization to obtain approximation guarantees for greedy feature selection for least squares regression (Lemma 7), which we use to derive the convergence rates.

We will require the following definition of the Maximum Mean Discrepancy (MMD). MMD is a divergence measure between two distributions $p$ and $q$ over a class of functions $\mathcal{F}$:
$$\mathrm{MMD}(p, q) = \sup_{h \in \mathcal{F}} \Big( \mathbb{E}_{x \sim p}[h(x)] - \mathbb{E}_{x \sim q}[h(x)] \Big).$$
We restrict our attention to cases when $\mathcal{F}$ is (the unit ball of) a Reproducing Kernel Hilbert Space (RKHS), which allows MMD evaluation based only on kernels rather than explicit function evaluations. In that case the supremum is attained at $h^\star \propto \mu_p - \mu_q$, and
$$\mathrm{MMD}(p, q) = \|\mu_p - \mu_q\|_{\mathcal{H}},$$
where $\mu_p$ and $\mu_q$ are the mean function mappings (kernel mean embeddings) under $p$ and $q$ respectively.

We make use of the following lemma that establishes a connection between MMD and Bayesian Quadrature.

Lemma 4.

[12] Let $q_{\mathsf{S}}$ be the discrete distribution established by the weights of the Bayesian quadrature over the selected points. Then the posterior variance of the weighted sum in Bayesian quadrature (2) is equal to $\mathrm{MMD}^2(p, q_{\mathsf{S}})$.

We can make this explicit in our notation. If $\mathcal{F}$ is an RKHS $\mathcal{H}$, we can write the MMD cost function using only the kernel function associated with the RKHS [10] as
$$\mathrm{MMD}^2(p, q_{\mathsf{S}}) = \Big\| \mu_p - \sum_{i \in \mathsf{S}} w_i\, \phi(x_i) \Big\|_{\mathcal{H}}^2 = \int\!\!\int k(x, x')\, dp(x)\, dp(x') - 2 \sum_{i \in \mathsf{S}} w_i\, z_i + \sum_{i, j \in \mathsf{S}} w_i w_j\, k(x_i, x_j),$$
where $\phi$ represents the feature mapping under the kernel function $k$, and $i$ ranges over the selected points that define our discrete distribution $q_{\mathsf{S}}$. Recall that Bayesian quadrature deviates from simple kernel herding by allowing for, and optimizing over, non-uniform weights $w_i$. We can formally show that the weight optimization obtained through the posterior calculation performs an orthogonal projection of $\mu_p$ onto the span of the selected points to get $\mu_{q_{\mathsf{S}}}$ in the induced kernel space.

Lemma 5.

The weights $\mathbf{w} = \mathbf{K}_{\mathsf{S}}^{-1} \mathbf{z}_{\mathsf{S}}$ obtained through the posterior evaluation guarantee that $\mu_{q_{\mathsf{S}}} = \sum_{i \in \mathsf{S}} w_i\, \phi(x_i)$ is the orthogonal projection of $\mu_p$ onto $\mathrm{span}\big(\{\phi(x_i) : i \in \mathsf{S}\}\big)$.

Proof.

Note that it suffices to show that the residual of the projection, $\mu_p - \mu_{q_{\mathsf{S}}}$, is orthogonal to $\phi(x_j)$ for all $j \in \mathsf{S}$. Recall that $\mathbf{w} = \mathbf{K}_{\mathsf{S}}^{-1} \mathbf{z}_{\mathsf{S}}$ and $z_j = \langle \mu_p, \phi(x_j) \rangle_{\mathcal{H}}$. For an arbitrary index $j \in \mathsf{S}$,
$$\langle \mu_p - \mu_{q_{\mathsf{S}}}, \phi(x_j) \rangle_{\mathcal{H}} = z_j - \sum_{i \in \mathsf{S}} w_i\, k(x_i, x_j) = z_j - \big[\mathbf{K}_{\mathsf{S}} \mathbf{K}_{\mathsf{S}}^{-1} \mathbf{z}_{\mathsf{S}}\big]_j = 0,$$
where the last equality follows by noting that the $(j, l)$ entry of $\mathbf{K}_{\mathsf{S}} \mathbf{K}_{\mathsf{S}}^{-1}$ is the inner product of row $j$ of $\mathbf{K}_{\mathsf{S}}$ and row $l$ of $\mathbf{K}_{\mathsf{S}}^{-1}$, which is $1$ if $j = l$ and $0$ otherwise. This completes the proof. ∎

Lemma 5 implies that, given the selected points, the posterior evaluation is equivalent to optimizing the weights to minimize the MMD. In other words, the weight optimization is a simple linear regression in the mapped space $\mathcal{H}$, and SBQ is equivalent to a greedy forward selection algorithm in $\mathcal{H}$.

We shall also make use of recent results on generalizations of submodular functions. Let $2^{\mathsf{V}}$ be the power set of the candidate set $\mathsf{V}$.

Definition 6 ($\gamma$-weak submodular functions [6, 8]).

A set function $g: 2^{\mathsf{V}} \to \mathbb{R}$ is $\gamma$-weak submodular if there exists $\gamma > 0$ such that for all disjoint $\mathsf{S}, \mathsf{T} \subseteq \mathsf{V}$,
$$\sum_{j \in \mathsf{T}} \big( g(\mathsf{S} \cup \{j\}) - g(\mathsf{S}) \big) \;\ge\; \gamma\, \big( g(\mathsf{S} \cup \mathsf{T}) - g(\mathsf{S}) \big).$$

Weak submodularity generalizes submodularity in such a way that a greedy forward selection algorithm guarantees a $(1 - e^{-\gamma})$ approximation for $\gamma$-weak submodular functions [8]. Standard submodular functions have a guarantee of $(1 - 1/e)$ [20]; thus, submodular functions are $1$-weak submodular. To provide guarantees for Algorithm 1, we show that the normalized set optimization function is $\gamma$-weak submodular, where $\gamma$ depends on the spectrum of the kernel matrix.

Lemma 7.

[6] The linear regression set function is $\gamma$-weak submodular with $\gamma \ge c / C$, where $c$ is the smallest sparse eigenvalue and $C$ is the largest sparse eigenvalue of the dot-product matrix of the features.

We note that Lemma 7, as proposed and proved by Das and Kempe [6], is stated for Euclidean space. However, under the additional assumption that the RKHS is bounded, or that the candidate atoms have bounded norm, their proofs and results translate directly to a general RKHS. From Lemma 7 and recent results on weakly submodular functions ([8, Corollary 1]), we get the following approximation guarantee under the assumptions of Lemma 7:
$$f(\mathsf{S}^g_k) \;\ge\; \Big(1 - e^{-\gamma k / m}\Big)\, f(\mathsf{S}^\star_m).$$
Setting $k \ge \frac{m}{\gamma} \log\frac{1}{\epsilon}$ and using $\gamma \ge c/C$ from Lemma 7, we get the final result.