We study the problem of removing information pertaining to a given set of data points from the weights of a trained network, in such a way that a potential attacker cannot recover information about the forgotten cohort. We consider both the cases in which the attacker has full access to the weights of the trained model, and the less-studied case where the attacker can only query the model by observing some input data and the corresponding output, for instance through an API. We show that we can quantify the maximum amount of information that an attacker can extract from observing inputs and outputs (black-box attack), as well as from direct knowledge of the weights (white-box), and propose tailored procedures for removing such information from the trained model in one shot. That is, assuming the model has been obtained by fine-tuning a pre-trained generic backbone, we compute a single perturbation of the weights that, in one go, can erase information about a cohort to be forgotten in such a way that an attacker cannot access it.
More formally, we can think of a dataset $\mathcal{D}$ as partitioned into a subset $\mathcal{D}_f$ to be forgotten and its complement $\mathcal{D}_r$ to be retained. A (possibly stochastic) training algorithm $A$ takes $\mathcal{D} = \mathcal{D}_f \cup \mathcal{D}_r$ and outputs a weight vector $w = A(\mathcal{D})$.
Assuming an attacker knows the training algorithm (e.g., stochastic gradient descent, or SGD), the weights $w$, and the retained data $\mathcal{D}_r$, she can exploit their relationship to recover information about $\mathcal{D}_f$, at least for state-of-the-art deep neural networks (DNNs). Recent work [14, 15] introduces a “scrubbing procedure” $S(w)$ that attempts to remove information about $\mathcal{D}_f$ from the weights, together with an upper bound on the amount of information about $\mathcal{D}_f$ that can be extracted after the forgetting procedure, provided the attacker has access to the scrubbed weights $S(w)$, a setting called a “white-box attack.”
However, bounding the information that can be extracted in a white-box attack is often complex and may be overly restrictive: deep networks have large sets of equivalent solutions that give the same activations on all test samples, and changes in $\mathcal{D}_f$ may change the position of the weights within this null space. Hence, the position of the weights in the null space, even if irrelevant for the input-output behavior, may be exploited to recover information about $\mathcal{D}_f$.
This suggests that the study of forgetting should be approached from the perspective of the activations, rather than the weights, since there could be infinitely many different models that produce the same input-output behavior, and we are interested in preventing attacks that exploit the behavior of the network, rather than the specific solution to which the training process converged. More precisely, denote by $f_w(x)$ the activations of a network on a sample $x$ (for example the softmax or pre-softmax vector). We assume that an attacker makes queries on images $x_1, \ldots, x_n$ and obtains the activations $f_w(x_1), \ldots, f_w(x_n)$. The pipeline is then: the dataset is used to train the weights, the weights are scrubbed, and the attacker observes the activations of the scrubbed model on her queries.
The key question now is to determine how much information an attacker can recover about $\mathcal{D}_f$ starting from the activations. We provide a new set of bounds that quantify the average information per query an attacker can extract from the model. Interestingly, we show both in theory and in experiments that carefully chosen (adversarial) queries can extract much more information than a random query. This has connections with the problem of model identifiability and the issue of sufficient excitation.
The forgetting procedure we propose is obtained using the Neural Tangent Kernel (NTK). We show that this forgetting procedure handles the null space of the weights better than previous approaches when using over-parametrized models such as DNNs. In experiments, we confirm that it works uniformly better than previous proposals on all forgetting metrics introduced, both in the white-box and the black-box case (Figure 1).
Note that one may think of forgetting in a black-box setting as merely changing the activations (e.g., adding noise, or hiding the output of one class) so that less information can be extracted. This, however, is not proper forgetting: the model still contains the information, it is just not visible from outside. We refer to forgetting as removing information from the weights; we nonetheless provide bounds on how much information can be extracted after scrubbing in the black-box case, and show that they are orders of magnitude smaller than the corresponding white-box bounds at the same target accuracy.
To summarize, our contributions are as follows:
We introduce methods to scrub information from, and analyze the content of, deep networks through their activations (black-box attacks).
We introduce a “one-shot” forgetting algorithm that works better than previous methods for both white-box and black-box attacks.
This is possible thanks to an elegant connection between activation and weight dynamics inspired by the neural tangent kernel (NTK), which allows us to better deal with the null-space of the network weights. Unlike the NTK formalism, we do not need to take any limit; however, if the NTK limit happens to hold, our procedure is exact.
We show that tighter bounds can be obtained against black-box attacks than against white-box ones, which gives a better forgetting vs. error trade-off curve.
2 Related work
Differential Privacy aims to learn the parameters of a model in such a way that no information about any particular training sample can be recovered. This is a much stronger requirement than forgetting, where we only want to remove, after training is done, information about a given subset of samples. Given the stronger requirement, enforcing differential privacy is difficult for deep networks and often results in a significant loss of accuracy [1, 9].
The term “machine unlearning” was introduced by Cao and Yang, who show an efficient forgetting algorithm in the restricted setting of statistical query learning, where the learning algorithm cannot access individual samples.
Ginart et al. formalize the problem of efficient data elimination and provide engineering principles for designing forgetting algorithms; however, they only provide a data deletion algorithm for k-means clustering. Bourtoule et al. propose a forgetting procedure based on sharding the dataset and training multiple models; aside from the storage cost, they need to retrain a subset of the models, while we aim for one-shot forgetting. Guo et al. formulate data removal mechanisms using differential privacy, and provide an algorithm for convex problems based on a second-order Newton update. They suggest applying this method on top of the features learned by a DNN, which, however, cannot remove information that may be contained in the network itself. Closer to us, Golatkar et al. [14, 15] propose a selective forgetting procedure for deep neural networks trained with SGD, using an information-theoretic formulation and exploiting the stability of SGD. They propose a forgetting mechanism which involves a shift in weight space and the addition of noise to the weights to destroy information. They also provide an upper bound on the amount of information remaining in the weights of the network after applying the forgetting procedure. We extend this framework to activations, and show that an NTK-based scrubbing procedure uniformly improves on theirs in all the metrics they consider.
Membership Inference Attacks: [32, 19, 17, 26, 29, 30, 27] try to guess whether a particular sample was used to train a model. Since a model has forgotten only if an attacker cannot guess better than chance, these attacks serve as a good metric for measuring the quality of forgetting. In Figure 3 we construct a black-box membership inference attack similar to the shadow-model training approach. Such methods relate to model inversion methods, which aim to gain information about the training data from the model output.
Neural Tangent Kernel:
[20, 23] show that the training dynamics of a linearized version of a deep network — which are described by the so-called NTK matrix — approximate the actual training dynamics increasingly well as the network width goes to infinity. [3, 24] extend the framework to convolutional networks. Schwartz-Ziv and Alemi compute information-theoretic quantities using the closed-form expressions that can be derived in this setting. While we do not use the infinite-width assumption, we show that the same linearization framework and solutions are a good approximation of the network dynamics during fine-tuning, and use them to compute an optimal scrubbing procedure.
3 Out of the box forgetting
In this section, we derive an upper-bound for how much information can be extracted by an attacker that has black-box access to the model, that is, they can query the model with an image, and obtain the corresponding output.
While the problem itself may seem trivial — can the relation $w = A(\mathcal{D})$ be inverted to extract $\mathcal{D}_f$? — it is made more complex by the fact that the algorithm $A$ is stochastic, and that the map may not be fully invertible but still partially invertible, that is, only a subset of the information about $\mathcal{D}_f$ can be recovered. Hence, we employ a more formal information-theoretic framework, inspired by [14, 15], which in turn generalizes differential privacy.
There are two classes of bounds we can consider: an a-priori bound, which guarantees a given amount of forgetting even before starting the procedure, or an a-posteriori bound, which compares the scrubbing with a reference optimal model (usually more expensive to obtain than the scrubbed model) to bound the information. While an a-priori bound is preferable, it requires very strong assumptions on the data, the model, the loss landscape, and the training procedure. Such a bound is computed for linear models by Guo et al. A-posteriori bounds are fundamental too: they provide tighter answers, and allow one to design and benchmark scrubbing procedures even for very complex models such as deep networks, for which a-priori bounds would be impossible or vacuous. In this work, we focus on a-posteriori bounds for deep networks and use them to design a scrubbing procedure.
3.1 Information Theoretic formalism
We start by modifying the framework of [14, 15], developed for the weights of a network, so that it applies to the activations. We expect an adversary to use a readout function applied to the activations. Given a set of images $\mathbf{x} = (x_1, \ldots, x_n)$, we denote by $f_w(\mathbf{x})$ the concatenation of their respective activations. Let $\mathcal{D}_f$ be the set of training data to forget, and let $y = y(\mathcal{D}_f)$ be some function of $\mathcal{D}_f$ that an attacker wants to reconstruct (i.e., $y$ is some piece of information regarding the samples). To keep the notation uncluttered, we write $S(w)$ for the scrubbing procedure to forget $\mathcal{D}_f$. We then have the following Markov chain
connecting all quantities. Using the Data Processing Inequality  we have the following inequalities:
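In our notation, the chain and the two applications of the Data Processing Inequality can be written as follows (we write $\hat{y}$ for the attacker's estimate of $y(\mathcal{D}_f)$, a symbol introduced here for clarity):

```latex
% The attacker sees the scrubbed model only through its activations:
y(\mathcal{D}_f) \;\leftarrow\; \mathcal{D}_f \;\rightarrow\; S\big(A(\mathcal{D})\big)
   \;\rightarrow\; f_{S(w)}(\mathbf{x}) \;\rightarrow\; \hat{y}

% Data Processing Inequality, applied twice:
I\big(y(\mathcal{D}_f) ; \hat{y}\big)
   \;\le\; I\big(\mathcal{D}_f ; f_{S(w)}(\mathbf{x})\big)
   \;\le\; I\big(\mathcal{D}_f ; S(w)\big)
```

The central term is what a black-box attacker can access; the last term is what a white-box attacker can access.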
Bounding the last term — which is a general bound on how much information an attacker with full access to the weights could extract — is the focus of [14, 15]. In this work, we also consider the case where the attacker can only access the activations, and hence focus on the central term. As we will show, if the number of queries is bounded, the central term provides a sharper bound than the white-box one.
3.2 Bound for activations
The mutual information in the central term is difficult to compute (indeed, it is even difficult to define, as the quantities at play, $\mathcal{D}_f$ and $w$, are fixed values; that problem has been addressed in prior work and we do not consider it here), but in our case it has a simple upper bound:
Lemma 1 (Computable bound on mutual information)
We have the following upper bound:
$$ I\big(\mathcal{D}_f ; f_{S(w)}(\mathbf{x})\big) \;\le\; \mathbb{E}_{\mathcal{D}_f}\Big[\mathrm{KL}\big(P(f_{S(w)}(\mathbf{x}) \mid \mathcal{D}) \,\big\|\, Q(f(\mathbf{x}) \mid \mathcal{D}_r)\big)\Big], $$
where $P(f_{S(w)}(\mathbf{x}) \mid \mathcal{D})$ is the distribution of activations after training on the complete dataset and scrubbing, and $Q(f(\mathbf{x}) \mid \mathcal{D}_r)$ is the distribution of possible activations after training only on the data to retain and applying a function $S_0$ (that does not depend on $\mathcal{D}_f$) to the weights.
The lemma introduces the important notion that we can estimate how much information we erased by comparing the activations of our model with the activations of a reference model trained in the same setting, but without $\mathcal{D}_f$. Clearly, if the activations after scrubbing are identical to those of a model that has never seen $\mathcal{D}_f$, they cannot contain information about it.
We now want to convert this bound into a more practical expected information gain per query. This is not yet trivial due to the stochastic dependency of $w$ on $\mathcal{D}$: based on the random seed used to train, we may obtain very different weights for the same dataset. Reasoning in a way similar to that used to obtain the local forgetting bound of [14, 15], we can write:
Write a stochastic training algorithm as $A(\mathcal{D}) = \bar{A}(\mathcal{D}, \epsilon)$, where $\epsilon$ is the random seed and $\bar{A}$ is a deterministic function. Then, we have the following lemma (Lemma 2):
$$ I\big(\mathcal{D}_f ; f(\mathbf{x})\big) \;\le\; \mathbb{E}_{\epsilon}\,\mathbb{E}_{\mathcal{D}_f}\Big[\mathrm{KL}\big(P\big(f_{S(w(\mathcal{D}, \epsilon))}(\mathbf{x})\big) \,\big\|\, P\big(f_{S_0(w(\mathcal{D}_r, \epsilon))}(\mathbf{x})\big)\big)\Big], $$
where we call $w(\mathcal{D}, \epsilon) = \bar{A}(\mathcal{D}, \epsilon)$ the deterministic result of training on the dataset $\mathcal{D}$ using the random seed $\epsilon$. The probability distribution inside the KL accounts only for the stochasticity of the scrubbing map $S$ and of the baseline $S_0$.
The expression above is general. To gain some insight, it is useful to write it for a special case where scrubbing is performed by adding Gaussian noise.
3.3 Closed-form bound for Gaussian scrubbing
We start by considering a particular class of scrubbing functions of the form $S(w) = h(w) + n$, where $h(w)$ is a deterministic shift (that depends on $\mathcal{D}_r$, $\mathcal{D}_f$, and $w$) and $n \sim N(0, \Sigma)$ is Gaussian noise with a given covariance $\Sigma$ (which may also depend on $\mathcal{D}_r$ and $w$). We consider a baseline $S_0(w)$ in a similar form, with its own shift $h_0(w)$.
Assuming that the covariance of the noise is relatively small, so that $f_w(\mathbf{x})$ is approximately linear in $w$ in a neighborhood of $h(w)$ (we drop the dependency without loss of generality), we can easily derive the following approximation for the distribution of the activations after scrubbing for a given random seed $\epsilon$:
$$ P\big(f_{S(w)}(\mathbf{x})\big) \;\approx\; N\big(f_{h(w)}(\mathbf{x}),\; \nabla_w f^{\top} \Sigma \, \nabla_w f\big), $$
where $\nabla_w f$ is the matrix whose $i$-th column is the gradient of the activations with respect to the weights, for the $i$-th sample in $\mathbf{x}$. Having an explicit (Gaussian) distribution for the activations, we can plug it into Lemma 2 and obtain:
For a Gaussian scrubbing procedure, we have a white-box bound (eq. 6), which measures the shift $\Delta w$ between the scrubbed and baseline weights in the metric induced by the noise covariance, and a black-box bound (eq. 7), which measures the corresponding shift $\nabla f^{\top} \Delta w$ of the activations, where $\|v\|_A^2 = v^{\top} A\, v$ for a matrix $A$.
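As a concrete illustration of the two bounds (a toy numerical sketch with isotropic noise and synthetic gradients — our own simplifying assumptions, not the paper's setting), the Gaussian KL with shared covariance $\sigma^2 I$ reduces to a scaled squared distance, which we can evaluate once in weight space and once in activation space:

```python
import numpy as np

rng = np.random.default_rng(0)
n_weights, n_queries = 1000, 5  # toy over-parametrized regime

# Columns of J: gradient of each (scalar) queried activation w.r.t. the weights.
J = rng.standard_normal((n_weights, n_queries))

def gaussian_kl(delta, sigma2):
    """KL(N(m1, s2*I) || N(m2, s2*I)) = ||m1 - m2||^2 / (2 s2); delta = m1 - m2."""
    return float(delta @ delta) / (2.0 * sigma2)

# A weight-space shift living in the null-space of J^T: project out span(J).
dw = rng.standard_normal(n_weights)
coeffs, *_ = np.linalg.lstsq(J, dw, rcond=None)
dw_null = dw - J @ coeffs

sigma2 = 1.0
white_box = gaussian_kl(dw_null, sigma2)        # weight-space term (eq. 6 analogue)
black_box = gaussian_kl(J.T @ dw_null, sigma2)  # activation-space term (eq. 7 analogue)

# The activations reveal essentially nothing about a null-space shift,
# while the weight-space quantity stays large.
```

With isotropic noise in both spaces, the activation-space term collapses to numerical zero for a null-space shift, previewing the "blessing of the null-space" discussed below.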
Avoiding the curse of dimensionality:
There are a few interesting things to notice in Proposition 1. Comparing eq. 6 and eq. 7, we see that the bound in eq. 6 involves variables of the same dimension as the number of weights, while eq. 7 scales with the number of query points. Hence, for highly overparametrized models such as DNNs, we expect that the black-box bound in eq. 7 will be much smaller if the number of queries is bounded, which indeed is what we observe in the experiments (Figure 4).
Blessing of the null-space:
The white-box bound depends on the difference in weight space between the scrubbed model and the reference model, while the black-box bound depends on the distance between their activations. As we mentioned in Section 1, over-parametrized models such as deep networks have a large null-space of weights with identical activations. It may hence happen that even if the weight-space difference $\Delta w$ is large, the activation difference $\nabla f^{\top} \Delta w$ may still be small (and hence the bound in eq. 7 tighter) as long as $\Delta w$ lives in the null-space. Indeed, we often observe this to be the case in our experiments.
Finally, this should not lead us to think that whenever the activations are similar, little information can be extracted. Notice that the relevant quantity for the black-box bound is the activation shift $\nabla f^{\top} \Delta w$, which also involves the gradient $\nabla f$. Hence, if an attacker crafts an adversarial query whose gradient is small, they may be able to extract a large amount of information even if the activations are close to each other. In particular, this happens if the gradient of the samples lives in the null-space of the reference model, but not in that of the scrubbed model. In Figure 4 (right), we show that different images can indeed extract different amounts of information.
4 An NTK-inspired forgetting procedure
We now introduce a new scrubbing procedure, which aims to minimize both the white-box and black-box bounds of Proposition 1. It relates to the one introduced in [14, 15], but it enjoys better numerical properties and can be computed without approximations (Section 4.1). In Section 5 we show that it gives better results under all commonly used metrics.
The main intuition we exploit is that most networks in common use are fine-tuned from pre-trained networks (e.g., on ImageNet), and that during fine-tuning on $\mathcal{D}$ the weights remain close to the pre-trained values. In this regime, the network activations may be approximated as a linear function of the weights. This is justified by a growing literature on the so-called Neural Tangent Kernel, which posits that large networks during training evolve in the same way as their linear approximation. Using the linearized model we can derive an analytical expression for the optimal forgetting function, which we validate empirically. However, we observe this to be misaligned with the weights actually learned by SGD, and introduce a very simple “isosceles trapezium” trick to realign the solutions (Supplementary Material).
Using the same notation as above, we linearize the activations around the pre-trained weights $w_0$:
$$ f_w(x) \;\approx\; f_{w_0}(x) + \nabla_w f_{w_0}(x)^{\top} (w - w_0), $$
which gives the following expected training dynamics for, respectively, the weights and the activations (here written for an $L_2$ regression loss with targets $\mathbf{y}$):
$$ \dot{w}_t = -\eta\, \nabla_w f_{w_0}(\mathbf{x})\,\big(f_t(\mathbf{x}) - \mathbf{y}\big), \qquad \dot{f}_t(\mathbf{x}) = -\eta\, \Theta\,\big(f_t(\mathbf{x}) - \mathbf{y}\big). $$
The matrix $\Theta = \nabla_w f_{w_0}(\mathbf{x})^{\top} \nabla_w f_{w_0}(\mathbf{x})$, of size $nc \times nc$, where $c$ is the number of classes, is called the Neural Tangent Kernel (NTK) matrix [23, 20]. Using these dynamics, we can approximate in closed form the final point of training with $\mathcal{D}$ and with $\mathcal{D}_r$, and compute the optimal “one-shot forgetting” vector to jump from one to the other:
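To see the closed-form convergence point concretely, here is a small sketch of the linearized (NTK) model under an $L_2$ regression loss, with a random synthetic gradient matrix standing in for a real network (all names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 200, 10                      # parameters >> samples (over-parametrized)
J = rng.standard_normal((p, n))     # column i: gradient of activation f_i at w0
w0 = rng.standard_normal(p)
f0 = rng.standard_normal(n)         # activations at w0 on the n training points
y = rng.standard_normal(n)          # regression targets

theta = J.T @ J                     # empirical NTK matrix, n x n

# Closed-form convergence point of the linearized dynamics under L2 regression:
# the weights move only within span(J), by the minimum-norm correction.
w_star = w0 + J @ np.linalg.solve(theta, y - f0)

# The linearized activations at w_star fit the targets exactly.
f_star = f0 + J.T @ (w_star - w0)
```

Because the weight update stays inside the span of the gradients, the NTK matrix pins down exactly where training converges, including within the null-space where the Hessian is uninformative.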
Assuming an $L_2$ regression loss (this assumption keeps the expression simple; in the Supplementary Material we show the corresponding expression for a softmax classification loss), the optimal scrubbing procedure under the NTK approximation is given by eq. 10, where $\nabla f_f$ is the matrix whose columns are the gradients of the samples to forget, computed at $w_0$, and $P$ is a projection matrix that projects the gradients of the samples to forget onto the space orthogonal to the span of the gradients of all the samples to retain. The remaining terms re-weight each direction before summing them together.
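A minimal sketch of the projection just described (synthetic gradient matrices; `J_r` and `J_f` are hypothetical names for the retain- and forget-gradient matrices): $P$ annihilates every direction spanned by the retain-gradients and keeps the component of the forget-gradients that the retain set cannot explain.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_r, n_f = 100, 20, 5
J_r = rng.standard_normal((p, n_r))   # gradients of the samples to retain
J_f = rng.standard_normal((p, n_f))   # gradients of the samples to forget

# Orthogonal projection onto the complement of span(J_r):
P = np.eye(p) - J_r @ np.linalg.pinv(J_r)

# P annihilates every retain-gradient and is idempotent (a true projection);
# what survives of J_f is the part orthogonal to the retain set.
residual = P @ J_f
```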
Given this result, our proposed scrubbing procedure is:
where $h(w)$ is as in eq. 10, and we add noise whose covariance is given by the inverse of the Fisher Information Matrix computed at $h(w)$. The noise model is as in [14, 15], and is designed to increase robustness to errors introduced by the linear approximation.
4.1 Relation between NTK and Fisher forgetting
In [14] and [15], a different forgetting approach is suggested, based on either the Hessian or the Fisher Matrix at the final point: assuming that the solutions $w$ and $w'$ of training with and without the data to forget are close, and that they are both minima of their respective losses, one may compute the shift to apply to $w$ to jump from one minimum to the other of a slightly perturbed loss landscape. The resulting “scrubbing shift” relates to the Newton update:
In the case of an $L_2$ loss, and using the NTK model, the Hessian is given by $H = \nabla f\, \nabla f^{\top}$, which in this case also coincides with the Fisher Matrix. To see how this relates to the NTK matrix, consider determining the convergence point of the linearized NTK model, which for an $L_2$ regression is given by $w^* = w_0 + (\nabla f^{\top})^{+}\,(\mathbf{y} - f_{w_0}(\mathbf{x}))$, where $(\cdot)^{+}$ denotes the matrix pseudo-inverse and $\mathbf{y}$ denotes the regression targets. If $A = \nabla f^{\top}$ is a tall matrix (more samples in the dataset than parameters in the network), then the pseudo-inverse is $A^{+} = (A^{\top} A)^{-1} A^{\top}$, recovering the scrubbing procedure considered by [14, 15]. However, if the matrix is wide (more parameters in the network than samples, as is often the case in deep learning), the Hessian is not invertible, and the pseudo-inverse is instead given by $A^{+} = A^{\top} (A\, A^{\top})^{-1}$, leading to our proposed procedure. In general, when the model is over-parametrized there is a large null-space of weights that do not change the activations or the loss. The degenerate Hessian is not informative of where the network will converge in this null-space, while the NTK matrix gives the exact point.
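The two pseudo-inverse identities used above are easy to verify numerically (random matrices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Tall case: more rows (samples) than columns (parameters).
A_tall = rng.standard_normal((50, 10))
pinv_tall = np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T   # (A^T A)^{-1} A^T

# Wide case: more columns (parameters) than rows (samples).
A_wide = rng.standard_normal((10, 50))
pinv_wide = A_wide.T @ np.linalg.inv(A_wide @ A_wide.T)   # A^T (A A^T)^{-1}
```

In the tall case the inverted factor $A^{\top} A$ plays the role of the Hessian; in the wide case the inverted factor $A\, A^{\top}$ plays the role of the NTK matrix.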
5 Experiments

5.1 Datasets

We report experiments on smaller versions of CIFAR-10 and Lacuna-10, a dataset derived from the VGG-Faces dataset. We obtain the small datasets with the following procedure: we randomly sample 500 images (100 images from each of the first 5 classes) from the training and test sets of CIFAR-10 and Lacuna-10 to obtain the small training and test sets, respectively. We also sample 125 images from the training set (5 classes × 25 images) to get the validation set. In short, we have 500 (5 × 100) examples each for training and testing, and 125 (5 × 25) examples for validation. On both datasets we choose to forget 25 random samples (5% of the dataset). Without loss of generality, we choose to forget samples from class 0.
5.2 Models and Training
We use All-CNN (with batch normalization added before each non-linearity) and ResNet-18 as the deep neural networks for our experiments. We pre-train the models on CIFAR-100/Lacuna-100 and then fine-tune them (all the weights) on CIFAR-10/Lacuna-10. We pre-train using SGD for 30 epochs with a learning rate of 0.1, momentum 0.9, and weight decay 0.0005. Pre-training helps improve the stability of SGD during fine-tuning. For fine-tuning we use a learning rate of 0.01 and weight decay 0.1. When applying weight decay, we bias the weights with respect to the initialization. During training we always use a batch size of 128, and we fine-tune the models until they reach zero training error. During fine-tuning we do not update the running mean and variance of the batch normalization layers, to simplify the training dynamics. We perform each experiment 3 times and report the mean and standard deviation.
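The “weight decay biased with respect to the initialization” mentioned above replaces the usual penalty $\|w\|^2$ with $\|w - w_0\|^2$, so that decay pulls the weights toward the pre-trained values rather than toward zero (which keeps fine-tuning in the regime where the linearization holds). A minimal sketch of the corresponding update, with hypothetical variable names:

```python
import numpy as np

def sgd_step(w, grad_loss, w0, lr=0.01, weight_decay=0.1):
    """One SGD step with weight decay biased toward the initialization w0.

    Standard decay adds `weight_decay * w` to the gradient; here we add
    `weight_decay * (w - w0)`, i.e. the penalty is ||w - w0||^2 / 2 and the
    pre-trained weights, not the origin, are the anchor.
    """
    return w - lr * (grad_loss + weight_decay * (w - w0))

# With a zero loss gradient, the step moves w toward w0, not toward 0.
w0 = np.array([1.0, -2.0])
w = np.array([3.0, 0.0])
w_next = sgd_step(w, grad_loss=np.zeros(2), w0=w0)
```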
5.3 Baselines

We consider three baselines for comparison: (i) Fine-tune, where we fine-tune the model on $\mathcal{D}_r$ (similar to catastrophic forgetting); (ii) Fisher forgetting, which scrubs the weights by adding Gaussian noise using the inverse of the Fisher Information Matrix as covariance matrix; and (iii) Original, the original model trained on the complete dataset $\mathcal{D}$ without any forgetting. We compare these, and our proposal, with the optimal reference: a model trained from scratch on the retain set, that is, without using $\mathcal{D}_f$ in the first place. Values read from this reference model correspond to the green region in Figure 3 and represent the gold standard for forgetting: in those plots, an optimal algorithm should lie inside the green area.
[Figure 3 caption, excerpt: Black-box membership inference attack: how often an attack model built from the entropy of the output probabilities classifies a sample as a training sample rather than being fooled by the scrubbing. (e) Re-learn time for different scrubbing methods: how fast a scrubbed model re-learns the forgotten cohort when fine-tuned on the complete dataset, measured as the first epoch at which the loss on $\mathcal{D}_f$ goes below a certain threshold.]
5.4 Readout Functions
We use multiple readout functions, similar to [14, 15]: (i) Error on the residual $\mathcal{D}_r$ (should be small); (ii) Error on the cohort to forget (should be similar to that of a model re-trained from scratch on $\mathcal{D}_r$); (iii) Error on the test set (should be small); (iv) Re-learn time, which measures how quickly a scrubbed model re-learns the cohort to forget when fine-tuned on the complete data. Re-learn time (measured in epochs) is the first epoch at which the loss of the fine-tuned (scrubbed) model falls below a certain threshold (the loss of the original model, trained on $\mathcal{D}$, evaluated on $\mathcal{D}_f$). (v) Black-box membership inference attack: we construct a simple yet effective black-box membership inference attack using the entropy of the output probabilities of the scrubbed model. We formulate the attack as a binary classification problem (class 1: belongs to the training set; class 0: belongs to the test set). To train the attack model (a Support Vector Classifier with Radial Basis Function kernel) we use the retain set ($\mathcal{D}_r$) as class 1 and the test set as class 0. We test the success of the attack on the cohort to forget ($\mathcal{D}_f$). Ideally, the attack accuracy for an optimally scrubbed model should be the same as for a model re-trained from scratch on $\mathcal{D}_r$: a higher value implies incorrect (or no) scrubbing, while a lower value may result in the Streisand effect. (vi) Remaining information in the weights, and (vii) Remaining information in the activations: we compute an upper bound on the information the activations contain about the cohort to forget ($\mathcal{D}_f$) after scrubbing, when queried with images from different subsets of the data ($\mathcal{D}_f$, $\mathcal{D}_r$, test).
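A sketch of such an entropy-based attack (synthetic softmax outputs stand in for real model activations; the low/high-entropy split mimicking train/test confidence is our own illustrative assumption, not the paper's data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(p, eps=1e-12):
    """Shannon entropy of each row of softmax outputs."""
    return -np.sum(p * np.log(p + eps), axis=1)

# Synthetic stand-ins: members get confident (low-entropy) outputs,
# non-members get diffuse (high-entropy) outputs.
hot = 4.0 * np.eye(5)[rng.integers(0, 5, 200)]
p_retain = softmax(rng.standard_normal((200, 5)) + hot)  # class 1: train members
p_test = softmax(rng.standard_normal((200, 5)))          # class 0: non-members

X = entropy(np.vstack([p_retain, p_test]))[:, None]      # 1-D entropy feature
y = np.r_[np.ones(200), np.zeros(200)]

attack = SVC(kernel="rbf").fit(X, y)   # the attack model
train_acc = attack.score(X, y)         # high separation on these features
```

In the paper's setting, the attack is then queried with the scrubbed model's outputs on $\mathcal{D}_f$; a well-scrubbed model should be classified like one retrained without $\mathcal{D}_f$.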
5.5 Results

In Figure 3 (a-c), we compare error-based readout functions for the different forgetting methods. Our proposed method outperforms Fisher forgetting, which incurs high error on the retain ($\mathcal{D}_r$) and test sets to attain the same level of forgetting. This is due to the large distance in weight space between the original and the retrained model, which forces Fisher forgetting to add so much noise to erase information about $\mathcal{D}_f$ that it also erases information about the retain set (high error on $\mathcal{D}_r$ in Figure 3). Instead, our proposed method first moves in the direction of the retrained model, thus minimizing the amount of noise to be added (Figure 1). Fine-tuning the model on $\mathcal{D}_r$ (catastrophic forgetting) does not actually remove information from the weights and performs poorly on all the readout functions.
In Figure 3 (d), we compare the re-learn time for the different methods. Re-learn time can be considered a proxy for the information about the cohort to forget ($\mathcal{D}_f$) remaining in the weights after scrubbing. We observe that the proposed method outperforms all the baselines, in accordance with the previous observations (Figure 3 (a-c)).
In Figure 3 (e), we compare the robustness of the different scrubbed models against the black-box membership inference attack (which aims to identify whether the scrubbed model was ever trained on $\mathcal{D}_f$). This can be considered a proxy for the information about $\mathcal{D}_f$ remaining in the activations. We observe that the attack accuracy for the proposed method lies in the optimal region (green), while Fine-tune does not forget, and Fisher forgetting may result in the Streisand effect, which is undesirable.
Closeness of activations:
In Figure 5, we show that the proposed scrubbing method brings the activations of the scrubbed model closer to those of the retrain model (the model retrained from scratch on $\mathcal{D}_r$). We measure closeness by computing the distance between the activations (post-softmax) of the proposed scrubbed model and of the retrain model along the scrubbing direction. The distance between the activations on the cohort to forget ($\mathcal{D}_f$) decreases as we move along the scrubbing direction and reaches its minimum at the scrubbed model, while it remains almost constant on the retain set. Thus, the activations of the scrubbed model show the desirable behaviour on both $\mathcal{D}_f$ and $\mathcal{D}_r$.
In Figure 4, we plot the trade-off between the test error and the information remaining in the weights and in the activations, obtained by changing the scale of the variance of the Fisher noise. We can reduce the remaining information, but this comes at the cost of increasing the test error. We observe that the black-box bound on the information accessible with one query is much tighter than the white-box bound at the same accuracy (compare the x-axes of the left and center plots). Finally, in Figure 4 (right), we show that query samples belonging to the cohort to be forgotten ($\mathcal{D}_f$) leak more information about $\mathcal{D}_f$ than samples from the retain or test set, showing that carefully selected samples are indeed more informative to an attacker than random samples.
6 Discussion

Recent work [2, 14, 15] has started providing insights on the amount of information that can be extracted from the weights about a particular cohort of the data used for training, as well as constructive algorithms to “selectively forget.” Note that forgetting alone could be trivially obtained by replacing the model with a random vector generator, obviously to the detriment of performance, or by retraining the model from scratch, to the detriment of (training-time) complexity. In some cases, the data to be retained may no longer be available, so the latter may not even be an option.
We introduce a scrubbing procedure based on the NTK linearization, designed to minimize both a white-box bound (which assumes the attacker has the weights) and a newly introduced black-box bound. The latter bounds the information that can be obtained about a cohort using only the observed input-output behavior of the network, and is relevant when the attacker performs a bounded number of queries. If the attacker is allowed infinitely many observations, the question of whether black-box and white-box attacks are equivalent remains open: can an attacker always craft sufficiently exciting inputs so that the exact values of all the weights can be inferred? An answer would be akin to a generalized “Kalman decomposition” for deep networks. This alone is an interesting open problem, as it has been pointed out recently that good “clone models” can be created by copying the responses of a black-box model on a relatively small set of exciting inputs, at least in the restricted case where every model is fine-tuned from a common pre-trained model.
While our bounds are tighter than others proposed in the literature, our method has limitations, chiefly computational complexity. The main source of computational complexity is computing and storing the projection matrix in eq. 10, which naively requires prohibitive memory and time. For this reason, we conduct experiments at a relatively small scale, sufficient to validate the theoretical results, but our method is not yet scalable to production level. However, we note that this is the projection matrix onto the orthogonal complement of the subspace spanned by the gradients of the training samples, and there is a long history of numerical methods for computing such an operator incrementally, without storing it fully in memory. We leave this promising option to scale the method as a subject of future investigation.
-  Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K., Zhang, L.: Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. pp. 308–318. ACM (2016)
-  Achille, A., Soatto, S.: Where is the Information in a Deep Neural Network? arXiv e-prints arXiv:1905.12213 (May 2019)
-  Arora, S., Du, S.S., Hu, W., Li, Z., Salakhutdinov, R.R., Wang, R.: On exact computation with an infinitely wide neural net. In: Advances in Neural Information Processing Systems. pp. 8139–8148 (2019)
-  Baumhauer, T., Schöttle, P., Zeppelzauer, M.: Machine unlearning: Linear filtration for logit-based classifiers. arXiv preprint arXiv:2002.02730 (2020)
-  Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C., Jia, H., Travers, A., Zhang, B., Lie, D., Papernot, N.: Machine unlearning. arXiv preprint arXiv:1912.03817 (2019)
-  Bunch, J.R., Nielsen, C.P., Sorensen, D.C.: Rank-one modification of the symmetric eigenproblem. Numerische Mathematik 31(1), 31–48 (1978)
-  Cao, Q., Shen, L., Xie, W., Parkhi, O.M., Zisserman, A.: Vggface2: A dataset for recognising faces across pose and age. In: International Conference on Automatic Face and Gesture Recognition (2018)
-  Cao, Y., Yang, J.: Towards making systems forget with machine unlearning. In: 2015 IEEE Symposium on Security and Privacy. pp. 463–480. IEEE (2015)
-  Chaudhuri, K., Monteleoni, C., Sarwate, A.D.: Differentially private empirical risk minimization. Journal of Machine Learning Research 12(Mar), 1069–1109 (2011)
-  Cover, T.M., Thomas, J.A.: Elements of information theory. John Wiley & Sons (2012)
-  Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science 9(3–4), 211–407 (2014)
-  Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. pp. 1322–1333. ACM (2015)
-  Ginart, A., Guan, M., Valiant, G., Zou, J.Y.: Making ai forget you: Data deletion in machine learning. In: Advances in Neural Information Processing Systems. pp. 3513–3526 (2019)
-  Golatkar, A., Achille, A., Soatto, S.: Eternal sunshine of the spotless net: Selective forgetting in deep networks (2019)
-  Guo, C., Goldstein, T., Hannun, A., van der Maaten, L.: Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030 (2019)
-  Hardt, M., Recht, B., Singer, Y.: Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240 (2015)
-  Hayes, J., Melis, L., Danezis, G., De Cristofaro, E.: Logan: Membership inference attacks against generative models. Proceedings on Privacy Enhancing Technologies 2019(1), 133–152 (2019)
-  Hitaj, B., Ateniese, G., Perez-Cruz, F.: Deep models under the gan: information leakage from collaborative deep learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. pp. 603–618. ACM (2017)
-  Jacot, A., Gabriel, F., Hongler, C.: Neural tangent kernel: Convergence and generalization in neural networks. In: Advances in neural information processing systems. pp. 8571–8580 (2018)
-  Krishna, K., Tomar, G.S., Parikh, A.P., Papernot, N., Iyyer, M.: Thieves on sesame street! model extraction of bert-based apis (2019)
-  Krizhevsky, A., et al.: Learning multiple layers of features from tiny images. Tech. rep., Citeseer (2009)
-  Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J., Pennington, J.: Wide neural networks of any depth evolve as linear models under gradient descent. In: Advances in neural information processing systems. pp. 8570–8581 (2019)
-  Li, Z., Wang, R., Yu, D., Du, S.S., Hu, W., Salakhutdinov, R., Arora, S.: Enhanced convolutional neural tangent kernels. arXiv preprint arXiv:1911.00809 (2019)
-  Martens, J.: New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193 (2014)
-  Pyrgelis, A., Troncoso, C., De Cristofaro, E.: Knock knock, who’s there? membership inference on aggregate location data. arXiv preprint arXiv:1708.06145 (2017)
-  Sablayrolles, A., Douze, M., Ollivier, Y., Schmid, C., Jégou, H.: White-box vs black-box: Bayes optimal strategies for membership inference. arXiv preprint arXiv:1908.11229 (2019)
-  Schwartz-Ziv, R., Alemi, A.A.: Information in infinite ensembles of infinitely-wide neural networks. arXiv preprint arXiv:1911.09189 (2019)
-  Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of the 22nd ACM SIGSAC conference on computer and communications security. pp. 1310–1321. ACM (2015)
-  Song, C., Ristenpart, T., Shmatikov, V.: Machine learning models that remember too much. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. pp. 587–601. ACM (2017)
-  Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806 (2014)
-  Truex, S., Liu, L., Gursoy, M.E., Yu, L., Wei, W.: Demystifying membership inference attacks in machine learning as a service. IEEE Transactions on Services Computing (2019)