Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations

03/05/2020 · by Aditya Golatkar, et al.

We describe a procedure for removing dependency on a cohort of training data from a trained deep network that improves upon and generalizes previous methods to different readout functions and can be extended to ensure forgetting in the activations of the network. We introduce a new bound on how much information can be extracted per query about the forgotten cohort from a black-box network for which only the input-output behavior is observed. The proposed forgetting procedure has a deterministic part derived from the differential equations of a linearized version of the model, and a stochastic part that ensures information destruction by adding noise tailored to the geometry of the loss landscape. We exploit the connections between the activation and weight dynamics of a DNN inspired by Neural Tangent Kernels to compute the information in the activations.




1 Introduction

We study the problem of removing information pertaining to a given set of data points from the weights of a trained network, in such a way that a potential attacker cannot recover information about the forgotten cohort. We consider both the cases in which the attacker has full access to the weights of the trained model, and the less-studied case where the attacker can only query the model by observing some input data and the corresponding output, for instance through an API. We show that we can quantify the maximum amount of information that an attacker can extract from observing inputs and outputs (black-box attack), as well as from direct knowledge of the weights (white-box), and propose tailored procedures for removing such information from the trained model in one shot. That is, assuming the model has been obtained by fine-tuning a pre-trained generic backbone, we compute a single perturbation of the weights that, in one go, can erase information about a cohort to be forgotten in such a way that an attacker cannot access it.

More formally, we can think of a dataset D as partitioned into a subset D_f to be forgotten and its complement D_r to be retained. A (possibly stochastic) training algorithm A takes D_f and D_r and outputs a weight vector w = A(D_f ∪ D_r). Assuming an attacker knows the training algorithm A (e.g., stochastic gradient descent, or SGD), the weights w, and the retained data D_r, she can exploit their relationship to recover information about D_f, at least for state-of-the-art deep neural networks (DNNs). Recent work [14, 15] introduces a "scrubbing procedure" S(w) that attempts to remove information about D_f from the weights, with an upper bound on the amount of information about D_f that can be extracted after the forgetting procedure, provided the attacker has access to the scrubbed weights S(w), a process called a "white-box attack."

However, bounding the information that can be extracted from a white-box attack is often complex and may be overly restrictive: deep networks have large sets of equivalent solutions that would give the same activations on all test samples. Changes in D_f may change the position of the weights within this null-space. Hence, the position of the weights in the null-space, even if irrelevant for the input-output behavior, may be exploited to recover information about D_f.

This suggests that the study of forgetting should be approached from the perspective of the activations, rather than the weights, since there could be infinitely many different models that produce the same input-output behavior, and we are interested in preventing attacks that exploit the behavior of the network, rather than the specific solution to which the training process converged. More precisely, denote by f_w(x) the activations of a network with weights w on a sample x (for example the softmax or pre-softmax vector). We assume that an attacker makes queries on images x and obtains the activations f_w(x). The attacker's pipeline thus runs from the data, through the trained and scrubbed weights, to the observed activations.

The key question now is: how much information can an attacker recover about D_f, starting from the activations f_w(x)? We provide a new set of bounds that quantify the average information per query an attacker can extract from the model. Interestingly, we show both in theory and in experiments that carefully chosen (adversarial) queries can extract much more information than random queries. This has connections with the problem of model identifiability and the issue of sufficient excitation.

The forgetting procedure we propose is obtained using the Neural Tangent Kernel (NTK). We show that this forgetting procedure handles the null-space of the weights better than previous approaches when using over-parametrized models such as DNNs. In experiments, we confirm that it works uniformly better than previous proposals on all the forgetting metrics introduced, both in the white-box and the black-box case (Figure 1).

Note that one may think of forgetting in a black-box setting as just changing the activations (e.g., adding noise or hiding one class output) so that less information can be extracted. This, however, is not proper forgetting: the model still contains the information, it is just not visible from the outside. We refer to forgetting as removing information from the weights, but we provide bounds on how much information can be extracted after scrubbing in the black-box case, and show that they are orders of magnitude smaller than the corresponding white-box bounds at the same target accuracy.

Key contributions:

To summarize, our contributions are as follows:

  1. We introduce methods to scrub information from deep networks, and to analyze their content, through their activations (black-box attacks).

  2. We introduce a “one-shot” forgetting algorithm that works better than previous methods for both white-box and black-box attacks.

  3. This is possible thanks to an elegant connection between activation and weight dynamics inspired by the neural tangent kernel (NTK), which allows us to better deal with the null-space of the network weights. Unlike the NTK formalism, we do not need to take any limit. However, if the NTK limit happens to hold, then our procedure is exact.

  4. We show that better bounds can be obtained against black-box attacks than against white-box ones, which gives a better forgetting-vs-error trade-off curve.

2 Related work

Differential privacy

[11] aims to learn the parameters of a model in such a way that no information about any particular training sample can be recovered. This is a much stronger requirement than forgetting, where we only want to remove information about a given subset of samples after training is done. Given the stronger requirements, enforcing differential privacy is difficult for deep networks and often results in a significant loss of accuracy [1, 9].


The term “machine unlearning” was introduced by [8], who show an efficient forgetting algorithm in the restricted setting of statistical query learning, where the learning algorithm cannot access individual samples. [13] formalizes the problem of efficient data elimination, and provides engineering principles for designing forgetting algorithms. However, they only provide a data deletion algorithm for k-means clustering.

[5] propose a forgetting procedure based on sharding the dataset and training multiple models. Aside from the storage cost, they need to retrain a subset of the models, while we aim for one-shot forgetting. [4] propose a forgetting method for logit-based classification models, applying a linear transformation to the output logits, but do not remove information from the weights.

[15] formulates data removal mechanisms using differential privacy, and provides an algorithm for convex problems based on a second-order Newton update. They suggest applying this method on top of the features learned by a DNN, which, however, cannot remove information that may be contained in the network itself. Closer to us, [14] proposed a selective forgetting procedure for deep neural networks trained with SGD, using an information-theoretic formulation and exploiting the stability of SGD [16]. Their forgetting mechanism involves a shift in weight space and the addition of noise to the weights to destroy information. They also provide an upper bound on the amount of information remaining in the weights of the network after applying the forgetting procedure. We extend this framework to activations, and show that an NTK-based scrubbing procedure uniformly improves on theirs in all the metrics they consider.

Membership Inference Attacks

[32, 19, 17, 26, 29, 30, 27] try to guess whether a particular sample was used to train a model. Since a model has forgotten only if an attacker cannot guess better than chance, these attacks serve as a good metric for measuring the quality of forgetting. In Figure 3 we construct a black-box membership inference attack similar to the shadow-model training approach of [29]. Such methods relate to model inversion methods [12], which aim to gain information about the training data from the model output.

Neural Tangent Kernel:

[20, 23] show that the training dynamics of a linearized version of a deep network — which are described by the so-called NTK matrix — approximate the actual training dynamics increasingly well as the network width goes to infinity. [3, 24] extend the framework to convolutional networks. [28] computes information-theoretic quantities using the closed-form expressions that can be derived in this setting. While we do not use the infinite-width assumption, we show that the same linearization framework and solutions are a good approximation of the network dynamics during fine-tuning, and use them to compute an optimal scrubbing procedure.

3 Out of the box forgetting

In this section, we derive an upper-bound for how much information can be extracted by an attacker that has black-box access to the model, that is, they can query the model with an image, and obtain the corresponding output.

While the problem itself may seem trivial — can the relation between data and weights be inverted to extract D_f? — it is made more complex by the fact that the algorithm is stochastic, and that the map may not be fully invertible yet still be partially invertible, that is, only a subset of the information about D_f can be recovered. Hence, we employ a more formal information-theoretic framework, inspired by [14], which in turn generalizes Differential Privacy [11].

There are two classes of bounds we can consider: an a-priori bound, which can guarantee a given amount of forgetting even before starting the procedure, or an a-posteriori bound, which compares the scrubbed model with a reference optimal model (usually more expensive to obtain than the scrubbed model) to bound the information. While an a-priori bound is preferable, it requires very strong assumptions on the data, the model, the loss landscape, and the training procedure. Such a bound is computed in [15] for linear models. A-posteriori bounds are fundamental too: they provide tighter answers, and allow one to design and benchmark scrubbing procedures even for very complex models such as deep networks, for which a-priori bounds would be impossible or vacuous. In this work, we focus on a-posteriori bounds for deep networks and use them to design a scrubbing procedure.

3.1 Information Theoretic formalism

We start by modifying the framework of [14], developed for the weights of a network, to address the activations. We expect an adversary to use a readout function r applied to the activations. Given a set of images x, we denote by f_w(x) the concatenation of their respective activations. Let D_f be the set of training data to forget, and let y be some function of D_f that an attacker wants to reconstruct (i.e., y is some piece of information regarding the samples). To keep the notation uncluttered, we write S(w) for the scrubbing procedure applied to forget D_f. We then have the Markov chain

D_f → S(w) → f_{S(w)}(x) → r

connecting all quantities. Using the Data Processing Inequality [10], we have the following inequalities:

I(D_f; r) ≤ I(D_f; f_{S(w)}(x)) ≤ I(D_f; S(w)).
Bounding the last term — a general bound on how much information an attacker with full access to the weights could extract — is the focus of [14]. In this work, we also consider the case where the attacker can only access the activations, and hence focus on the central term. As we will show, if the number of queries is bounded, the central term provides a sharper bound than the white-box term.

3.2 Bound for activations

The mutual information in the central term is difficult to compute (indeed, it is even difficult to define, as the quantities at play, D_f and f_{S(w)}(x), are fixed values, but that problem has been addressed by [2] and we do not consider it here), but in our case it has a simple upper-bound:

Lemma 1 (Computable bound on mutual information)

We have the following upper bound:

I(D_f; f_{S(w)}(x)) ≤ E_{D_f}[ KL( p(f_{S(w)}(x)) ‖ p(f_{S₀(w′)}(x)) ) ],

where p(f_{S(w)}(x)) is the distribution of activations after training on the complete dataset and scrubbing. Similarly, p(f_{S₀(w′)}(x)) is the distribution of possible activations after training only on the data to retain and applying a function S₀ (which does not depend on D_f) to the weights w′.

The lemma introduces the important notion that we can estimate how much information we erased by comparing the activations of our model with the activations of a reference model that was trained in the same setting, but without D_f. Clearly, if the activations after scrubbing are identical to those of a model that has never seen D_f, they cannot contain information about D_f.
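When both activation distributions can be approximated as Gaussians (as in Section 3.3 below), the KL divergence in this bound has a closed form. A minimal sketch of that computation follows; the means, covariance, and dimensions here are illustrative placeholders, not values from the paper's experiments:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) in nats, closed form."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)       # covariance mismatch
        + diff @ cov1_inv @ diff        # mean shift, weighted by cov1
        - k
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )

# Toy activation distributions: scrubbed model vs. reference model
mu_scrub = np.array([0.1, -0.2, 0.05])
mu_ref = np.array([0.0, 0.0, 0.0])
cov = 0.5 * np.eye(3)

kl = gaussian_kl(mu_scrub, cov, mu_ref, cov)

# Identical distributions carry zero extractable information
assert abs(gaussian_kl(mu_ref, cov, mu_ref, cov)) < 1e-12
```

With equal covariances the divergence reduces to the squared mean gap weighted by the noise, which is exactly the intuition of the lemma: the closer the scrubbed activations are to the reference activations, the less an attacker can learn.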

We now want to convert this bound into a more practical expected information gain per query. This is not yet trivial, due to the stochastic dependency of w on D: based on the random seed used to train, we may obtain very different weights for the same dataset. Reasoning in a way similar to that used to obtain the local forgetting bound of [14], we can write:

Lemma 2

Write a stochastic training algorithm as w = A(D, σ), where σ is the random seed and A is a deterministic function. Then, we have the following lemma,

where we call w(D, σ) the deterministic result of training on the dataset D using random seed σ. The probability distribution inside the KL accounts only for the stochasticity of the scrubbing map S and the baseline S₀.
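In symbols, the statement takes the following shape (a reconstruction consistent with Lemma 1 and the notation above; the precise constants are as in the original lemma):

```latex
I\big(\mathcal{D}_f;\, f_{S(w)}(x)\big)
\;\le\;
\mathbb{E}_{\sigma}\Big[
  \mathrm{KL}\Big(
    p\big(f_{S(w(\mathcal{D},\sigma))}(x)\big)
    \,\Big\|\,
    p\big(f_{S_0(w(\mathcal{D}_r,\sigma))}(x)\big)
  \Big)
\Big]
```

That is, the expectation over random seeds of the per-seed divergence between scrubbed and reference activation distributions bounds the extractable information.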

The expression above is general. To gain some insight, it is useful to write it for a special case where scrubbing is performed by adding Gaussian noise.

3.3 Closed-form bound for Gaussian scrubbing

We start by considering a particular class of scrubbing functions of the form

S(w) = h(w) + n,

where h(w) is a deterministic shift (which depends on D, D_f, and w) and n is Gaussian noise with a given covariance Σ (which may also depend on D and D_f). We consider a baseline S₀ of a similar form, S₀(w′) = h₀(w′) + n′.
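As a sketch, this Gaussian scrubbing family can be written in a few lines. The shift function and covariance below are placeholders for illustration only, not the NTK shift derived later in Section 4:

```python
import numpy as np

rng = np.random.default_rng(0)

def scrub(w, shift_fn, cov_sqrt):
    """Gaussian scrubbing: deterministic shift plus Gaussian noise.

    shift_fn : maps the current weights w to the scrubbed mean h(w)
    cov_sqrt : square root of the noise covariance (noise = cov_sqrt @ z)
    """
    h = shift_fn(w)
    noise = cov_sqrt @ rng.standard_normal(w.shape[0])
    return h + noise

# Placeholder shift: shrink weights toward zero (NOT the shift of eq. 10)
w = np.ones(4)
scrubbed = scrub(w, lambda w: 0.9 * w, 0.01 * np.eye(4))
```

The deterministic part does the “jump” between solutions, while the stochastic part destroys any residual information; the analysis below quantifies how much.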

Assuming that the covariance of the noise is relatively small, so that the activations are approximately linear in the weights in a neighborhood of h(w), we can easily derive the following approximation for the distribution of the activations after scrubbing for a given random seed σ:

p(f_{S(w)}(x) | σ) ≈ N( f_{h(w)}(x), ∇f^T Σ ∇f ),

where ∇f is the matrix whose columns are the gradients of the activations with respect to the weights, for each sample in x. Having an explicit (Gaussian) distribution for the activations, we can plug it into Lemma 2 and obtain:

Proposition 1

For a Gaussian scrubbing procedure, we have the bounds:

(white-box) (6)
(black-box) (7)

where for a matrix, and we defined:

Avoiding the curse of dimensionality:

There are a few interesting things to notice in Proposition 1. Comparing eq. 6 and eq. 7, we see that the bound in eq. 6 involves quantities of the same dimension as the number of weights, while eq. 7 scales with the number of query points. Hence, for highly over-parametrized models such as DNNs, we expect the black-box bound in eq. 7 to be much smaller when the number of queries is bounded, which is indeed what we observe in the experiments (Figure 4).

Blessing of the null-space:

The white-box bound depends on the difference in weight space between the scrubbed model and the reference model, while the black-box bound depends on the distance between their activations. As we mentioned in Section 1, over-parametrized models such as deep networks have a large null-space of weights that leave the activations unchanged. It may hence happen that even if the weight-space difference is large, the activation difference may still be small (and hence the bound in eq. 7 tighter), as long as the difference lives in the null-space. Indeed, we often observe this to be the case in our experiments.
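This contrast can be illustrated numerically: a weight perturbation that is large in weight space can be nearly invisible through the activations when it lies mostly in the Jacobian's null-space. The dimensions and random Jacobian below are toy stand-ins, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)

p, q = 1000, 5                        # many weights, few query outputs
J = rng.standard_normal((q, p))       # activation Jacobian (wide: q << p)

# Weight-space difference between scrubbed and reference model
delta_w = rng.standard_normal(p)

# Project delta_w onto the null-space of J: activations barely move
delta_null = delta_w - J.T @ np.linalg.solve(J @ J.T, J @ delta_w)

w_gap = np.linalg.norm(delta_null)    # large gap in weight space
a_gap = np.linalg.norm(J @ delta_null)  # essentially zero through activations
```

A white-box bound pays for the full weight-space gap, while a black-box bound only sees the component that survives the Jacobian, which here is numerically zero.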

Adversarial queries:

Finally, this should not lead us to think that whenever the activations are similar, little information can be extracted. Notice that the relevant quantity for the black-box bound also involves the gradient of the activations with respect to the weights, not just the activations themselves. Hence, if an attacker crafts an adversarial query such that its gradient is small, they may be able to extract a large amount of information even if the activations are close to each other. In particular, this happens if the gradient of the samples lives in the null-space of the reference model, but not in that of the scrubbed model. In Figure 4 (right), we show that different images can indeed extract different amounts of information.

Figure 2: (Right) The loss landscape and training dynamics after pre-training are smooth and regular, which justifies our linearization approach to studying the dynamics. The black and yellow lines are the training paths on D and D_r, respectively. Notice that they remain close. (Upper left) Loss along the line joining the model at initialization and the model after training on D (the black path). (Lower left) Loss along the line joining the end points of the two paths, which is the ideal scrubbing direction.

4 An NTK-inspired forgetting procedure

We now introduce a new scrubbing procedure, which aims to minimize both the white-box and black-box bounds of Proposition 1. It relates to the one introduced in [14, 15], but it enjoys better numerical properties and can be computed without approximations (Section 4.1). In Section 5 we show that it gives better results under all commonly used metrics.

The main intuition we exploit is that most networks in common use are fine-tuned from pre-trained networks (e.g., on ImageNet), and that the weights do not move much during fine-tuning, remaining close to their pre-trained values. In this regime, the network activations may be approximated as a linear function of the weights. This is justified by a growing literature on the so-called Neural Tangent Kernel, which posits that large networks during training evolve in the same way as their linear approximation. Using the linearized model we can derive an analytical expression for the optimal forgetting function, which we validate empirically. However, we observe this to be misaligned with the weights actually learned by SGD, and introduce a very simple “isosceles trapezium” trick to realign the solutions (Supplementary Material).

Using the same notation as [23], we linearize the activations around the pre-trained weights w₀ as

f_w(x) ≈ f_{w₀}(x) + ∇_w f_{w₀}(x) · (w − w₀),

which gives the following expected training dynamics for the weights and the activations, respectively:


The kernel matrix appearing in these dynamics (of size nc × nc, where n is the number of training samples and c the number of classes) is called the Neural Tangent Kernel (NTK) matrix [23, 20]. Using these dynamics, we can approximate in closed form the final training point when training with D and with D_r, and compute the optimal “one-shot forgetting” vector to jump from one to the other:
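For a model that is exactly linear in its weights, the NTK matrix is simply the Gram matrix of per-sample gradients. A small sketch under that assumption, with a toy scalar-output linear model standing in for the linearized network (so the matrix is n × n):

```python
import numpy as np

# Toy linearized model: f_w(x) = w . phi(x), so grad_w f_w(x) = phi(x)
def features(x):
    return np.array([x, x**2, np.sin(x)])

xs = np.array([0.1, 0.5, 1.0, 2.0])
J = np.stack([features(x) for x in xs])   # n x p matrix of per-sample gradients

ntk = J @ J.T                             # empirical NTK: n x n Gram matrix

# By construction the NTK is symmetric positive semi-definite
eigvals = np.linalg.eigvalsh(ntk)
```

For a real network one would replace `features` with the Jacobian of the activations at the pre-trained weights w₀; the Gram-matrix structure, and hence the positive semi-definiteness, is unchanged.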

Proposition 2

Assuming an L2 regression loss (this assumption keeps the expression simple; in the Supplementary Material we show the corresponding expression for a softmax classification loss), the optimal scrubbing procedure under the NTK approximation is given by


where the columns of the gradient matrix are the gradients of the samples to forget, computed at w₀, and P is a projection matrix that projects these gradients onto the space orthogonal to the span of the gradients of all samples to retain. The remaining terms re-weight each direction before summing them together.
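The projection step can be sketched directly (variable names here are hypothetical; `J_r` and `J_f` hold the retained-sample and forget-sample gradients as columns). The projector removes every direction that matters for the retained data, so the applied shift cannot disturb them:

```python
import numpy as np

rng = np.random.default_rng(3)

p, n_r, n_f = 50, 10, 3
J_r = rng.standard_normal((p, n_r))   # columns: gradients of samples to retain
J_f = rng.standard_normal((p, n_f))   # columns: gradients of samples to forget

# Projector onto the orthogonal complement of span(columns of J_r)
P = np.eye(p) - J_r @ np.linalg.pinv(J_r)

# Forget-sample gradients with all retain-relevant directions removed
projected = P @ J_f
```

By construction `P @ J_r` vanishes and `P` is idempotent, which is what guarantees the shift leaves the retained directions untouched.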

Given this result, our proposed scrubbing procedure is:


where the shift is as in eq. 10, and we add Gaussian noise whose covariance is shaped by the Fisher Information Matrix computed at the shifted weights. The noise model is as in [14], and is designed to increase robustness to errors introduced by the linear approximation.

4.1 Relation between NTK and Fisher forgetting

In [14] and [15], a different forgetting approach is suggested, based on either the Hessian or the Fisher Matrix at the final point: assuming that the solutions of training with and without the data to forget are close, and that both are minima of their respective losses, one may compute the shift needed to jump from one minimum to the other of a slightly perturbed loss landscape. The resulting “scrubbing shift” relates to the Newton update:


In the case of an L2 loss, and using the NTK model, the Hessian is given by ∇f ∇fᵀ, which in this case also coincides with the Fisher Matrix [25]. To see how this relates to the NTK matrix, consider determining the convergence point of the linearized NTK model, which for an L2 regression is obtained by applying the pseudo-inverse of the gradient matrix to the regression targets. If the gradient matrix A is tall (more samples in the dataset than parameters in the network), then the pseudo-inverse is A⁺ = (AᵀA)⁻¹Aᵀ, recovering the scrubbing procedure considered by [14, 15]. However, if the matrix is wide (more parameters in the network than samples, as is often the case in deep learning), the Hessian is not invertible, and the pseudo-inverse is instead given by A⁺ = Aᵀ(AAᵀ)⁻¹, leading to our proposed procedure. In general, when the model is over-parametrized there is a large null-space of weights that do not change the activations or the loss. The degenerate Hessian is not informative of where the network will converge in this null-space, while the NTK matrix gives the exact point.
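The two regimes can be checked directly; these are standard pseudo-inverse identities, with full-rank random matrices standing in for the gradient matrix:

```python
import numpy as np

rng = np.random.default_rng(4)

# Tall matrix (more samples than parameters): A+ = (A^T A)^{-1} A^T
A_tall = rng.standard_normal((8, 3))
pinv_tall = np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T

# Wide matrix (more parameters than samples): A+ = A^T (A A^T)^{-1}
A_wide = rng.standard_normal((3, 8))
pinv_wide = A_wide.T @ np.linalg.inv(A_wide @ A_wide.T)
```

In the tall case the small matrix being inverted is the Hessian-like AᵀA; in the wide (over-parametrized) case it is the NTK-like AAᵀ, which is why the two scrubbing procedures part ways exactly when the model has more parameters than samples.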

5 Experiments

5.1 Datasets

We report experiments on smaller versions of CIFAR-10 [22] and Lacuna-10 [14], a dataset derived from the VGG-Faces [7] dataset. We obtain the small datasets with the following procedure: we randomly sample 500 images (100 images from each of the first 5 classes) from the training and test sets of CIFAR-10 and Lacuna-10 to obtain the small training and test sets, respectively. We also sample 125 images (5 classes × 25 images) from the training set to get the validation set. In short, we have 500 (5 × 100) examples each for training and testing, and 125 (5 × 25) examples for validation. On both datasets we choose to forget 25 random samples (5% of the dataset). Without loss of generality, we choose to forget samples from class 0.

5.2 Models and Training

We use All-CNN [31] (with batch-normalization added before each non-linearity) and ResNet-18 as the deep neural networks for our experiments. We pre-train the models on CIFAR-100/Lacuna-100 and then fine-tune them (all the weights) on CIFAR-10/Lacuna-10. We pre-train using SGD for 30 epochs with a learning rate of 0.1, momentum 0.9, and weight decay 0.0005. Pre-training helps improve the stability of SGD during fine-tuning. For fine-tuning we use a learning rate of 0.01 and weight decay 0.1; when applying weight decay, we bias the weights with respect to the initialization. During training we always use a batch size of 128, and we fine-tune the models until zero training error. Also, during fine-tuning we do not update the running mean and variance of the batch-normalization layers, to simplify the training dynamics. We run each experiment 3 times and report the mean and standard deviation.

5.3 Baselines

We consider three baselines for comparison: (i) Fine-tune, where we fine-tune the trained model on D_r (similar to catastrophic forgetting); (ii) Fisher forgetting [14], which scrubs the weights by adding Gaussian noise using the inverse of the Fisher Information Matrix as the covariance matrix; and (iii) Original, the original model trained on the complete dataset D without any forgetting. We compare these, and our proposal, with the optimal reference: the model trained from scratch on the retain set, that is, without ever using D_f. Values read from this reference model correspond to the green region in Figure 3 and represent the gold standard for forgetting: in those plots, an optimal algorithm should lie inside the green area.

Figure 3: Comparison of different baseline models (original, fine-tune) and forgetting methods (Fisher [14] and our proposed NTK method), using several readout functions ((Top) CIFAR and (Bottom) Lacuna). We benchmark them against a model that has never seen the data to forget (the gold reference for forgetting): values (mean and standard deviation) measured from this model correspond to the green region. An optimal scrubbing procedure should lie in the green region, or it will leak information about D_f. We compute three error read-out functions: (a) error on the forget set D_f, (b) error on the retain set D_r, (c) error on the test set. (d) Re-learn time for different scrubbing methods: how fast a scrubbed model re-learns the forgotten cohort when fine-tuned on the complete dataset. We measure the re-learn time as the first epoch at which the loss on D_f falls below a certain threshold. (e) Black-box membership inference attack: we construct a simple yet effective membership attack using the entropy of the output probabilities, and measure how often the attack model (using the activations of the scrubbed network) classifies a sample belonging to D_f as a training sample rather than being fooled by the scrubbing.
Figure 4: Error-forgetting trade-off. Using the proposed scrubbing procedure, by changing the variance of the noise we can reduce the information remaining in the weights (white-box bound, left) and in the activations (black-box bound, center). However, this comes at the cost of increasing the test error. Notice that the bound on the activations is much sharper than the bound on the weights at the same accuracy. (Right) Different samples leak different amounts of information. An attacker querying samples from D_f can gain much more information than one querying unrelated images. This suggests that adversarial samples may be crafted to leak even more information.
Figure 5: Scrubbing brings activations closer to the target. We plot the norm of the difference between the final activations (post-softmax) of the target model trained only on D_r and of models sampled along the line joining the original model and the proposed scrubbed model. The distance between the activations decreases as we move along the scrubbing direction. The distance is already low on the retain set D_r (red), as it corresponds to the data common to both models. However, the two models differ on the forget set D_f (blue), and we observe that the distance decreases as we move along the proposed scrubbing direction.

5.4 Readout Functions

We use multiple readout functions, similar to [14]: (i) error on the retain set D_r (should be small); (ii) error on the cohort to forget D_f (should be similar to that of the model re-trained from scratch on D_r); (iii) error on the test set (should be small); (iv) re-learn time, which measures how quickly a scrubbed model re-learns the cohort to forget when fine-tuned on the complete data. Re-learn time (measured in epochs) is the first epoch at which the loss of the fine-tuned scrubbed model falls below a certain threshold (the loss of the original model, trained on all of D, on D_f). (v) Black-box membership inference attack: we construct a simple yet effective black-box membership inference attack using the entropy of the output probabilities of the scrubbed model. Similar to the method in [29], we formulate the attack as a binary classification problem (class 1: belongs to the training set; class 0: belongs to the test set). To train the attack model (a Support Vector Classifier with a Radial Basis Function kernel) we use the retain set D_r as class 1 and the test set as class 0. We test the success of the attack on the cohort to forget D_f. Ideally, the attack accuracy for an optimally scrubbed model should be the same as for a model re-trained from scratch on D_r: a higher value implies incorrect (or no) scrubbing, while a lower value may result in the Streisand effect. (vi) Remaining information in the weights [14]; and (vii) remaining information in the activations: we compute an upper bound on the information the activations contain about the cohort to forget (after scrubbing) when queried with images from different subsets of the data.
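The entropy-based attack can be sketched without any learning machinery: a confident (low-entropy) prediction is evidence that the sample was trained on. Here a simple threshold stands in for the SVC used in the experiments, and all probability vectors are illustrative:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of an output probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def membership_guess(probs, threshold):
    """Class 1 ('was in the training set') if the model is confident."""
    return 1 if entropy(probs) < threshold else 0

# Illustrative outputs: training samples tend to be near one-hot,
# held-out samples tend to be more spread out.
train_like = [0.97, 0.01, 0.01, 0.01]   # low entropy -> likely trained on
test_like = [0.40, 0.30, 0.20, 0.10]    # high entropy -> likely held out

threshold = 0.5
guess_train = membership_guess(train_like, threshold)
guess_test = membership_guess(test_like, threshold)
```

A successful scrub should make the model's outputs on D_f look like `test_like` rather than `train_like`, so that this classifier does no better than chance.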

5.5 Results

Error readouts:

In Figure 3 (a-c), we compare error-based readout functions for different forgetting methods. Our proposed method outperforms Fisher forgetting, which incurs a high error on the retain and test sets to attain the same level of forgetting. This is due to the large distance in weight space between the original and reference models, which forces Fisher forgetting to add too much noise to erase information about D_f, erasing information about the retain set as well (high error on D_r in Figure 3). Instead, our proposed method first moves toward the reference model, thus minimizing the amount of noise to be added (Figure 1). Fine-tuning the model on D_r (catastrophic forgetting) does not actually remove information from the weights and performs poorly on all the readout functions.

Relearn time:

In Figure 3 (d), we compare the re-learn time for different methods. Re-learn time can be considered a proxy for the information about the cohort to forget remaining in the weights after scrubbing. We observe that the proposed method outperforms all the baselines, in accordance with the previous observations (Figure 3 (a-c)).

Membership attacks:

In Figure 3 (e), we compare the robustness of different scrubbed models against black-box membership inference attacks (the attack aims to identify whether the scrubbed model was ever trained on D_f). This can be considered a proxy for the information about D_f remaining in the activations. We observe that the attack accuracy for the proposed method lies in the optimal region (green), while Fine-tune does not forget, and Fisher forgetting may result in the Streisand effect, which is undesirable.

Closeness of activations:

In Figure 5, we show that the proposed scrubbing method brings the activations of the scrubbed model closer to those of the retrain model (the model retrained from scratch on D_r). We measure closeness by computing the norm of the difference between the activations (post-softmax) of the proposed scrubbed model and of the retrain model along the scrubbing direction. The distance between the activations on the cohort to forget D_f decreases as we move along the scrubbing direction and reaches its minimum at the scrubbed model, while it remains almost constant on the retain set. Thus, the activations of the scrubbed model show the desired behaviour on both D_f and D_r.

Error-forgetting trade-off

In Figure 4, we plot the trade-off between the test error and the information remaining in the weights and in the activations, obtained by changing the scale of the variance of the Fisher noise. We can reduce the remaining information, but this comes at the cost of increasing the test error. We observe that the black-box bound on the information accessible with one query is much tighter than the white-box bound at the same accuracy (compare the x-axes of the left and center plots). Finally, in (right), we show that query samples belonging to the cohort to forget D_f leak more information about D_f than samples from the retain or test set, confirming that carefully selected samples are indeed more informative to an attacker than random samples.

6 Discussion

Recent work [2, 14, 15] has started providing insights on the amount of information that can be extracted from the weights about a particular cohort of the training data, as well as constructive algorithms to “selectively forget.” Note that forgetting alone could be trivially obtained by replacing the model with a random vector generator, obviously to the detriment of performance, or by retraining the model from scratch, to the detriment of (training-time) complexity. In some cases, the data to be retained may no longer be available, so the latter may not even be an option.

We introduce a scrubbing procedure based on the NTK linearization, designed to minimize both a white-box bound (which assumes the attacker has the weights) and a newly introduced black-box bound. The latter bounds the information that can be obtained about a cohort using only the observed input-output behavior of the network, and is relevant when the attacker performs a bounded number of queries. If the attacker is allowed infinitely many observations, whether the black-box and white-box attacks are equivalent remains open: can an attacker always craft sufficiently exciting inputs so that the exact values of all the weights can be inferred? An answer would be akin to a generalized “Kalman decomposition” for deep networks. This alone is an interesting open problem, as it has recently been pointed out that good “clone models” can be created by copying the responses of a black-box model on a relatively small set of exciting inputs, at least in the restricted case where every model is fine-tuned from a common pre-trained model [21].

While our bounds are tighter than others proposed in the literature, our method has limitations, chiefly computational complexity. The main source of complexity is computing and storing the projection matrix in eq. 10, which naively requires memory and time that grow with both the number of parameters and the number of retained samples. For this reason, we conduct experiments at a relatively small scale, sufficient to validate the theoretical results, but our method is not yet scalable to production level. However, we note that this matrix is the projector onto the orthogonal complement of the subspace spanned by the training-sample gradients, and there is a long history [6] of numerical methods to compute such an operator incrementally, without storing it fully in memory. We leave this promising option for scaling the method as a subject of future investigation.