Radioactive data: tracing through training

02/03/2020 ∙ by Alexandre Sablayrolles, et al.

We want to detect whether a particular image dataset has been used to train a model. We propose a new technique, radioactive data, that makes imperceptible changes to this dataset such that any model trained on it will bear an identifiable mark. The mark is robust to strong variations such as different architectures or optimization methods. Given a trained model, our technique detects the use of radioactive data and provides a level of confidence (p-value). Our experiments on large-scale benchmarks (Imagenet), using standard architectures (Resnet-18, VGG-16, Densenet-121) and training procedures, show that we can detect usage of radioactive data with high confidence (p<10^-4) even when only 1% of the training data is radioactive. Our method is robust to data augmentation and the stochasticity of deep network optimization. As a result, it offers a much higher signal-to-noise ratio than data poisoning and backdoor methods.




1 Introduction

The availability of large-scale public datasets has accelerated the development of machine learning. The Imagenet collection (Deng et al., 2009) and challenge (Russakovsky et al., 2015) contributed to the success of deep learning architectures (Krizhevsky et al., 2012). The annotation of precise instance segmentations on the large-scale COCO dataset (Lin et al., 2014) enabled large improvements of object detectors and instance segmentation models (He et al., 2017). Even in weakly-supervised (Joulin et al., 2016; Mahajan et al., 2018) and unsupervised learning (Caron et al., 2019), where annotations are scarcer, state-of-the-art results are obtained on large-scale datasets collected from the Web (Thomee et al., 2015).

Machine learning and deep learning models are trained to solve specific tasks (e.g. classification, segmentation), but as a side-effect they reproduce the biases present in their training datasets (Torralba et al., 2011). Such a bias is a weak signal that a particular dataset has been used to solve a task. Our objective in this paper is to enable traceability for datasets: by introducing a specific mark into a dataset, we want to provide a strong signal that this dataset has been used to train a model.

Figure 1: Illustration of our approach: we want to determine through a statistical test (p-value) whether a network has seen a marked dataset or not. The distribution (shown on the histograms) of a statistic on the network weights is clearly separated between the vanilla and radioactive CNNs. Our method works with both white-box and black-box access to the network.

We thus slightly change the dataset, effectively substituting the data for similar-looking marked data (isotopes). Let us assume that this data, as well as other collected data, is used to train a convolutional neural network (convnet). After training, the model is inspected to assess the use of radioactive data. The convnet is accessed either (1) explicitly, when the model and corresponding weights are available (white-box setting), or (2) implicitly, if only the decision scores are accessible (black-box setting). From that information, we answer the question of whether any radioactive data has been used to train the model, or if only vanilla data was used. We want to provide a statistical guarantee with the answer, in the form of a p-value.


Passive techniques such as those employed to measure dataset bias (Torralba et al., 2011) or to do membership inference (Sablayrolles et al., 2019; Shokri et al., 2017) cannot provide sufficient empirical or statistical guarantees. More importantly, their measurement is relatively weak and therefore cannot be considered as evidence: they are likely to confuse datasets having the same underlying statistics. In contrast, we target a p-value far below standard significance levels, meaning there is a very low probability that the results we observe are obtained by chance.

Therefore, we focus on active techniques, where we apply visually imperceptible changes to the images. We consider the following three criteria: (1) the change should be tiny, as measured by an image quality metric like PSNR (Peak Signal to Noise Ratio); (2) the technique should be reasonably neutral with respect to the end-task, i.e., the accuracy of the model trained with the marked dataset should not be significantly modified; (3) the method should not be detectable by a visual analysis of failure cases and should be immune to a re-annotation of the dataset. This disqualifies techniques that employ incorrect labels as a mark, which are easy to detect by a simple analysis of the failure cases. Similarly, “backdoor” techniques are easy to identify and circumvent with outlier detection (Tran et al., 2018).

At this point, one may draw the analogy between this problem and watermarking (Cox et al., 2002), whose goal is to imprint a mark into an image such that it can be re-identified with high probability. We point out that traditional image-based watermarking is ineffective in our context: the learning procedure ignores the watermarks if they are not useful to guide the classification decision (Tishby et al., 2000). Therefore regular watermarking leaves no exploitable trace after training. We need to force the network to keep the mark through the learning process, whatever the learning procedure or architecture.

To that end, we propose radioactive data. As illustrated in Figure 1, and similarly to radioactive markers in medical applications, we introduce marks (data isotopes) that remain through the learning process and that are detectable with high confidence in a neural network. Our idea is to craft a class-specific additive mark in the latent space before the classification layer. This mark is propagated back to the pixels with a marking (pretrained) network.

This behaviour is confirmed by an analysis of the latent space before classification. It shows that the network devotes a small part of its capacity to keep track of our “radioactive tracers”.

Our experiments on Imagenet confirm that our radioactive marking technique is effective: with almost invisible changes to the images, and when marking only a fraction of them, we are able to detect the use of our radioactive images with very strong confidence. Note that our radioactive marks, while visually imperceptible, might be detected by a statistical analysis of the latent space of the network. Our aim in this paper is to provide a proof of concept that marking data is possible with statistical guarantees; the analysis of defense mechanisms lies outside the scope of this paper. The deep learning community has developed a variety of defense mechanisms against “adversarial attacks”: these techniques prevent test-time tampering, but are not designed to prevent training-time attacks on neural networks.

Our conclusions are supported in various settings: we consider both the black-box and white-box settings, and we change the tested architecture such that it differs from the one employed to insert the mark. We also depart from the common restrictions of many data-poisoning works (Shafahi et al., 2018; Biggio et al., 2012), where only the logistic layer is retrained, and which consider small datasets (CIFAR) and/or limited data augmentation. We verify that the radioactive mark holds when the network is trained from scratch on a radioactive Imagenet dataset with standard random data augmentations. For example, for a Resnet-18 trained from scratch, we achieve a very low p-value when only 1% of the training data is radioactive, while the accuracy of the network is not noticeably changed.

The paper is organized as follows. Section 2 reviews the related literature: we discuss related work in watermarking, and explain how the problem that we tackle is related to and differs from data poisoning. In Section 3, after introducing a few mathematical notions, we describe how we add markers, and discuss the detection methods in both the white-box and black-box settings. Section 4 provides an analysis of the latent space learned with our procedure and compares it to the original one. We present qualitative and quantitative results in different settings in Section 5, and conclude in Section 6.

2 Related work


Watermarking is a way of tracking media content by adding a mark to it. In its simplest form, a watermark is an addition in the pixel space of an image that is not visually perceptible. Zero-bit watermarking techniques (Cayre et al., 2005) modify the pixels of an image so that its Fourier transform lies in the cone generated by an arbitrary random direction, the “carrier”. When the same image or a slightly perturbed version of it is encountered, the presence of the watermark is assessed by verifying whether the Fourier representation lies in the cone generated by the carrier. Zero-bit watermarking detects whether an image is marked or not, but in general watermarking also considers the case where the marks carry a number of bits of information (Cox et al., 2002).

Traditional watermarking is notoriously not robust to geometrical attacks (Vukotić et al., 2018). In contrast, the latent space associated with deep networks is almost invariant to such transformations, due to the train-time data augmentations. This observation has motivated several authors to employ convnets to watermark images  (Vukotić et al., 2018; Zhu et al., 2018) by inserting marks in this latent space. HiDDeN (Zhu et al., 2018) is an example of these approaches, applied either for steganographic or watermarking purposes.

Adversarial examples.

Neural networks have been shown to be vulnerable to so-called adversarial examples (Carlini and Wagner, 2017; Goodfellow et al., 2015; Szegedy et al., 2014): given a correctly-classified image x and a trained network, it is possible to craft a perturbed version x̃ that is visually indistinguishable from x, such that the network misclassifies x̃.

Privacy and membership inference.

Differential privacy (Dwork et al., 2006) protects the privacy of training data by bounding the impact that any single element of the training set has on the trained model. The privacy budget ε limits the impact that the substitution of one training example can have on the log-likelihood of the estimated parameter vector. It has become the standard for privacy in the industry, and the budget ε trades off between learning statistical facts and hiding the presence of individual records in the training set. Recent work (Abadi et al., 2016; Papernot et al., 2018) has shown that it is possible to learn deep models with differential privacy on small datasets (MNIST, SVHN) with a small budget ε. Individual privacy degrades gracefully to group privacy: when testing for the joint presence of a group of k samples in the training set of a model, an ε-private algorithm provides kε guarantees.

Membership inference (Shokri et al., 2017; Carlini et al., 2018; Sablayrolles et al., 2019) is the reciprocal operation of differentially private learning: it predicts, given a trained model and a sample, whether the sample was part of the model’s training set. These classification approaches do not provide any guarantee: when a membership inference model predicts that an image belongs to the training set, it does not come with a level of statistical significance. Furthermore, these techniques require training multiple models to simulate datasets with and without a given image, which is computationally intensive.

Data poisoning (Biggio et al., 2012; Steinhardt et al., 2017; Shafahi et al., 2018) studies how modifying training data points affects a model’s behavior at inference time. Backdoor attacks (Chen et al., 2017; Gu et al., 2017) are a recent trend in machine learning attacks: they choose a class, and add to it unrelated samples from other classes overlaid with a “trigger” pattern; at test time, any sample bearing the same trigger will be classified into that class. Backdoor techniques bear similarity with our radioactive tracers; in particular, their trigger is close to our carrier. However, our method differs in two main aspects. First, we perform “clean-label” attacks, i.e., we perturb training points without changing their labels. Second, we provide statistical guarantees in the form of a p-value.

Watermarking deep learning models.

A few works (Adi et al., 2018; Yeom et al., 2018) focus on watermarking deep learning models: these works modify the parameters of a neural network so that any downstream use of the network can be verified. Our assumption is different: in our case, we control the training data, but the training process is not controlled.

3 Our method

In this section, we describe our method for marking data. It consists of three stages. In the marking stage, the radioactive mark is added to the vanilla training images, without changing their labels. The training stage uses vanilla and/or marked images to train a multi-class classifier with regular learning algorithms. Finally, in the detection stage, we examine the model to determine whether marked data was used or not.

We denote by x an image, i.e. a 3-dimensional tensor whose dimensions are height, width and color channel. We consider a classifier with C classes, composed of a feature extraction function φ (a convolutional neural network) followed by a linear classifier with weights w_1, …, w_C. It classifies a given image x as

    argmax_i  w_i^T φ(x).
3.1 Statistical preliminaries

Cosine similarity with a random unitary vector.

Given a fixed vector x and a random vector u distributed uniformly over the unit sphere in dimension d (||u||_2 = 1), we are interested in the distribution of their cosine similarity c(u, x) = u^T x / (||u|| ||x||). A classic result from statistics (Iscen et al., 2017) shows that this cosine similarity follows an incomplete beta distribution with parameters (d−1)/2 and 1/2: for τ ≥ 0,

    P(c(u, x) ≥ τ) = (1/2) · I_{1−τ²}((d−1)/2, 1/2),

where I denotes the regularized incomplete beta function. In particular, c(u, x) has expectation 0 and variance 1/d.
Combination of p-values.

Fisher’s method (Fisher, 1925) combines the p-values of multiple tests. We consider k statistical tests, independent under the null hypothesis H0. Under H0, the corresponding p-values p_1, …, p_k are distributed uniformly in [0, 1]. Hence −2 log(p_i) follows an exponential distribution, which corresponds to a chi-squared distribution with two degrees of freedom. The quantity

    Z = −2 Σ_{i=1}^{k} log(p_i)

thus follows a chi-squared distribution with 2k degrees of freedom. The combined p-value of the k tests is the probability that a chi-squared variable with 2k degrees of freedom exceeds the value of Z we observe.
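Fisher's method is short to implement: for an even number of degrees of freedom 2k, the chi-squared survival function has the closed form exp(-x/2) Σ_{j<k} (x/2)^j / j!, which the pure-Python sketch below uses (the function name is ours).

```python
import math

def fisher_combined_pvalue(pvalues):
    """Combine independent p-values with Fisher's method.

    Under H0, -2 * sum(log p_i) follows a chi-squared distribution with
    2k degrees of freedom; for even degrees of freedom its survival
    function has the closed form used below.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # P(chi2_{2k} >= x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= (x / 2.0) / j
        total += term
    return math.exp(-x / 2.0) * total

print(fisher_combined_pvalue([0.03]))       # a single test: returns ~0.03
print(fisher_combined_pvalue([0.05] * 10))  # ten moderate tests combine
                                            # into a much stronger one
```

Combining a single p-value returns it unchanged, while many individually moderate p-values combine into a far smaller one, which is exactly the behaviour exploited when aggregating the per-class tests.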

3.2 Additive marks in feature space

Figure 2: Illustration of our method in high dimension. In high dimension, the linear classifier w that separates the class is almost orthogonal to the carrier u with high probability. Our method shifts points belonging to a class in the direction u, therefore aligning the linear classifier with u.

We first tackle a simple variant of our tracing problem. In the marking stage, we add a mark αu, where u is a random isotropic unit vector (||u||_2 = 1) and α controls the strength of the mark, to the features of all training images of one class. The direction u is our carrier.

If radioactive data is used at training time, the linear classifier w of the corresponding class is updated with weighted sums of features that all contain the term αu. The linear classifier is thus likely to have a positive dot product with the direction u, as shown in Figure 2.

At detection time, we examine the linear classifier w to determine if it was trained on radioactive or vanilla data. We test the hypothesis H1: “w was trained using radioactive data” against the null hypothesis H0: “w was trained using vanilla data”. Under the null hypothesis H0, u is a random vector independent of w, so their cosine similarity c(u, w) follows the incomplete beta distribution of Section 3.1. Under hypothesis H1, the classifier vector w is more aligned with the direction u, so c(u, w) is likely to be higher.

Thus, if we observe a high value of c(u, w), its corresponding p-value (the probability of observing a value at least as high under the null hypothesis H0) is low, and we can conclude with high significance that radioactive data has been used.


The extension to C classes follows. In the marking stage, we sample C i.i.d. random directions u_1, …, u_C and add u_i to the features of the images of class i. At detection time, under the null hypothesis, the cosine similarities c(u_i, w_i) are independent (since the u_i are independent), and we can thus combine the per-class p-values using Fisher’s combined probability test (Section 3.1) to obtain a p-value for the whole dataset.
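The single-class test can be illustrated with a toy simulation (our own sketch; the carrier strength 0.5 and dimension 256 are arbitrary choices): a classifier independent of the carrier yields an unremarkable p-value, while a classifier nudged toward the carrier yields a near-zero one. Here the p-value is estimated by Monte Carlo over the null distribution rather than with the incomplete beta formula.

```python
import math
import random

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cos_sim(a, b):
    return sum(x * y for x, y in zip(a, b))  # a, b assumed unit-norm

rng = random.Random(1)
d = 256

# Hypothetical carrier u and two classifier vectors: one "trained on
# vanilla data" (independent of u), one "trained on radioactive data"
# (pushed toward u by the mark).
u = unit([rng.gauss(0, 1) for _ in range(d)])
w_vanilla = unit([rng.gauss(0, 1) for _ in range(d)])
w_radio = unit([x + 0.5 * y for x, y in zip(w_vanilla, u)])

def empirical_pvalue(c_obs, u, d, rng, n=5000):
    """P(cosine >= c_obs) under the null, estimated by Monte Carlo."""
    count = 0
    for _ in range(n):
        w = unit([rng.gauss(0, 1) for _ in range(d)])
        if cos_sim(w, u) >= c_obs:
            count += 1
    return count / n

p_vanilla = empirical_pvalue(cos_sim(w_vanilla, u), u, d, rng)
p_radio = empirical_pvalue(cos_sim(w_radio, u), u, d, rng)
# p_vanilla is unremarkable; p_radio is (near) zero
```

In the actual detection test the closed-form beta distribution replaces the Monte Carlo estimate, and the per-class p-values are combined with Fisher's method.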

3.3 Image-space perturbations

Figure 3: Radioactive images from Holidays (Jégou et al., 2008) with random crop. First row: original image. Second row: image with a radioactive mark. Third row: visualisation of the mark, amplified for display. Fourth row: the mark is exaggerated further, strongly amplifying the additive noise so that the modification becomes obvious w.r.t. the original image.

We now assume that we have a fixed, known feature extractor φ. At marking time, we wish to modify the pixels of an image x such that its features move in the direction u. We can achieve this by backpropagating gradients in the image space. This setup is very similar to adversarial examples (Goodfellow et al., 2015; Szegedy et al., 2014). More precisely, we optimize over the pixel space by running the following optimization program:

    min_{x̃ : ||x̃ − x||_∞ ≤ R}  L(x̃),

where the radius R is a hard upper bound on the change of color levels of the image that we can accept. The loss is a combination of three terms:

    L(x̃) = −⟨φ(x̃), u⟩ + λ_1 ||x̃ − x||_2 + λ_2 ||φ(x̃) − φ(x)||_2.

The first term encourages the features to align with u; the two other terms penalize the distance in both pixel and feature space. In practice, we optimize this objective by running SGD with a constant learning rate in the pixel space, projecting back into the L_∞ ball at each step and periodically rounding to integral pixel values.
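The projected-gradient loop can be sketched as follows. This is a toy version with a fixed linear "feature extractor" standing in for the pretrained convnet and only the alignment term of the loss kept (the distance penalties are dropped for brevity); all names and constants are illustrative.

```python
import random

def phi(A, x):
    """Toy linear 'feature extractor': phi(x) = A x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

rng = random.Random(0)
n_pix, n_feat = 8, 4
A = [[rng.gauss(0, 1) for _ in range(n_pix)] for _ in range(n_feat)]
u = [rng.gauss(0, 1) for _ in range(n_feat)]     # carrier direction
x = [rng.uniform(0, 255) for _ in range(n_pix)]  # "image" pixels
R = 5.0                                          # L-infinity radius
lr = 0.05

x_mark = list(x)
for _ in range(100):
    # Gradient of -<phi(x_mark), u> w.r.t. the pixels is -A^T u
    grad = [-sum(A[i][j] * u[i] for i in range(n_feat))
            for j in range(n_pix)]
    x_mark = [p - lr * g for p, g in zip(x_mark, grad)]
    # Project back into the L-infinity ball of radius R around x
    x_mark = [min(max(p, x0 - R), x0 + R) for p, x0 in zip(x_mark, x)]

# Features of the marked image are more aligned with the carrier,
# while every pixel moved by at most R color levels.
gain = dot(phi(A, x_mark), u) - dot(phi(A, x), u)
max_change = max(abs(p - x0) for p, x0 in zip(x_mark, x))
```

With a real convnet the gradient is obtained by backpropagation instead of the closed form above, and the distance penalties keep the marked image close to the original in both pixel and feature space.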

This procedure is a generalization of classical watermarking in the Fourier space. In that case the “feature extractor” is invertible via the inverse Fourier transform, so the marking does not need to be iterative.

Data augmentation.

The training stage most likely involves data augmentation, so we take it into account at marking time. Given an augmentation parameter ω, the input to the neural network is not the image x but its transformed version T(x, ω). In practice, the data augmentations used are crop and/or resize transformations, so ω encodes the coordinates of the center and/or the size of the cropped image. These augmentations are differentiable with respect to the pixel space, so we can backpropagate through them. Thus, we emulate augmentations by minimizing the expected loss over augmentations:

    min_{x̃ : ||x̃ − x||_∞ ≤ R}  E_ω[ L(T(x̃, ω)) ].
Figure 3 shows examples of radioactive images and their vanilla versions. We can see that the radioactive mark is not visible to the naked eye, except when we amplify it for visualization purposes (last rows of Figure 3).

3.4 White-box test with subspace alignment

We now tackle the more difficult case where the training stage includes the training of the feature extractor. In the marking stage, we use a feature extractor φ_0 to generate radioactive data. At training time, a new feature extractor φ_t is trained together with the classification matrix W. Since φ_t is trained from scratch, there is no reason that the output spaces of φ_0 and φ_t would correspond to each other; in particular, neural networks are invariant to permutation and rescaling of their hidden units.

To address this problem at detection time, we align the subspaces of the two feature extractors. We find a linear mapping M such that φ_0(x) ≈ M φ_t(x). The linear mapping M is estimated by least-squares regression:

    M = argmin_M  Σ_i ||φ_0(x_i) − M φ_t(x_i)||_2^2.

In practice, we use vanilla images from a held-out set (the validation set) to perform this estimation.

The classifier we manipulate at detection time is thus W M. The lines of W M form classification vectors aligned with the output space of φ_0, and we can compare these vectors to the carriers u_i in cosine similarity. Under the null hypothesis, the aligned classification vectors are random vectors independent of the u_i, so the cosine similarity still follows the incomplete beta distribution, and we can apply the techniques of Subsection 3.2.
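A toy version of the alignment step can be written in a few lines (our own sketch, using NumPy): here the trained extractor is simulated as a permutation plus rescaling of the marking extractor, one of the invariances mentioned above, and M is recovered by least squares on held-out features.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_heldout = 16, 200

# Simulate the retrained extractor as a permutation + rescaling of the
# marking extractor (names and sizes are illustrative).
perm = rng.permutation(d)
scale = rng.uniform(0.5, 2.0, size=d)

def phi_0(x):                 # features from the marking network
    return x

def phi_t(x):                 # features from the retrained network
    return scale * x[..., perm]

X = rng.normal(size=(n_heldout, d))   # held-out samples' features
F0, Ft = phi_0(X), phi_t(X)

# Solve min_M ||F0 - Ft M^T||^2 by least squares
M_T, *_ = np.linalg.lstsq(Ft, F0, rcond=None)

# After alignment, phi_0(x) is approximately phi_t(x) @ M_T
x_new = rng.normal(size=d)
aligned = phi_t(x_new) @ M_T
err = np.linalg.norm(aligned - phi_0(x_new))
```

Because the simulated relation between the two feature spaces is exactly linear, the recovery here is near-perfect; with two independently trained convnets, the regression only approximates the correspondence, which is sufficient for the cosine-similarity test.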

3.5 Black-box test

In the case where we do not have access to the weights of the neural network, we can still assess whether the model has seen contaminated images by analyzing its loss. If the loss of the model is lower on marked images than on vanilla images, it indicates that the model was trained on radioactive images. If we have unlimited access to a black-box model, it is also possible to train a student model that mimics the outputs of the black-box model. In that case, we can map the problem back to an analysis of the white-box student model.
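The loss-gap test can be sketched on simulated confidences (the probabilities below are synthetic stand-ins for the softmax outputs a queried model would return; the means 0.70 and 0.80 are arbitrary):

```python
import math
import random

rng = random.Random(0)

def cross_entropy(prob_true_class):
    """Cross-entropy loss given the probability of the true class."""
    return -math.log(prob_true_class)

# Simulated true-class probabilities: a model trained on radioactive
# data tends to be slightly more confident on marked images.
vanilla_probs = [min(0.999, max(0.001, rng.gauss(0.70, 0.1)))
                 for _ in range(1000)]
marked_probs = [min(0.999, max(0.001, rng.gauss(0.80, 0.1)))
                for _ in range(1000)]

loss_vanilla = sum(cross_entropy(p) for p in vanilla_probs) / 1000
loss_marked = sum(cross_entropy(p) for p in marked_probs) / 1000
gap = loss_vanilla - loss_marked
# A positive gap suggests the model has seen the marked images.
```

In practice the gap is computed from the model's actual scores on held-out vanilla and marked versions of the same images, which cancels out per-image difficulty.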

Figure 4: Decomposition of learned classifiers into three parts: the “semantic direction” (y-axis), the carrier direction (x-axis) and noise (represented by the distance between a point and the unit circle). The semantic and carrier directions are 1-D subspaces, while the noise corresponds to the complementary (high-dimensional) subspace. Colors represent the percentage of radioactive data in the training set, from low (dark blue) to high (yellow). Even when most of the data is radioactive, the learned classifier remains strongly aligned with its semantic direction. Each dot represents the classifier for a given class. Note that the semantic and carrier directions are not exactly orthogonal, but their cosine similarity is very small.

4 Analysis of the latent feature space

In this section, we analyze how the classifier learned on a radioactive dataset is related to (1) a classifier learned on unmarked images and (2) the direction of the carrier. For the sake of analysis, we take the simplest case where the mark is added in the latent feature space just before the classification layer, and we assume that only the logistic regression has been re-trained.

For a given class, we analyze how the classifier learned with a mark is explained by

  1. the “semantic” space, that is, the direction of the classifier learned by a vanilla model. This is a 1-dimensional subspace identified by a vector w*;

  2. the direction of the carrier u, favored by the insertion of our class-specific mark;

  3. the noise space, which is in direct sum with the span of w* and u. This noise space is due to the randomness of the initialization and of the optimization procedure (SGD and random data augmentations).

The rationale for performing this decomposition is to quantify, relative to the norm of the classifier vector, which subspace dominates depending on the fraction of marked data.

Figure 5: Analysis of how classification directions re-learned with a logistic regression on marked images decompose between (1) the original subspace; (2) the mark subspace; (3) the noise space. Logistic regression with a small fraction (left) or a large fraction (right) of the images marked.

This decomposition is analyzed in Figure 4, where we make two important observations. First, the 2-dimensional subspace spanned by the semantic and carrier directions contains most of the projection of the new classifier, as shown by the fact that the norm of the projected vector is close to 1 (visually, the points lie close to the unit circle). Second, and unsurprisingly, the contribution of the semantic vector remains significant and dominant compared to the mark, even when most of the dataset is marked. This property explains why our procedure has only a small impact on accuracy.

Figure 5 shows the histograms of cosine similarities between the classifiers and random directions, the mark direction and the semantic direction. We can see that the classifiers are well aligned with the mark even when only a fraction of the data is marked.

5 Experiments

5.1 Image classification setup

In order to provide a comparison on widely-used vision benchmarks, we use Imagenet (Deng et al., 2009), a dataset of natural images with 1,281,167 images belonging to 1,000 classes. We first consider the Resnet-18 and Resnet-50 models (He et al., 2016). We use Pytorch (Paszke et al., 2017) and adopt its standard data augmentation settings (random crops resized to the 224×224 network input). We train with SGD with momentum and weight decay, with the batch split across multiple GPUs. We use the waterfall learning rate schedule: the learning rate starts at 0.1 (as recommended in (Goyal et al., 2017)) and is divided by 10 every 30 epochs. On vanilla Imagenet, we obtain standard top-1 and top-5 accuracies with our Resnet-18. We ran experiments varying the random initialization and the order of elements seen during SGD, and found that the top-1 accuracy varies by a small margin from one experiment to another.

5.2 Experimental setup and metrics

We modify Imagenet images by inserting our radioactive mark, and retrain models on this radioactive data using the learning algorithm described above. We then analyze these “contaminated” models for the presence of our mark. We report several measures of performance. On the images, we report the PSNR, i.e. the magnitude of the perturbation required to add the radioactive mark. On the model, we report the p-value that measures how confident we are that radioactive data was used to train the model, as well as the accuracy of this model on vanilla (held-out) data. We conduct experiments where we mark only a fraction q of the data, with q ranging from 1% to 10%.

As a sanity check, we ran our radioactive detector on pretrained models of the Pytorch zoo and found unremarkable p-values for Resnet-18 and Resnet-50, which is expected: in the absence of radioactive data, these p-values should be uniformly distributed between 0 and 1.

5.3 Preliminary experiment: comparison to the backdoor technique

We experimented with the backdoor technique of Chen et al. (Chen et al., 2017) in the context of our marking problem. In general, the backdoor technique adds unrelated images to a class, plus a “trigger” that is consistent across these added images. In their work, Chen et al. need to poison a substantial fraction of the data in a class to activate their trigger. We adapted their technique to the “clean-label” setup on Imagenet: we blend a trigger (a Gaussian pattern) into images of a class. We observed that it is possible to detect this trigger at train time, albeit at a low image quality that is visually perceptible. In this case, the model is more confident on images that have the trigger than on vanilla images in a majority of cases. However, we also observed that any Gaussian noise activates the trigger: hence we have no guarantee that images with our particular mark were used.

5.4 Results

   % radioactive      1          2          5          10
   Center Crop     <10^-150   <10^-150   <10^-150   <10^-150
   Random Crop     <10^-150   <10^-150

Table 1: p-value (statistical significance) for the detection of radioactive data usage when only a fraction of the training data is radioactive. Results for a logistic regression classifier trained on Imagenet with Resnet-18 features, with only a percentage of the data bearing the radioactive mark. Our method identifies with very high confidence that the classifier was trained on radioactive data, even when only 1% of the training data is radioactive. The radioactive data has a moderate impact on the top-1 accuracy of the classifier.

Same architecture.

We first analyze the results in Table 1 of a Resnet-18 model with fixed features trained on Imagenet. We are able to detect that the model was trained on radioactive data with very high confidence, for both center crop and random crop. The model overfits more in the center-crop setting, hence it learns the radioactive mark more, which is why the p-values are lower for center crops. Conversely, on random crops, marking the data has less impact on the accuracy of the model.

Table 2 shows the results of retraining a Resnet-18 from scratch on radioactive data. The results confirm that our watermark can be detected even when only a small fraction of the data used at train time is radioactive. This setup is harder for our marks: since the network is retrained from scratch, the directions learned in the new feature space have no a priori reason to be aligned with those of the network we used for marking. Table 2 shows two interesting results. First, the gap in accuracy is smaller than when retraining only the logistic regression layer; in particular, a low fraction of radioactive data does not impact accuracy. Second, data augmentation actually helps the radioactive process. We hypothesize that the multiple crops make the network believe it sees more variety, while in reality the feature representations of all these crops are aligned with our carrier, which pushes the network to learn the carrier direction.

% radioactive    1    2    5    10
Center Crop
Random Crop

Table 2: p-value (statistical significance) for radioactivity detection. Results for a Resnet-18 trained from scratch on Imagenet, with only a percentage of the data bearing the radioactive mark. We are able to identify models trained from scratch on only a small fraction of radioactive data. The presence of radioactive data has a negligible impact on the accuracy of the learned model as long as the fraction of radioactive data stays moderate.

Black-box results.

We report in Figure 6 the results of our black-box detection test. We measure the difference between the loss on vanilla samples and the loss on radioactive samples: when this gap is positive, the model fares better on radioactive images, indicating that it has been trained on the radioactive data. We can see that the use of radioactive data can be detected when a sufficiently large fraction of the training set is radioactive. When a smaller portion of the data is radioactive, the model fares better on vanilla data than on radioactive data, and the test is inconclusive.

Figure 6: Black-box detection of the usage of radioactive data. The gap between the loss on radioactive and vanilla samples becomes clearly positive when a large fraction of the data is contaminated.


Given only black-box access to a model (assuming access to the full softmax output), we experiment with distilling this model and testing the distilled model for radioactivity. In this setup, it is possible to detect the use of radioactive data on the distilled model, with slightly lower performance compared to white-box access. We give detailed results in Appendix A.

5.5 Ablation analysis

Architecture transfer.

We ran experiments on different architectures with the same training procedure: Resnet-50, VGG-16 and Densenet-121. The results are shown in Table 3: the p-values and trends are similar to what we obtain with Resnet-18 (Table 2). This is non-trivial, as there is no reason that the feature space of a VGG-16 would behave in the same way as that of a Resnet-18; yet, after alignment, we are able to detect the presence of our radioactive mark with high statistical significance. In some cases, the p-value is even stronger than the one we obtain when retraining the marking architecture (Resnet-18) itself. We hypothesize that larger models overfit more, and thus learn the mark more acutely.

 % radioactive    1    2    5    10    20
 Resnet-50       <10^-150
 Densenet-121    <10^-150

Table 3: p-value (statistical significance) for radioactivity detection. Results for different architectures trained from scratch on Imagenet. Even though the radioactive data was crafted using a Resnet-18, models with other architectures also become radioactive when trained on this data.

Transfer to other datasets.

We conducted experiments in a slightly different setup: we mark images from the Places205 dataset, but use a network pretrained on Imagenet for the marking phase. These experiments show that even if the marking network was fit on a different distribution, the marking still works and we are able to detect it. Results are shown in Table 4: when the fraction q of marked training data is high enough, we can detect radioactivity with a strong statistical significance.

 % radioactive 10 20 50 100
Table 4: p-value of radioactivity detection. A Resnet-18 is trained on Places205 from scratch, and a percentage of the dataset is radioactive. When a sufficiently large fraction of the data is radioactive, we are able to detect radioactivity with strong confidence.

Correlation with class difficulty.

Given that radioactive data adds a marker to the features that is correlated with the class label, we expect the network to learn the mark more strongly when the class accuracy is low. To validate this hypothesis, we compute the Spearman correlation between the per-class accuracy and the cosine between the classifier and the carrier: this correlation is negative and statistically significant. This confirms that the network relies more on the mark when learning difficult classes.
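Such a rank-correlation check is straightforward to reproduce. The sketch below uses synthetic per-class numbers purely for illustration (the arrays and the negative trend are made up); in practice `class_acc` and `mark_cos` would be the measured per-class accuracies and classifier-carrier cosines.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_classes = 1000

# Hypothetical per-class measurements: top-1 accuracy of each class, and the
# cosine between that class's classifier and its carrier (toy negative relation).
class_acc = rng.uniform(0.3, 0.95, size=n_classes)
mark_cos = 0.2 - 0.15 * class_acc + rng.normal(0.0, 0.02, size=n_classes)

# Spearman's rho is rank-based, so it captures any monotonic relation
# without assuming linearity.
rho, pval = spearmanr(class_acc, mark_cos)
```

A significantly negative rho supports the hypothesis that harder classes carry a stronger mark.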

5.6 Discussion

The experiments validate that our radioactive marks do indeed imprint on the trained models. We also observe two beneficial effects: data augmentation improves the strength of the mark, and transferring the mark to larger and more realistic architectures makes its detection more reliable. These two observations suggest that our radioactive method is appropriate for real use cases.

Limitation in an adversarial scenario.

We assume that at training time there is no special procedure to account for the radioactive data; training is conducted as if it were vanilla data. In particular, a subspace analysis would likely reveal the marking direction. This adversarial scenario is akin to that considered in the watermarking literature, where strategies have been developed to reduce the detectability of the carrier. Our current proposal is therefore restricted to a proof of concept: we can mark a model through training, with a mark that is resilient only to blind attacks such as changes of architecture or training procedure. We hope that follow-up works will address the more challenging scenario under Kerckhoffs assumptions (Kerckhoffs, 1883).
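To make the subspace-analysis threat concrete, here is a minimal sketch in the spirit of spectral signatures (Tran et al., 2018); it is our illustration of the attack, not a procedure from this paper. A carrier planted in many examples of a class inflates the variance along one direction, so the top principal direction of that class's features tends to align with it.

```python
import numpy as np

def spectral_direction(features):
    """Top principal direction of a class's (n, d) feature matrix.

    If a fraction of the examples were shifted along a common carrier,
    the extra variance makes this direction align with the carrier.
    """
    X = features - features.mean(axis=0, keepdims=True)
    # Top right-singular vector of the centered feature matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0]
```

An adversary recovering such a direction could then try to remove or avoid it, which motivates the watermarking-style countermeasures discussed above.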

6 Conclusion

The method proposed in this paper, radioactive data, is a way to verify whether some data was used to train a model, with statistical guarantees.

We have shown that such radioactive contamination is effective on large-scale computer vision tasks such as classification on Imagenet with modern architectures (Resnet-18 and Resnet-50), even when only a very small fraction of the training data is radioactive. Although it is not the core topic of our paper, our method incidentally offers a way to watermark images in the classical sense (Cayre et al., 2005).


References

  • M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang (2016) Deep learning with differential privacy. In SIGSAC.
  • Y. Adi, C. Baum, M. Cissé, B. Pinkas, and J. Keshet (2018) Turning your weakness into a strength: watermarking deep neural networks by backdooring. In USENIX Security Symposium.
  • B. Biggio, B. Nelson, and P. Laskov (2012) Poisoning attacks against support vector machines. In ICML.
  • N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In IEEE Symp. Security and Privacy.
  • N. Carlini, C. Liu, J. Kos, Ú. Erlingsson, and D. Song (2018) The secret sharer: measuring unintended neural network memorization & extracting secrets. arXiv preprint arXiv:1802.08232.
  • M. Caron, P. Bojanowski, J. Mairal, and A. Joulin (2019) Unsupervised pre-training of image features on non-curated data. In ICCV.
  • F. Cayre, C. Fontaine, and T. Furon (2005) Watermarking security: theory and practice. IEEE Transactions on Signal Processing.
  • X. Chen, C. Liu, B. Li, K. Lu, and D. Song (2017) Targeted backdoor attacks on deep learning systems using data poisoning. CoRR abs/1712.05526.
  • I. J. Cox, M. L. Miller, J. A. Bloom, and C. Honsinger (2002) Digital watermarking. Vol. 53, Springer.
  • J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR.
  • C. Dwork, F. McSherry, K. Nissim, and A. Smith (2006) Calibrating noise to sensitivity in private data analysis. In TCC.
  • R. A. Fisher (1925) Statistical methods for research workers.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In ICLR.
  • P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He (2017) Accurate, large minibatch SGD: training Imagenet in 1 hour. arXiv preprint arXiv:1706.02677.
  • T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg (2017) BadNets: evaluating backdooring attacks on deep neural networks. In Machine Learning and Computer Security Workshop.
  • K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In ICCV.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR.
  • G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  • A. Iscen, T. Furon, V. Gripon, M. Rabbat, and H. Jégou (2017) Memory vectors for similarity search in high-dimensional spaces. IEEE Transactions on Big Data.
  • H. Jégou, M. Douze, and C. Schmid (2008) Hamming embedding and weak geometric consistency for large scale image search. In ECCV.
  • A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache (2016) Learning visual features from large weakly supervised data. In ECCV.
  • A. Kerckhoffs (1883) La cryptographie militaire [Military cryptography]. Journal des sciences militaires [Military Science Journal].
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In NeurIPS, pp. 1097–1105.
  • T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In ECCV.
  • D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten (2018) Exploring the limits of weakly supervised pretraining. In ECCV.
  • N. Papernot, S. Song, I. Mironov, A. Raghunathan, K. Talwar, and Ú. Erlingsson (2018) Scalable private learning with PATE. In ICLR.
  • A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch.
  • O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) Imagenet large scale visual recognition challenge. IJCV.
  • A. Sablayrolles, M. Douze, Y. Ollivier, C. Schmid, and H. Jégou (2019) White-box vs black-box: Bayes optimal strategies for membership inference. In ICML.
  • A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein (2018) Poison frogs! Targeted clean-label poisoning attacks on neural networks. In NeurIPS.
  • R. Shokri, M. Stronati, and V. Shmatikov (2017) Membership inference attacks against machine learning models. IEEE Symp. Security and Privacy.
  • J. Steinhardt, P. W. W. Koh, and P. S. Liang (2017) Certified defenses for data poisoning attacks. In NeurIPS.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In ICLR.
  • B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L. Li (2015) YFCC100M: the new data in multimedia research. arXiv preprint arXiv:1503.01817.
  • N. Tishby, F. C. Pereira, and W. Bialek (2000) The information bottleneck method. arXiv preprint physics/0004057.
  • A. Torralba, A. A. Efros, et al. (2011) Unbiased look at dataset bias. In CVPR.
  • B. Tran, J. Li, and A. Madry (2018) Spectral signatures in backdoor attacks. In NeurIPS.
  • V. Vukotić, V. Chappelier, and T. Furon (2018) Are deep neural networks good for blind image watermarking? In Workshop on Information Forensics and Security (WIFS).
  • S. Yeom, I. Giacomelli, M. Fredrikson, and S. Jha (2018) Privacy risk in machine learning: analyzing the connection to overfitting. In CSF.
  • J. Zhu, R. Kaplan, J. Johnson, and L. Fei-Fei (2018) HiDDeN: hiding data with deep networks. In ECCV.

Appendix A Distillation

% radioactive 1 2 5 10 20
Table 5: p-value for the detection of radioactive data usage. A Resnet-18 is trained on Imagenet from scratch, and a percentage of the training data is radioactive. This marked network is distilled into another network, on which we test radioactivity. When a sufficiently large fraction of the data is radioactive, we are able to detect the use of this data with strong confidence.

Given a marked Resnet-18 to which we only have black-box access, we use distillation (Hinton et al., 2015) to train a second network, on which we perform the radioactivity test. Table 5 shows the results of this test on distilled networks: when a sufficiently large fraction of the original training data is radioactive, the radioactivity propagates through distillation with statistical significance.
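The distillation objective itself only requires the teacher's output probabilities, which is why black-box access to the softmax suffices. Below is a minimal NumPy sketch of the soft-target loss of Hinton et al. (2015); the function names and the choice of temperature are ours, and a real setup would of course minimize this loss with SGD over a student network.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened softmax.

    Only the teacher's output probabilities are used, so black-box access
    to the (full) softmax is enough to train the student.
    """
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return -np.mean(np.sum(p_teacher * log_p_student, axis=-1)) * T * T
```

Because the student is trained to match the teacher's outputs, any mark the teacher has learned in its decision function can propagate to the student, which is what the radioactivity test on the distilled network detects.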