Convolutional neural networks (CNNs) have achieved super-human performance in various computer vision and machine learning applications with the help of large-scale supervised datasets. However, data curation is expensive in terms of time, funding, and the effort required to control the quality of supervision. To ease this cost, one can outsource the collection of a dataset to multiple parties in exchange for reasonable rewards. However, there is a risk that the shared data could be stolen and re-shared without any reward to those parties. To deter such cases, the data owner should be able to claim ownership of the data when it is stolen.
One possible approach is to use cryptographic methods such as homomorphic encryption and multi-party computation (MPC). Homomorphic encryption allows computation on encrypted data; when decrypted, the results match those of the same operations performed on the plain data. MPC jointly computes a neural network over multiple inputs while keeping each input private. However, computation on encrypted data takes much longer than on plain data. Moreover, once the content of the data is revealed, these schemes cannot protect ownership, since they are designed only to protect privacy before sharing. Thus, they cannot prevent fake ownership claims.
Another approach is to conceal secrets (messages) within the data before sharing it, e.g., via steganography or invisible watermarking. Unlike the cryptographic solutions, these methods introduce no extra computational cost because all computation happens on the plain data. Nevertheless, the secrets are easily destroyed by common data modifications, e.g., cropping, rotating, or resizing, which an attacker can readily exploit. Since such modifications are commonly used for data augmentation when training CNNs, if an attacker modifies the data before claiming fake ownership, it becomes hard for the owner to prove that the data is hers.
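To make this fragility concrete, the toy sketch below (our own illustration; the pixel values, helper names, and 1-D "image" are hypothetical) hides a message in the least significant bits of pixel values and shows that a routine resize scrambles it:

```python
# Toy demonstration: hide message bits in the least significant bits (LSBs)
# of pixel values, then show that a common training-time modification
# (downscaling) destroys the hidden message.

def embed_lsb(pixels, bits):
    """Write one message bit into the LSB of each leading pixel value."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b
    return stego

def extract_lsb(pixels, n):
    """Read the first n LSBs back out."""
    return [p & 1 for p in pixels[:n]]

def downscale_2x(pixels):
    """Resize by averaging adjacent pixel pairs, as data augmentation might."""
    return [(pixels[i] + pixels[i + 1]) // 2 for i in range(0, len(pixels) - 1, 2)]

message = [1, 0, 1, 1]
image = [120, 131, 98, 205, 77, 64, 180, 33]

stego = embed_lsb(image, message)
print(extract_lsb(stego, 4))                # the hidden bits survive intact
print(extract_lsb(downscale_2x(stego), 4))  # after resizing, the bits are scrambled
```

The same effect occurs with cropping or rotation: any operation that re-computes pixel values discards the low-order bits that carry the secret.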
We consider visible watermarking to protect the ownership of shared data, as it keeps the ownership information readily available for a claim. Applying this technique to CNN training datasets has been challenging because networks trained on watermarked data can suffer accuracy losses, and an attacker can use sophisticated methods to remove watermarks from the data. Our work first studies how the accuracy drop responds to changes in the visibility and randomness of the watermarks (prior work shows that randomly perturbing the watermark images embedded in data makes the watermarks hard for an attacker to remove). We then propose DeepStamp, which, given data and a watermark image, synthesizes watermarks that, once blended into the data, minimize the accuracy drop and are hard to remove. We leverage a generative network, GMAN, to achieve randomness and implement the necessary conditions as discriminators. On CIFAR10, we show that our watermarks minimize accuracy loss and that, once synthesized, they can be used to train multiple CNNs.
II Threat Model
We consider an adversary who claims ownership of datasets produced or collected by others, such as industrial partners or public sources. For instance, suppose that Alice wants to provide a data collection to Bob, who wants to use it as a subset of his training data for CNNs. Alice still wants to be able to claim ownership of the shared data, in case Bob turns malicious and profits from re-sharing or selling it to other parties. Alice should also be able to claim ownership even when Bob has modified the data.
We aim to learn transformations of a given watermark for each data sample that: (1) minimize the accuracy drop of networks trained on the watermarked data, (2) make the watermarks embedded in each sample hard for an attacker to remove, and (3) keep the watermarks clearly perceptible to human eyes. We illustrate the overview of our framework in Fig. 1. First, since prior work shows that random perturbation of watermarks embedded in data makes them robust to removal, we learn a random perturbation for each data sample automatically during training by employing a generative network, GMAN, as the watermarking network. To minimize the accuracy loss, we utilize a pre-trained CNN model. We use an auto-encoder and a discriminator to enforce that the transformed watermarks remain visually similar to the original watermark to a certain degree.
In training, DeepStamp minimizes:
- the task loss, which computes the difference between the labels inferred for the clean data and for the same data with synthesized watermarks;
- the reconstruction loss from the auto-encoder, between the original watermark and the synthesized one;
- the discriminator loss (binary cross-entropy), which separates the original watermarked data from the data with synthesized watermarks.
Thus, the entire loss that we minimize is the sum of the three losses above.
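A minimal sketch of how the three terms could be combined follows (the function names, the exact loss forms, and the lambda weights are our own illustrative assumptions, not the paper's formulation):

```python
import math

def task_loss(clean_logits, wm_logits):
    """Penalize disagreement between predictions on clean and watermarked data
    (here, a simple sum of squared differences over the logits)."""
    return sum((c - w) ** 2 for c, w in zip(clean_logits, wm_logits))

def autoencoder_loss(original_wm, synthesized_wm):
    """Pixel-wise reconstruction error between the original watermark
    and the synthesized one."""
    return sum((o - s) ** 2 for o, s in zip(original_wm, synthesized_wm)) / len(original_wm)

def discriminator_loss(p_real):
    """Binary cross-entropy term for the discriminator's belief that a sample
    carries the original (rather than a synthesized) watermark."""
    return -math.log(max(p_real, 1e-12))

def total_loss(clean_logits, wm_logits, orig_wm, syn_wm, p_real,
               lam_task=1.0, lam_ae=1.0, lam_disc=1.0):
    """Weighted sum of the three terms; the lambda weights are assumptions."""
    return (lam_task * task_loss(clean_logits, wm_logits)
            + lam_ae * autoencoder_loss(orig_wm, syn_wm)
            + lam_disc * discriminator_loss(p_real))
```

In a real training loop these scalars would be tensor-valued and differentiated through the watermarking network; the sketch only fixes the structure of the objective.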
In stamping, DeepStamp synthesizes the watermarked data for given data and a watermark image such that the watermark is visible, hard to remove, and causes minimal accuracy drop. Alice can now share the watermarked data with Bob, and Bob can train his network on it. Alice need not worry about the ownership issue: the watermark is visible, and it is hard for anyone else to remove because each instance has been randomly perturbed. Once trained, the network achieves similar accuracy on both the clean and the watermarked data.
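The blending step itself can be sketched as conventional per-pixel alpha blending with a global blending factor (variable names and values below are our own; DeepStamp's synthesized, perturbed watermark would take the place of the fixed one here):

```python
def blend(image, watermark, wm_alpha, factor=1.0):
    """Alpha-blend a watermark onto an image.

    image, watermark: flat lists of pixel intensities in [0, 255]
    wm_alpha: per-pixel watermark opacity in [0, 1]
    factor: global blending strength (1.0 = fully visible watermark)
    """
    out = []
    for img_px, wm_px, a in zip(image, watermark, wm_alpha):
        a = a * factor
        out.append(round((1 - a) * img_px + a * wm_px))
    return out

image = [200, 180, 160, 140]
watermark = [0, 0, 0, 0]      # a black watermark stamp
alpha = [0.0, 0.5, 0.5, 0.0]  # opaque only where the watermark appears

print(blend(image, watermark, alpha))  # -> [200, 90, 80, 140]
```

A stronger blending factor makes the watermark more visible but, as the results below indicate, also tends to increase the accuracy drop of networks trained on the blended data.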
Experimental Setup. We use CIFAR10, which consists of 32x32-pixel, three-channel images labeled into ten classes, with 50,000 training and 10,000 validation images. We implement the watermarking network with four convolutional layers; its input is the concatenation of the three-channel data and the four-channel watermark (seven channels in total), and its output is a four-channel watermark. The auto-encoder is composed of three transposed convolutional layers, and the discriminator has five convolutional layers. For the network of interest, we use three popular CNN architectures: AlexNet, VGG16, and ResNet50.
Results. We summarize the results in Table I. We observe the following:
Visible watermarking causes accuracy drops in all cases; however, the higher the network capacity (the number of parameters in a network), the lower the accuracy drop.
When we use a strong blending factor (1.0), the accuracy drop increases. However, the drops are not significant with a high-capacity network (ResNet50).
Using AlexNet as the target network, our data with synthesized watermarks (DeepStamp) shows a smaller accuracy drop than the statically watermarked data (S). However, we do not improve over the watermarked data with displacements (D).
When training VGG16 or ResNet50 on our data synthesized using AlexNet, we observe accuracy drops of 1.17% and 0.74% compared to the static (S) method. Since these drops are similar to the case in which the same network is used for synthesis, once synthesized, the data can be used to train multiple networks.
We propose a watermarking framework, DeepStamp, that embeds a visible watermark into the images of interest with a small accuracy drop while making its removal difficult, enabling easy ownership claims over the watermarked images in the data-sharing scenario. In experiments on the CIFAR10 dataset, we show that DeepStamp learns transformations of a watermark to be embedded in other images with negligible accuracy drop while making its removal from the images non-trivial.
This research is partially supported by the Department of Defense and the “Global University Project” grant funded by the Gwangju Institute of Science and Technology (GIST) in 2018.
-  G. W. Braudaway, K. A. Magerlein, and F. C. Mintzer, “Protecting publicly available images with a visible image watermark,” in Optical Security and Counterfeit Deterrence Techniques, R. L. van Renesse, Ed., vol. 2659, Mar. 1996, pp. 126–133.
-  T. Dekel, M. Rubinstein, C. Liu, and W. T. Freeman, “On the effectiveness of visible watermarks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2146–2154.
-  C. Gentry and D. Boneh, A fully homomorphic encryption scheme. Stanford University Stanford, 2009, vol. 20, no. 09.
-  F. Y. Shih, Digital watermarking and steganography: fundamentals and techniques. CRC press, 2017.
-  R. Shokri and V. Shmatikov, “Privacy-preserving deep learning,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 2015, pp. 1310–1321.