Poster: On the Feasibility of Training Neural Networks with Visibly Watermarked Dataset

02/28/2019, by Sanghyun Hong, et al.

As the need to share data for machine learning grows, so does the data owners' interest in claiming ownership. Visible watermarking has been an effective way to claim ownership of visual data, yet visibly watermarked images are rarely used as a primary source for training visual recognition models, because the watermark destroys visual information and an attacker may remove it. To make watermarked images better suited for machine learning with less risk of removal, we propose DeepStamp, a watermarking framework that, given a watermark image and a trained network for image classification, learns to synthesize watermarked images that are human-perceptible, robust to removal, and usable as training images for classification with minimal accuracy loss. To achieve this, we employ a generative multi-adversarial network (GMAN). In experiments with CIFAR10, we show that DeepStamp learns to transform the watermark for embedding in each image and that the watermarked images can be used to train networks.


I. Introduction

Convolutional neural networks (CNNs) have achieved super-human performance in various computer vision and machine learning applications with the help of large-scale supervised datasets. However, data curation is expensive: it takes time, money, and considerable effort to control the quality of supervision. To ease this cost, one can outsource the collection of a dataset to multiple parties in exchange for reasonable rewards. However, there is a risk that the shared data is stolen and re-shared without any reward to those parties. To prevent this, the owner should be able to claim ownership of the data when it is stolen.

One possible approach is to use cryptographic methods such as homomorphic encryption [3] and multi-party computation (MPC) [5]. Homomorphic encryption allows computation on encrypted images whose decrypted results match the results of the same operations performed on the plain data. MPC jointly computes a neural network over multiple inputs while keeping each input private. However, computation on encrypted data takes far longer than on plain data. Moreover, once the content of the data is revealed, these schemes cannot protect ownership: they are designed only to protect privacy, so they cannot prevent false ownership claims.

Another approach is to conceal secrets (messages) within the data before sharing it, such as steganography or invisible watermarking [4]. Unlike the cryptographic solutions, these methods introduce no extra computational cost because they compute on the plain data. Nevertheless, the secrets are easily destroyed by common data modifications, e.g., cropping, rotating, or resizing, which an attacker can readily exploit. Given that such modifications are routinely applied as data augmentation when training CNNs, if an attacker modifies the data before claiming fake ownership, the owner can hardly prove whether the data is hers.

We consider visible watermarking [1] to protect the ownership of shared data, so that the information needed to claim ownership is readily at hand. Applying this technique to datasets for training CNNs has been challenging: networks trained on watermarked data can suffer accuracy losses, and an attacker can use sophisticated methods [2] to remove the watermarks. Our work first studies how accuracy drops as the visibility and randomness of the watermarks change ([2] shows that randomly perturbing the watermark embedded in each image makes it hard for an attacker to remove; a sketch of such perturbations follows below). We then propose DeepStamp, which, given data and a watermark image, synthesizes watermarks that, once blended into the data, minimize the accuracy drop and are hard to remove. We leverage a generative network, GMAN, to achieve randomness, and implement the necessary conditions as discriminators. With CIFAR10, we show that our watermarks minimize accuracy loss and that, once synthesized, the watermarked data can be used to train multiple CNNs.
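To make the idea concrete, below is a minimal NumPy sketch, our illustration rather than code from [2] or from this paper, of the two hand-crafted per-image perturbations that Table I later labels opacity variation (O) and displacement (D): each image receives a slightly different blending factor and watermark position. The shapes and value ranges are assumptions.

```python
# Illustrative sketch of per-image watermark perturbations (opacity and
# displacement) in the spirit of [2]; not the authors' implementation.
# Assumes x is an HxWx3 image in [0, 1] and w an hxw'x4 RGBA watermark.
import numpy as np

def perturb_and_blend(x, w, rng, base_alpha=0.5):
    H, W, _ = x.shape
    h, wd, _ = w.shape
    # Opacity variation: jitter the blending factor per image.
    alpha = base_alpha * rng.uniform(0.8, 1.2)
    # Displacement: place the watermark at a random offset per image.
    top = rng.integers(0, H - h + 1)
    left = rng.integers(0, W - wd + 1)
    out = x.copy()
    rgb, a = w[..., :3], w[..., 3:]      # split the RGBA watermark
    region = out[top:top + h, left:left + wd]
    # Standard visible-watermark alpha blending on the covered region.
    out[top:top + h, left:left + wd] = (1 - alpha * a) * region + alpha * a * rgb
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))              # a CIFAR10-sized image
w = rng.random((16, 16, 4))              # a toy RGBA watermark
x_w = perturb_and_blend(x, w, rng)
```

Because the offset and opacity differ per image, an attacker cannot estimate a single watermark template and subtract it from every image, which is what makes removal hard.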

II. Threat Model

We consider an adversary who claims ownership of datasets produced or collected by others, such as industrial partners or public sources. For instance, suppose that Alice provides a data collection to Bob, who wants to use it as a subset of his training data for CNNs. Alice still wants to be able to claim ownership of the shared data, in case Bob turns malicious and profits from re-sharing or selling it to other parties. Alice should also be able to claim ownership even after Bob modifies the data.

III. DeepStamp Framework

Fig. 1: Overview of Our Watermarking Framework. We employ a Generative Multi-Adversarial Network (GMAN) to train the watermarking network, which returns a task-aware watermark image for each corresponding original image. We embed the synthesized watermark into the image and train the network of our interest on the resulting watermarked images. Once trained, that network infers the correct labels for both the original and the watermarked images.

We aim to learn transformations of a given watermark for each image that: (1) minimize the accuracy drop of networks trained on the watermarked data, (2) make the watermark embedded in each image hard for an attacker to remove, and (3) keep the watermark clearly perceptible to the human eye. Fig. 1 gives an overview of our framework. First, since [2] shows that randomly perturbing the watermark embedded in each image makes it robust to removal, we learn a random perturbation for each image automatically during training by employing a generative network, GMAN, which drives a watermarking network. To minimize the accuracy loss, we utilize a pre-trained CNN model. We use an auto-encoder and a discriminator to enforce that the transformed watermarks remain visually similar to the original watermark.

In training, DeepStamp minimizes:


  • $\mathcal{L}_{task}$: the task loss, which computes the difference between the labels inferred for the clean data and for the same data with synthesized watermarks.

  • $\mathcal{L}_{ae}$: the auto-encoder loss between the original watermark and the synthesized one.

  • $\mathcal{L}_{d}$: the discriminator loss (binary cross-entropy) that separates data carrying the original watermark from data carrying synthesized watermarks.

Thus, the entire loss that we minimize is:

$$\mathcal{L} = \mathcal{L}_{task} + \lambda_{ae}\,\mathcal{L}_{ae} + \lambda_{d}\,\mathcal{L}_{d} \quad (1)$$
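The following PyTorch sketch shows how the three terms can be combined in one generator-side training step. It is our illustration under the notation above: the networks W (watermarking), AE (auto-encoder), D (discriminator), and F (pre-trained classifier), the blending factor alpha, and the weights lambda_ae and lambda_d are placeholders, not the authors' released code, and the discriminator's own update is omitted.

```python
# Sketch of one DeepStamp training step; all names are our placeholders.
import torch
import torch.nn.functional as F_nn

def train_step(W, AE, D, F, x, y, w, alpha, opt, lambda_ae=1.0, lambda_d=1.0):
    opt.zero_grad()
    # W sees the image and the watermark stacked channel-wise (3 + 4 = 7).
    w_batch = w.expand(x.size(0), -1, -1, -1)          # (B, 4, 32, 32)
    w_syn = W(torch.cat([x, w_batch], dim=1))          # (B, 4, 32, 32)
    a = w_syn[:, 3:]                                   # learned alpha map
    x_syn = (1 - alpha * a) * x + alpha * a * w_syn[:, :3]
    # L_task: the pre-trained classifier should still predict the labels.
    l_task = F_nn.cross_entropy(F(x_syn), y)
    # L_ae: the synthesized watermark should stay close to the original.
    l_ae = F_nn.mse_loss(AE(w_syn), w_batch)
    # L_d: fool the discriminator into taking synthesized-watermark data
    # for data carrying the original watermark.
    real = torch.ones(x.size(0), 1, device=x.device)
    l_d = F_nn.binary_cross_entropy_with_logits(D(x_syn), real)
    loss = l_task + lambda_ae * l_ae + lambda_d * l_d  # Eq. (1)
    loss.backward()
    opt.step()
    return loss.item()
```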

In stamping, DeepStamp synthesizes, for a given image and watermark image, a watermarked image whose watermark is visible and hard to remove, with minimal accuracy drop. Alice can now share the watermarked data with Bob, and Bob can train his network on it. Alice need not worry about ownership: the watermark is visible, and it is hard for anyone to remove because the embedded watermark of each image has been randomly perturbed by the watermarking network. Once trained, Bob's network achieves similar accuracy on both the clean and the watermarked data. A sketch of this stamping step follows.
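Under the same assumed notation (trained watermarking network W, RGBA watermark w, blending factor alpha), stamping simply runs W once per image and alpha-blends the result:

```python
# Illustrative stamping step: watermark each image once with the trained W;
# Alice shares only the stamped outputs.
import torch

def stamp_dataset(W, images, w, alpha):
    stamped = []
    with torch.no_grad():
        for x in images:                                  # x: (3, 32, 32)
            inp = torch.cat([x, w], dim=0).unsqueeze(0)   # (1, 7, 32, 32)
            w_syn = W(inp)[0]                             # (4, 32, 32)
            a = w_syn[3:]                                 # per-pixel alpha map
            stamped.append((1 - alpha * a) * x + alpha * a * w_syn[:3])
    return stamped
```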

IV. Evaluation

TABLE I: Accuracy (%) of Models Trained with Watermarked Data. "Baseline" is accuracy on clean CIFAR10; S, O, and D (the Static method, Opacity variation, and Displacement in [2]) and DeepStamp are accuracies on watermarked CIFAR10 at the given blend factor.

Network  | Baseline | Blend | S     | O     | D     | DeepStamp
---------|----------|-------|-------|-------|-------|----------
AlexNet  | 82.74    | 0.5   | 78.50 | 79.10 | 80.13 | 79.59
AlexNet  |          | 1.0   | 73.51 | 73.15 | 73.62 | 74.09
VGG16    | 94.00    | 0.5   | 92.71 | 92.92 | 92.58 | 92.74
VGG16    |          | 1.0   | 92.57 | 92.83 | 92.61 | -
ResNet50 | 95.37    | 0.5   | 94.88 | 94.71 | 94.92 | 94.18
ResNet50 |          | 1.0   | 94.67 | 93.64 | 93.66 | -

Experimental Setup. We use CIFAR10, which consists of 32x32-pixel, three-channel images labeled into ten classes, with 50,000 training and 10,000 validation images. We implement the watermarking network with four convolutional layers; its input is the concatenation of the three-channel image and the four-channel watermark (seven channels in total), and its output is a four-channel watermark. The auto-encoder is composed of three transposed-convolutional layers, and the discriminator has five convolutional layers. For the network of our interest, we use three popular CNN architectures: AlexNet, VGG16, and ResNet50. A sketch of these components follows.
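For concreteness, the snippet below sketches the three components in PyTorch. Only the layer counts and input/output channel counts stated above come from the paper; the hidden widths, kernel sizes, strides, and activations are our assumptions.

```python
# Illustrative architectures matching the stated layer/channel counts only.
import torch.nn as nn

# W: four conv layers; input is the 3-channel image concatenated with the
# 4-channel RGBA watermark (7 channels), output a 4-channel watermark.
W = nn.Sequential(
    nn.Conv2d(7, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 4, 3, padding=1), nn.Sigmoid(),
)

# AE: three transposed-convolutional layers, spatial size preserved.
AE = nn.Sequential(
    nn.ConvTranspose2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 4, 3, padding=1), nn.Sigmoid(),
)

# D: five conv layers ending in a single real/fake logit per image.
D = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),    # 32 -> 16
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),   # 16 -> 8
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),  # 8 -> 4
    nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2), # 4 -> 2
    nn.Conv2d(128, 1, 2),                                           # 2 -> 1
    nn.Flatten(),                                                   # (B, 1)
)
```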

Results. We summarize the results in Table I. We observe the following:


  • Visible watermarking causes accuracy drops in all cases; however, the higher the network capacity (the number of parameters), the smaller the drop.

  • With a strong blending factor (1.0), the accuracy drop increases. However, the drops are not significant with a high-capacity network (ResNet50).

  • Using AlexNet as the target network, our data with synthesized watermarks (DeepStamp) incurs a smaller accuracy drop than the statically watermarked data (S). However, we do not outperform the watermarked data with displacements (D).

  • Training VGG16 or ResNet50 on data synthesized with AlexNet as the pre-trained model, we observe accuracy drops of 1.17% and 0.74% compared to the static (S) method. Since these drops are similar to the case in which the same network serves as the pre-trained model, the data, once synthesized, can be used to train multiple target networks.

V. Conclusion

We propose a watermarking framework, DeepStamp, that embeds a visible watermark into images of interest with little accuracy drop and high removal difficulty, so that ownership of the watermarked images can easily be claimed in data-sharing scenarios. In experiments with the CIFAR10 dataset, we show that DeepStamp learns transformations of a watermark for embedding into other images with negligible accuracy drop while making its removal from the images non-trivial.

Acknowledgment

This research is partially supported by the Department of Defense and the "Global University Project" grant funded by the Gwangju Institute of Science and Technology (GIST) in 2018.

References

  • [1] G. W. Braudaway, K. A. Magerlein, and F. C. Mintzer, "Protecting publicly available images with a visible image watermark," in Optical Security and Counterfeit Deterrence Techniques, R. L. van Renesse, Ed., vol. 2659, Mar. 1996, pp. 126–133.
  • [2] T. Dekel, M. Rubinstein, C. Liu, and W. T. Freeman, "On the effectiveness of visible watermarks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2146–2154.
  • [3] C. Gentry and D. Boneh, A Fully Homomorphic Encryption Scheme. Stanford University, 2009.
  • [4] F. Y. Shih, Digital Watermarking and Steganography: Fundamentals and Techniques. CRC Press, 2017.
  • [5] R. Shokri and V. Shmatikov, "Privacy-preserving deep learning," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 2015, pp. 1310–1321.