Model Watermarking for Image Processing Networks

02/25/2020
by   Jie Zhang, et al.

Deep learning has achieved tremendous success in numerous industrial applications. Because training a good model often requires massive high-quality data and computational resources, learned models frequently carry significant business value. However, these valuable deep models are exposed to a high risk of infringement. For example, if an attacker has full knowledge of a target model, including its network structure and weights, the model can easily be fine-tuned on new datasets. Even if the attacker can only access the target model's output, he/she can still train a similar surrogate model by generating a large set of input-output training pairs. How to protect the intellectual property of deep models is therefore an important but seriously under-researched problem, and the few recent attempts address classification networks only. In this paper, we propose the first model watermarking framework for protecting image processing models. To achieve this goal, we leverage the spatial invisible watermarking mechanism. Specifically, given a black-box target model, a unified and invisible watermark is hidden in its outputs, which can be regarded as a special task-agnostic barrier. In this way, when the attacker trains a surrogate model on the target model's input-output pairs, the hidden watermark is learned by the surrogate and can be extracted from its outputs afterward. To support watermarks ranging from binary bit strings to high-resolution images, both traditional and deep spatial invisible watermarking mechanisms are considered. Experiments demonstrate the robustness of the proposed watermarking mechanism, which resists surrogate models trained with different network structures and objective functions. Beyond deep models, the proposed method also extends readily to protecting data and traditional image processing algorithms.
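To make the idea concrete, here is a minimal sketch of a *traditional* spatial invisible watermark of the kind the abstract alludes to, using least-significant-bit (LSB) embedding in NumPy. This is an illustrative assumption on our part, not the paper's actual embedding network: the function names and the LSB scheme are hypothetical stand-ins for the idea that every output image carries the same hidden, visually imperceptible watermark, which an owner can later extract from a surrogate model's outputs.

```python
import numpy as np

def embed_watermark(output_image: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """Hide a fixed binary watermark in the least significant bits of a
    uint8 output image (a classic spatial invisible scheme; hypothetical
    stand-in for the paper's embedding method)."""
    flat = output_image.flatten()
    wm = np.resize(watermark_bits, flat.shape)          # tile the bits over all pixels
    marked = (flat & 0xFE) | wm                         # clear LSB, write watermark bit
    return marked.reshape(output_image.shape).astype(np.uint8)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits of the hidden watermark from the LSB plane."""
    return image.flatten()[:n_bits] & 1
```

Because only the least significant bit of each pixel changes, the watermarked output differs from the original by at most one intensity level per pixel, so it stays visually indistinguishable while remaining machine-extractable.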


