Customized Watermarking for Deep Neural Networks via Label Distribution Perturbation

08/10/2022
by   Tzu-Yun Chien, et al.

As the application value of machine learning grows, the intellectual property (IP) rights of deep neural networks (DNNs) are receiving increasing attention. Our analysis shows that most existing DNN watermarking methods can resist fine-tuning and pruning attacks, but not distillation attacks. To address this problem, we propose a new DNN watermarking framework with two components: Unified Soft-label Perturbation (USP), which pairs a detector with the model to be watermarked, and Customized Soft-label Perturbation (CSP), which embeds the watermark by adding a perturbation to the model's output probability distribution. Experimental results show that our methods resist all tested watermark removal attacks and are particularly effective against distillation attacks. We also achieve an excellent trade-off between the main task and watermarking, reaching 98.68% watermark accuracy while affecting the main-task accuracy by only 0.59%.
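The core idea of soft-label perturbation is to hide the watermark in the model's outputs themselves: a small, key-dependent perturbation is added to the output probability distribution, so the signal carries over into any student model distilled from those outputs, and a paired detector can later check for it. The sketch below is a minimal illustration of this idea, not the paper's implementation; the function names, the zero-mean key pattern, the perturbation strength `epsilon`, and the correlation-based detector are all assumptions introduced for illustration.

```python
import numpy as np

def embed_soft_label_perturbation(probs, key_pattern, epsilon=0.05):
    """Hypothetical sketch: add a small key-dependent perturbation to a
    probability vector and renormalize, so the watermark lives in the
    output distribution rather than only in the model weights."""
    perturbed = probs + epsilon * key_pattern
    perturbed = np.clip(perturbed, 1e-8, None)   # keep probabilities positive
    return perturbed / perturbed.sum()           # renormalize to sum to 1

def detect_watermark(probs, key_pattern, threshold=0.0):
    """Hypothetical detector: correlate the output distribution with the
    zero-mean key pattern; a positive correlation signals the watermark."""
    return float(np.dot(probs - probs.mean(), key_pattern)) > threshold

# Toy usage: a 10-class output and a random zero-mean key pattern.
rng = np.random.default_rng(0)
key = rng.standard_normal(10)
key -= key.mean()

clean = np.full(10, 0.1)                         # uniform 10-class distribution
marked = embed_soft_label_perturbation(clean, key)

# Should print True for the marked output and False for the clean one.
print(detect_watermark(marked, key), detect_watermark(clean, key))
```

In this toy version the detector is a simple correlation test; the actual USP/CSP framework trains a dedicated detector alongside the watermarked model and customizes the perturbation per user, which this sketch does not attempt to reproduce.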
