Solving the Capsulation Attack against Backdoor-based Deep Neural Network Watermarks by Reversing Triggers

08/30/2022
by Fangqi Li, et al.

Backdoor-based watermarking schemes were proposed to protect the intellectual property of artificial intelligence models, especially deep neural networks, under the black-box setting. Unlike ordinary backdoors, backdoor-based watermarks must digitally encode the owner's identity, which places extra requirements on the trigger generation and verification procedures. These requirements introduce additional security risks once the watermarking scheme has been published as a forensics tool or the owner's evidence has been eavesdropped on. This paper proposes the capsulation attack, an efficient method that can invalidate most established backdoor-based watermarking schemes without sacrificing the pirated model's functionality. By encapsulating the deep neural network with a rule-based or Bayes filter, an adversary can block ownership probing and reject ownership verification. We propose a metric, CAScore, to measure a backdoor-based watermarking scheme's security against the capsulation attack. This paper also proposes a new backdoor-based deep neural network watermarking scheme that is secure against the capsulation attack, obtained by reversing the encoding process and randomizing the exposure of triggers.
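
To make the capsulation idea concrete, the following is a minimal sketch of how an adversary might wrap a pirated model with an input filter so that trigger-like verification queries never reach the watermarked network. The class name CapsulatedModel, the looks_like_trigger rule (a simple channel-statistics check against assumed training-data moments), and the mean/std/threshold parameters are hypothetical illustrations; the paper's actual rule-based or Bayes filter may be constructed differently.

```python
# Hedged sketch of a capsulation-style wrapper (not the paper's exact filter).
# Assumes image inputs of shape (N, C, H, W) and a classifier returning logits.
import torch
import torch.nn as nn


class CapsulatedModel(nn.Module):
    def __init__(self, base_model: nn.Module, mean: torch.Tensor,
                 std: torch.Tensor, num_classes: int, threshold: float = 3.0):
        super().__init__()
        self.base_model = base_model          # pirated, watermarked network
        self.register_buffer("mean", mean)    # assumed training-set channel means, shape (C,)
        self.register_buffer("std", std)      # assumed training-set channel stds, shape (C,)
        self.num_classes = num_classes
        self.threshold = threshold            # z-score cut-off for "suspicious" inputs

    def looks_like_trigger(self, x: torch.Tensor) -> torch.Tensor:
        # Crude rule-based filter: flag inputs whose per-channel statistics
        # deviate strongly from the assumed data distribution.
        z = (x.mean(dim=(2, 3)) - self.mean) / self.std
        return z.abs().max(dim=1).values > self.threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.base_model(x)
        suspicious = self.looks_like_trigger(x)
        if suspicious.any():
            # Answer suspicious queries with random logits, so the owner's
            # trigger set no longer elicits the watermark labels.
            noise = torch.randn(int(suspicious.sum()), self.num_classes,
                                device=x.device)
            logits = logits.clone()
            logits[suspicious] = noise
        return logits
```

Under this sketch, in-distribution queries pass through unchanged, preserving the pirated model's functionality, while the owner's trigger queries receive random predictions and the black-box ownership verification fails, which is the attack behavior the abstract describes.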

