On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks

06/18/2019
by   Masoumeh Shafieinejad, et al.

Obtaining state-of-the-art performance from deep learning models imposes high costs on model creators, due to tedious data preparation and substantial processing requirements. To protect models from unauthorized redistribution, watermarking approaches have been introduced in recent years. The watermark allows the legitimate owner to detect copyright violations of their model. We investigate the robustness and reliability of state-of-the-art deep neural network watermarking schemes. We focus on backdoor-based watermarking and show that an adversary can remove the watermark entirely by relying only on public data, without any access to the model's sensitive information such as the training data set, the trigger set, or the model parameters. We further demonstrate that backdoor-based watermarking fails to keep the watermark hidden, by proposing an attack that detects whether a model contains a watermark.
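To make the scheme under attack concrete, the following is a minimal sketch of the core idea behind backdoor-based watermarking: the owner keeps a secret trigger set of inputs paired with deliberately "wrong" labels, overfits the model to it, and later claims ownership if a suspect model reproduces those secret labels far more often than chance. The dict-based "model", the function names, and the 0.9 threshold are illustrative assumptions, not the construction from the paper.

```python
def embed_watermark(model, trigger_set):
    """Embed a backdoor watermark: (over)fit the model so that each trigger
    input maps to its secret, deliberately mislabeled class.  A plain dict
    stands in for a trained network here (illustrative simplification)."""
    for trigger_input, secret_label in trigger_set:
        model[trigger_input] = secret_label
    return model


def verify_watermark(model, trigger_set, threshold=0.9):
    """Owner-side verification: flag the model as watermarked if its accuracy
    on the secret trigger set exceeds `threshold` -- far above what an
    independently trained model would achieve by chance."""
    hits = sum(1 for x, y in trigger_set if model.get(x) == y)
    return hits / len(trigger_set) >= threshold


# The owner's secret trigger set: inputs paired with secret labels.
trigger_set = [(f"trigger_input_{i}", "secret_label") for i in range(20)]

watermarked = embed_watermark({}, trigger_set)   # verifies as watermarked
clean = {}                                       # an unrelated model does not
```

The removal attack described in the abstract works against exactly this verification step: if an adversary can fine-tune or retrain the stolen model on public data so that its trigger-set accuracy drops back toward chance level, `verify_watermark` returns False and the ownership claim fails.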

