Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection

09/27/2022
by Yiming Li, et al.

Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the rapid development of DNNs has largely benefited from high-quality (open-sourced) datasets, based on which researchers and developers can easily evaluate and improve their learning methods. Since data collection is usually time-consuming or even expensive, how to protect dataset copyrights is of great significance and worth further exploration. In this paper, we revisit dataset ownership verification. We find that existing verification methods introduce new security risks in DNNs trained on the protected dataset, due to the targeted nature of poison-only backdoor watermarks. To alleviate this problem, in this work, we explore an untargeted backdoor watermarking scheme, where the abnormal model behaviors are not deterministic. Specifically, we introduce two dispersibilities and prove their correlation, based on which we design the untargeted backdoor watermark under both poisoned-label and clean-label settings. We also discuss how to use the proposed untargeted backdoor watermark for dataset ownership verification. Experiments on benchmark datasets verify the effectiveness of our methods and their resistance to existing backdoor defenses. Our codes are available at <https://github.com/THUYimingLi/Untargeted_Backdoor_Watermark>.
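To make the targeted-vs-untargeted distinction concrete, below is a minimal sketch of the relabeling step in a poison-only backdoor watermark. This is not the authors' actual method (their scheme involves dispersibility-based design and trigger injection); the function name, parameters, and the simple random-wrong-label rule are illustrative assumptions. A targeted watermark maps every poisoned sample to one fixed target class, which is what creates a deterministic, exploitable backdoor; an untargeted watermark assigns each poisoned sample an arbitrary incorrect label, so the triggered behavior is abnormal but not predictable.

```python
import random

def watermark_labels(labels, num_classes, poison_rate,
                     target_class=None, seed=0):
    """Relabel a fraction of samples to embed a backdoor watermark.

    target_class set   -> targeted watermark: every poisoned sample
                          receives the same fixed label (deterministic).
    target_class None  -> untargeted watermark: each poisoned sample
                          receives a random *wrong* label (non-deterministic
                          misbehavior, harder to exploit as a backdoor).
    Returns the new label list and the sorted poisoned indices.
    """
    rng = random.Random(seed)
    n = len(labels)
    poison_idx = set(rng.sample(range(n), int(n * poison_rate)))
    new_labels = []
    for i, y in enumerate(labels):
        if i not in poison_idx:
            new_labels.append(y)                      # clean sample: keep label
        elif target_class is not None:
            new_labels.append(target_class)           # targeted: fixed mapping
        else:
            # untargeted: any class except the true one
            new_labels.append(rng.choice(
                [c for c in range(num_classes) if c != y]))
    return new_labels, sorted(poison_idx)
```

In an actual watermarking pipeline, the same poisoned indices would also receive a trigger pattern in the input; at verification time, the dataset owner checks whether a suspicious model misclassifies triggered samples.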


Related research

- 02/01/2023: BackdoorBox: A Python Toolbox for Backdoor Learning
- 08/04/2022: MOVE: Effective and Harmless Ownership Verification via Embedded External Features
- 09/16/2021: Protect the Intellectual Property of Dataset against Unauthorized Use
- 03/23/2023: Backdoor Defense via Adaptively Splitting Poisoned Dataset
- 07/17/2023: Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound
- 06/25/2022: BackdoorBench: A Comprehensive Benchmark of Backdoor Learning
- 12/01/2022: Noisy Label Detection for Speaker Recognition
