MOVE: Effective and Harmless Ownership Verification via Embedded External Features

08/04/2022
by Yiming Li et al.

Deep neural networks (DNNs) are now widely adopted across applications. Despite their commercial value, training a well-performing DNN is resource-intensive, so a well-trained model is valuable intellectual property for its owner. However, recent studies have revealed the threat of model stealing, where adversaries can obtain a functionally similar copy of a victim model even when they can only query it. In this paper, we propose MOVE, an effective and harmless model ownership verification method that defends against different types of model stealing simultaneously without introducing new security risks. In general, we conduct ownership verification by checking whether a suspicious model contains the knowledge of defender-specified external features. Specifically, we embed the external features by tampering with a few training samples via style transfer, and we then train a meta-classifier to determine whether a suspicious model was stolen from the victim. This approach is inspired by the observation that stolen models should contain the knowledge of features learned by the victim model. In particular, we develop MOVE under both white-box and black-box settings to provide comprehensive model protection. Extensive experiments on benchmark datasets verify the effectiveness of our method and its resistance to potential adaptive attacks. Code for reproducing the main experiments is available at <https://github.com/THUYimingLi/MOVE>.
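
To make the pipeline concrete, here is a minimal sketch of the white-box variant as described in the abstract. This is not the authors' implementation: `toy_style_transfer`, `TinyNet`, the tampering rate, and the linear meta-classifier are illustrative stand-ins (a real deployment would use a pre-trained style-transfer network and actually train the victim and benign models), and `gradient_signature` reflects one common way to expose a model's learned features to a meta-classifier.

```python
# Minimal, self-contained sketch of a MOVE-style pipeline (white-box case).
# All components are illustrative placeholders, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


def toy_style_transfer(x: torch.Tensor, strength: float = 0.5) -> torch.Tensor:
    """Stand-in for a pre-trained style-transfer model (e.g., AdaIN): blend
    each image with a fixed pattern so it carries an external feature."""
    ramp = torch.linspace(0.0, 1.0, x.shape[-1], device=x.device)
    return (1.0 - strength) * x + strength * ramp.expand_as(x)


def embed_external_features(images, labels, rate=0.1):
    """Tamper with a small fraction of training samples via style transfer
    while keeping their ground-truth labels unchanged (the 'harmless' part)."""
    n = max(1, int(rate * len(images)))
    tempered = images.clone()
    tempered[:n] = toy_style_transfer(images[:n])
    return tempered, labels  # labels are untouched


def gradient_signature(model: nn.Module, x: torch.Tensor, y: torch.Tensor):
    """White-box feature for the meta-classifier: gradients of the loss on
    style-transferred samples w.r.t. the output layer. A model trained on the
    tempered data reacts to the external feature; an independent one does not."""
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    grad = model.fc.weight.grad.flatten()  # assumes an output layer named `fc`
    return F.normalize(grad, dim=0).detach()


class TinyNet(nn.Module):
    """Toy classifier standing in for the victim / benign / suspect models."""

    def __init__(self, in_dim=32 * 32, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_classes)

    def forward(self, x):
        return self.fc(x.flatten(1))


if __name__ == "__main__":
    # Fake data in place of a real dataset; victim/benign training on the
    # tempered set is omitted for brevity.
    images, labels = torch.rand(256, 32, 32), torch.randint(0, 10, (256,))
    tempered, labels = embed_external_features(images, labels, rate=0.1)

    victim, benign, suspect = TinyNet(), TinyNet(), TinyNet()
    probe_x, probe_y = toy_style_transfer(images[:64]), labels[:64]

    # Meta-classifier: stolen (1) vs. independent (0), trained on gradient
    # signatures from the victim and independently trained benign models.
    sigs = torch.stack([gradient_signature(victim, probe_x, probe_y),
                        gradient_signature(benign, probe_x, probe_y)])
    targets = torch.tensor([1, 0])
    meta_clf = nn.Linear(sigs.shape[1], 2)
    opt = torch.optim.Adam(meta_clf.parameters(), lr=1e-2)
    for _ in range(50):
        opt.zero_grad()
        F.cross_entropy(meta_clf(sigs), targets).backward()
        opt.step()

    # Verification: feed the suspect model's signature to the meta-classifier.
    verdict = meta_clf(gradient_signature(suspect, probe_x, probe_y)).argmax()
    print("suspect flagged as stolen" if verdict == 1 else "suspect looks independent")
```

Under the black-box setting described in the abstract, the defender cannot read the suspect's gradients directly; presumably a locally trained surrogate of the suspect, obtained through queries, would be verified in its place, though the exact procedure is detailed in the paper rather than here.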
