Defending against Model Stealing via Verifying Embedded External Features

12/07/2021
by Yiming Li, et al.

Obtaining a well-trained model involves expensive data collection and training procedures, so the model itself is valuable intellectual property. Recent studies revealed that adversaries can 'steal' deployed models even when they have no training samples and cannot access the model parameters or structure. Several defenses have been proposed to alleviate this threat, mostly by increasing the cost of model stealing. In this paper, we explore defense from another angle: verifying whether a suspicious model contains the knowledge of defender-specified external features. Specifically, we embed the external features by tampering with a few training samples via style transfer. We then train a meta-classifier to determine whether a model was stolen from the victim. This approach is motivated by the observation that stolen models should contain the knowledge of features learned by the victim model. We evaluate our method on both the CIFAR-10 and ImageNet datasets. Experimental results demonstrate that our method can detect different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process. The code for reproducing the main results is available on GitHub (https://github.com/zlh-thu/StealingVerification).
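The abstract describes a three-step pipeline: stamp a defender-specified style onto a small fraction of the training images (without changing their labels), collect a gradient-based signature from a model on those styled images, and feed the signature to a binary meta-classifier. The sketch below is a minimal PyTorch illustration of that pipeline, not the released implementation: apply_style_transfer is a placeholder for any off-the-shelf style-transfer network, and using per-sample input gradients as the meta-classifier's feature is an assumption that may differ from the paper's exact choice.

```python
# Minimal sketch of external-feature embedding and ownership verification.
# Assumptions (hypothetical, not the authors' released code):
#   * apply_style_transfer stands in for a real style-transfer network.
#   * The meta-classifier consumes per-sample input gradients as features.
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_style_transfer(images: torch.Tensor) -> torch.Tensor:
    """Placeholder: stamp the defender-specified style (the external
    feature) onto the images with a pretrained style-transfer model."""
    return images  # identity stand-in; replace with a real transform


def embed_external_features(images, labels, rate=0.1):
    """Tamper with a small fraction of training samples via style transfer.
    Labels are left unchanged, so the victim's accuracy is largely preserved."""
    n = max(1, int(rate * len(images)))
    styled = images.clone()
    styled[:n] = apply_style_transfer(images[:n])
    return styled, labels


def gradient_signature(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """One feature vector per styled sample: the gradient of the loss w.r.t.
    the input, flattened. Models stolen from the victim are expected to react
    to the embedded style; independently trained models are not."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad_x,) = torch.autograd.grad(loss, x)
    return grad_x.flatten(start_dim=1).detach()


class MetaClassifier(nn.Module):
    """Binary classifier: 'stolen from the victim' vs. 'independent model'."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, signatures: torch.Tensor) -> torch.Tensor:
        return self.net(signatures)
```

In this sketch, the defender would train MetaClassifier on signatures from the watermarked victim model (positive class) and from benign models trained without the styled samples (negative class), then query it with signatures computed from the suspicious model; aggregating the per-sample verdicts over many styled images would make the final decision more robust.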
