Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks

10/21/2022
by Jiyang Guan, et al.

An off-the-shelf model offered as a commercial service can be stolen by model stealing attacks, posing great threats to the rights of the model owner. Model fingerprinting aims to verify whether a suspect model was stolen from the victim model, and has gained increasing attention. Previous methods usually leverage transferable adversarial examples as the model fingerprint, which are sensitive to adversarial defenses and transfer learning. To address this issue, we instead consider the pairwise relationship between samples and propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC). Specifically, we present SAC-w, which selects wrongly classified normal samples as model inputs and calculates the mean correlation among their model outputs. To reduce the training time, we further develop SAC-m, which selects CutMix-augmented samples as model inputs, without the need to train surrogate models or generate adversarial examples. Extensive results validate that SAC successfully defends against various model stealing attacks, even those involving adversarial training or transfer learning, and detects stolen models with the best AUC across different datasets and model architectures. The code is available at https://github.com/guanjiyang/SAC.
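The following is a minimal sketch of the sample-correlation idea described in the abstract, not the authors' exact implementation: the fingerprint is the matrix of pairwise correlations among a model's outputs on a chosen batch of samples (e.g., wrongly classified inputs for SAC-w or CutMix-augmented inputs for SAC-m), and a suspect model is flagged when its correlation matrix is close to the victim's. PyTorch models and the helper names below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def output_correlation(model, samples):
    """Pairwise correlation among a model's outputs on the given samples.

    Rows of the softmax outputs are mean-centered and normalized, so the
    resulting (N, N) matrix holds Pearson-style correlations between the
    outputs for every pair of samples.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(samples), dim=1)      # (N, num_classes)
    centered = probs - probs.mean(dim=1, keepdim=True)
    normed = F.normalize(centered, dim=1)
    return normed @ normed.t()                        # (N, N)


def correlation_distance(victim, suspect, samples):
    """Mean absolute difference between the two models' correlation matrices.

    A small distance suggests the suspect behaves like a copy of the victim;
    the decision threshold would be calibrated on independently trained
    (non-stolen) models.
    """
    c_victim = output_correlation(victim, samples)
    c_suspect = output_correlation(suspect, samples)
    return (c_victim - c_suspect).abs().mean().item()
```

Because the comparison uses relationships between samples rather than the transferability of individual adversarial examples, the fingerprint is intended to remain stable under output-perturbing changes such as adversarial training or fine-tuning for transfer learning.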

Related research:

- 12/02/2019 · Deep Neural Network Fingerprinting by Conferrable Adversarial Examples. In Machine Learning as a Service, a provider trains a deep neural networ...
- 02/13/2022 · Adversarial Fine-tuning for Backdoor Defense: Connect Adversarial Examples to Triggered Samples. Deep neural networks (DNNs) are known to be vulnerable to backdoor attac...
- 10/09/2022 · Pruning Adversarially Robust Neural Networks without Adversarial Examples. Adversarial pruning compresses models while preserving robustness. Curre...
- 07/20/2023 · Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples. Backdoor attacks are serious security threats to machine learning models...
- 05/25/2023 · CARSO: Counter-Adversarial Recall of Synthetic Observations. In this paper, we propose a novel adversarial defence mechanism for imag...
- 03/24/2023 · Generalist: Decoupling Natural and Robust Generalization. Deep neural networks obtained by standard training have been constantly ...
- 11/22/2022 · Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors. As data become increasingly vital for deep learning, a company would be ...
