You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership

10/30/2021
by Xuxi Chen, et al.

Despite tremendous success in many application scenarios, the training and inference costs of deep learning are rapidly increasing over time. The lottery ticket hypothesis (LTH) emerges as a promising framework for leveraging a special sparse subnetwork (i.e., a winning ticket) instead of a full model for both training and inference, which can lower both costs without sacrificing performance. The main resource bottleneck of LTH, however, is the extraordinary cost of finding the sparse mask of the winning ticket. That makes the found winning ticket a valuable asset to its owners, highlighting the necessity of protecting its copyright. Our setting adds a new dimension to the recently soaring interest in protecting deep models against intellectual property (IP) infringement and verifying their ownership, since such models take owners' massive/unique resources to develop or train. While existing methods explore encrypted weights or predictions, we investigate a unique way to leverage sparse topological information to perform lottery verification, by developing several graph-based signatures that can be embedded as credentials. By further combining trigger-set-based methods, our proposal can work in both white-box and black-box verification scenarios. Through extensive experiments, we demonstrate the effectiveness of lottery verification on diverse models (ResNet-20, ResNet-18, ResNet-50) on CIFAR-10 and CIFAR-100. Specifically, our verification is shown to be robust to removal attacks such as model fine-tuning and pruning, as well as several ambiguity attacks. Our code is available at https://github.com/VITA-Group/NO-stealing-LTH.
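The paper's graph-based signatures are richer than a plain hash, but as a rough illustration of how a winning ticket's sparse topology alone can serve as an ownership credential, the sketch below derives a signature from the per-layer connectivity pattern of the pruning masks. This is a minimal sketch under assumed names: mask_topology_signature and the masks dictionary are hypothetical and do not reflect the authors' actual algorithm or API.

```python
# Illustrative sketch only: derive a topology-based signature from a
# winning ticket's binary pruning masks. Names here are hypothetical
# and not the paper's actual method.
import hashlib
import torch

def mask_topology_signature(masks):
    """Hash the sparse connectivity pattern of each layer's 0/1 mask.

    `masks` is assumed to be a dict mapping layer names to binary tensors
    (the pruning masks that define the winning ticket).
    """
    digest = hashlib.sha256()
    for name in sorted(masks):
        mask = masks[name].to(torch.uint8).flatten()
        # The positions of surviving weights encode the ticket's topology.
        surviving = torch.nonzero(mask, as_tuple=False).flatten().tolist()
        digest.update(name.encode())
        digest.update(str(surviving).encode())
    return digest.hexdigest()

# Usage: compare a suspect model's masks against the owner's recorded signature.
# owner_sig = mask_topology_signature(owner_masks)
# suspect_sig = mask_topology_signature(suspect_masks)
# print("Ownership match:", owner_sig == suspect_sig)
```

Note that such an exact-match check is only a white-box illustration; the paper additionally combines trigger-set-based methods so that verification also works in black-box settings, where only model predictions are observable.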

Related research

10/29/2020  Passport-aware Normalization for Deep Model Protection
08/04/2022  MOVE: Effective and Harmless Ownership Verification via Embedded External Features
09/09/2023  Towards Robust Model Watermark via Reducing Parametric Vulnerability
05/24/2022  Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
09/16/2019  Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks
09/27/2021  FedIPR: Ownership Verification for Federated Deep Neural Network Models
04/24/2021  Carrying out CNN Channel Pruning in a White Box
