Tracking Dataset IP Use in Deep Neural Networks

11/24/2022
by Seonhye Park, et al.

Training highly performant deep neural networks (DNNs) typically requires the collection of a massive dataset and the use of powerful computing resources. Unauthorized redistribution of private pre-trained DNNs may therefore cause severe economic loss for model owners. To protect the ownership of DNN models, watermarking schemes have been proposed that embed secret information in a model and verify its presence to claim ownership. However, existing DNN watermarking schemes compromise model utility and are vulnerable to watermark removal attacks, because the model must be modified to carry the watermark. Alternatively, a new approach dubbed DEEPJUDGE was introduced to measure the similarity between a suspect model and a victim model without modifying the victim model. However, DEEPJUDGE is designed only to detect the case where a suspect model's architecture is the same as the victim model's. In this work, we propose a novel DNN fingerprinting technique dubbed DEEPTASTER to counter a new attack scenario in which a victim's data is stolen to build a suspect model. DEEPTASTER can effectively detect such data theft attacks even when a suspect model's architecture differs from the victim model's. To achieve this goal, DEEPTASTER generates a few adversarial images with perturbations, transforms them into the Fourier frequency domain, and uses the transformed images to identify the dataset used in a suspect model. The intuition is that these adversarial images capture the characteristics of DNNs built on a specific dataset. We evaluated the detection accuracy of DEEPTASTER on three datasets with three model architectures under various attack scenarios, including transfer learning, pruning, fine-tuning, and data augmentation. Overall, DEEPTASTER achieves a balanced accuracy of 94.95%, significantly better than the 61.11% of the DEEPJUDGE baseline under the same settings.
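
The abstract only outlines the fingerprinting pipeline, so here is a minimal sketch of the two steps it names: generating adversarial probe images and moving them into the Fourier frequency domain. It assumes a PyTorch image classifier as the victim model; the function names and the choice of FGSM as the perturbation method are illustrative assumptions, not taken from the paper's implementation.

```python
# Minimal sketch of the DEEPTASTER-style probe generation described above.
# FGSM is used as a stand-in for the paper's (unspecified) perturbation method.
import torch
import torch.nn.functional as F

def make_adversarial_probe(model, image, label, eps=0.03):
    """Craft one adversarial image via a single FGSM step.

    image: float tensor (C, H, W) in [0, 1]; label: scalar long tensor.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))               # (1, num_classes)
    F.cross_entropy(logits, label.unsqueeze(0)).backward()
    adv = image + eps * image.grad.sign()            # one-step perturbation
    return adv.clamp(0.0, 1.0).detach()

def to_frequency_domain(image):
    """Map an image (C, H, W) to its Fourier log-magnitude spectrum,
    with the DC component shifted to the centre of each channel."""
    spectrum = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))
    return torch.log1p(spectrum.abs())
```

In the pipeline sketched above, spectra produced this way from the victim model would then feed a downstream classifier that decides whether a suspect model was built on the protected dataset, even across different architectures.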
