Ownership Protection of Generative Adversarial Networks

06/08/2023
by Hailong Hu, et al.

Generative adversarial networks (GANs) have achieved remarkable success in image synthesis, making well-trained GAN models commercially valuable assets for their legitimate owners. It is therefore critical to protect the intellectual property of GANs by technical means. Prior works require tampering with the training set or the training process, and they are not robust to emerging model extraction attacks. In this paper, we propose a new ownership protection method based on the common characteristics of a target model and its stolen models. Our method is directly applicable to any well-trained GAN, as it does not require retraining the target model. Extensive experiments show that our method achieves the best protection performance compared with state-of-the-art methods. Finally, we demonstrate its effectiveness with respect to the number of generations of model extraction attacks, the number of generated samples, different datasets, and adaptive attacks.
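
The abstract does not spell out which common characteristics of a target GAN and its stolen copies are measured. Purely as an illustration, the sketch below assumes an ownership test based on the Fréchet distance between feature statistics of samples drawn from the target model and from a suspect model; the feature extractor, sample counts, and the decision threshold OWNERSHIP_THRESHOLD are all hypothetical stand-ins, not the paper's actual method.

```python
# Illustrative ownership check for GANs (ASSUMED design, not the paper's).
# Idea: a stolen (extracted) model mimics the target GAN's output
# distribution, so features of its samples lie unusually close to the
# target's in some fixed feature space.
import numpy as np
from scipy import linalg

OWNERSHIP_THRESHOLD = 10.0  # hypothetical; calibrate on independently trained GANs


def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two (n, d) feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)  # matrix square root
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))


def is_stolen(target_samples, suspect_samples, extract_features) -> bool:
    """Flag the suspect model if its samples are unusually close to the target's.

    extract_features is a placeholder for any fixed feature network
    (e.g. an Inception-style embedder) mapping images to (n, d) features.
    """
    d = frechet_distance(extract_features(target_samples),
                         extract_features(suspect_samples))
    return d < OWNERSHIP_THRESHOLD
```

A small distance suggests the suspect model closely reproduces the target's output distribution, as expected after model extraction; in practice the threshold would be calibrated against GANs trained independently of the target.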

Related research

01/06/2021 · Model Extraction and Defenses on Generative Adversarial Networks
Model extraction attacks aim to duplicate a machine learning model throu...

12/31/2019 · Protecting GANs against privacy attacks by preventing overfitting
Generative Adversarial Networks (GANs) have made releasing of synthetic ...

10/22/2020 · Few-Shot Adaptation of Generative Adversarial Networks
Generative Adversarial Networks (GANs) have shown remarkable performance...

05/29/2023 · NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks
Deep neural network (DNN) models have become a critical asset of the mod...

02/08/2021 · Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attack
Ever since Machine Learning as a Service (MLaaS) emerges as a viable bus...

02/13/2022 · FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations
Recent advances in generative adversarial networks have shown that it is...

01/28/2022 · Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
Model inversion attacks (MIAs) aim to create synthetic images that refle...
