A Stealthy and Robust Fingerprinting Scheme for Generative Models

06/19/2021
by Li Guanlin, et al.

This paper presents a novel fingerprinting methodology for the Intellectual Property protection of generative models. Prior solutions for discriminative models typically adopt adversarial examples as fingerprints, which induce anomalous inference behaviors and prediction results; as a result, these methods are not stealthy and can be easily recognized by the adversary. Our approach leverages the invisible backdoor technique to overcome this limitation. Specifically, we design verification samples whose model outputs look normal but trigger a backdoor classifier to make abnormal predictions. We further propose a new backdoor embedding approach with Unique-Triplet Loss and fine-grained categorization to enhance the effectiveness of our fingerprints. Extensive evaluations show that our solution outperforms alternative strategies, achieving higher robustness, uniqueness, and stealthiness across various GAN models.
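
The abstract does not spell out the Unique-Triplet Loss, but the triplet-loss mechanism it builds on can be illustrated with a short, hedged sketch. The PyTorch snippet below is an assumption-laden illustration rather than the authors' method: it shows how a backdoor classifier's embeddings of verification (fingerprint) samples could be pulled toward a designated trigger class and pushed away from ordinary generated samples using a standard triplet margin loss. All names here (embed_net, backdoor_embedding_loss, the input shapes) are hypothetical.

```python
# Minimal sketch of a triplet-style backdoor-embedding objective (PyTorch).
# Assumption for illustration only, not the paper's Unique-Triplet Loss:
#   anchors   = embeddings of verification (fingerprint) samples
#   positives = embeddings of the designated trigger class
#   negatives = embeddings of ordinary generated samples
import torch
import torch.nn as nn

embed_dim = 128
embed_net = nn.Sequential(            # hypothetical embedding backbone
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256),
    nn.ReLU(),
    nn.Linear(256, embed_dim),
)

triplet = nn.TripletMarginLoss(margin=1.0, p=2)

def backdoor_embedding_loss(verification_x, trigger_x, ordinary_x):
    """Pull verification samples toward the trigger class and away from
    ordinary samples, so only the backdoor classifier reacts to them."""
    anchor = embed_net(verification_x)
    positive = embed_net(trigger_x)
    negative = embed_net(ordinary_x)
    return triplet(anchor, positive, negative)

# Toy usage with random 32x32 RGB batches standing in for GAN outputs.
v = torch.randn(8, 3, 32, 32)
t = torch.randn(8, 3, 32, 32)
o = torch.randn(8, 3, 32, 32)
loss = backdoor_embedding_loss(v, t, o)
loss.backward()
```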


