Deep Neural Network Fingerprinting by Conferrable Adversarial Examples

12/02/2019
by Nils Lukas, et al.

In Machine Learning as a Service, a provider trains a deep neural network and gives many users access to it. However, the hosted (source) model is susceptible to model stealing attacks, in which an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust method to determine whether a suspect model is a surrogate of their model. We propose a fingerprinting method for deep neural networks that extracts a set of inputs from the source model such that only surrogates agree with the source model on the classification of these inputs. These inputs are a specifically crafted subclass of targeted transferable adversarial examples, which we call conferrable adversarial examples, that transfer exclusively from a source model to its surrogates. We propose new methods to generate these conferrable adversarial examples and use them as our fingerprint. Our fingerprint is the first to be successfully tested as robust against distillation attacks, and our experiments show that this robustness extends to weaker removal attacks such as fine-tuning, ensemble attacks, and adversarial retraining. We even protect against a powerful adversary with white-box access to the source model, whereas the defender needs only black-box access to the surrogate model. We conduct our experiments on the CINIC dataset and a subset of ImageNet32 with 100 classes.
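The core idea admits a short sketch. The code below is a minimal illustration, not the authors' implementation: a conferrability-style score ranks candidate adversarial examples by how often surrogate models adopt the adversarial target label while independently trained reference models do not, and a black-box verification step measures a suspect model's agreement with the fingerprint's target labels. The function names, the prediction-callable interface, and the 0.75 decision threshold are assumptions made for illustration only.

```python
import numpy as np


def conferrability_scores(x_batch, target_labels, surrogate_preds, reference_preds):
    """Score candidate adversarial examples by conferrability (illustrative sketch).

    surrogate_preds / reference_preds: lists of callables that map an input batch
    to predicted class labels (hypothetical interface, not the paper's API).
    A high score means surrogates agree with the adversarial target label while
    independently trained reference models do not.
    """
    surrogate_hits = np.mean(
        [np.asarray(p(x_batch)) == target_labels for p in surrogate_preds], axis=0)
    reference_hits = np.mean(
        [np.asarray(p(x_batch)) == target_labels for p in reference_preds], axis=0)
    # Keep examples that transfer to surrogates but not to unrelated models.
    return surrogate_hits * (1.0 - reference_hits)


def fingerprint_match(suspect_pred, fingerprint_inputs, fingerprint_labels,
                      threshold=0.75):
    """Black-box verification (illustrative): fraction of fingerprint inputs the
    suspect model classifies as the fingerprint target label; flag the model as a
    likely surrogate if that fraction exceeds the threshold."""
    agreement = np.mean(
        np.asarray(suspect_pred(fingerprint_inputs)) == fingerprint_labels)
    return agreement, agreement >= threshold
```

In this sketch, the defender would keep the highest-scoring examples (together with their target labels) as the fingerprint and later query any suspect model on them in a black-box fashion; only derived surrogates are expected to reproduce the target labels at a high rate.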


